Music and the Brain



Do you like music? One would be hard-pressed to find someone who does not enjoy music of any kind. Musicality is a trait we share with birds and with some other mammals, such as whales and elephants. Perhaps looking at other traits we share with these species can help us understand why humans are musical.

In humans, whales, elephants, and birds alike, the brain regions that respond to sound are organized in what is called a tonotopic arrangement: sounds of neighboring frequencies activate neighboring regions of the brain, in the same order (Figure 1).


Figure 1. Schematic representation of the tonotopic arrangement of the human primary auditory cortex.

This type of organization can be very useful for recognizing objects in the world that activate auditory brain areas in a particular pattern, like a baby’s cry or a lion’s roar. One might expect that musical notes played to match the frequency sequence of a natural sound (as in Figure 2) would elicit feelings similar to those the sound itself evokes.


Figure 2. Example of guitar strings played to roughly match the cry of a French baby (adapted from reference 1).

Do we like music because it elicits in us feelings similar to those nature does? This is probably true only to a certain degree: it would be hard to link every happy song to a happy naturalistic sound. So we are left wondering: are there other reasons we like music? The answer might be found by looking at the collective behavior of a group of musical animals.

A recent study (2) proposes that animals in a group may benefit from moving together so as to create synchronized periods of silence (e.g., the time between steps). Synchrony would improve their hearing by reducing the noise created during movement, thereby increasing their chances of detecting predators. Next time you walk in a group, see if you naturally match your steps! The rhythmic sounds created during this elementary form of dancing could be the basis of musicality in a species, much as the beat of a drum can be the basis of a song. Because they increase the chances of survival, rhythmic movements and the sounds they produce might have been selected for by making them rewarding. Indeed, the reward-related neuromodulator dopamine is released when we listen to music. In humans, language may have played a similar role, encouraging precise timing in the production and comprehension of sound.

Human music and language are unquestionably interconnected.

In all languages, we piece together different sounds to convey meaning. In a sense, this is exactly what music is. It is unclear whether early humans playing with their voices to create melodies gave rise to language, or whether it was the other way around. But most languages share the characteristic that changing the pitch (frequency) and tempo (timing) of how something is said can dramatically change the message being conveyed. The extreme case is tonal languages, in which the same syllable takes on different meanings depending on how it is intoned. For example, the Mandarin Chinese syllable ma has four different meanings: mā means mother, má means hemp, mǎ means horse, and mà means scold. Culture can shape the way we feel about sounds and music, as well as the meaning we extract from them.

Music is more than a means to express emotion: it is a powerful tool for transmitting information. You may have noticed that while you may not know a single book or even a few poems by heart, you can sing countless songs off the top of your head. From the Australian Aboriginal peoples and the Gauls all the way to us, humans have taken advantage of how easily we memorize songs to tell tales of our origins, praise our gods, instill morals in the young, and even spread the news. That our brains are organized tonotopically probably makes songs easier to learn than monotone narratives. And although we may not be able to understand music by looking at the architecture of our brains alone, we should try to understand the culture and context in which it is played.

  1. Mampe B, et al. (2009) Newborns’ Cry Melody Is Shaped by Their Native Language. Curr Biol 19(23):1994–1997.
  2. Larsson M (2012) Incidental sounds of locomotion in animal cognition. Anim Cogn 15:1–13.



Roberto Medina
Roberto Medina studied mathematics and physics at the University of Illinois.
As a Mexican-American, he finds himself at home at the Champalimaud Neuroscience Programme in Lisbon while researching the neural basis of auditory perceptual decisions.


