Music of the Hemispheres

Why can a toddler sing? Why is even the most ordinary human brain a library of melodies?

By James Shreeve
Oct 1, 1996

To look at her, you would never know that Isabelle X is missing a piece of her brain. Ten years ago, a swollen blood vessel burst in her left temporal lobe. When the surgeon opened her skull to excise the damaged tissue, he noticed another dangerously swollen vessel on the right side and prudently snipped that one out too. The operation saved her life, but at the price of a good portion of cerebral cortex. Now she sits in front of a video camera: a poised, attractive woman in her late thirties, wearing a stylish beige jacket over a black chemise. She doesn’t slur her words or stare vacantly. No muscular tic or twitch haunts her perfectly made-up face. What is most astonishing about Isabelle, in fact, is how utterly normal she is. At least until the music starts.

O Tannenbaum, O Tannenbaum, how lovely are your branches!

Plucked out on a piano offscreen, without lyrics, the old Christmas chestnut is instantly recognizable--or should be. When an investigator asks Isabelle to name the tune, she hesitates.

A children’s song? she answers.

Try this one, says the investigator.

Twinkle twinkle little star, how I wonder what you are. . . .

I don’t think I know that one, says Isabelle, a little sheepishly.

The investigator--psychologist Isabelle Peretz of the University of Montreal--asks her to name one more. The piano plays what must surely be North America’s most familiar ditty: Happy birthday to you, happy birthday to you!

Isabelle listens, then shakes her head.

No, she replies. I don’t know it.

Before her operation, Isabelle knew the song only too well; as the manager of a local restaurant, she was obliged to sing it to celebrating diners almost every night. While not a musician herself, Isabelle certainly has some musical background, and her brother is a well-known jazz band conductor. There is nothing wrong with her hearing per se: in other experiments, she easily recognizes people’s voices and has no trouble naming a tune when just a few snatches of its lyrics are read to her. Like other patients suffering from the clinical condition known as amusia, she can easily identify environmental sounds--a chicken clucking, a cock crowing, a baby crying. But no melody in the world--not even Happy Birthday--triggers so much as a wisp of recognition.

This is the most serious case of amusia I have ever seen, says Peretz.

That Isabelle cannot recognize music may be peculiar, but from a broader view, what is truly, profoundly odd is that the rest of us can.

Every child will listen to the Barney song and sing it back again without prompting, says Robert Zatorre, a neuropsychologist at the Montreal Neurological Institute at McGill University. This is very different from an activity like reading, where exposure alone won’t do anything, no matter how long you sit in front of a book.

Such talent, however, may not be too far removed from the abilities that enable an infant to learn to speak. Language and music are both forms of communication that rely on highly organized variations in pitch, stress, and rhythm. Both are rich in harmonics: the overtones above the fundamental frequency of a sound that give it resonance and purity. In language, sounds are combined into patterns--words--that refer to something other than themselves. This makes it possible for us to communicate complexities of information and meaning far beyond the capabilities of other species. But notes, chords, and melodies lack explicit meanings. So why does music exist? Is our appreciation of it a biological universal, or a cultural creation? Why does it have such power to stir our emotions? Does music serve some adaptive purpose, or is it nothing more than an exquisitely pointless epiphenomenon--like a talent for chess, or the ability to taste the overtones of plum or vanilla in a vintage wine?

In Western society we’re inclined to think of music as something extra, says Sandra Trehub, a developmental psychologist at the University of Toronto. But you can’t find a culture that doesn’t have music. Everybody is listening.

What they are listening to is nothing more than organized sound. In the sixth century B.C., the Greek philosopher Pythagoras observed that music pleasing to the ear was produced by plucking lengths of string that bore simple mathematical relationships to one another. The physical basis for this phenomenon, it was later discovered, lies in the frequencies of the sound waves that make up notes. For example, when the frequency of one note is twice that of a second, the two notes will sound like the same note, an octave apart. This principle of octave equivalence is present in all the world’s music systems; the notes that make up the scale within an octave do not always correspond to the familiar do re mi of Western music, but they all come back, so to speak, to do.

Other ear-pleasing intervals are also built on notes whose frequencies relate in simple ways. Anyone who plays a little guitar has experienced the supremacy of these perfect consonances in Western music today; whole anthologies of folk songs, blues, rock, and other popular music can be accompanied quite adequately by simply strumming chords that are built on the first, fourth, and fifth tones in a scale--say, C, F, and G. In fact, when the oldest known popular song--written down on a Sumerian clay tablet some 3,400 years ago--was exhumed and performed in 1974, the audience found, to its pleasure, that it sounded utterly familiar because its intervals were much like those found in the seven-tone scale of Western music. Many scales in the world’s major non-Western musical systems are also founded on octaves, fifths, and, to a lesser extent, fourths. One can’t help wondering if our partiality to these simple frequency ratios is based in our biology or if they are learned cultural preferences that just happen to be ancient and ubiquitous.
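
The arithmetic behind these consonances is easy to check for yourself. As a minimal sketch added here for illustration (it is not part of the original reporting), the short Python program below builds the octave, perfect fifth, and perfect fourth above a 440-hertz A--the standard orchestral tuning note--from the whole-number ratios Pythagoras observed:

```python
# Consonant intervals arise from simple whole-number frequency
# ratios: 2:1 for the octave, 3:2 for the perfect fifth, and
# 4:3 for the perfect fourth. The 440 Hz base is the standard
# tuning A; any other base pitch would work just as well.

BASE_HZ = 440.0

INTERVALS = {
    "unison": (1, 1),
    "perfect fourth": (4, 3),
    "perfect fifth": (3, 2),
    "octave": (2, 1),
}

for name, (num, den) in INTERVALS.items():
    freq = BASE_HZ * num / den
    print(f"{name:>14}: {freq:7.2f} Hz  (ratio {num}:{den})")
```

Running it yields 440, 586.67, 660, and 880 hertz--the fourth, fifth, and octave whose blends the ear hears as consonant across the world’s scales.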

For several years Trehub has been trying to separate the natural elements of musical systems from the nurtured by using the clean, uncluttered infant mind as a filter. In one experiment, she and her colleagues played a series of repeated intervals to six-month-old babies, raising or lowering the interval occasionally to see if the infant responded to this deviation from the pattern. They found that the infants noticed the change when the test intervals were perfect fifths or fourths but not when they were composed of more complex frequency ratios--the very ones adult ears tend to regard as gratingly dissonant. This does not mean that we come into the world with perfect-interval sensors already in place, but at the very least, it suggests a powerful biological predisposition toward learning them is built into us from birth.

Might this predisposition be somehow linked to our innate capacity for language? The many elements shared by both music and language make such a notion appealing. But the specialization of the brain tells a different story. It has long been known that language is primarily, though not exclusively, a function of the left side of the brain. Patients with damage to a frontal region in the left hemisphere known as Broca’s area typically lose their ability to speak, while those with injuries farther back in the hemisphere, in what is called Wernicke’s area, often relinquish their ability to understand what is being said. Yet paradoxically, people who have suffered left hemisphere damage often retain the ability to sing. For that reason, neuroscientists have historically been tempted to view music too as a lateralized cognitive function, usually attributed to the right hemisphere. In light of the role of the right hemisphere in expressing and interpreting emotion, the notion seems particularly provocative. But the truth may be more complex.

Until recently, the only way to glimpse the underpinnings of music in the normal human brain was to see them ruptured, confused, or exposed in a damaged one. The Russian composer Vissarion Shebalin, for instance, suffered two left hemisphere strokes in the 1950s that left him unable to speak or understand the meaning of words; nonetheless, he continued to teach and compose music, including a symphony that Shostakovich believed to be among Shebalin’s most brilliant works. Shebalin’s case--loss of words without loss of music--is a mirror image of Isabelle X’s, and it would support the notion that music and language play out on separate neural circuits in the brain’s two hemispheres.

Rarely, however, do brain lesions so neatly discriminate one cognitive function from another. The most celebrated case of damage in a musician’s brain is that of Maurice Ravel, who began to make spelling mistakes in 1933 and soon after lost his ability to read or even sign his name. Far worse, he could no longer compose, even though, as he lamented, the music for a new opera was in his head, and he had no trouble playing scales or listening to musical performances. He lived four more years, tormented by music he could hear but no longer express. Precisely where in Ravel’s brain, or even in which hemisphere, the damage occurred is not known. But his case suggests that even if music and language occupy separate cognitive systems, at some other level there must be neural circuits that are shared between them or lie so close together in the cortex that a stroke or traumatic injury could spread its damage over both.

In a more recent case, a composer and professor of music endured a different agony following a stroke in the right side of his brain. Although he retained his ability to orchestrate music, he could no longer summon the emotions that fed his musical creativity, and he felt his compositions had become lifeless and dull.

Music is not a monolithic mental faculty, says Isabelle Peretz. It is composed of many different functions and components. To understand it, we have to devise tasks whereby you can study only one component at a time.

To pinpoint how and where the brain recognizes familiar pieces of music, for example, Peretz and her colleagues asked their subjects to listen first to a simple, unfamiliar tune, then to slightly altered versions of it. After the test, people with normal brains were usually able to tell when the tune had been altered either melodically or rhythmically. Patients with lesions on the left side of the brain had normal scores for melody changes, but those with damage on the right side of the brain scored well below the normal range. And both groups of brain-damaged patients were less able to discern changes in rhythm. Those results, says Peretz, suggest that though we hear a tune’s melody and rhythm as an integrated whole, the brain may be processing the two components separately.

But melody itself is not a monolithic element of music. It can be divided in turn into at least two components: the tune’s sequence of intervals between notes, and its contour--the overall shape of the melody as its intervals rise, fall, or stay the same. Most people can recognize a piece of music even when the intervals between notes are occasionally altered, but only as long as the tampering does not affect the contour of the tune. According to Sandra Trehub, babies are much more likely to notice an interval change in a melody that disrupts its contour--and other studies have shown that when musically untrained adults hear an unfamiliar tune, they are likely to remember only its contour.
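
The interval-versus-contour distinction is easy to make concrete. Here is a small Python sketch, added for illustration with a made-up variant melody (it is not the researchers’ actual method or stimuli), that reduces a melody, written as pitches in semitones above the tonic, to its exact interval sequence and to its contour of ups, downs, and repeats:

```python
def intervals(pitches):
    """Signed semitone steps between successive notes."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def contour(pitches):
    """Collapse each step to up (+), down (-), or same (0)."""
    return ["+" if step > 0 else "-" if step < 0 else "0"
            for step in intervals(pitches)]

# Opening of Twinkle Twinkle Little Star: C C G G A A G
twinkle = [0, 0, 7, 7, 9, 9, 7]
# A hypothetical variant with one interval stretched (A raised to B):
# the exact intervals change, but the shape of the melody does not.
variant = [0, 0, 7, 7, 11, 11, 7]

print(intervals(twinkle), contour(twinkle))
print(intervals(variant), contour(variant))
```

The two melodies print different interval lists but the identical contour (0 + 0 + 0 -), which is roughly what an untrained listener carries away from a single hearing.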

Brain imaging techniques have made it possible for researchers like Zatorre of McGill to tease out the circuits responsible for such elemental components of musical perception. In one series of experiments, Zatorre used PET scan imaging to record activity levels in different parts of the brain while his subjects listened to a series of simple melodies. When he requested that they simply listen to a tune, the PET scans showed a burst of activity in a region of the right temporal lobe called the superior temporal gyrus. This result was hardly unexpected: the region has long been known to be sensitive to auditory stimulation in monkeys as well as humans. But when he asked them to attend to particular pitches within the tunes and make comparisons--a task that would tweak the working memory circuits that allow us to make musical sense out of a series of notes--the scans showed patterns of processing involving several regions of the brain.

Asking whether music is a right brain or left brain function isn’t really the right question, says Zatorre. I have very little doubt that when you are listening to a real piece of music, it is engaging the entire brain.

Of course, there are some rare brains that seem especially built to be musically engaged. Everyone knows of the precocity of Mozart’s genius: he produced his first musical composition at an age when some children have yet to learn to read. Highly gifted children seem to have an abnormal attentiveness to sounds in their environment; the young Arthur Rubinstein, for instance, could recognize people by the tunes they sang to him. While there is much dispute over the degree to which the talent of a Mozart or a Rubinstein is inherited, there is little doubt that it must be encouraged early in life if it is to bear fruit. Professional pianists and violinists, for instance, almost always begin to play seriously by the age of seven or eight.

Early musical training, in fact, apparently alters brain anatomy. Using magnetic resonance imaging, a team led by neurologist Gottfried Schlaug of the Heinrich Heine University in Düsseldorf, Germany, found that the corpus callosum, the central bundle of nerve fibers connecting the two brain hemispheres, was significantly larger in musicians who had trained from an early age than in nonmusicians. Nerves controlling motor functions on each side of the body pass through the front half of the corpus callosum. Since playing a musical instrument requires keen coordination between the hands, Schlaug thinks that musical training early in life literally lays down either more wiring or better-insulated wiring, which presumably speeds motor communication between the two hemispheres.

Schlaug’s team has also found anatomic differences in the brains of musicians with perfect pitch. In the average human brain, a hunk of cortex called the planum temporale, in the temporal lobe, is larger on the left side of the brain than on the right. This difference has been chalked up to a presumed involvement in language processing. In the musicians Schlaug studied, however, this disparity in size was even more pronounced. According to Schlaug, this suggests that the planum temporale may be devoted to the analytic task of categorizing sound, which may underlie our perception of both music and language.

We think there really isn’t that much difference between the way we perceive language and the way absolute-pitch musicians perceive tones, says Schlaug. What is probably different is the degree to which they apply this analytic skill to a musical task.

On some level language and music lay claim to separate domains, but there are apparently shared cerebral circuits as well. What is the evolutionary relationship between these two distinctive human traits? Did music emerge from language, or was it the reverse? Charles Darwin believed that music arose as an elaboration of mating calls, protohuman males and females endeavoring to charm each other with musical notes and rhythm. Zatorre, for one, thinks this might be putting the musical cart before the verbal horse.

The evolutionary pressure for a highly specialized auditory process in the human brain must have come from language, he says. Any hominid group that developed it would have a huge advantage over others. But to process the complex, rapid-fire demands of language as fast as possible, it would make sense to bring it under the control of one hemisphere. If you accept that’s the case, you end up with a large brain, with unilateral development going on in the left hemisphere. This would leave other regions of the auditory system less busy. So we have it, let’s use it. Music doesn’t necessarily serve a purpose; it may just be fortuitous that it’s there.

Jamshed Bharucha, a cognitive scientist at Dartmouth who is building artificially intelligent computer models of our auditory processes, disagrees. Of course, he says, language would have more adaptive value than music among ancient hominids, but that doesn’t mean music couldn’t have served a purpose. Music as we know it today is a cultural creation that draws on many neural systems. But in all likelihood, there were earlier forms of music that drew on fewer systems, that did indeed have some adaptive value.

For example, says Bharucha, music would have been particularly valuable if it functioned to enhance group cohesion. In fact, it would be hard to find a society today in which music--whether a Sousa march or an aboriginal sacred song cycle--does not serve to reinforce the identity and common interests of the group. Bharucha also points out that even among animals, systems for pitch perception are commonly used to communicate emotion and intent. So, too, the prosody of human language--pitch, rhythm, and the characteristic qualities of sound called timbre--likewise signals a person’s emotional state and intentions, regardless of the meaning of the words being spoken. Since music is linked to the same systems that govern emotional expression, Bharucha sees its roots embedded as well in prelinguistic manipulations of the voice.

Musicians will tell you that the goal of playing an instrument is to make it sing, he says. There is something fundamental about our ability to produce and recognize sounds using our vocal apparatus. There is no doubt in my mind that prelinguistic forms of communication using pitch and rhythmic patterns and timbre would serve to communicate not only emotion and alarm but individual identification and group cohesion. These are probably the very reasons they evolved.

Sandra Trehub thinks music may arise from an even more fundamental bond between group members--the bond between mother and child. Babies cannot understand the meaning of words, but we speak to them anyway, and the baby talk we instinctively use is drenched in musicality: higher pitches; big, sweeping pitch contours; simple, melodic little ups and downs; singsong rhythms; and drawn-out vowels that flaunt their overtones. As noted earlier, infant brains are predisposed to soak up and decode these universal musical structures. The compelling urge to speak in motherese in the presence of a baby appears to be universal, too, especially during emotive interactions--when the baby begins to smile, for instance, or cries for comfort. Trehub has also found that the actual music sung to infants shows many similarities across cultures: lullabies everywhere employ few notes varying little in pitch; simple, repeated melodic patterns; and rhythms linked to the rocking and swaying motions used to soothe a fussy child. Some studies have even suggested that the rhythms characteristic of a given culture’s music have their roots in the way its infants are carried and rocked.

The very existence of music and important aspects of its structure, says Trehub, may stem from the relevance of music to infants.

Most people continue to be emotionally responsive to music throughout their lives. The conductor Herbert von Karajan once had a pulse meter attached while conducting Beethoven’s Leonora Overture; his pulse rate peaked not in the passages during which he exerted the most physical effort but in those that emotionally moved him most. But you don’t have to be a musician to feel a clutch of the heart when Mimi leaves Rodolfo in Act III of La Bohème, or when Whitney Houston sings And I will always love you about a doomed relationship. Remarkably, even those who can no longer know music still sense its emotional content; Isabelle X, though unable to tell one piece of music from another, still rated songs along a scale from sad to happy the same way normal subjects did. The pull can be irresistible.

I have a recording of Horowitz playing music from Tristan and Isolde that gives me shivers every time, says Robert Zatorre, and I don’t even like Wagner.

A few investigators have taken tentative first steps toward understanding how music exerts its mysterious appeal. For instance, psychologist John Sloboda of the University of Keele, in England, asked a sample of 83 music listeners to name pieces that had elicited physical sensations--such as shivers, tears, or lumps in the throat--and to identify as closely as possible where in the piece those reactions occurred. Ninety percent of those responding reported that they had experienced shivers down the spine, and almost as many had felt a lump in the throat or been brought to tears or laughter. More important, the musical devices that inspired these reactions were remarkably consistent.

Pieces that make you cry seem to have certain features, and those that send shivers down your spine have others, says Sloboda. Shivers seemed to be provoked by unexpected musical events, such as sudden changes in key, harmony, or sound texture. People were often moved to tears, on the other hand, by repetitions of a melodic theme a step higher or lower than when the listener first heard it, as in Albinoni’s Adagio in G Minor. This enduringly popular little dirge also contains numerous appoggiaturas--ornamental notes that tantalizingly delay the resolution of a melodic line. As a musical device, appoggiaturas proved to be even more reliable at jerking tears. You find them in a lot of these weepy tunes, Sloboda says. (The Beatles’ Yesterday begins with one.)

Jaak Panksepp, a biopsychologist at Bowling Green State University in Ohio, has offered an intriguing hypothesis to explain musical chills. They might derive, he says, from the ability of particular acoustic structures--a high-pitched crescendo, for example, or a solo instrument emerging from the background--to excite primitive mammalian regions of the brain that respond to the distress signal of an infant who has suddenly lost its parents. The effect of that wail of woe is to make the infant’s parents feel a physical chill and thus prompt them to seek the warmth implicit in the reuniting embrace. Sad music may achieve its beauty and its chilling effect by juxtaposing a symbolic rendition of the separation call in the emotional context of potential reunion and redemption, says Panksepp.

Mitch Waterman, a psychologist at the University of Leeds, in England, offers a more down-to-earth perspective. We like being stimulated, and music is very good at that, he says. Like Sloboda and Panksepp, Waterman wants to find out what musical structures arouse stereotypical emotional reactions. But he also wants to understand whether the emotions that music evokes are real emotions. In other words, he asks, Does the sadness one feels listening to Rachmaninoff’s second piano concerto, for instance, have anything to do with the sadness felt at the death of one’s pet dog?

What I actually found was that each person responded uniquely to music, says Waterman. People can feel envy, or guilt, or shame, or disappointment simply because when we interact with music, we aren’t just sitting there and listening. Instead, people carry to the music all the complexity and idiosyncrasy of their own lives and personalities. After listening to Jessye Norman singing one of Strauss’s Four Last Songs, for example, one subject--an amateur soprano--reported that her most immediate, overwhelming emotion was jealousy, though she also reported feeling the chill.

In Waterman’s view, the emotions triggered by music alone--like a tear squeezed out by an appoggiatura--might be better characterized as pseudoemotional: a way to stimulate ourselves safely, without the psychological consequences risked with real feeling. In fact, he believes that in literally playing on our emotions, music fulfills an essential, extremely primitive biological role: it arouses our brains to a state of heightened readiness, in which we are better able to deal with our environment in general. Our brains are very, very good at internalizing the consistencies of structure, he says. Whenever those consistencies are tweaked, we like it. It’s almost as if we use music as a resource to make us feel. It helps keep our brains going properly.

A related notion is the Mozart effect. In 1993 a study conducted at the University of California at Irvine by psychologist Frances Rauscher, along with Gordon Shaw and Katherine Ky, suggested that listening to music might somehow enhance the brain’s ability to perform abstract operations immediately afterward. Thirty-six college students were given standard IQ spatial reasoning tests, preceded in one trial by ten minutes of silence, in a second trial by ten minutes of listening to a relaxation tape, and in a third by ten minutes of listening to a Mozart piano sonata. The post-Mozartian IQ scores averaged at least eight points higher than those of the other two trials. Rauscher suspects, moreover, that listening to any complex musical piece could produce similar results.

Still more promising, perhaps, is the possibility that music has a more long-term effect on abstract reasoning skills--if the brain is exposed to it early enough. In a pilot study conducted by Rauscher and her colleagues, a small group of three-year-olds in an inner-city day care center were given 30 minutes of singing lessons a day, while another group received piano lessons. After nine months, both groups showed a remarkable improvement in their ability to put together a puzzle, a standard test of spatial reasoning skills. And in a larger follow-up study, the researchers found that children who received voice and piano lessons performed 35 percent better than children who received no musical training. Such results lead the investigators to speculate that all the higher brain functions, music included, use a common internal neural language to interact with each other throughout the cortex.

We suggest that music can be used not only as a ‘window’ into examining higher brain functions but as a means to enhance them, say the researchers.

Just this past May, a team led by biophysicist Martin Gardiner of the Music School in Providence announced similar success among a sample of first graders. In the study, control groups of children received the school system’s standard visual arts and music training, while the experimental groups were given more intensive instruction in music and art. When the study began, the experimental groups tested below the control groups. At the end of seven months, however, they had pulled even with them in reading and had surpassed them considerably in math.

Many investigators will remain skeptical of such results until more is known about how and why music plays so sweetly in the mind. At the very least, however, the findings add further incentive to the quest. Just a few years ago, the only way to probe the neural underpinnings of music perception was to attend to the effects of their destruction in patients like Isabelle X. Even with the new tools available--brain imaging techniques and artificial intelligence, for example--we are just scratching the surface, as Jamshed Bharucha puts it. But that scratching is a kind of music in itself.
