As Ursula Bellugi delivers her talk, an expert in sign language translates for her. The signer's hands hurtle furiously through the air, fingers dancing as though possessed. They swoop, puncture the air with staccato stabs, furl and unfurl to form shapes in space. The signer's face is equally animated. Expressions flit rapidly across it, conveying nuance, inflection, and grammatical detail unimaginable to a casual, nonsigning observer. And all at blinding speed.
Bellugi, a bustling neuroscientist, is making a point: anything she can say, the signer can "say." Sign is bona fide language. It's not mime, not a poor, pidgin derivative of spoken tongues--it's a richly endowed language in and of itself. And it is Bellugi herself who over the past two decades has convinced a doubting world of that reality.
But as director of the Laboratory for Cognitive Neuroscience at the Salk Institute for Biological Studies in La Jolla, California, and as the world's leading expert in the neurobiology of American Sign Language ("She's the founder of the field, the most important person in the field, the grandmother of the field," as one researcher puts it), the 63-year-old Bellugi is interested in sign for more than its own sake. Sign offers her a window into the brain, a means of discovering the biological foundations of language. She and other investigators around the country are pinpointing areas of the brain that are uniquely suited for linguistic tasks. In fact, they are suggesting that the capacity for language may well be innate, genetically determined, one of our defining characteristics as human beings.
Bellugi's fascination with the roots of language began in 1968, soon after she received her doctorate in linguistics and psychology from Harvard. Jonas Salk invited her to start a lab at the institute that bears his name, a stark concrete complex overlooking the Pacific. At Harvard she'd investigated the ways in which children learn the dizzyingly complex underlying rules of spoken language. Now, with her husband and fellow linguist, Edward Klima, she began to think about language in a neurobiological context. How, she wondered, was language processed in the brain?
In those days language was thought to be contingent on the ability to speak, the product of humans' ability to utter sounds. So Bellugi decided to compare the way hearing children learn to speak with the way deaf children learn to sign. By comparing spoken language with what seemed like a completely different system of communication, she hoped to tease out the differences between the two. "We knew nothing about sign," recalls Bellugi. "We just thought comparing speech with sign was a theoretically interesting question, one that might move us toward the biology of language."
Little did she know what she was in for. Not only did Bellugi know nothing about sign, she soon found that virtually no one did. "There was almost nothing in the literature," she recalls, "and what there was was contradictory." Whatever sign was, however, it wasn't treated as real language. Some linguists said that sign was a "loose collection of pictorial gestures"; others that it was merely a crude communication system derived from spoken language--"broken English on the hands." Others declared that it was too vague, still others that it was too concrete, that it "dreaded and avoided the abstract." Bellugi didn't know which was right, but she did know one thing: "All of them couldn't be true--they just didn't fit together."
One concept caught her imagination, though. It was the brainchild of William Stokoe, a renegade teacher at Gallaudet University in Washington, D.C., the world's only liberal arts university for the deaf. Stokoe suspected that sign wasn't merely a collection of unrelated gestures. He perceived that signs were made up of a repertoire of distinct constituent parts: namely, the shapes signers make with their hands, where they place the shapes in space, and the manner in which they move their hands through space. Indeed, these parts reminded him of the modular bits of sound, called phonemes, that are combined to make words in spoken language. (Phonemes are roughly equivalent to the three sound units in c-a-t, or th-a-t, or ch-ea-t; each language has a finite set of these building blocks.) In 1965 Stokoe and his colleagues published a Dictionary of American Sign Language, cataloging thousands of signs and their various constituent parts.
It was only a first, highly controversial step toward understanding sign, but to Bellugi it offered a clue that sign might have structure after all, just like "real" languages. Intrigued, she decided to take a closer look. "Our brains are very good at cataloging--at imposing order even where none exists," she says. "We didn't know if the dictionary was just the result of a zeal for cataloging, or if there was some reality to it." Establishing that reality was crucial if she was going to compare sign with spoken language in her investigations of the language-learning process. "After all," says Bellugi, "you can't ask questions about the way a child learns the underlying structure of something if you don't know if it even has a structure."
So, starting from scratch, Bellugi and her lab began to find out. It wasn't easy. First she had to master sign. "I used to go to a bowling alley frequented by deaf people," she recalls, "and I'd pester them to teach me. It was an old-fashioned way of learning. Now there are classes in the structure of sign. But in those days, remember, we were trying to find out whether sign even had structure." Second, she had no idea how to do the experiments to answer that question. How do you discern structure in a purely visual-spatial form of communication? ("We didn't have a toolbox," she says with a hearty laugh, recalling the challenges of those early days.) To devise her studies, she asked deaf signers for help. (To visit her lab today is to be struck by its pockets of silence. Not that nothing is going on: researchers are often conversing intently. It's just that much of the communication is in sign.)
Obviously you need an acute understanding of language to find out whether another communication system follows the same principles. For Bellugi, the essence of language is its grammar. A collection of words or signs is just a vocabulary; its use is restricted to pretty crude communiqués. But for the full flower of language--for Shakespeare, for Nabokov, or just for your everyday gossipy chitchat--you need words that relate to one another through certain rules and that lend themselves to modulation. This lets you generate all sorts of meaningful possibilities from a finite vocabulary.
Consider a really simple sentence: "The girl looks at the boy." In spoken English, the meaning of a sentence is determined by the order of its words. "The girl looks at the boy" has a different meaning from "The boy looks at the girl." This order relationship--subject, verb, object--is an aspect of syntax, the part of grammar that arranges words into sentences. Now zoom in on the word looks. In the lingo of linguists, the smallest meaningful chunks in a word are called morphemes. Looks contains two morphemes: the root look and the -s ending, here signifying person and tense. Logically enough, if you change morphemes, you alter meaning. So, for example, swapping the -s for an -ing at the end of look conveys a more continuous action. "The girl is looking at the boy" has a distinctly different flavor from "The girl looks at the boy." Syntax and morphology--these are two of the fundamental characteristics of a grammar.
American Sign Language (ASL), Bellugi found from studying signers, is also defined by a grammar, but its grammar relies on space, hand shapes, and movement. What a speaker gets across in linear sequences of sounds, a signer communicates in three dimensions. In ASL, for example, the basic sign for look resembles a peace sign, but instead of pointing upward, the two slightly splayed fingers are bent horizontally and extended toward the thing being looked at. To say "The girl looks at the boy," a signer places the sign for girl at one point in space, and the sign for boy at another, then moves the look sign from one point toward the other. (For "The boy looks at the girl" the look sign moves between the two points in the opposite direction.)
Not only does ASL have its own spatial syntax, Bellugi found, it also has its own vivid morphology. To say "The girl is looking at the boy," the look sign is modified by a morphemic change in hand movement: the continuous action of looking is conveyed by moving the hand like a Ferris wheel from the girl toward the boy and back full circle to the girl again. What's more, the face is a fund of grammatical information. Very particular facial movements (a certain way of raising and lowering the eyebrows, of pursing and clenching the lips) fill in information about relative clauses, questions, and adverbial nuances. These facial markers, for want of a better term, go well beyond the scope of the facial expressions universally used to show emotion.
By 1979, in their groundbreaking book The Signs of Language, Bellugi and Klima had come to the conclusion that sign undoubtedly is language. "The surface form of sign is terribly different," says Bellugi. "But the basic stuff, the underlying organization, is the same as for spoken language." As a language, they found, sign is as rich as the spoken variety, and in some ways richer; for those attuned to it, it has a dazzling visual drama unavailable to speech. "I remember one day I came home and said to my husband, 'It's as if there's a stage. There's a very distinct plane in space, right here' "--Bellugi delineates an area in front of her, from waist level to face, and from shoulder to shoulder--" 'where the action is taking place.' " And the "listener," in effect the audience at the play, watches the action unfold before his or her eyes.
One of Bellugi's experiments underscored the power of this visual world. She fixed small lights to the hands of signers, then placed her subjects in a darkened room to obscure the physical details of their hands and faces: when they signed, their words were expressed purely as dancing patterns of light. What looked like firefly flashes to the uninitiated was clearly language to other signers. Even when words were hard to make out, signers recognized distinct grammatical patterns--like the Ferris-wheel motion that conveys continuous action for looking. Bellugi also ascertained that foreign sign languages follow similar grammatical principles, though superficially they look nothing like ASL. Just as spoken Chinese has different sounds, you find different hand and movement combinations in Chinese Sign Language, she explains. "An ASL signer can't understand a Chinese signer. Even the way you close the hand is different." (A common ASL phoneme consists of a fist with the thumb placed on the index finger; its Chinese equivalent looks rather more like a hitchhiking sign.) "In fact, when a Chinese native signer comes to America and learns ASL, he usually signs with an accent." She laughs with delight. "Gives you shivers, doesn't it?"
Showing that sign was language, however, raised even more profound questions. "Obviously a great deal of human evolution has gone into creating language in the spoken mode," says Bellugi. "Spoken language is really what we were designed for." Yet here was a visual language articulated silently in space, and the human brain could clearly encompass it too. "It became a burning question. I wanted to know how sign language is organized in the brain."
By the late 1970s it was already well known that the brain's two halves are specialized for different purposes. The left side dominates when we talk and listen to speech; the right side lets us perceive spatial relationships. But here was an odd duck--a language that was visual and spatial. How did the brain handle that? "It would have been plausible," says Bellugi, "if sign were processed bilaterally, or more in the right hemisphere because of its spatial nature."
To find out, Bellugi adapted techniques classically used to study spoken language, one of which was to observe the language impairments of people who'd suffered brain damage. In this case, though, the sufferers had to be lifelong deaf signers with an injury specific to one or the other hemisphere--"a rare population indeed," says Bellugi. "But by this time we had a nationwide network of some 500 deaf people. They found us deaf signers who had suffered strokes."
These experiments, begun in the eighties and still ongoing, have been full of surprises. One volunteer had been an artist before a stroke damaged part of her brain's right hemisphere. She could still draw, but she could no longer complete the left side of her pictures--a result of an injury to the part of the right brain that governs attention to the left field of vision. Thus she omitted the left side of figures like elephants, houses, and flowers. Yet when asked to sign sentences that included the words for elephants, houses, and flowers, she was as fluent as ever, "speaking" with fully formed gestures and using the space to the left as well as to the right of her body. "Her signing was absolutely impeccable," says Bellugi. "No difficulties whatsoever, perfectly grammatical, using space on both sides."
Other signers with right-brain damage showed similar quirks. Asked to draw the layout of her bedroom, one woman piled all the furniture to the right of her picture, leaving the left side blank. And when she signed, she described all the furniture as being on the right because her sense of what was where in space was skewed. But although her spatial perception was distorted, her signing abilities per se were intact. She could form signs as accurately as ever, even if they involved three-dimensional space on the left and right sides of her body.
The performance of deaf signers who had suffered strokes on the other side of the brain, the left side, was precisely the opposite. In contrast to right-hemisphere-damaged patients, they were able to draw pictures of their bedrooms with all the furniture in its correct location. But they could not sign effectively--their left-hemisphere damage had impaired their language ability.
The conclusion was inescapable: "Sign language, like spoken language, is predominantly processed in the left side of the brain," says Bellugi. When it comes to sign, the supposed visual-spatial/language dichotomy between the right and left brain doesn't hold.
Now Bellugi is finding that patients' impairments vary according to precisely where the stroke occurred in their left brain. For example, one patient had damage to the left frontal lobe that resulted in halting, labored signing; she could not string signs together into sentences. Damage to other left-brain areas in Bellugi's subjects resulted in different problems--difficulty in forming and comprehending signs, errors in syntax and grammar. One woman with a lesion in the middle of her left hemisphere had trouble with her phonemes, the equivalent in spoken language of substituting "gine" for "fine," or "blass" for "glass."
Sometimes--but not always--the site of damage and the resulting deficit correspond to similar damage-deficit relationships in stroke patients whose use of spoken language is impaired. That suggests that some neural networks for signed and spoken language are shared, and some aren't. With luck, similarities and differences like these should allow researchers to pinpoint the systems in the brain that are essential for different aspects of language. "This is the way we're going to trace the neural systems underlying both spoken and sign language," says Bellugi.
Bellugi's conclusion, that language--regardless of form--emanates from the left brain, flies in the face of accepted wisdom concerning the role of the brain's hemispheres. But her views are backed by other evidence from colleagues across the country. One is her Salk Institute neighbor, neuropsychologist Helen Neville (whose lab occupies the floor directly above Bellugi's).
Neville, who works with both deaf and hearing people, chronicles their brain activity while they perform linguistic tasks. During experiments, her volunteers wear a cap studded with electrodes that measure their brain waves as they read and listen to English sentences, or look at ASL sentences signed on a video screen. She, too, has found that "there's a strong biological bias for the left hemisphere to be the language hemisphere." That's not to say there aren't some interesting exceptions. For example, while nearly all right-handers rely on the left brain for language, only two-thirds of left-handers do. Of the other third, half show language activity in the right side of the brain, half on both sides. "But left-handers make up only about 10 percent of the population, so we're not talking about a lot of people," says Neville.
The dominance of one hemisphere or the other is also a function of the age at which a person learns a tongue. It's long been suspected that the earlier you learn a language, the better you learn it. Early acquisition is strongly associated with the left brain. "If you don't learn English until you're 18, you don't show left-hemisphere specialization for English," says Neville. "And if you don't learn ASL until the late teens, you don't show left-hemisphere specialization for ASL. It's dispersed throughout the brain." However, since most people learn their main language during childhood, the left-brain bias remains primary--for both spoken and signed language. "It's really amazing to think that such different forms of language are mediated by similar brain systems," notes Neville. "It suggests an almost uncanny invariance, as though the capacity for language is inherent."
Bellugi agrees that language function seems to be built into the human brain. "The left hemisphere has an innate predisposition for language," she says firmly, "whatever the mode of expression."
That statement goes to the heart of a long-standing debate in linguistic circles. Is language more a product of nature or nurture? In one corner are the "experience" crowd, who contend that learning a language is largely a matter of acquiring a skill, and then practice, practice, practice. The "innate" crowd, on the other hand, point out that children are never taught all the rudiments of language. They are exposed to it haphazardly, in bits and pieces, yet sometime during their second year they begin to talk, displaying a grasp of grammar entirely at odds with the meager language exposure they've had. How can they arrive at so much, having received so little? Because, goes the argument, they have an inborn ability. In effect, language lives within us--it seeks only the opportunity to come out.
Much of the evidence behind the innate school of thought comes from sign. Three years ago, Laura Ann Petitto, a cognitive psychologist at McGill University in Montreal and a former student of Bellugi's, published a captivating study on babbling babies. Deaf babies whose parents sign to them babble just like hearing babies whose parents coo to them--the difference being that deaf infants do their babbling with their hands. Petitto has since shown that hearing children exposed to both speech and sign (because one of their parents is deaf) show no preference for speech. They make babbling sounds and signs, and they go on to learn both speech and sign simultaneously. As for hearing children unexposed to speech (because both their parents are deaf), they learn sign as readily as any deaf child and become fluently bilingual when they're later exposed to speech. What's so striking, says Petitto, is how these babies grasp the essential structure of language, regardless of whether it's spoken or signed. "We humans are born with a mechanism that combs the environment looking for the rhythmic patterns of language, whether these patterns are expressed with the hands or with the tongue."
Further intriguing evidence for a language instinct comes from a study of deaf children in Nicaragua. Before the revolution of 1979, these children were scattered throughout the country--isolated and silenced. Because there's virtually no hereditary deafness in Nicaragua (in contrast to the United States, where 4 to 6 percent of the deaf are children of deaf parents), no sign language tradition had passed from generation to generation. Children used gestures to communicate with hearing relatives, but one child's gestures had little in common with another's.
In 1980 schools for the deaf were established throughout Nicaragua. For the first time, deaf children were thrown together, forming the beginnings of a distinct community. And for the first time, these children began to talk to one another. What transpired is described by Judy Kegl, a behavioral neuroscientist at Rutgers, as "the first documented case of the birth of a language."
The process unfolded like this: Soon after they were brought together, the children devised a shared set of pidgin signs based on gestures they'd used in their families. These signs allowed them to communicate in a rudimentary way but didn't display the properties of grammar that make up a real language. What happened next was magic. "Little kids about the age of three or four got exposed to that makeshift pidgin and absorbed it," says Kegl. "And then, by virtue of their own language-generation capability, they came out with a full-fledged language."
For more than a decade now these children, some 500 of them, have been creating a language from scratch. It displays characteristic rules of grammar such as noun and verb agreement, subject-verb-object sentence construction, and a distinct set of hand-shape and movement building blocks. But in contrast to ASL, which has been handed down for generations, this new language has sprung from nowhere. "There is nothing that they could have used as a model," says Kegl. "It's clear evidence of an innate language capacity."
Could it be, then, that language is in our genes? McGill linguist Myrna Gopnik thinks so. "One or several genetic factors apparently affect the acquisition of language," she says. Gopnik bases her view on the study of otherwise normal, hearing families who display a very specific, inherited language impairment--in essence, the inability to construct and apply grammatical rules. Perhaps up to 3 percent of populations as diverse as Inuit, Greek, Japanese, and American have the problem. Ask these people to formulate the past tense of a simple verb and they can't--or at least not without stopping to run through the gamut of grammatical rules to come up with the appropriate construction. They can't spontaneously produce the right form. Their halting speech suggests a more profound disability, when actually they are normal in every other regard.
Patrick Dunne, a geneticist at Baylor College of Medicine in Houston, is collaborating with Gopnik on a search for the gene or genes that may cause the impairment. If they find it--a big if--it could open the door to understanding normal grammatical processing and possibly point the way to other language genes. ("That's the basis of genetics," says Dunne. "You track down the mutated gene responsible for the problem. Then you can see how it functions in a normal situation.")
All this has flowed from Bellugi's realization that the silent world of sign could let her dissect language in the brain. "She had the novel insight, the brilliant insight, that languages that evolved in the absence of sound are an incredibly powerful avenue into our brain," says Petitto.
"Sign tells us a great deal about the human capacity for language," says Bellugi. "I can say this now after 26 years of research. I've had this drive, this curious drive, to understand language and the brain. And I've gotten deaf people curious along with me. It's just been a very exciting quest."