The patients were presented with 36 words that had a relatively simple consonant-vowel-consonant structure, such as bet, bat, beat, and boot. They were asked to say the words out loud and then to simply imagine saying them. Those instructions were conveyed visually (written on a computer screen) with no audio, and again aurally (spoken aloud) with no video. The electrodes provided a precise map of the resulting neural activity.
Schalk was intrigued by the results. As one might expect, when the subjects vocalized a word, the data indicated activity in the areas of the motor cortex associated with the muscles that produce speech. The auditory cortex and an area in its vicinity long believed to be associated with speech, called Wernicke’s area, were also active.
When the subjects imagined words, the motor cortex went silent while the auditory cortex and Wernicke’s area remained active. Although it was unclear why those areas were active, what they were doing, and what it meant, the raw results were an important start. The next step was obvious: Reach inside the brain and try to pluck out enough data to determine, at least roughly, what the subjects were thinking.
Schmeisser presented Schalk’s data to the Army committee the following year and asked it to fund a formal project to develop a real mind-reading helmet. As he conceived it, the helmet would function as a wearable interface between mind and machine. When activated, sensors inside would scan the thousands of brain waves oscillating in a soldier’s head; a microprocessor would apply pattern-recognition software to decode those waves and translate them into specific words or sentences, and a radio would transmit the message. Schmeisser also proposed adding a second capability to the helmet: detecting the direction in which a soldier was focusing his attention. That function could be used to direct a transmitted thought to a specific comrade or squad, just by looking their way.
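The article does not describe the decoder itself, but the flow Schmeisser sketches (scan the signals, pattern-match them against a small command vocabulary, hand the result to a radio) can be illustrated in rough outline. The sketch below is a hypothetical toy, not the Army system: the sampling rate, frequency band, channel count, vocabulary, and classifier are all assumptions, and the training data is fabricated noise.

```python
# Hypothetical toy pipeline, not the Army system: synthetic multichannel signals,
# an assumed 8-30 Hz band, a made-up 16-channel / 256 Hz recording, a three-word
# command vocabulary, and an off-the-shelf classifier standing in for the
# "pattern recognition software". Training data here is fabricated noise.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression

FS = 256                                        # assumed sampling rate, Hz
VOCAB = ["medevac_now", "enemy_right", "fire"]  # tiny, fixed radio vocabulary

def band_power_features(epoch):
    """Band-pass each channel (8-30 Hz, an assumption) and return log band power."""
    b, a = butter(4, [8 / (FS / 2), 30 / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, epoch, axis=1)
    return np.log(np.mean(filtered ** 2, axis=1))

# Stand-in for recorded, labeled epochs: 60 trials, 16 channels, 2 seconds each.
rng = np.random.default_rng(0)
X = np.array([band_power_features(rng.normal(size=(16, 2 * FS))) for _ in range(60)])
y = rng.integers(0, len(VOCAB), size=60)

clf = LogisticRegression(max_iter=1000).fit(X, y)   # the pattern-recognition step

def decode_and_transmit(epoch):
    """Decode one imagined-speech epoch and hand the phrase to the radio link."""
    word = VOCAB[int(clf.predict(band_power_features(epoch)[None, :])[0])]
    print(f"radio> {word}")                         # stand-in for the transmitter

decode_and_transmit(rng.normal(size=(16, 2 * FS)))
```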
The words or sentences would reach a receiver and be “spoken” into a comrade’s earpiece or played from a speaker, perhaps at a distant command post. The possibilities were easy to imagine:
“Look out! Enemy on the right!”
“We need a medical evacuation now!”
“The enemy is standing on the ridge. Fire!”
Any of those phrases could be life-saving.
This time the committee signed off.
Grant applications started piling up in Schmeisser’s office. To maximize the chance of success, he decided to split the Army funding between two university teams that were taking complementary approaches to the telepathy problem.
The first team, directed by Schalk, was pursuing the more invasive ECoG approach, attaching electrodes beneath the skull. The second group, led by Mike D’Zmura, a cognitive scientist at the University of California, Irvine, planned to use electroencephalography (EEG), a noninvasive brain-scanning technique that was far better suited for an actual thought helmet. Like ECoG, EEG relies on brain signals picked up by an array of electrodes that are sensitive to the subtle voltage oscillations caused by the firing of groups of neurons. Unlike ECoG, EEG requires no surgery; the electrodes attach painlessly to the scalp.
For Schmeisser, this practicality was critical. He ultimately wanted answers to the big neuroscience questions that would allow researchers to capture complicated thoughts and ideas, yet he also knew that demonstrating even a rudimentary thought helmet capable of discerning simple commands would be a valuable achievement. After all, soldiers often communicate in a formulaic, reduced vocabulary. Calling in a helicopter for a medical evacuation, for instance, requires only a handful of specific words.
“We could start there,” Schmeisser says. “We could start below that.” He noted, for instance, that it does not require a terribly complicated message to call for an air strike or a missile launch: “That would be a very nice operational capability.”
The relative ease with which EEG can be applied comes at a price, however. The exact location of neural activity is far more difficult to discern via EEG than with more invasive methods because the skull, scalp, and cerebrospinal fluid surrounding the brain scatter its electric signals before they reach the electrodes. That blurring also makes the signals harder to detect at all. The EEG data can be so messy, in fact, that some of the researchers who signed on to the project harbored private doubts about whether it could really be used to extract the signals associated with unspoken thoughts.
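To make the blurring concrete, here is a toy numerical illustration, not a biophysical model and not drawn from the project itself: a sharp burst of activity at one cortical site is smeared across neighboring scalp electrodes and mixed with sensor noise, so the recording both weakens and loses its clear peak. The channel count, spread width, and noise level are invented.

```python
# Toy numerical illustration of the blurring described above; not a biophysical
# model, and every number (channel count, spread width, noise level) is invented.
import numpy as np

rng = np.random.default_rng(1)
n_channels = 32
source_channel = 10                     # where the activity "really" is

cortical = np.zeros(n_channels)
cortical[source_channel] = 1.0          # sharp, focal activity, as ECoG might record it

# Skull, scalp, and cerebrospinal fluid act roughly like a spatial low-pass filter:
# smear the focal signal over neighboring electrodes with a Gaussian kernel.
positions = np.arange(n_channels)
spread = np.exp(-((positions - source_channel) ** 2) / (2 * 6.0 ** 2))
scalp = spread + rng.normal(0.0, 0.15, n_channels)   # smeared signal plus sensor noise

print("Cortical pattern near the source:", np.round(cortical[8:13], 2))
print("Scalp pattern near the source:   ", np.round(scalp[8:13], 2))
print("Channel picked from the scalp peak:", int(np.argmax(scalp)))
```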
In the initial months of the project, back in 2008, one of D’Zmura’s key collaborators, renowned neuroscientist David Poeppel, sat in his office on the second floor of the New York University psychology building and realized he was unsure even where to begin. With his research partner Greg Hickok, an expert on the neuroscience of language, he had developed a detailed model of how the brain processes audible speech, parts of which were widely cited in textbooks. But there was nothing in that model to suggest how to measure something imagined.
For more than 100 years, Poeppel reflected, speech experimentation had followed a simple plan: Ask a subject to listen to a specific word or phrase, measure the subject’s response to that word (for instance, how long it takes him to repeat it aloud), and then demonstrate how that response is connected to activity in the brain. Trying to measure imagined speech was much more complicated; a random thought could throw off the whole experiment. In fact, it was still unclear where in the brain researchers should even look for the relevant signals.
Solving this problem would call for a new experimental method, Poeppel realized. He and a postdoctoral researcher, Xing Tian, decided to take advantage of a powerful imaging technique called magnetoencephalography, or MEG, to do their reconnaissance work. MEG can provide roughly the same level of spatial detail as ECoG but without the need to remove part of a subject’s skull, and it is far more accurate than EEG.
Poeppel and Tian would guide subjects into a three-ton, beige-paneled room constructed of a special alloy and copper to shield against passing electromagnetic fields. At the center of the room sat a one-ton, six-foot-tall machine resembling a huge hair dryer, which contained scanners capable of recording the minute magnetic fields produced by the firing of neurons. Once a subject was settled in the device, the researchers would ask him to imagine speaking words like athlete, musician, and lunch, and then to imagine hearing them.
When Poeppel sat down to analyze the results, he noticed something unusual. As a subject imagined hearing words, his auditory cortex lit up the screen in a characteristic pattern of reds and greens. That part was no surprise; previous studies had linked the auditory cortex to imagined sounds. However, when a subject was asked to imagine speaking a word rather than hearing it, the auditory cortex flashed an almost identical red and green pattern.
Poeppel was initially stumped by the results. “That is really bizarre,” he recalls thinking. “Why should there be an auditory pattern when the subjects didn’t speak and no one around them spoke?” Over time he arrived at an explanation. Scientists had long been aware of an error-correction mechanism in the brain associated with motor commands. When the brain issues a command to, for instance, reach out and grab a cup of water, it also generates an internal prediction, known as an efference copy, of what the resulting movement will look and feel like. That way, the brain can check the actual sensory feedback against what it intended and make any necessary corrections.
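A minimal sketch of that idea, assuming a toy one-dimensional movement with made-up gains: a copy of each outgoing command feeds a forward model that predicts the sensory result, and the mismatch between prediction and actual feedback drives correction. When an action is merely imagined, the prediction can still be generated even though no feedback arrives, which is consistent with the auditory activity Poeppel observed.

```python
# Illustrative toy of the efference-copy / forward-model idea described above.
# The "plant" (what actually happens) and all gains are invented for the sketch;
# nothing here is taken from Poeppel's experiments.

def forward_model(command):
    """Predict the sensory consequence of a motor command (assumed linear map)."""
    return 0.9 * command            # the brain's imperfect internal estimate

def plant(command):
    """What the body and world actually do when the command is executed."""
    return 1.1 * command + 0.05     # real outcome differs slightly from the prediction

target = 1.0     # e.g., the hand at the cup, or the intended sound of a word
command = 1.0
for step in range(4):
    predicted = forward_model(command)   # derived from the efference copy
    actual = plant(command)              # sensory feedback; absent when merely imagining
    mismatch = actual - predicted        # "surprise" the brain can use to correct
    command += 0.5 * (target - actual)   # corrective update toward the goal
    print(f"step {step}: predicted={predicted:.2f} actual={actual:.2f} mismatch={mismatch:+.2f}")
```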
Poeppel believed he was looking at an efference copy of speech in the auditory cortex. “When you plan to speak, you activate the hearing part of your brain before you say the word,” he explains. “Your brain is predicting what it will sound like.”