It makes intuitive sense, Koch says, that our brains should dedicate some cells to people and things frequently in our thoughts. He adds that his findings might seem less surprising if one realizes that neurons are much more than simple threshold switches that fire whenever incoming pulses from other neurons exceed a certain level. A typical neuron receives input from thousands of other cells, some of which inhibit rather than encourage the neuron’s firing. The neuron may in turn encourage or suppress firing by some of those same cells in complex positive or negative feedback loops.
In other words, a single neuron may resemble less a simple switch than a customized minicomputer, sophisticated enough to distinguish your grandmother from Grandma Moses. If this view is correct, meaningful messages might be conveyed not just by hordes of neurons screaming in unison but by a small group of cells whispering, perhaps in a terse temporal code. Discerning such faint signals within the cacophony of the brain will be “incredibly difficult,” Koch says, no matter how far neurotechnology advances.
Efforts to detect the whispers amid the cacophony are further complicated by the improvisational dexterity of the brain. Studies of the motor cortex, which underlies body movement, have shown that the brain invents entirely new coding schemes for novel situations. In the 1980s researchers discovered neurons in a monkey’s motor cortex that peaked in their firing rate when the monkey moved its hand in a specific direction. Rather than falling silent when the hand diverged even slightly from a cell’s so-called preferred direction, the cells kept firing, their rate diminishing in proportion to the angle of divergence.
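The directional tuning described above is often summarized, in textbook treatments of this work, as a smooth "cosine tuning" curve: a cell fires fastest for its preferred direction and progressively less as the movement angle swings away. A minimal sketch, with illustrative parameter values that are not taken from any particular study:

```python
import numpy as np

def firing_rate(theta, preferred, baseline=20.0, gain=15.0):
    """Cosine-tuning model of a motor cortex cell: the firing rate
    peaks when the movement angle matches the cell's preferred
    direction and falls off smoothly as the angle diverges.
    Baseline and gain values here are illustrative only."""
    return baseline + gain * np.cos(theta - preferred)

# A hypothetical neuron that prefers movements at 0 radians:
angles = np.deg2rad([0, 45, 90, 180])
rates = firing_rate(angles, preferred=0.0)
# The rate is highest at 0 degrees and lowest at 180 degrees.
```

Note that the cell never goes silent: even at 180 degrees from its preferred direction it still fires at a reduced rate, which is what lets a population of such cells jointly encode any direction.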
Several teams, including one led by Andrew Schwartz of the University of Pittsburgh, have sought to exploit these findings to create neural prostheses for paralyzed patients. These teams have demonstrated that electrodes implanted in a monkey’s motor cortex can detect signals accompanying a specific arm movement; these same signals, after being processed by a computer, can be used to manipulate a robot arm that might be in another room—or even, in one experiment, a robot’s legs on another continent. If the monkey’s arm is tied down, the monkey learns to control the robot arm through pure thought—but with an entirely different set of neural signals. In a June 2008 study published in Nature, Schwartz and his team used monkeys’ cortical signals to control a multijointed prosthetic device as it interacted with the physical environment. With thoughts alone, the monkeys were able to maneuver the mechanical arm to reach for and grab food located in front of them. The food reached their mouths about two-thirds of the time.
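The computer processing mentioned above is commonly implemented, in textbook accounts of this line of research, as a "population vector": each cell votes for its preferred direction with a strength proportional to how far its firing rate sits above its average, and the votes are summed. This is a hedged sketch of that general idea, not the actual decoder used by Schwartz's team:

```python
import numpy as np

def population_vector(rates, preferred_dirs):
    """Estimate the intended movement direction from a population of
    direction-tuned cells. Each cell contributes a unit vector along
    its preferred direction, weighted by its firing rate relative to
    the population mean. Illustrative, not the Schwartz lab's code."""
    weights = rates - rates.mean()
    vecs = np.column_stack([np.cos(preferred_dirs), np.sin(preferred_dirs)])
    v = weights @ vecs          # weighted sum of preferred-direction vectors
    return np.arctan2(v[1], v[0])  # decoded movement angle in radians

# Simulate eight cosine-tuned cells responding to a movement at 1.0 rad:
prefs = np.linspace(0, 2 * np.pi, 8, endpoint=False)
rates = 20.0 + 15.0 * np.cos(1.0 - prefs)
decoded = population_vector(rates, prefs)  # close to 1.0
```

With evenly spaced preferred directions, the decoded angle recovers the simulated movement direction almost exactly; real decoders must also cope with noise, uneven tuning, and the drifting codes described next.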
These findings dovetail with others showing that neurons’ coding behavior shifts in different contexts. “What you’re aiming at is sort of a moving target,” Schwartz explains. “If you make an estimate of something at one point in time, that doesn’t mean it’s going to stay that way.”
The mutability of the neural code is not necessarily bad news for neural-prosthesis designers. In fact, the brain’s capacity for inventing new information-processing schemes is thought to explain the success of artificial cochleas, which have been implanted in the ears of approximately 100,000 hearing-impaired people around the world in the past few decades. Commercial versions typically employ an array of electrodes, each of which channels electrical signals corresponding to a different pitch toward the auditory nerve. Like an old telephone party line, the electrodes can stimulate not just a single neuron but many simultaneously.
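The tonotopic scheme described above can be caricatured as a filterbank: the incoming sound is split into frequency bands, and the energy in each band drives one electrode. A toy sketch with made-up band edges, using a plain FFT rather than the analog filter chains in real commercial devices:

```python
import numpy as np

def channel_energies(signal, sample_rate, band_edges):
    """Toy model of a cochlear implant's channel mapping: split a
    sound into frequency bands and report the energy in each band,
    the way each electrode carries a different pitch range.
    Band edges are illustrative, not from any real device."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return [float(spectrum[(freqs >= lo) & (freqs < hi)].sum())
            for lo, hi in zip(band_edges[:-1], band_edges[1:])]

# A pure 440 Hz tone should light up mainly the 300-600 Hz channel:
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
energies = channel_energies(tone, sr, [0, 300, 600, 1200, 2400, 4000])
```

Even this crude five-channel division conveys surprisingly usable pitch information, which is consistent with the article's point: the implant's signals are coarse, and the brain does the remaining work of adaptation.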
When cochlear implants were introduced in the mid-1980s, many neuroscientists expected them to work poorly, given their crude design. But the devices work well enough for some deaf people to converse over the telephone, particularly after an adjustment period during which channel settings are tweaked to provide the best reception. Patients’ brains somehow figure out how to make the most out of the strange signals.
There are surely limits to the brain’s ability to make up for scientists’ ignorance, as the poor performance of other neural prostheses suggests. Artificial retinas, light-sensitive chips that mimic the eye’s signal-processing ability and stimulate the optic nerve or visual cortex, have been tested in a handful of blind subjects who usually “see” nothing more than phosphenes, or flashes of light. And like Schwartz’s monkeys, a few paralyzed humans have learned to transmit commands to computers via chips embedded in their brains, but the associated prostheses are still slow and unreliable.
Nevertheless, the surprising effectiveness of artificial cochleas—together with other evidence of the brain’s adaptability and opportunism—has fueled optimism about the prospects for brain-machine interfaces. “This is very relevant to why we think we are going to be successful,” says Ted Berger of the University of Southern California in Los Angeles, who is leading a project to create implantable brain chips that can restore or enhance memory. “We don’t need a perfectly accurate model of a memory cell,” he says. “We probably just have to be close, and the rest of the brain will adapt around it.”
Berger’s experiments use slices of rat brain in petri dishes. For more than a decade, he has embedded electrodes in slices of the hippocampus—which plays a role in learning and memory—and recorded neurons’ responses to a wide range of electrical stimuli. His observations have made him a firm believer in temporal codes; hippocampal cells seem to be exquisitely sensitive not only to the rate but also to the timing of incoming pulses. “The evidence for temporal coding is indisputable,” Berger says.
He and his team have created a prototype of the world’s first memory implant chip. To create the chip, Berger bombarded live rat hippocampal neurons with electric impulses and recorded the electrical responses from the cells, collecting the “vocabulary” of the tissue. This information was programmed into the chip, enabling it to “listen” to brain signals, decode them, and respond appropriately with its own chain of electrical pulses, just as a network of neurons would. In experiments, the chip responds to neural signals in exactly the same way as the living brain tissue does, suggesting that the chip may have the ability to communicate with living brain cells.
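The record-then-replay procedure described above can be caricatured as learning an input-to-output "vocabulary." The sketch below is a drastic simplification, assuming a simple lookup from stimulus pattern to recorded response; Berger's actual chip encodes nonlinear dynamical models of the tissue, not a table:

```python
class InputOutputChip:
    """Caricature of Berger's approach: record the live tissue's
    response to each stimulus pattern, then stand in for the tissue
    by replaying the learned mapping. (The real chip fits nonlinear
    models of hippocampal dynamics, not a lookup table.)"""

    def __init__(self):
        self.vocabulary = {}

    def record(self, stimulus, response):
        # Observe what the live tissue does for this input pattern.
        self.vocabulary[tuple(stimulus)] = tuple(response)

    def respond(self, stimulus):
        # Emit the response the tissue would have given, if known.
        return self.vocabulary.get(tuple(stimulus))
```

The caricature makes Berger's own caveat concrete: such a device transforms inputs into outputs without ever knowing what any pattern means.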
Berger boldly predicts that someday chips like his might restore memory capacity to stroke victims. But in some respects Berger is quite modest. He acknowledges that his memory chip could not be used to identify and manipulate specific memories. Rather, it can simulate “how neurons in a particular part of the brain change inputs into outputs. That’s very different from saying that I can identify a memory of your grandmother in a particular series of impulses.” To achieve this sort of mind reading, scientists would have to compile a “dictionary” for translating specific neural patterns into specific memories, perceptions, and thoughts. “I don’t know that it’s not possible,” Berger says. “It’s certainly not possible with what we know at the moment.”
“Don’t count on it in the 21st century, or even in the 22nd,” says neuroscientist Bruce McNaughton of the University of Arizona. With arrays of as many as 50 electrodes, McNaughton has monitored neurons in the hippocampus of rats as they move through a maze. Once a rat learns to navigate a maze, its neurons discharge in the same patterns whenever it goes through it. Remarkably, when the rat sleeps after a hard day of maze running, the same firing pattern often unfolds; the rat is presumably dreaming of the maze. This pattern could be said to represent—at least partially—the rat’s memory of the maze.
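Claims that "the same firing pattern often unfolds" during sleep are typically quantified by correlating the sleeping population's activity against the waking template. A minimal sketch of that comparison, using a plain Pearson correlation (real replay analyses are considerably more elaborate):

```python
import numpy as np

def pattern_similarity(template, observed):
    """Pearson correlation between two population firing-rate
    vectors: a simple way to score how closely a pattern recorded
    during sleep 'replays' the pattern recorded while the rat ran
    the maze. Illustrative, not McNaughton's actual analysis."""
    t = template - template.mean()
    o = observed - observed.mean()
    return float(t @ o / (np.linalg.norm(t) * np.linalg.norm(o)))
```

A score near 1 indicates a faithful replay of the maze-running pattern; a score near zero or below indicates unrelated activity. McNaughton's point is that even a high score is specific to one rat, one maze, one room.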
McNaughton emphasizes that the same maze generates different firing patterns in different rats; even in the same rat, the pattern changes if the maze is moved to a different room. He thus doubts whether science can compile a dictionary for decoding the neural signals corresponding to human memories, which are surely more complex, variable, and context sensitive than those of rats. At best, McNaughton suggests, one might construct a dictionary for a single person by monitoring the output of all her neurons for years while recording all her behavior and her self-described thoughts. Even then, the dictionary would be imperfect, and it would have to be constantly revised to account for the individual’s ongoing experiences. This dictionary would not work for anyone else.
Delgado hinted at the problem more than 30 years ago in Physical Control of the Mind when he raised the knotty question of meaning. With improved stimoceivers and a better understanding of the neural code, he said, scientists might determine what we are perceiving—a piece of music, say—based on our neural output. But no conceivable technology will be subtle enough to discern all the memories, emotions, and meanings aroused in us by our perceptions, because these emerge from “the experiential history of each individual.” You hear a stale pop tune, I hear my wedding song.
This is one point on which many neuroscientists agree: The uniqueness of each individual represents a fundamental barrier to science’s attempts to understand and control the mind. Although all humans share a “universal mode of operation,” Freeman says, even identical twins have divergent life histories and hence unique memories, perceptions, and predilections. The patterns of neural activity underpinning our selves keep changing throughout our lives as we learn to play checkers, read Thus Spake Zarathustra, fall in love, lose a job, win the lottery, get divorced, take Prozac.
Freeman thinks the prospects are good for developing relatively simple neural prostheses, such as devices that improve vision in the blind. But he suspects that our brains’ complexity and diversity rule out more ambitious projects, such as mind reading. If artificial-intelligence engineers ever succeed in building a truly intelligent machine based on a neural coding scheme similar to ours, “we won’t be able to read its mind either,” Freeman says. We and even our cyborg descendants will always be “beyond Big Brother, and I’m very grateful for that.”