As everyone who buys a computer learns, electrical engineers somehow manage to keep creating smaller and smaller microprocessors that double the speed of PCs about every 18 months. Keeping perspective can be difficult, but consider that the power of the first room-sized mainframe computer of 36 years ago is now dwarfed by any run-of-the-mill laptop. So what's wrong with this picture? Biomedical engineer William Ditto points out that today's processors may be a lot faster, but they're not a bit smarter than they were 40 years ago.
The dream of artificial intelligence that would allow a computer to learn, and thus get really smart, has proven to be something of a nightmare so far. That failure has led Ditto and his team of researchers at the Georgia Institute of Technology and Emory University to look beyond silicon and even beyond light chips. "First there were beads on an abacus, then vacuum tubes and integrated circuits," says Ditto. "Now we can use living tissue."
These days his neurons of choice are taken from leeches because "they are really big and easy to use." And they learn quickly. Not long ago, Ditto and his team coached two living leech neurons to perform very simple addition: a humble beginning, but one that might lead to harnessing millions of similar neurons into a computer that solves problems using the nonlinear pattern-finding logic of the human brain. Although seemingly far-fetched, Ditto's belief that neurons could be the basis of the next great computer wave can be infectious. "Bill is our spiritual leader," says Georgia Tech neuroengineer and collaborator Steve DeWeerth.
Brains derive awesome problem-solving abilities from two characteristics of their individual cells. First, a neuron can be in any one of thousands of different states, allowing it to store more information than a transistor, which has only two states, on and off. Second, neurons can choose which other neurons to talk to by rearranging their own synaptic connections. Neurobiologists call this self-organization.
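The storage advantage of a many-state cell can be put in numbers: an element with n distinguishable states carries log2(n) bits. A minimal sketch, assuming a hypothetical neuron with 4,096 distinguishable states (the article says only "thousands"):

```python
import math

def capacity_bits(n_states: int) -> float:
    """Information capacity, in bits, of one element with
    n_states distinguishable states: log2(n_states)."""
    return math.log2(n_states)

print(capacity_bits(2))     # transistor, two states: 1.0 bit
print(capacity_bits(4096))  # hypothetical 4,096-state neuron: 12.0 bits
```

So a single many-state cell can, in principle, stand in for a dozen or more transistors before any learning even begins.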
Although scientists have developed software that attempts to mimic the brain's learning process using only the yes-no binary logic of digital computers, all the connections in a personal computer are wired back at the factory. Breaking a single one of these connections usually crashes the computer.
That is not a problem for a neurocomputer. "Dynamic chaotic systems like these naturally self-organize," Ditto says. Take the human heart. An isolated heart neuron simply sparks chaotically, without apparent intelligence. But when it is a part of the neuronal network in a living heart, it synchronizes with all the other neurons to create a steady heartbeat. A neurocomputer might work in a similar way. If a computer programmer could pose a problem to a collection of neurons, such as "create a regular heartbeat," the neurons might then figure out through trial and error how to rewire their own circuits to produce a steady rhythmic beat.
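The heartbeat picture, where independent sparking cells fall into one rhythm once they are coupled, can be sketched with a toy model of coupled oscillators. The Kuramoto model used here is an assumption, not anything the article names; it just shows that phases which start scattered pull together once the coupling is strong enough:

```python
import math
import random

def simulate(k, n=10, steps=2000, dt=0.01, seed=0):
    """Toy Kuramoto model: n oscillators with slightly different
    natural frequencies, coupled with strength k. Returns the order
    parameter r in [0, 1]; r near 1 means the group is synchronized."""
    rng = random.Random(seed)
    omega = [1.0 + 0.1 * rng.uniform(-1, 1) for _ in range(n)]  # natural rates
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]     # scattered phases
    for _ in range(steps):
        pull = [sum(math.sin(tj - ti) for tj in theta) / n for ti in theta]
        theta = [ti + dt * (wi + k * pi)
                 for ti, wi, pi in zip(theta, omega, pull)]
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

print(simulate(k=0.0))  # uncoupled: phases stay disordered
print(simulate(k=2.0))  # coupled: r climbs toward 1, a steady shared beat
```

No oscillator is told the rhythm; the regular beat emerges from the coupling alone, which is the self-organization Ditto is counting on.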
Of course, figuring out how to pose complex questions to neurons is a monumental programming challenge. Neurons speak a terrifically complicated language. Each "word" in the neuron lexicon is a repeatable pattern of electrical impulses. And when neurons talk to each other, these electric words are transmitted across synapses, electrical connections that link neurons into a network. Each synaptic connection can have as many as 200,000 channels, and every channel carries information about a different aspect of cell life, a bit like the way your television simultaneously receives cable programming on different channels.
Until a few years ago, untangling and interpreting so many intercellular conversations seemed impossible: Imagine trying to translate every word said by news anchors broadcasting entirely in Latin over 200,000 channels of cable television. Ironically, it's the advancing power of the modern digital computer that has made such problems solvable. Using the speed of microprocessors to crunch differential equations, Eve Marder, a neurobiologist at Brandeis University, has developed a computer program called the dynamic clamp that can translate neuron-speak in real time.
Electrical impulses are transmitted to the computer through probes inserted into the neuron. The dynamic clamp "reads the cell's voltage, then uses the voltage and an equation that calculates the current that would flow at that voltage," says Marder. Then it computes and generates a response that is sent back through the probe. By controlling the strength of the reply pulse, the program mimics a neuron conduction channel, and the neuron reacts as if it were communicating with another neuron, not a computer.
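One cycle of that read-compute-inject loop can be sketched in a few lines. The conductance equation below (a simple ohmic channel, I = g × (V − E_rev)) and all numeric values are illustrative assumptions; the real dynamic clamp solves differential equations for each simulated channel in real time:

```python
def clamp_current(v_membrane_mv, g_ns=10.0, e_rev_mv=-70.0):
    """Current (pA) the clamp injects to mimic a simple ohmic channel
    with conductance g_ns (nS) and reversal potential e_rev_mv (mV):
    I = g * (V - E_rev). Values are hypothetical."""
    return g_ns * (v_membrane_mv - e_rev_mv)

# One cycle: read the cell's voltage through the probe, compute the
# current a real channel would pass at that voltage, inject the reply.
voltage = -55.0                # mV, as if just read from the neuron
print(clamp_current(voltage))  # 150.0 pA sent back through the probe
```

From the neuron's side, that injected current is indistinguishable from a message arriving across a real synapse.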
Ditto's team uses the dynamic clamp in the opposite way: to give orders. The computer sends a stimulating electric signal to the neuron, thereby instructing the cell which state to adopt. To add a pair of numbers, for example, Ditto "tells" two cells to go into states corresponding to two numbers. The two cells are then electrically linked through the computer and told to "add." They reply with the answer. Even Ditto admits that this is a very simple success. "We have loaded the deck," he says. "We know the information is there."
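The logic of that experiment, stripped of all the biology, is a three-step protocol: command each cell into a state, link the cells, read out the combined state. A purely schematic sketch (real neurons are driven into states by electrical stimulation, not assignment):

```python
class ToyNeuron:
    """Stand-in for a leech neuron whose state the computer can set."""
    def __init__(self):
        self.state = 0

    def stimulate(self, value):
        # The dynamic clamp "tells" the cell which state to adopt.
        self.state = value

a, b = ToyNeuron(), ToyNeuron()
a.stimulate(3)                 # first operand
b.stimulate(4)                 # second operand
answer = a.state + b.state     # cells linked through the computer: "add"
print(answer)                  # 7
```

As Ditto concedes, the deck is loaded: the computer supplies the operands, the link, and the readout, and the neurons contribute only the states.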
Next the team wants to build a neurocomputer sophisticated enough to learn tasks, such as how to move the legs of a robot walking over a boulder-strewn landscape or to recognize abstract spatial patterns, including stick-figure drawings of people. Either accomplishment will be difficult to pull off.
"Compared to learning how to walk, calculus is easy," DeWeerth says. And harder problems require more neurons. "We need hundreds of thousands of neurons to solve these complicated tasks," he says. Which presents a major challenge: "How do we program them all?"
In one sense, it should be easy. "Very simple rules can generate complex behavior," Ditto says. Forager ants, for example, create elaborate civilizations out of a mere handful of very simple rules. But how do you figure out the fundamental set of simple rules?
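A classic demonstration that a handful of rules can yield complex behavior is Langton's ant, used here as an assumption of my own rather than anything from the article (which cites forager ants): two rules, turn one way on a white square and the other way on a black one, flipping each square as it goes, produce thousands of steps of chaotic wandering before a regular "highway" pattern emerges:

```python
def langton(steps):
    """Run Langton's ant for the given number of steps on an
    infinite grid; return how many squares end up black."""
    black = set()
    x = y = 0
    dx, dy = 0, -1                   # facing "up"
    for _ in range(steps):
        if (x, y) in black:
            dx, dy = -dy, dx         # on black: turn one way
            black.discard((x, y))    # flip square to white
        else:
            dx, dy = dy, -dx         # on white: turn the other way
            black.add((x, y))        # flip square to black
        x, y = x + dx, y + dy        # step forward
    return len(black)

print(langton(11000))  # after ~10,000 chaotic steps, a regular highway forms
```

The hard part Ditto faces is the inverse problem: the rules here were handed to us, but for a neurocomputer someone has to discover which simple rules produce the wanted behavior.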
It is a question that may never have to be answered. "We don't know how a biological system self-organizes," DeWeerth says, "but we might not have to understand it to exploit it." Instead of linking every neuron via computer, the team plans to connect a computer to a small number of neurons and allow them to communicate with a much larger network of neurons. The computer interface will stimulate the neurocomputer in the same way that our eyes, ears, noses, and hands provide sensory stimulation to our brains. By sending information and feedback through the interface, "we will teach the neurons to make the right connections themselves," DeWeerth says.
Constant repetition may be the key. "The brain adapts continuously, so we keep getting better at tasks that we repeat," DeWeerth says. For example, when a novice tennis player lofts a ball above his head and hits it, the brain gradually learns to coordinate the muscles needed to serve the ball. But teaching neurons takes time. Just imagine how many serves Pete Sampras had to hit before he won at Wimbledon.
Fortunately, neurons love to practice, so Ditto's team is working hard to get them started. "We are now gearing up to use two- and three-dimensional pieces of neural tissue for computing," he says. Within seven years, he hopes to teach a millimeter-sized cube of neurons to do arithmetic and recognize patterns. Because it is impossible to insert a computer moderator between all the different nerve layers, this will be the first attempt at letting the neurons make their own interconnections. Ditto acknowledges that "there are still lots of engineering headaches. But once you get the neurons started, you almost can't stop them from computing."
For more about computing with leeches, see www.physics.gatech.edu/chaos