In search of a master neural pattern, Modha discovered that European researchers had come up with a mathematical description of what appeared to be the same circuit Sur had investigated in ferrets, this time in cats.
If you unfolded the cat cortex and unwrinkled it, you would find the same six layers repeated again and again. When connections were drawn between different groups of neurons in the different layers, the resulting diagrams looked an awful lot like electrical circuit diagrams.
Modha and his team began programming an artificial neural network that drew inspiration from these canonical circuits and could be replicated multiple times. The first step was determining how many of these virtual circuits they could link together and run at once on IBM’s traditional supercomputers.
Would it be possible to reach the scale of a human cortex?
At first, Modha and his team hit a wall at about 40 percent of the number of neurons in the mouse cerebral cortex: roughly 8 million neurons, with 6,300 synaptic connections apiece. The truncated circuitry limited the learning, memory and creative intelligence their simulation could achieve.
So they turned back to neuroscience for solutions. The actual neurons in the brain, they realized, only become a factor in the organ’s overall computational process when they are activated. When inactive, neurons simply sit on the sidelines, expending little energy and doing nothing. So there was no need to update the relationships among all 8 million neurons 1,000 times a second. Doing so only slowed the system down.
Instead, they could emulate the brain by instructing the computer to focus attention only on neurons that had recently fired and were thus most likely to fire again. With this adjustment, the speed at which the supercomputer could simulate a brain-based system increased a thousandfold. By November 2007, Modha had simulated a neural network on the scale of a rat cortex, with 55 million neurons and 442 billion synapses.
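The event-driven shortcut described above can be sketched in a few lines of Python. This is a hypothetical toy model, not IBM's actual simulator: the network size, threshold and wiring are invented for illustration. The key idea is that each tick touches only the neurons that just received a spike, rather than sweeping all of them.

```python
import random

# Toy event-driven spiking network: on each tick, only neurons that
# just received input are examined. All parameters are illustrative.
N = 1000            # number of neurons
THRESHOLD = 3       # input needed to fire
random.seed(0)

# Each neuron connects to a few random targets.
targets = [random.sample(range(N), 5) for _ in range(N)]
potential = [0] * N  # accumulated input per neuron

def step(active):
    """Advance one tick, touching only neurons that just received input."""
    fired = []
    for n in active:
        if potential[n] >= THRESHOLD:
            fired.append(n)
            potential[n] = 0        # reset after firing
    next_active = set()
    for n in fired:
        for t in targets[n]:
            potential[t] += 1       # deliver a spike
            next_active.add(t)      # only these need checking next tick
    return next_active

# Seed a few neurons with suprathreshold input and run a few ticks.
active = set(range(10))
for n in active:
    potential[n] = THRESHOLD
for _ in range(5):
    active = step(active)
```

Because quiet neurons are never visited, the work per tick scales with activity rather than with network size, which is the source of the thousandfold speedup described above.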
Two years later his team scaled it up to the size of a cat brain, simulating 1.6 billion neurons and almost 9 trillion synapses. Eventually they scaled the model up to simulate a system of 530 billion neurons and 100 trillion synapses, a crude approximation of the human brain.
Building a Silicon Brain
The researchers had simulated hundreds of millions of repetitions of the kind of canonical circuit that might one day enable a new breed of cognitive computer. But it was just a model, running at a maddeningly slow speed on legacy machines that could never be brainlike, never step up to the cognitive plate.
In 2008, the federal Defense Advanced Research Projects Agency (DARPA) announced a program aimed at building the hardware for an actual cognitive computer. The first grant funded the creation of an energy-efficient chip that would serve as the heart and soul of the new machine, a dream come true for Modha.
With DARPA’s funding, Modha unveiled his new, energy-efficient neural chips in summer 2011. Key to the chips’ success was their processors, chip components that receive and execute instructions for the machine. Traditional computers contain a small number of very fast processors (modern laptops usually have two to four processor cores on a single chip) that are almost always working. Every millisecond, these processors scan millions of electrical switches, monitoring and flipping thousands of circuits between two possible states, 1 and 0: activated or not.
To store the patterns of ones and zeros, today’s computers use a separate memory unit. Electrical signals are conveyed between the processor and memory over a pathway known as a memory bus. Engineers have increased the speed of computing by shortening the length of the bus.
Some servers can now loop from memory to processor and back around a few hundred million times per second. But even the shortest buses consume energy and create heat, requiring lots of power to cool.
The brain’s architecture is fundamentally different, and a computer based on the brain would reflect that. Instead of a small number of large, powerful processors working continuously, the brain contains billions of relatively slow, small processors — its neurons — which consume power only when activated. And since the brain stores memories in the strength of connections between neurons, inside the neural net itself, it requires no energy-draining bus.
The processors in Modha’s new chip are the smallest units of a computer that works like the brain: Every chip contains 256 very slow processors, each one representing an artificial neuron. (By comparison, a roundworm brain consists of about 300 neurons.) Only activated processors consume significant power at any one time, making energy consumption low.
But even when activated, the processors need far less power than their counterparts in traditional computers because the tasks they are designed to execute are far simpler: Whereas a traditional computer processor is responsible for carrying out all the calculations and operations that allow a computer to run, Modha’s tiny units only need to sum up the number of signals received from other virtual neurons, evaluate their relative weights and determine whether there are enough of them to prompt the processor to emit a signal of its own.
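The simple task each processor performs, summing weighted incoming signals and comparing the total against a threshold, resembles a textbook artificial neuron. A minimal sketch in Python (the function name, weights and threshold are illustrative; the chip's actual arithmetic is not described here):

```python
def neuron_output(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of incoming signals
    crosses the threshold; otherwise stay silent (return 0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Three incoming lines with different synaptic weights.
print(neuron_output([1, 1, 0], [0.6, 0.5, 0.9], threshold=1.0))  # fires: 1.1 >= 1.0
print(neuron_output([1, 0, 0], [0.6, 0.5, 0.9], threshold=1.0))  # silent: 0.6 < 1.0
```

Compared with the arbitrary instruction streams a conventional processor must handle, this fixed sum-and-compare operation needs very little circuitry, which is why each of the 256 units can be so small and frugal.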
Modha has yet to link his new chips and their processors in a large-scale network that mimics the physical layout of a brain. But when he does, he is convinced that the benefits will be vast. Evolution has invested the brain’s anatomy with remarkable energy efficiencies by positioning those areas most likely to communicate closer together; the closer neurons are to one another, the less energy they need to push a signal through. By replicating the big-picture layout of the brain, Modha hopes to capture these and other unanticipated energy savings in his brain-inspired machines.
He has spent years poring over studies of long-distance connections in the rhesus macaque monkey brain, ultimately creating a map of 383 different brain areas, connected by 6,602 individual links. The map suggests how many cognitive computing chips should be allocated to the different regions of any artificial brain, and which other chips they should be wired to.
For instance, 336 links begin at the main vision center of the brain. An impressive 1,648 links emerge from the frontal lobe, which contains the prefrontal cortex, the seat of decision-making and cognitive thought. As with a living brain, the neural computer would have most connections converging on a central hub.
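How such a connectivity map might guide hardware allocation can be sketched as a simple graph computation. The region names and link counts below are invented miniatures, not Modha's actual data (the real macaque map has 383 regions and 6,602 links):

```python
from collections import defaultdict

# Hypothetical miniature map: directed links between brain regions.
links = [
    ("V1", "V2"), ("V1", "MT"), ("V2", "MT"),
    ("PFC", "V1"), ("PFC", "MT"), ("PFC", "V2"),
    ("MT", "PFC"),
]

# Count outgoing links per region.
out_degree = defaultdict(int)
for src, dst in links:
    out_degree[src] += 1

# Regions with more outgoing links would receive more chips and wiring.
allocation = sorted(out_degree.items(), key=lambda kv: -kv[1])
print(allocation)  # the PFC-like hub tops the list, as in the real map
```

In this toy version, counting out-degrees immediately surfaces the hub region, mirroring how the real map highlights the frontal lobe's outsized share of connections.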
Of course, even if Modha can build this brainiac, some question whether it will have any utility at all. Geoff Hinton, a leading neural network theorist, argues the hardware is useless without the proper “learning algorithm” spelling out which factors change the strength of the synaptic connections and by how much. Building a new kind of chip without one, he argues, is “a bit like building a car engine without first figuring out how to make an explosion and harness the energy to make the wheels go round.”
But Modha and his team are undeterred. They argue that their cognitive chips will complement traditional computers, offering vast savings in energy and letting capacity grow by leaps and bounds. The need grows more urgent by the day: By 2020, the world will generate 14 times the amount of digital information it did in 2012. Only when computers can spot patterns and make connections on their own, says Modha, will that deluge be manageable.
Creating the computer of the future is a daunting challenge. But Modha learned long ago, halfway across the world as a teen scraping the paint off chairs, that if you tap the power of the human brain, there is no telling what you might do.
[This article originally appeared in print as "Mind in the Machine."]