(Inside Science) — Artificial brains may need deep sleep to remain stable, a new study finds, much as real brains do.
In the artificial neural networks now used for everything from identifying pedestrians crossing streets to diagnosing cancers, components dubbed neurons are fed data and cooperate to solve a problem, such as recognizing images. The network repeatedly adjusts the connections between its neurons and tests whether these new patterns of behavior are better at solving the problem. Over time, the network discovers which patterns seem best at computing solutions and adopts them as defaults, mimicking the process of learning in the human brain.
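As a rough illustration of that adjust-and-test loop (a sketch for readers, not the study's code), here is a single artificial neuron trained by gradient descent in Python; all names and values below are illustrative assumptions:

```python
# A minimal sketch of the learning loop described above, using NumPy.
# Everything here is illustrative; it is not the study's code.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn to map 2-D inputs to a known target output.
X = rng.normal(size=(100, 2))          # input data
y = X @ np.array([1.5, -2.0]) + 0.5    # the "correct" answers to learn

w = np.zeros(2)   # connection strengths ("interactions between neurons")
b = 0.0

for step in range(500):
    pred = X @ w + b                   # the network's current answers
    err = pred - y                     # how wrong those answers are
    # Adjust the connections in the direction that reduces the error,
    # then repeat -- the "adjust and test" cycle of learning.
    w -= 0.01 * (X.T @ err) / len(X)
    b -= 0.01 * err.mean()

print(w, b)  # converges toward [1.5, -2.0] and 0.5
```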
In most artificial neural networks, a neuron's output is a number that varies continuously as its input changes. This is roughly analogous to the rate at which a biological neuron fires signals over a span of time.
In contrast, in a spiking neural network, a neuron "spikes," or generates an output signal, only after it receives a certain number of input signals over a given time, more closely mimicking how real biological neurons behave.
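To make the contrast concrete, here is a minimal Python sketch of both neuron types. The leaky integrate-and-fire model is a standard idealization of a spiking neuron; the thresholds and values below are arbitrary assumptions, not taken from the study:

```python
# Contrasting a conventional (rate-based) neuron with a spiking one.
import numpy as np

def rate_neuron(x):
    """Conventional neuron: output varies continuously with input."""
    return np.tanh(x)

def spiking_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: accumulate input over time and emit
    a spike (1) only when the running total crosses a threshold."""
    potential, spikes = 0.0, []
    for x in inputs:
        potential = leak * potential + x   # integrate input, with leak
        if potential >= threshold:
            spikes.append(1)               # fire a spike...
            potential = 0.0                # ...and reset
        else:
            spikes.append(0)               # otherwise stay silent
    return spikes

print(rate_neuron(0.3))                          # a continuous value, ~0.29
print(spiking_neuron([0.3, 0.4, 0.5, 0.1, 0.6])) # [0, 0, 1, 0, 0]
```

Because the spiking neuron stays silent most of the time, it produces far less output traffic than a neuron that emits a number at every step, which is the efficiency argument made in the next paragraph.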
Because neurons in spiking neural networks fire only rarely, these networks shuffle around much less data than typical artificial neural networks and in principle require much less power and communication bandwidth. One way to implement spiking neural networks is to use neuromorphic hardware, electronics that mimic neurons and their connections.
However, conventional techniques used to rapidly train standard artificial neural networks do not work on spiking neural networks. "We are still learning how to train spiking neural networks to perform useful tasks," said study lead author Yijing Watkins, a computer scientist at Los Alamos National Laboratory in New Mexico.
Watkins and her colleagues experimented with programming neuromorphic processors to learn to reconstruct images and video based on sparse data, a bit like how the human brain learns from its environment during childhood development. "However, all of our attempts to learn eventually became unstable," said study senior author Garrett Kenyon, also a computer scientist at Los Alamos.
The scientists ran computer simulations of a spiking neural network to find out what happened. They found that although the network could learn to identify the data it was trained to look for, when training went uninterrupted long enough, its neurons began to fire continuously no matter what signals they received.
Watkins recalled that "almost in desperation," they tried having the simulation essentially undergo deep sleep. They exposed it to cycles of oscillating noise, roughly corresponding to the slow brain waves seen in deep sleep, which restored the simulation to stability. The researchers suggest this simulation of slow-wave sleep may help "prevent neurons from hallucinating the features they're looking for in random noise," Watkins said.
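As a loose sketch of that idea, the snippet below interleaves ordinary training with a "sleep" phase driven by slowly oscillating noise. The function names, sizes, and parameters are assumptions for illustration, not the authors' implementation:

```python
# A rough sketch of the stabilization idea: between bouts of training,
# drive the network with noise whose level rises and falls on a slow
# cycle, a stand-in for the slow brain waves of deep sleep.
# All names and parameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def train_step(x):
    """Placeholder for the network's usual learning update."""
    pass

def sleep_phase(train_step, n_steps=200, period=50, amplitude=0.5):
    """Feed the network oscillating noise instead of real data."""
    for t in range(n_steps):
        # Noise level slowly waxes and wanes, echoing slow-wave sleep.
        level = amplitude * (1 + np.sin(2 * np.pi * t / period)) / 2
        noise_input = rng.normal(scale=level, size=64)
        train_step(noise_input)  # same learning rule, noise as input

# Usage: interleave "sleep" with ordinary training epochs.
for epoch in range(10):
    # ... train on real data here ...
    sleep_phase(train_step)
```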
These findings may help explain why all known biological neural systems capable of learning from their environment, from fruit flies to humans, undergo slow-wave sleep. Everyone needs slow-wave sleep, Kenyon said. "Even aquatic mammals -- whales, dolphins and so on -- require periods of slow-wave sleep, despite the obvious evolutionary pressure to find some alternative. Instead, dolphins and whales sleep with half their brain at a time."
"Why is slow-wave sleep so indispensable?" Kenyon said. "Our results make the surprising prediction that slow-wave sleep may be essential for any spiking neural network, or indeed any organism with a nervous system, to be able to learn from its environment."
Future research could test these ideas on real neuromorphic processors responding to a source of environmental data, such as cameras that mimic the light-sensitive retinas within eyes, Watkins said.
"Adding in noise periodically can hopefully stabilize the ability of these networks to learn and prevent them from becoming more brittle and degrading their operations," said Mike Davies, director of Intel's neuromorphic computing lab in Hillsboro, Oregon, who did not take part in this research. "I really see huge promise in neuromorphic devices that can adapt themselves to wherever they are deployed in the real world to perform some behavior you may not be able to train it for perfectly in advance in the factory."
The scientists are scheduled to present their findings virtually June 14 as part of the Conference on Computer Vision and Pattern Recognition.
This article originally appeared on Inside Science.