But once you get up close, the similarities break down. Cameras are boringly Euclidean. Typically engineers build photodiodes as tiny square elements and spread them out in regularly spaced grids. Most existing artificial retinas have the same design, with impulses conveyed from the photodiodes to neurons through a rectangular grid of electrodes. The network of neurons in the retina, on the other hand, looks less like a grid than a set of psychedelic snowflakes, with branches upon branches filling the retina in swirling patterns. This mismatch means that when surgeons position the grid on the retina, many of the wires fail to contact a neuron. As a result, their signals never make it to the brain.
Some engineers have suggested making bigger electrodes that are more tightly spaced, creating a larger area for contact, but that approach faces a fundamental obstacle. In the human eye, neurons sit in front of the photoreceptors, but due to the snowflake-like geometry, there is still lots of space for light to slip through. An artificial retina with big electrodes, by contrast, would block out the very light it was trying to detect.
Natural photoreceptors are quirky in another way, too: They are bunched up. Much of what we see comes through a pinhead-size patch in the center of the retina known as the fovea. The fovea is densely packed with photoreceptors. The sharp view of the world that we simply think of as “vision” comes from light landing there; light that falls beyond the fovea produces blurry peripheral images. A camera, by contrast, has light-trapping photodiodes spread evenly across its entire image field.
The reason we don’t feel as if we are looking at the world through a periscope is that our eyes are in constant motion; our focus jumps around so that our foveas can capture different parts of our field of view. The distances of the jumps our eyes make have a hidden mathematical order: The frequency of a jump goes up as distance gets shorter. In other words, we make big jumps from time to time, but we make more smaller jumps, and far more even smaller jumps. This rough, fragmented pattern, known as a fractal, creates an effective means of sampling a large space. It bears a striking resemblance to the path of an insect flying around in search of food. Our eyes, in effect, forage for visual information.
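The pattern described above, with many small jumps punctuated by occasional big ones, matches a power-law distribution of step lengths, the hallmark of a Lévy-flight-style foraging search. A minimal sketch (the exponent here is an illustrative choice, not a measured property of human saccades):

```python
import random

random.seed(0)

# Draw "saccade" lengths from a power-law (Pareto) distribution:
# many short jumps, far fewer long ones.  alpha is an illustrative
# exponent, not a value measured from real eye movements.
alpha = 1.5
n = 10_000
steps = [random.paretovariate(alpha) for _ in range(n)]

short = sum(1 for s in steps if s < 2)
long_ = sum(1 for s in steps if s >= 10)
print(f"short jumps (<2):  {short}")
print(f"long jumps (>=10): {long_}")
```

Short jumps outnumber long ones by roughly twenty to one here, yet the rare long jumps carry the search into fresh territory, which is what makes this kind of sampling efficient.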
Once our eyes capture light, the neurons in the retina do not relay information directly to the brain. Instead, they process visual information before it leaves the eye, inhibiting or enhancing neighboring neurons to adjust the way we see. They sharpen the contrast between regions of light and dark, a bit like photoshopping an image in real time. This image processing most likely evolved because it allowed animals to perceive objects more quickly, especially against murky backgrounds. A monkey in a forest squinting at a leopard at twilight, struggling to figure out exactly what it is, will probably never see another leopard. Unlike a camera that passively takes in a picture, our eyes are honed to actively extract the most important information we need to make fast decisions.
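The inhibit-or-enhance arithmetic can be sketched as lateral inhibition on a one-dimensional strip of "retina": each cell subtracts a fraction of its neighbors' input, which exaggerates the boundary between dim and bright regions. This is a generic textbook model, not a simulation of real retinal circuitry:

```python
# A step edge from dim (1.0) to bright (3.0) light.
signal = [1.0] * 5 + [3.0] * 5

def inhibit(x, k=0.4):
    """Each cell's output is its input minus a fraction k of the
    average of its two neighbors (edges reuse the cell's own value)."""
    out = []
    for i, v in enumerate(x):
        left = x[i - 1] if i > 0 else v
        right = x[i + 1] if i < len(x) - 1 else v
        out.append(v - k * 0.5 * (left + right))
    return out

print([round(v, 2) for v in inhibit(signal)])
# -> [0.6, 0.6, 0.6, 0.6, 0.2, 2.2, 1.8, 1.8, 1.8, 1.8]
```

Note the dip to 0.2 just before the edge and the spike to 2.2 just after it: the boundary is exaggerated relative to the flat regions on either side, the same trick an unsharp-mask filter plays in photo software.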
Right now scientists can only speculate what it might be like to wear an artificial retina with millions of photoreceptors in a regular grid, but such a device would not restore the experience of vision—no matter how many electrodes it contains. Without the retina’s sophisticated image processing, it might just supply a rapid, confusing stream of information to the brain.
Taylor, the Oregon vision researcher, argues that simplistic artificial eyes could also cause stress. He reached this conclusion after asking subjects to look at various patterns, some simple and some fractal, and then describe how the images made them feel. He also measured physiological signs of stress, such as electrical activity in the skin. Compared with simple images, fractal images lowered stress levels by as much as 60 percent. Taylor suspects the calming effect arises because our eye movements are fractal, too. Natural scenes, such as forests and clouds, are often fractal as well: Trees have large limbs, off which sprout branches, off which grow leaves. Our vision is matched to the natural world.
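A standard way to check whether an image is fractal is box counting: cover the image with grids of smaller and smaller cells and watch how the number of occupied cells grows. The sketch below applies it to a synthetic fractal (a Sierpinski triangle); it is a generic illustration, not Taylor's actual analysis:

```python
import math
import random

random.seed(1)

# Generate a Sierpinski triangle with the "chaos game": repeatedly
# jump halfway toward a randomly chosen corner.
corners = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
x, y = 0.1, 0.1
points = []
for _ in range(50_000):
    cx, cy = random.choice(corners)
    x, y = (x + cx) / 2, (y + cy) / 2
    points.append((x, y))

def box_count(pts, n):
    """Number of cells in an n-by-n grid that contain at least one point."""
    return len({(int(px * n), int(py * n)) for px, py in pts})

# Dimension estimate from the growth in occupied boxes between scales.
n1, n2 = 16, 64
d = math.log(box_count(points, n2) / box_count(points, n1)) / math.log(n2 / n1)
print(f"estimated dimension: {d:.2f}")  # close to the theoretical log(3)/log(2) ~ 1.58
```

A smooth line would give a dimension of 1 and a filled patch would give 2; a value in between is the signature of a fractal, and it is this kind of measurement that lets researchers compare patterns like the ones in Taylor's experiments.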
An artificial retina that simply mirrors the detector in a digital camera would presumably allow people to see every part of their field of view with equal clarity. There would be no need to move their eyes around in fractal patterns to pick up information, Taylor notes, so there would be no antistress effect.
The solution, Taylor thinks, involves artificial retinas that are more like real eyes. Light sensors could be programmed with built-in feedbacks to sharpen the edges on objects or clumped together to provide more detail at the center. It may be possible to overcome the mismatch between regular electrodes and irregular neurons. Taylor is developing new kinds of circuits that he hopes to incorporate into next-generation artificial eyes. His team builds these circuits so that they spontaneously branch, creating structures that Taylor dubs nanoflowers. Although nanoflowers do not exactly match the eye’s neurons, their geometry would similarly admit light and allow circuits to contact far more neurons than can a simple grid.
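Spontaneous branching of the kind nanoflowers exhibit is often modeled with diffusion-limited aggregation, in which randomly wandering particles freeze wherever they first touch a growing seed. The sketch below is that generic textbook model, not Taylor's fabrication process:

```python
import math
import random

random.seed(2)

N = 61
mid = N // 2
grid = [[False] * N for _ in range(N)]
grid[mid][mid] = True            # seed particle at the center

MOVES = ((-1, 0), (1, 0), (0, -1), (0, 1))

def touches_cluster(i, j):
    return any(grid[i + di][j + dj] for di, dj in MOVES)

stuck = 1
radius = 2                       # launch/kill radius tracks cluster growth
while stuck < 250 and radius < mid - 6:
    # Launch a walker on a ring just outside the cluster.
    ang = random.uniform(0, 2 * math.pi)
    i = mid + round(radius * math.cos(ang))
    j = mid + round(radius * math.sin(ang))
    while True:
        if math.hypot(i - mid, j - mid) > radius + 4:
            break                # wandered too far; launch a new walker
        if touches_cluster(i, j):
            grid[i][j] = True    # freeze next to the cluster
            stuck += 1
            radius = max(radius, int(math.hypot(i - mid, j - mid)) + 2)
            break
        di, dj = random.choice(MOVES)
        i, j = i + di, j + dj

print(f"particles stuck: {stuck}")
```

Because an incoming walker is far more likely to hit a protruding tip than to diffuse into a crevice, the cluster grows sparse, branched arms rather than a solid blob, which is why structures built this way can admit light through the gaps the same way the retina's snowflake-shaped neurons do.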
Taylor’s work is an important reminder of how much progress scientists are making toward restoring lost vision, but also of how far they still have to go. The secret to success will be remembering not to take the camera metaphor too seriously: There is a lot more to the eye than meets the eye.