Stuart Anstis sat in his living room in the dark, wearing a pink visor that held up a hood made of thick black paper with eye holes cut out. He couldn’t see anything but the flickering images on the TV set, which he had rigged to play everything in negative. He’d been watching a movie for some time--“There was this fellow dancing and miming and flirting,” he recalls--when a friend, who happened to know the film, stopped by. “Oh, Bob Hope,” the friend said. “And I said, ‘Bob Hope! Good Lord!’ I’d been looking at him all that time and didn’t know who it was.”
Vision researchers like Anstis--along with photographers--have known for decades that faces are nearly impossible to identify when light and dark are reversed. But why that’s so is not well understood.
Curious about the difficulty of interpreting negative images, Anstis, a perceptual psychologist at the University of California at San Diego, decided last year to plunge into a negative world. He connected a set of goggles to a video camera that reversed black and white and converted colors to their complements--green to purple, yellow to blue, and so on--then put them over his eyes. For three days Anstis saw nothing in positive. He removed the goggles only at night, and then he slept blindfolded; he showered in the dark. The experiment was a variation on earlier studies by researchers who had worn glasses designed to turn the world upside down or shift it sideways. They had found that a surprising degree of adaptation occurred; somehow the visual system compensated, put things right, and allowed a person to function. Anstis wanted to find out if the same thing would happen when he traded black for white.
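For the computationally inclined, the reversal the goggles performed is easy to simulate. The sketch below is a minimal illustration, assuming NumPy and Pillow are available; “portrait.jpg” is only a placeholder file name. Inverting each channel of an 8-bit color image swaps light and dark and replaces every color with its complement--green becoming purple, yellow becoming blue--just as Anstis describes.

```python
# A minimal sketch of the reversal the goggles performed, using NumPy and
# Pillow; "portrait.jpg" is just a placeholder file name.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("portrait.jpg").convert("RGB"))

# Inverting each 8-bit channel swaps light and dark and replaces every color
# with its complement: green becomes purple, yellow becomes blue, and so on.
negative = 255 - img

Image.fromarray(negative.astype(np.uint8)).save("portrait_negative.jpg")
```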
Through the goggles, faces of his friends and colleagues took on a black-toothed, menacing quality. Their pupils became white; the light glinting off their eyes appeared black. “I went on falsely seeing the highlight as the pupil,” Anstis says, “so I constantly misread people’s eye movements.” He could never be quite sure when they were looking at him. Their blinking became a peculiar flicker that he found depersonalizing. “Emotional expressions were hard to read,” he says. Pictures of celebrities were unrecognizable. By daylight--when the sky was a very dark yellow, almost black--a woman’s sharply etched shadow, now rendered in white, looked like a paper cutout or even another person. Fuzzier shadows--cast by a hand held over a table, for instance--translated into a vague, eerie glow.
Objects were no easier to deal with than people. Meals in complementary colors--blue scrambled eggs, for instance--became so unappetizing that Anstis puckishly recommends negative goggles to dieters. Outdoors, sunlight converted to shadow made a flight of stairs a frightening experience. The risers became confused with the treads. “I lost my sense of reality, as if I’d been up too late,” he recalls. “At the curb, cars whizzing by didn’t look real. They looked like toys coasting along on white platforms, which were actually their shadows. I would have been quite happy to walk in front of them if it hadn’t been for the roaring sound of the traffic.” He felt as if his other senses were taking over his consciousness, to compensate for the lack of meaningful visual input. The scent of a laundry room, for instance, became remarkably intense.
Over the course of three days, he says, there was very little adaptation. He did begin reinterpreting sharp shadows, but the fuzzy, glowing ones continued to trick him. A postdoctoral student who wore the goggles for eight days reacted the same way. “I was amazed at how difficult it is to deal with,” Anstis says. “All the information is still there, just like reversing the signs in an equation, so we’re really surprised that the brain has so much trouble.”
Vision, of course, is more than recording what meets the eye: it’s the ability to understand, almost instantaneously, what we see. And that happens in the brain. The brain, explains neurobiologist Semir Zeki of the University of London, has to actively construct or invent our visual world. Confronted with an overwhelming barrage of visual information, it must sort out relevant features and make snap judgments about what they mean. It has to guess at the true nature of reality by interpreting a series of clues written in visual shorthand; these clues help distinguish near from far, objects from background, motion in the outside world from motion created by the turn of the head. Assumptions are built into the clues--for example, that near things loom larger, or that lighting comes from above.
“The brain must process an immense amount of information as fast as it can, using any shortcuts it can,” says Anstis. “It has to find a minimum hypothesis to cover a maximum amount of data. So it’s got to use any trick it can.” His experiment reveals one of those tricks: “We think the brain is programmed to use brightness the way it is in the world. That means shadows are always darker, and light comes from above.”
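The light-from-above assumption can be seen in a toy image: a disc shaded brighter at the top typically reads as a bump, while the very same pixels flipped upside down read as a dent. The sketch below is only an illustration of that idea, not anyone’s experimental stimulus; it assumes NumPy and Pillow.

```python
# A toy illustration of the "light comes from above" assumption: a disc shaded
# bright-on-top tends to look like a bump, and the same image flipped upside
# down (bright-on-bottom) like a dent. Uses only NumPy and Pillow.
import numpy as np
from PIL import Image

size = 200
y, x = np.mgrid[0:size, 0:size]
r = np.hypot(x - size / 2, y - size / 2)
disc = r < size * 0.4                  # circular region to shade

shading = 1.0 - y / size               # brighter toward the top of the image
img = np.full((size, size), 0.5)       # mid-gray background
img[disc] = shading[disc]

bump = (img * 255).astype(np.uint8)
dent = np.flipud(bump).copy()          # identical pixels, lighting now from below

Image.fromarray(bump).save("bump.png")
Image.fromarray(dent).save("dent.png")
```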
A negative world, with light pouring down from the sky like black paint, shatters those basic assumptions. And when we violate the assumptions, confusion reigns. Reverse brightness, as Anstis did, and critical clues about the world--facial features and expressions, shape and depth--are subverted. The result is illusion.
Seeing, in short, is a form of sensory reasoning. When the assumptions on which that reasoning is based are destroyed, seeing becomes senseless. Even though all the necessary visual information is there, we are reduced to groping around.
Everyday vision encompasses an extraordinary range of abilities. We see color, detect motion, identify shapes, gauge distance and speed, and judge the size of faraway objects. We see in three dimensions even though images fall on the retina in two. We fill in blind spots, automatically correct distorted information, and erase extraneous images that cloud our view (our noses, the eyes’ blood vessels).
The machinery that accomplishes these tasks is by far the most powerful and complex of the sensory systems. The retina, which contains 150 million light-sensitive rod and cone cells, is actually an outgrowth of the brain. In the brain itself, neurons devoted to visual processing number in the hundreds of millions and take up about 30 percent of the cortex, as compared with 8 percent for touch and just 3 percent for hearing. Each of the two optic nerves, which carry signals from the retina to the brain, consists of a million fibers; each auditory nerve carries a mere 30,000.
The optic nerves convey signals from the retinas first to two structures called the lateral geniculate bodies, which reside in the thalamus, a part of the brain that functions as a relay station for sensory messages arriving from all parts of the body. From there the signals proceed to a region of the brain at the back of the skull, the primary visual cortex, also known as V1. They then feed into a second processing area, called V2, and branch out to a series of other, higher centers--dozens, perhaps--with each one carrying out a specialized function, such as detecting color, detail, depth, movement, or shape, or recognizing faces.
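To keep the stages straight, here they are written out as a plain ordered list, purely for orientation; the final entry lumps together the dozens of specialized higher areas whose exact number and division of labor are still unsettled.

```python
# The relay described above, as a simple ordered list for reference only.
VISUAL_PATHWAY = [
    ("retina", "light-sensitive rods and cones"),
    ("lateral geniculate body", "relay station in the thalamus"),
    ("V1 (primary visual cortex)", "first cortical processing area"),
    ("V2", "second cortical processing area"),
    ("higher centers", "color, detail, depth, movement, shape, face recognition"),
]

for stage, role in VISUAL_PATHWAY:
    print(f"{stage}: {role}")
```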
The goal of much current research is to find out not only how those individual centers function but also how they interact with one another. For now, no one even knows where the higher centers ultimately relay their information. “There’s no little green man up there looking at this stuff,” says Harvard neurobiologist Margaret Livingstone. “In a sense, your perception is what’s going on in those areas.” Researchers would like to know just where and how the rules for seeing are stored, how they’re acquired, which assumptions are built up from experience and which are hardwired. Anstis thinks his failure to adapt to his negative world might be evidence that brightness clues are built in, but there’s not enough evidence to say for sure. “I don’t know what would have happened,” he says, “if we’d gone on in negative for ages and ages.”
There are several approaches to analyzing the visual system. Anatomical studies look at neural wiring to learn what’s connected to what. Physiological studies determine how individual cells and groups of cells react when a particular segment of the visual field is presented with a certain type of stimulus. Perceptual psychologists like Anstis start at the behavior end: they show subjects doctored, tricky images, including optical illusions, and use the responses to figure out just what elements of the environment the brain is responding to and how it’s sorting out those elements for processing.
During the past two decades the separate avenues of visual research have come together in a striking way. All point to a fundamental division of labor in the visual system: color, motion, and form appear to be processed independently, though simultaneously, through different pathways in the brain. Physicians and researchers have long suspected that the visual system handles certain tasks separately, because strokes and head injuries can leave people with highly specific deficits: loss of color vision, motion perception, or the ability to recognize faces, for instance. It thus seems likely that a number of separate systems are involved in analyzing visual information, but just how many is not clear. “Everybody will give you a different answer,” Livingstone says. She suspects two or three, but Zeki suggests four.
One thing researchers do agree on is that motion is processed separately from form and color. The system that picks up motion also registers direction and detects borders as defined by differences in brightness. It reacts quickly but briefly when stimulated, and so it doesn’t register sharp detail. Another system sees color, but whether it also recognizes shape is not clear. “There may be a pure color system,” says Livingstone, “but it’s not known.” In addition, there may be a second color system, one sensitive to both shape and color but not concerned with movement. It would react slowly but be capable of scrutinizing an object for a relatively long time, thus picking up detail. Zeki believes there’s still another system, which perceives the shape of moving objects but not their color.
These systems usually work in concert to give us a more or less accurate rendition of the visual world. But because they process information differently, they can sometimes lead to conflict in the eye-brain system about what’s really going on. Clues can be misread or misinterpreted. At the same time, the joining of different kinds of clues in the same pathway--say, brightness, motion, and depth--can mean that depth is impossible to read correctly when brightness is absent, or vice versa. By manipulating the way the eye-brain receives information from these pathways, vision researchers have revealed a variety of interesting and instructive phenomena. We might call them illusions. But to the eye-brain system, they’re merely the result of following sensory reasoning to its logical conclusion.
For example, proof that motion and color are processed separately can be derived from a clever experiment conducted several years ago at the University of Montreal by psychologist Patrick Cavanagh (now at Harvard) and his colleagues. He found that when a moving pattern of red and green stripes was adjusted so that the red and green were equally bright, or equiluminant, the motion became almost undetectable. In other words, when brightness differences between the stripes were eliminated, color alone was not enough to carry information about movement. The motion system, unable to perceive enough contrast between the stripes, had nothing to see move.
That experiment led Cavanagh and Anstis to devise a new test for color blindness, based on the knowledge that, depending on the type of color blindness, either red or green will appear brighter to a subject. By measuring the amount of brightness the subjects had to add to the red stripes to make the pattern stop moving--that is, to achieve equiluminance--the researchers were able to detect some forms of color blindness.
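A rough flavor of the adjustment can be had with the standard video luma formula, which weights green far more heavily than red. The sketch below uses those generic Rec. 601 weights as a stand-in for the observer-specific brightness match Cavanagh and Anstis actually measured, and builds a red/green striped pattern whose two colors are matched in that approximate sense.

```python
# A rough sketch of approximately equiluminant red/green stripes. The Rec. 601
# luma weights (0.299 R + 0.587 G + 0.114 B) are a generic stand-in for the
# per-observer brightness match the real experiments relied on.
import numpy as np

R_W, G_W = 0.299, 0.587

red_level = 200.0                      # chosen red intensity (0-255)
green_level = R_W * red_level / G_W    # green scaled so its luma matches the red

print(f"red {red_level:.0f} ~ green {green_level:.0f} in approximate luma")

# Horizontal stripes alternating the two matched colors. A drifting pattern
# like this carries little brightness-defined motion signal, which is what
# made the movement hard to see in the experiment described above.
height, width, period = 64, 256, 8
stripes = np.zeros((height, width, 3), dtype=np.uint8)
odd_rows = (np.arange(height) // period) % 2 == 1
stripes[~odd_rows] = (int(red_level), 0, 0)
stripes[odd_rows] = (0, int(round(green_level)), 0)
```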
Similarly, it’s relatively easy to prove that shading and depth are processed together by altering or eliminating brightness information and watching what happens to depth. In a portrait of President Eisenhower created by Margaret Livingstone, the shadows and highlights on his face have been replaced with patches of color; the brightness of the colors doesn’t correspond to the relative brightness of shadows and highlights, however. Since shadows cast by the eyebrows, nose, and lips help define a person’s face, putting bright color in the shadow regions effectively erases and even inverts all three-dimensional information. “The features become unrecognizable,” says Livingstone. “You can barely even tell it’s a person.”
Perspective drawings lose their depth if rendered in colors of equal brightness, as do drawings of geometric figures. The color scale used on the Eisenhower portrait is the one commonly used to produce contour maps and CAT scans; it’s based on the visible spectrum, which has red at one end. The color scale therefore also uses red at one end, to code for the brightest regions, and yellow for comparatively darker ones. This irritates Livingstone no end. “People who read these color-coded CAT scans think they can interpret them,” she says. “They think they know red is highest. But really, yellow is brighter.” What the brain does is see the red areas as darker and less prominent than the yellow ones; the result can be an image whose depth is difficult to interpret. “This is a hardwired system, and no matter how much you try to override it intellectually, tell it what it should see, it will tell you what it really does see.”
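Her point about the color scale can be checked with a line of luma arithmetic. The snippet below again uses the generic Rec. 601 weights, an approximation rather than anything specific to CAT-scan displays: pure yellow carries roughly three times the luminance of pure red.

```python
# A quick check that yellow is intrinsically brighter than red, using the
# Rec. 601 luma approximation (other standards give different weights but the
# same ordering).
def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

red = luma(1.0, 0.0, 0.0)       # about 0.30
yellow = luma(1.0, 1.0, 0.0)    # about 0.89

print(f"red luma    {red:.2f}")
print(f"yellow luma {yellow:.2f}")   # roughly three times brighter than red
```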
Recently, Livingstone has begun to suspect that artists have unique ways of exploiting the fact that motion, depth, and brightness are processed by the same pathway. “If you stare at something for a while, something three-dimensional, it goes sort of flat,” Livingstone says. That occurs, she suspects, because the pathway is geared to detect changes in the environment, such as movement, and to respond quickly and briefly. Fixed on one image, its response dies out, and the impression of depth disappears.
“I think some artists might be stereo-blind,” Livingstone says, meaning that they lack binocular depth perception. “I’ve talked to several who are. Everything looks somewhat flatter to them already.” That makes it easier, she says, to translate a three-dimensional object into a flat drawing that nonetheless conveys depth. Artists with normal vision have told her that they’ve trained themselves to stare at their subject, wait for it to go flat, and then draw it flat, or to close one eye and eliminate stereovision automatically. “A normal person gets totally screwed up trying to draw 3-D,” she says. “There are so many perspective, stereo, and occlusion cues. You need to get rid of that level of processing or you can’t draw a flat picture.”
Cavanagh has also been studying paintings, but for insight into how the eye-brain creates richly embroidered visual images from sparse clues. “Flat art is a treasure trove of information about how vision works,” he says. “When we look at a flat picture, we recover the depth, the spatial arrangements of the objects depicted. We could easily make a 3-D model of what we’re seeing in the picture.”
That ability to translate between 2-D and 3-D suggests something about the way we process images. “We look around and see things that look reassuringly solid and three-dimensional, a nice Euclidean world,” Cavanagh says. But it’s likely that our internal representation is nowhere near that complete. What we store, he suspects, is more like a set of two-dimensional images of what we’re looking at, seen from many different angles. “It’s a more intelligent representation,” he says, “because it’s sparse, an abstraction of the real, solid world, and yet it can still give us the impression that it is real and solid. It’s like getting CD-quality music from a hand-cranked phonograph.”
Cavanagh thinks that even the most primitive art forms, such as the line drawings found in the Lascaux caves, have much to tell about how information is encoded--for example, by the use of lines. “You have to ask why lines would ever be sufficient to see the depth structure of an object that’s being drawn,” he says. “Even if you see a line drawing of something you’ve never seen before, you get it instantly. Infants get it. Why do lines work at all if they don’t exist in the real world? No objects in the real world have lines drawn around them.”
Somewhere in the visual system, he says, there must be a code for contours that separate objects from their backgrounds. The boundary could be a difference in texture or color or even motion. Sure enough, an area has been found in the brain that responds to contours. Cavanagh describes it as contour-specific and attribute-independent. “It ended up, maybe by chance, responding to lines,” he says. “Which is lucky for artists, or otherwise all drawings would have to be filled in.” Instead, the brain fills in for them.
Actually, filling in is a well-known strategy that the brain uses to see more than meets the eye. It’s a form of shortcutting that deals not with the information that is present but with the information that is lacking. The best-known example occurs at the natural blind spot in each eye. At the point where the optic nerve connects to the retina, there’s a patch that doesn’t respond to light. The result is a blind spot in the part of the visual field that would normally be monitored by that patch. The blind spot is off center, toward the side of the visual field, and it’s big enough to swallow up a golf ball held at arm’s length, or even, at a bit more distance, a person’s head. But we’re unaware of it. For one thing, when both eyes are open, each eye covers the other’s blind spot. For another, the constant motion of the eye prevents the spot from lingering at any one place. The blind spot doesn’t become apparent until we close one eye and stare; even then, we’ve got to resort to tricks to detect it.
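The golf-ball comparison is easy to check with a little trigonometry; the distances below are assumptions chosen only for illustration.

```python
# Back-of-the-envelope check of the golf-ball comparison: the visual angle a
# golf ball (about 4.3 cm across) subtends at roughly arm's length (taken here
# as 60 cm), compared with the commonly cited blind-spot width of roughly 5 to
# 6 degrees. The distances are illustrative assumptions.
import math

ball_diameter_cm = 4.3
arm_length_cm = 60.0

angle_deg = math.degrees(2 * math.atan(ball_diameter_cm / 2 / arm_length_cm))
print(f"golf ball at arm's length: about {angle_deg:.1f} degrees of visual angle")
# About 4 degrees -- comfortably inside a blind spot of roughly 5 to 6 degrees.
```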
The interesting thing about the blind spot isn’t so much what we don’t see as what we do. The fact that there’s no visual information there doesn’t lead the brain to leave a blank in your visual field; instead, it paints in whatever background is likely to be there. If the blind spot falls on a dragonfly resting on a sandy beach, your brain doesn’t blot it out with a dark smudge; it fills it in with sand.
But how? It’s long been a subject of argument among psychologists and philosophers. Some argue that the process is a cognitive one carried out by some brain region at a higher level than the visual cortex. That process might be called logical inference: since we know the background is sandy and textured, we assume the blank spot must be, too, just as we might assume that if there’s flowered wallpaper in front of us, the wallpaper behind us will have the same flowered pattern.
But filling in is different from assuming, says neuroscientist Vilayanur Ramachandran of the University of California at San Diego. It’s most likely carried out in the visual cortex, by cells near the ones that the blind spot is depriving of input. It is, he says, an active, physical process: “There’s neural machinery, and when it’s confronted with an absence of input, it fills it in.” The brain, in other words, somehow creates a representation of the background and sticks it into the blind spot.
“We think there’s a separate system in the brain for edges and contours, and then one for surface features,” Ramachandran says, “and this is the one doing completion of texture. It says, ‘If there’s a certain texture on the edge, it must be everywhere.’” He refers to the process as surface interpolation.
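A loose computational analogy to this kind of completion, offered only as an illustration and not as a model of the neural machinery, is to fill a hole in an image by copying in the nearest surrounding pixels, so that the surrounding texture propagates into the gap.

```python
# A crude stand-in for "surface interpolation": each pixel inside a masked
# hole is filled with the value of the nearest pixel outside the hole, so the
# surrounding texture spreads into the gap. Illustration only; uses NumPy and
# SciPy.
import numpy as np
from scipy.ndimage import distance_transform_edt

rng = np.random.default_rng(0)
texture = rng.integers(100, 200, size=(64, 64)).astype(float)  # stand-in for sandy texture

hole = np.zeros(texture.shape, dtype=bool)
hole[24:40, 24:40] = True        # the region with no input, like the blind spot

# For every pixel in the hole, find the indices of the nearest pixel outside
# it, then copy that pixel's value in.
nearest = distance_transform_edt(hole, return_distances=False, return_indices=True)
filled = texture[tuple(nearest)]
```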
Curiously, there’s a visual phenomenon that’s almost the converse of filling in. It’s called blindsight, and it occurs in some patients who have gaps in their visual fields because of brain injuries. Whereas many of the people Ramachandran studies have blind spots they don’t notice, these people have vision they don’t notice. They are somehow able to identify objects presented to their blind areas--without being consciously aware that they are seeing. Blindsight suggests not only that aspects of vision are processed separately, but that vision is processed separately from awareness. Seeing, and knowing that we see, appear to be handled differently.
Blindsight has been most extensively studied by Lawrence Weiskrantz, a psychologist at Oxford University. Twenty years ago Weiskrantz and his colleagues found that a young patient who had lost the left half of his vision because of damage to his visual cortex could nonetheless identify things in the blind field: he could distinguish between an X and an O and tell whether a line of light was vertical or horizontal; he could locate objects even though he couldn’t identify them.
But the odd thing was that the patient didn’t think he was seeing. He would guess at what was being presented only when the researchers urged him to, and then be astonished when shown how many of his answers were correct. In subsequent years Weiskrantz studied more patients with blindsight, as did other researchers. Again and again the patients appeared to have a primitive sort of vision in their blind fields but denied any awareness of it. “I couldn’t see anything, not a darn thing,” Weiskrantz’s patient insisted.
How can a person see and not know it? The phenomenon of blindsight has raised as many questions about the nature of consciousness as it has about visual processing. Weiskrantz suggests that blindsight is produced in parts of the brain other than the primary visual cortex. He points out that fat bundles of fibers from each optic nerve never reach the visual cortex but instead travel to the midbrain, which controls involuntary, unconscious actions. Still other fibers bypass the primary visual cortex and enter different cortical regions. These regions may produce the unconscious vision that characterizes blindsight; if they do, it means that the visual cortex is essential not only for normal vision but also for awareness of what’s being seen. If seeing takes place outside the visual cortex, it apparently doesn’t register in our consciousness.
Late last year, however, a respected neuroscientist challenged the idea that blindsight is derived from visual pathways diverted to the midbrain. Michael Gazzaniga, of the University of California at Davis, reported that he and his colleagues had discovered that a patient with blindsight actually had live, functioning neurons in the portion of his visual cortex that supposedly had been destroyed. Those islands of healthy tissue produce blindsight, Gazzaniga argues.
Asked why patients would remain unconscious of their vision if the processing is going on in the visual cortex, Gazzaniga suggests that because the preserved areas are so small, the signals patients get may simply be too weak to trigger a conscious reaction. Moreover, he doesn’t find it surprising that we might be unaware of things going on in the cortex. “Lots of studies suggest that things we’re not consciously aware of go on in the cortex, probably a good deal of our psychological life.”
The debate over blindsight is simple, Gazzaniga says: “Weiskrantz thinks it’s an alternative pathway, and we think it’s the original one. More studies will be done. I have three people working around the clock on it, and it will be worked out.”
An extreme form of filling in that has eerie echoes of blindsight may have afflicted American writer James Thurber, known for his humorous essays, drawings, and stories, including The Secret Life of Walter Mitty. Thurber’s experience illustrates the lengths the visual system will go to in order to see, vision or no. Thurber lost one eye as a boy when his brother accidentally shot him with an arrow; the remaining eye began gradually to fail. By the time Thurber turned 40, his world had become a blur--something he made light of in his work. Once he wrote about how he frightened a woman off a city bus by mistaking the purse in her lap for a chicken. As his eyesight worsened, the images he saw progressed from the slapstick to the surreal. Ordinary things underwent wild transformations. Dirt on his windshield looked like admirals in uniform or crippled apple women; he would whirl out of their way.
Thurber’s fantastic visions, though not diagnosed at the time, fit the description of a disorder called Charles Bonnet syndrome, in which people who are blind or partly so--because of eye diseases or certain types of brain damage--see vivid, intensely realistic images of things that aren’t there. Ramachandran and his colleague Leah Levi have taken a special interest in the syndrome. One of Ramachandran’s patients, a 32-year-old San Diego man who sustained brain damage in a car accident several years ago, has lost the lower half of his visual field. He doesn’t see a black band or sense a border between the sighted field and the blind one, any more than the rest of us detect boundaries at the periphery of our vision. “The extraordinary thing about this patient,” Ramachandran says, “is that he hallucinates constantly. These hallucinations occur only in the blind field. He sees little children, and zoo animals and domestic ones, creeping up from below. He might say to me, ‘As I’m talking to you, I see a monkey sitting on my lap.’”
“Charles Bonnet syndrome,” Ramachandran says, “is a more sophisticated type of filling in. It’s the next level up. It’s a response to visual deprivation. These hallucinations are phantom visual images, analogous to phantom limbs.” He believes they originate in portions of the brain that store visual memories and generate visual imagery. In other words, they are yet another example of the puzzling array of phenomena that emerge from the complex entanglement of eye and brain.
“Let me try to give you a sense of where we are,” says Margaret Livingstone, in an effort to assess the status of visual research today. “Take form perception. Human beings are very good at it. We recognize contours, faces, words, a lot of really complicated things. What we understand is that in the retinas, the lateral geniculate bodies, and the first layer of the visual cortex, we code for changes in brightness or color. In the next stage, cells become selective for the orientation of the change--that is, they code for contours, or edges. In some places cells select for the length of the contour. Then, if you go up very high, you find cells selective for certain faces.” Livingstone pauses. “We know remarkably little about what happens in between. It’s frightening how big a gap there is. But we do think we understand a lot about visual processing in spite of that gap.”
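A very loose engineering analogy to that second stage, offered only as an illustration and not as a model of cortical cells, is an oriented difference filter: convolved with an image, each kernel responds most strongly to edges of its preferred orientation. The sketch assumes NumPy, SciPy, and Pillow, and “scene.png” is a placeholder file name.

```python
# Oriented Sobel filters as a rough analogy to orientation-selective edge
# coding: each kernel responds most strongly to contours of one orientation.
import numpy as np
from scipy.ndimage import convolve
from PIL import Image

img = np.asarray(Image.open("scene.png").convert("L"), dtype=float)

sobel_vertical = np.array([[-1, 0, 1],
                           [-2, 0, 2],
                           [-1, 0, 1]], dtype=float)   # responds to vertical edges
sobel_horizontal = sobel_vertical.T                    # responds to horizontal edges

vertical_edges = convolve(img, sobel_vertical)
horizontal_edges = convolve(img, sobel_horizontal)

# Combined edge strength, independent of orientation.
edges = np.hypot(vertical_edges, horizontal_edges)
```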