Over the past few years, I’ve developed two habits that have made me an increasingly unpopular movie date. One is a strong preference for 3-D movies, undeterred by low artistic value or by sensations commonly associated with brain tumors and food poisoning, not to mention the big, dorky, blinking plastic glasses. (I can’t wait to upgrade my home TV to 3-D—my family, bless them, having assured me that blinking glasses are the least of my problems when it comes to looking dorky.) The other is that I’ve come to like sitting ever closer to the theater screen, advancing at the rate of approximately one row every six months.
See, I’m trying to go beyond watching movies to being inside movies. I don’t get why everyone doesn’t feel this way. People make a big deal about how big and bright and sharp the iPad screen is. Well, sure, compared with the murky, teeny phone screens we all spend half our lives
peering at. But compared with real life, it’s still a pretty murky, teeny screen, and one that imprisons flat images. If you find watching Avatar on an iPad an immersive experience, more iPower to you. As for me, when I’m in media-consumption mode, I want to experience the you-are-there feeling you get when you are, well, there. I want freedom from screens.
Researchers feel my pain, apparently, because some of them have been working on peeling video off glass displays so that filmed objects appear to hang out in the thin air around us. There’s a long way to go, but a reasonable first step toward fully immersive 3-D entertainment would be better, less nauseating 3-D effects. The essential ingredient of a
3-D image is stereoscopic photography, in which each eye receives an image representing a view from a slightly different angle. A simple way to achieve this is to use two screens, one for each eye. You could probably go to Brookstone and get thick glasses lensed with individual TV screens, but that would be taking you in the wrong direction, dorkiness-wise. Nanobiotechnologist Babak Parviz and his team at the University of Washington are developing a much cooler approach: display screens built into contact lenses.
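The depth illusion comes down to simple geometry: a nearby object sits at noticeably different positions in the left-eye and right-eye views, while a distant one barely shifts at all. As a rough sketch (not from the article, and with made-up camera parameters), the size of that shift can be estimated with the standard pinhole-camera disparity formula:

```python
def disparity_pixels(depth_m, eye_separation_m=0.065, focal_px=1000.0):
    """Horizontal offset between the left- and right-eye views, in pixels,
    for a point at the given depth.

    Pinhole-camera approximation: disparity = focal length * baseline / depth.
    The 65 mm eye separation and 1000-pixel focal length are illustrative
    assumptions, not figures from the article.
    """
    return focal_px * eye_separation_m / depth_m

# A nearby object shifts far more between the two views than a distant one,
# which is the cue the brain reads as depth:
near = disparity_pixels(0.5)    # half a meter away
far = disparity_pixels(50.0)    # fifty meters away
```

The inverse relationship is why stereoscopic 3-D is vivid for objects near the camera and fades to flatness for anything far away.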
Transforming televisions into contact lenses turns out to be a difficult feat even in our age of microminiaturization. The list of problems is impressive. First, it requires microlenses that sit on top of the main lens to properly focus images. It needs a way to adhere electrical components to the lenses without distorting picture quality. It needs a power source. (Parviz is experimenting with wireless radio-frequency energy.) And all this must happen on 1.5 square centimeters of polymer that’s transparent, flexible, nonirritating, fluid-friendly, and free of all the toxic materials normally used in glowing microelectronic components. As Parviz says, “It’s a pretty intricate optical system for a contact lens.”
He has come close to solving every one of these problems. His prototype contact lenses do not seem to ruffle the rabbits that have worn them. Granted, these test subjects have not yet been subjected to actual television; so far Parviz has managed to incorporate just a single blinking LED on the lens. Then again, you’d be surprised how much information a single dot can deliver. Imagine a lens that blinks to notify the hearing-impaired of an incoming call or to signal you when your mother-in-law is pulling into the driveway. Video lenses are still far off, Parviz concedes, but in the next few years he expects to build contacts with preprinted, illuminable characters and icons as well as an eight-by-eight array of LEDs. If networked, even a rudimentary display could deliver useful visual cues, such as turn signals from your GPS so you can keep your eyes on the road.
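An eight-by-eight grid sounds cramped, but it is plenty for a glanceable cue. As a hypothetical illustration (none of this is from Parviz’s work), each row of such a display can be packed into a single byte, one bit per LED, and a turn signal becomes eight bytes of data:

```python
# Hypothetical sketch: a left-turn arrow for an 8-by-8 LED grid,
# stored as eight bytes, one bit per LED (1 = lit, 0 = dark).
LEFT_ARROW = [
    0b00010000,
    0b00110000,
    0b01111111,
    0b11111111,
    0b01111111,
    0b00110000,
    0b00010000,
    0b00000000,
]

def render(rows):
    """Return the bitmap as text, '#' for a lit LED, '.' for a dark one."""
    return "\n".join(
        "".join("#" if (row >> (7 - col)) & 1 else "." for col in range(8))
        for row in rows
    )

print(render(LEFT_ARROW))
```

Sixty-four bits per frame is a trivial payload for a wireless link, which is what makes even this rudimentary display plausible as a networked heads-up cue.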
But a screen is still a screen even if it’s plastered to your eyeballs. What I really want is to ditch solid displays altogether and see images popping out in thin air. As it happens, thin air may be fine for breathing, but it’s a lousy medium for image projection; there’s very little to bounce light off, let alone a way to control how it bounces to make sure it finds its way into your eye. Thick, humid air turns out to be a different story, though, and nothing thickens air so reliably (as anyone in London or San Francisco could tell you) as water vapor. Conveniently, water can both reflect and transmit light, a property known as transflection. A company called FogScreen in Helsinki, Finland, has figured out how to take advantage of all these facts to project fairly crisp, bright images onto—yes—a screen of fog. FogScreen’s machine enlists an array of tiny nozzles to spit out row after row of near-microscopic drops of water, forming a thick slab of fog onto which a projector can shine a surprisingly bright, clear image.
The advantage of a screen made of more or less nothing is that you can stick any part of your body right through it without the usual side effects of entering glass. Like regular fog, FogScreen fog doesn’t even feel wet. If the motivation for punching through a display seems elusive, think about all the prime image-display space around you that has too much foot traffic for a conventional screen: hallways, sidewalks, doorways, and the area smack in the middle of your living room, your office, or a mall shop. That’s why we put computer monitors, TVs, and other electronic displays near walls, on furniture, up above our heads, or in our hands: so we won’t bang into them. An immaterial screen removes this arbitrary limitation. For six years now, FogScreen has been installing its technology in clubs, concert halls, and shopping areas as a kicky way to flash images or get a message across right under the noses—indeed, right up the noses—of people who are free to plow right on through the image.