Over the past few years, I’ve developed two habits that have made me an increasingly unpopular movie date. One is a strong preference for 3-D movies, undeterred by low artistic value or by sensations commonly associated with brain tumors and food poisoning, not to mention the big, dorky, blinking plastic glasses. (I can’t wait to upgrade my home TV to 3-D—my family, bless them, having assured me that blinking glasses are the least of my problems when it comes to looking dorky.) The other is that I’ve come to like sitting ever closer to the theater screen, advancing at the rate of approximately one row every six months.
See, I’m trying to go beyond watching movies to being inside movies. I don’t get why everyone doesn’t feel this way. People make a big deal about how big and bright and sharp the iPad screen is. Well, sure, compared with the murky, teeny phone screens we all spend half our lives peering at. But compared with real life, it’s still a pretty murky, teeny screen, and one that imprisons flat images. If you find watching Avatar on an iPad an immersive experience, more iPower to you. As for me, when I’m in media-consumption mode, I want to experience the you-are-there feeling you get when you are, well, there. I want freedom from screens.
Researchers feel my pain, apparently, because some of them have been working on peeling video off glass displays so that filmed objects appear to hang out in the thin air around us. There’s a long way to go, but a reasonable first step toward fully immersive 3-D entertainment would be better, less nauseating 3-D effects. The essential ingredient of a 3-D image is stereoscopic photography, in which each eye receives an image representing a view from a slightly different angle. A simple way to achieve this is to deploy two screens, one for each eye. You could probably go to Brookstone and get thick glasses lensed with individual TV screens, but that would be taking you in the wrong direction, dorkiness-wise. Nanobiotechnologist Babak Parviz and his team at the University of Washington are developing a much cooler approach: display screens built into contact lenses.
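The geometry behind that two-view trick is simple enough to sketch. Below is a minimal Python illustration, with invented viewing distances, of why two slightly offset images encode depth: a point’s apparent position shifts between the left-eye and right-eye views by an amount (the disparity) that depends on how far the point sits from the screen plane. Nothing in this snippet comes from the research described in this story.

```python
# Illustrative sketch, not from any system described in this article:
# the parallax geometry that makes stereoscopic 3-D work.
# All numbers are assumptions chosen for demonstration.

EYE_SEPARATION_CM = 6.3     # typical interpupillary distance (assumed)
SCREEN_DISTANCE_CM = 500.0  # viewer sits about 5 meters away (assumed)

def on_screen_disparity(object_depth_cm: float) -> float:
    """Horizontal offset (cm, measured on the screen plane) between the
    left-eye and right-eye images of a point at the given depth.
    Zero disparity means the point appears on the screen itself;
    negative disparity pushes it out toward the viewer."""
    return EYE_SEPARATION_CM * (1.0 - SCREEN_DISTANCE_CM / object_depth_cm)

for depth_cm in (250.0, 500.0, 2000.0, 1e9):  # in front, on-screen, behind, "infinity"
    print(f"object at {depth_cm:>12.0f} cm -> disparity {on_screen_disparity(depth_cm):+.2f} cm")
```

A 3-D system’s whole job, whether blinking glasses or lens-mounted screens, is to deliver each of those two offset views to the correct eye and only that eye.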
Transforming televisions into contact lenses turns out to be a difficult feat even in our age of microminiaturization. The list of problems is impressive. First, it requires microlenses that sit on top of the main lens to properly focus images. It needs a way to adhere electrical components to the lenses without distorting picture quality. It needs a power source. (Parviz is experimenting with wireless radio-frequency energy.) And all this must happen on 1.5 square centimeters of polymer that’s transparent, flexible, nonirritating, fluid-friendly, and free of all the toxic materials normally used in glowing microelectronic components. As Parviz says, “It’s a pretty intricate optical system for a contact lens.”
He has come close to solving every one of these problems. His prototype contact lenses do not seem to ruffle the rabbits that have worn them. Granted, these test subjects have not yet been subjected to actual television; so far Parviz has managed to incorporate just a single blinking LED on the lens. Then again, you’d be surprised how much information a single dot can deliver. Imagine a lens that blinks to notify the hearing-impaired of an incoming call or to signal you when your mother-in-law is pulling into the driveway. Video lenses remain a distant goal, Parviz concedes, but in the next few years he expects to build contacts with preprinted, illuminable characters and icons as well as an eight-by-eight array of LEDs. If networked, even a rudimentary display could deliver useful visual cues, such as turn signals from your GPS so you can keep your eyes on the road.
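To make the eight-by-eight idea concrete, here is a hypothetical sketch of what a navigation cue might look like on such a grid. The bitmap, the function, and the console rendering are all inventions for illustration; they are not Parviz’s actual interface.

```python
# Hypothetical illustration, not Parviz's hardware: a GPS left-turn cue
# rendered on an 8x8 LED array. "1" marks a lit LED, "0" a dark one.

LEFT_TURN_ARROW = [
    "00010000",
    "00110000",
    "01111110",
    "11111110",
    "01111110",
    "00110000",
    "00010000",
    "00000000",
]

def render(frame: list[str]) -> None:
    """Print the frame to the console as a stand-in for the lens display."""
    for row in frame:
        print("".join("#" if bit == "1" else "." for bit in row))

render(LEFT_TURN_ARROW)  # 64 LEDs: crude, but readable at a glance
```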
But a screen is still a screen even if it’s plastered to your eyeballs. What I really want is to ditch solid displays altogether and see images popping out in thin air. As it happens, thin air may be fine for breathing, but it’s a lousy medium for image projection; there’s very little to bounce light off, let alone a way to control how it bounces to make sure it finds its way into your eye. Thick, humid air turns out to be a different story, though, and nothing thickens air so reliably (as anyone in London or San Francisco could tell you) as water vapor. Conveniently, water can both reflect and transmit light, a property known as transflection. A company called FogScreen in Helsinki, Finland, has figured out how to take advantage of all these facts to project fairly crisp, bright images onto—yes—a screen of fog. FogScreen’s machine enlists an array of tiny nozzles to spit out row after row of near-microscopic drops of water, forming a thick slab of fog onto which a projector can shine a surprisingly bright, clear image.
The advantage of a screen made of more or less nothing is that you can stick any part of your body right through it without the usual side effects of entering glass. Like regular fog, FogScreen fog doesn’t even feel wet. If the motivation for punching through a display seems elusive, think about all the prime image-display space around you that has too much foot traffic for a conventional screen: hallways, sidewalks, doorways, and the area smack in the middle of your living room, your office, or a mall shop. That’s why we put computer monitors, TVs, and other electronic displays near walls, on furniture, up above our heads, or in our hands: so we won’t bang into them. An immaterial screen removes this arbitrary limitation. For six years now, FogScreen has been installing its technology in clubs, concert halls, and shopping areas as a kicky way to flash images or get a message across right under the noses—indeed, right up the noses—of people who are free to plow right on through the image.
FogScreen’s two-dimensional floating displays are fine as static billboards, but the technology is not yet suited for watching movies or updating your Facebook status. The reason is simple: Turbulence gets in the way. The tiniest eddies in air send ripples through the fog, rendering the slab too bumpy and jittery to hold the detail needed for sharp video or readable text. “I don’t see these screens coming to homes next year,” says Ismo Rakkolainen, the user-interface scientist at the University of Tampere in Finland who coinvented the fog-display technology. On a more hopeful note, Rakkolainen adds that it should in coming years be possible to create thicker slabs of image-embellished fog that can be made to resemble 3-D objects. “It could be the first technology that comes close to the Star Wars holographic images.”
Now we’re talking. Rakkolainen is referring to the iconic Princess Leia hologram that emerges from R2-D2’s navel and other holographic projections like it in the Star Wars movies, only actual holography requires no wispy clouds of moisture. A hologram is an image most easily created when two beams of laser light are reflected from an object or scene onto some sort of light-sensitive film or plate. The beams of light cancel out in some spots on the plate and reinforce each other in other spots, creating a distinctive “interference pattern” that imprints on the plate. When laser light is later shone through the interference pattern, the object or scene appears to float in space with sharp, vivid realness that can be hard to distinguish from real realness. You can even view the object from different angles as you move your head or walk around it—no dorky glasses required.
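For the mathematically curious, the recording step compresses into one standard textbook equation (generic holography, nothing particular to the work described here). If the reference beam arrives at the plate as a light field E_r and the object-scattered beam as E_o, the plate records their combined intensity:

```latex
% Standard two-beam holographic recording, stated for illustration.
% E_r: reference beam; E_o: light scattered from the object or scene.
\[
  I(x,y) = \lvert E_r + E_o \rvert^{2}
         = \lvert E_r \rvert^{2} + \lvert E_o \rvert^{2}
           + 2\,\operatorname{Re}\!\bigl(E_r^{*} E_o\bigr)
\]
```

The cross term is the interference pattern. Unlike an ordinary photograph, it captures the object beam’s phase as well as its brightness, and phase is what lets the replayed light rebuild a full three-dimensional wavefront you can walk around.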
Video holograms are a bear, however, because they require recording an interference pattern in all its incredible detail while the pattern is constantly changing. Fancy computer tricks have proved dead ends, but researchers at the University of Arizona may be onto something with a novel organic polymer known as PATPD/CAAN:FDCST:ECZ:PCM. (You can drag the full name out of the original research paper on your own time.) A sheet of PATPD-etc. will faithfully record a single interference pattern, just like a photographic plate. But PATPD-so-forth’s big trick is that it can erase the pattern, almost Etch A Sketch style, and immediately record a new one, creating a moving image one frame at a time. It has taken University of Arizona physicist Pierre-Alexandre Blanche and colleagues some 15 years to come up with a film that faithfully records sharp patterns and quickly auto-fades. “We tested thousands and thousands of different formulations, always looking for any tiny improvement we could get,” Blanche says. “Over time we’ve improved the sensitivity by a factor of 100.” Sadly both for him and for us Princess Leiagram fans, his work is far from over: The film currently requires lasers powerful enough to take down your neighbor’s TV, and it can show only two frames a second, far from the 30 frames per second needed for video. Part of the performance leap required for a commercial version could be covered by new generations of more efficient lasers already hitting the market, but the film still needs to be several times more sensitive. “It won’t be in Walmart in two years,” Blanche says. “Maybe 10.”
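Blanche’s numbers invite a little back-of-the-envelope accounting, and the record-erase cycle is easy to caricature in a few lines of code. The sketch below is mine, not his; the exposure step is a stub standing in for the lasers.

```python
# Rough arithmetic on the figures quoted above, plus a cartoon of the
# polymer's record-erase cycle. None of this is real hardware code.
import time

CURRENT_FPS = 2   # frames/second the film manages today (from the article)
VIDEO_FPS = 30    # frames/second conventional video needs (from the article)
print(f"frame-rate gap to close: {VIDEO_FPS / CURRENT_FPS:.0f}x")  # -> 15x

def expose_frame(label: str) -> None:
    """Stub standing in for the laser exposure that writes one pattern."""
    print(f"writing interference pattern for {label}")

def play_hologram(frames: list[str], fps: float) -> None:
    """Cartoon of holographic video: write a pattern, hold it while the
    audience looks, let the polymer auto-fade (the Etch A Sketch trick),
    then write the next frame."""
    for label in frames:
        expose_frame(label)
        time.sleep(1.0 / fps)  # hold the frame
        # the film then self-erases, clearing the way for the next frame

play_hologram(["frame-001", "frame-002", "frame-003"], CURRENT_FPS)
```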
Fine by me. That’s just when my 3-D TV will be ready for an upgrade, and I’ll be first in line for that system.
Then Blanche got me thinking about the ultimate kind of immersion. Would it be possible to create interference patterns in thin air so that we could have holographic images all around us without having to risk banging into plates of PATPD-whatever? Or if not thin air, how about thick air?
Sure enough, FogScreen’s chief technology officer, Arttu Laitinen, says his company has been closely following developments in the field of holography with an eye to creating interference patterns on fog. Laitinen adds that a commercial product along those lines is just a glimmer on the event horizon at this point, but just think: floating, screenless images that you can walk around right in the middle of your room. I absolutely intend to upgrade my personal robot with that capability as soon as my starship makes port.
David H. Freedman is a freelance journalist, author, and longtime contributor to DISCOVER. You can follow him on Twitter: @dhfreedman.