Emerging Technology: Computer Animations

How computer animators illuminate the evanescent complexity of the natural world

By Steven Johnson | Wednesday, April 21, 2004

During one scene in the animated movie Shrek 2, scheduled for release next month, a valiant knight in full armor crosses a fiery moat, throws open an oak door, and enters the vast antechamber of a castle. With a quick flip of the hand he pulls off his helmet to reveal a mane of golden hair. Then, filmed in luxurious slow motion, the knight flips his do back like a medieval Breck Girl, and the flaxen strands shimmer in the light.

It’s not only a great sight gag but also an amazing technical breakthrough. In recent years moviegoers and video-game players have grown accustomed to seeing digital buildings, cars, and stage sets that are indistinguishable from images of the real things—the ones built out of atoms, not just digital bits. Yet a few frontiers remain. Computers can simulate objects created by natural processes, but even an untrained human eye will usually detect them as forgeries. Clouds, fire, trees, human skin—all these natural forms confound the algorithms of software. As does hair.

“We learned more about hairstyling than perhaps we wanted to know,” reports Ken Bielenberg, the visual-effects supervisor for both Shrek movies. “It’s relatively easy to model, say, a telephone, which is a hard, solid object. But with something like hair that’s made up of 10,000 or 20,000 strands—it’s hard to manage that on the computer.”

Computer modeling begins with two primary elements: the geometric shape of the object and the way light bounces off it. Animators typically define the shape of an object by creating digitized wire frames and then adding information about surface textures. Does the object absorb light evenly, or does it scatter it in specific ways? Turning that raw data into photo-realistic images involves a process in which a computer calculates the trajectories of millions of individual photons of light, then determines how those trajectories would find their way back to the pupils of an imagined viewer.

Ten years ago this type of modeling—called ray tracing—was possible only on a supercomputer. Now ray-traced images that are highly sophisticated, if not quite photo-realistic, can be created on a $200 Xbox video-game console. Nonetheless, ray tracing runs into difficulties with objects that do unpredictable things with light. Imagine the way light interacts with a shiny plastic ball, compared with its appearance on a rumpled velvet blanket. The blanket’s overall shape is more complex, of course, and to render the velvety texture you have to account for light bouncing off thousands of tiny fibers.
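The core of ray tracing can be sketched in a few lines. The sketch below is purely illustrative (the function names and the test scene are invented for this article, and production renderers trace rays from the camera back into the scene rather than following photons forward): it finds where a ray strikes a sphere and shades the hit point by the cosine rule, so surfaces facing the light appear brightest.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(c / n for c in v)

def hit_sphere(origin, direction, center, radius):
    """Distance t along the ray to the nearest sphere hit, or None.

    Solves |origin + t*direction - center|^2 = radius^2 for t,
    assuming direction is a unit vector.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def shade(point, center, light_dir):
    """Lambertian shading: brightness falls off with the cosine of the
    angle between the surface normal and the direction to the light."""
    normal = normalize(tuple(p - c for p, c in zip(point, center)))
    return max(0.0, dot(normal, light_dir))
```

A plastic ball needs one such intersection test per ray; a velvet blanket multiplies that work across thousands of fibers, which is exactly where the difficulty the article describes begins.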

To capture the subtleties of human and animal hair for Shrek, Bielenberg’s team found that one of the technical obstacles was keeping track of where light didn’t go: the shadows. When you look at someone’s hair, part of the texture you detect comes from the thousands of darker areas blocked from the light by other strands. “Without shadowing, the hair often looks like it’s glowing, like it’s a light source itself,” Bielenberg says. Eliminating that artificial glow turned out to be the critical challenge Bielenberg and his colleagues faced in animating one of Shrek 2’s new characters: a talking cat named Puss in Boots, who looks at first glance like an adorable little kitten but turns out to have a Zorro complex. (The character’s voice is provided by Antonio Banderas.) Viewers will most likely take for granted that Puss in Boots has a believable coat of hair. But calculating all the shadows cast by individual strands of hair as they shift with the slightest movement—not to mention more dramatic gestures like the heroic knight’s Breck Girl flip—can take dozens of hours to process, even on high-powered computers.
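The glow Bielenberg describes disappears once each strand is dimmed by the strands between it and the light. Production systems use elaborate structures (deep shadow maps, for example) to make this fast, but the arithmetic at the heart of it is simple multiplication of transmittances. The sketch below is a toy illustration, not DreamWorks' method; the names are invented.

```python
def transmittance(occluder_opacities):
    """Fraction of light surviving past the strands that lie between a
    given hair strand and the light source.  Each occluding strand
    blocks a fraction (its opacity) of whatever light reaches it."""
    t = 1.0
    for opacity in occluder_opacities:
        t *= (1.0 - opacity)
    return t

def strand_brightness(unshadowed, occluder_opacities):
    """Without the transmittance factor every strand receives full
    light and the hair seems to glow; with it, inner strands darken."""
    return unshadowed * transmittance(occluder_opacities)
```

Even with ten half-transparent occluders the surviving light drops below 11 percent, which is why the inner volume of a head of hair reads as dark, and why recomputing these products for every strand on every frame takes so many hours.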

Another challenge comes from objects whose surfaces don't stop light at the boundary but let it penetrate, in a process known as subsurface scattering. When Alfred Hitchcock famously buried a lightbulb in a glass of poisoned milk in Suspicion to heighten its eeriness, he was exploiting the fact that light flows through milk in a distinct way. The natural world turns out to be filled with subsurface scattering. The distinctive look of human skin, for instance, is determined by how light penetrates its surface. Eliminate subsurface scattering in animated faces and everyone looks like a porcelain doll.
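Full subsurface-scattering models are mathematically involved, but their simplest ingredient is the Beer-Lambert law: light entering a translucent material decays exponentially with the distance it travels inside. The snippet below shows only that one ingredient, not any studio's skin model; `sigma_t` is a made-up parameter name for the material's extinction coefficient.

```python
import math

def transmitted(intensity, sigma_t, depth):
    """Beer-Lambert attenuation: light entering a translucent material
    (milk, skin, wax) decays exponentially with the depth it travels.
    A larger extinction coefficient sigma_t means a murkier material."""
    return intensity * math.exp(-sigma_t * depth)
```

Milk has a low enough extinction that Hitchcock's bulb could glow through a full glass; skin attenuates faster, so the effect shows up as the soft reddish translucency of a backlit ear.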

Bouncing light rays, intricate shadows, and subsurface scattering all complicate the computer rendering of anything in nature. Rather than simulate the underlying physics exactly, animators develop algorithms that shortcut the physics and reproduce only an object's visual appearance: a technique called phenomenology.

Occasionally computer animators can seem more like virtual gardeners than illustrators. Trees, for instance, have long been a challenge to re-create convincingly. “It’s difficult to have models for veins and bark and for the interaction of light with the leaves, which are somewhat translucent,” says Przemyslaw Prusinkiewicz, a professor of computer science at the University of Calgary in Alberta. “And the overall shape is very complicated.” Too often, computer-rendered trees looked like topiaries, stripped of the free-form variation that real trees possess.

To create a believable tree, you need both true-to-life textures—Prusinkiewicz and his colleagues recently created a tool for simulating the tiny hairs on the surface of a leaf—and a realistic branch structure. Branches, of course, are an iterated phenomenon: A branch develops, then sprouts new branches, which sprout even more. The exact size and position of each branch along the chain affects all the others, as gravity and available light shape its growth. If you model only the end result, the trees tend to look artificial. To make convincing virtual trees, some animators now simulate the entire growth process. The trunks of the trees in Shrek 2’s lush forests were predefined by computer animators, but the branches were all grown organically from digital seeds.
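The article doesn't name the formalism, but Prusinkiewicz is best known for L-systems: grammars that grow branching structure by rewriting a string over and over, just as a branch sprouts branches that sprout more branches. Here is a toy version (the rule shown is a textbook example, not the one used in any film).

```python
def lsystem(axiom, rules, iterations):
    """Iteratively rewrite each symbol using its production rule.
    Symbols without a rule are copied through unchanged."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Classic bracketed rule: each segment F elongates and sprouts two
# side branches.  '[' saves the drawing state, ']' restores it,
# and '+'/'-' turn the drawing direction left or right.
branch_rules = {"F": "F[+F]F[-F]F"}
```

Fed to a turtle-graphics interpreter, two or three iterations of this rule already produce a plausibly shaggy shrub; varying the turn angles and rule probabilities gives the free-form variation that hand-placed branches lack.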

Clouds and fire also pose significant hurdles for machine rendering. David Ebert, director of the Purdue University Rendering and Perceptualization Lab, has been exploring cloud simulations for more than a decade. “Clouds are a very amorphous phenomenon,” he says. “You have all these tiny particles of water, ice, and snow. Light enters the cloud and is scattered around, and some of it is directed to your eye. And while it’s being directed to your eye, it’s passing through the atmosphere, which has air particles scattering light along that direction. So you really have a very complex 3-D collection of small particles that you need to simulate.” Fire is even more chaotic. “There you have actual combustion occurring—so instead of just having light being reflected by all these particles, you actually have light being emitted by particles,” Ebert says. “Then you’ve got gas that gives off light that’s transparent but also dust and soot particles that are opaque—if you shine a bright light on a flame, you’ll actually see a shadow behind it. So you’ve really got a lot of complexity.”
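Ebert's "complex 3-D collection of small particles" is typically rendered by marching a ray through the cloud in small steps, at each step asking how much light the slab scatters toward the eye and how much survives to the slabs behind it. The sketch below assumes a uniform in-scattering term and invented names; real volume renderers add sun direction, phase functions, and multiple scattering.

```python
import math

def ray_march(density, origin, direction, steps, step_size, sigma_t):
    """Step along a ray through a participating medium such as a cloud.

    density    -- function mapping a 3-D point to particle density
    sigma_t    -- extinction coefficient of the medium
    Returns (light scattered toward the eye, surviving transmittance).
    """
    transmittance = 1.0   # fraction of background light still visible
    in_scattered = 0.0    # light the cloud itself sends toward the eye
    t = 0.0
    for _ in range(steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        slab = math.exp(-sigma_t * density(p) * step_size)
        # assume each slab scatters its absorbed light toward the viewer
        in_scattered += transmittance * (1.0 - slab)
        transmittance *= slab
        t += step_size
    return in_scattered, transmittance
```

Fire adds emission on top of this: each slab also contributes its own light, while opaque soot particles keep absorbing, which is why a bright lamp can cast a shadow through a flame.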

Thanks to our visual acuity and our gift for pattern recognition, we humans have an extraordinary capacity to detect small aberrations in simulated nature. Ebert tells a story of working with storm experts from the University of Oklahoma. “We were creating renderings based on data they’d supplied us, and we had an atmospheric scientist come over and look at one of the renderings we’d done.” With one glance she could tell that there was something wrong with the image: The cloud tower at the very back of the storm was too smooth; it lacked the telltale cauliflower shape that you’d normally see in a supercell formation. “Oh, there’s no medium-scale turbulence in that model—something’s wrong,” she said. In fact, a software bug had corrupted the original data that Ebert had been sent, but it took another week for the researchers to discover the problem. Yet someone could see it in the clouds in a single glance.

Ebert has also discovered that looking at clouds from both sides—their real incarnations and their simulated doubles on the screen—can reveal new things about the complexity of the natural world. One of the scientists working with his animation team is an expert on cumulus cloud formations. “We’ll be looking at a cloud, and we’ll say: ‘How do we simulate that really hard edge there?’” Ebert says. “And our colleague will say, ‘Well, I’m not really sure.’ So to improve our models we’ve started to ask questions that even atmospheric scientists don’t know the answers to.”

As more of our entertainment comes from computer-rendered worlds, through movies or video games or other online social environments, replicating the complexity of nature will become an increasingly commonplace computer task. At this year’s Academy Awards, computer scientist Henrik Wann Jensen received a technical achievement Oscar for his pioneering research in subsurface scattering. The prize itself is a measure of the pace of technology. Special-effects awards used to be all about spaceships, explosions, and robots from the future. Now they’re giving prizes for capturing the subtleties of human skin and hair.
