David Brin works out of his home office in San Diego County, but he spends much of his day in invisible worlds—ones hidden from us because we can’t perceive them or because they don’t even exist yet. For the past three decades, the Hugo Award–winning author has been mapping out his vision of the future in dozens of works, both nonfiction and sci-fi. His 1998 book, The Transparent Society, explores how technological innovations force us to choose between privacy and security, foreshadowing the era of YouTube and ubiquitous surveillance cameras. His 1990 novel, Earth, anticipates so many of today’s trends—from the World Wide Web to global warming—that there is a Web site devoted entirely to its prognostications.
How did this 56-year-old father of three, who lives mostly outside of academia, get so adept at parsing the future? By keeping his journeys of imagination grounded in the real world. After getting a master’s in electrical engineering at the University of California at San Diego, Brin completed a Ph.D. in space physics and worked as a postdoc at the Jet Propulsion Laboratory. Today, in addition to churning out novels that chart his fictional Uplift universe, he continues to work closely with the people developing technologies that will transform our lives.
Why do you have such a good track record as a prognosticator?
When prediction serves as polemic, it nearly always fails. Our prefrontal lobes can probe the future only when they aren’t leashed by dogma. The worst enemy of agile anticipation is our human propensity for comfy self-delusion.
Peering ahead is mostly art. We all have tricks. One of mine is to look for “honey-pot ideas” drawing lots of fad attention. Whatever’s fashionable, try to poke at it. Maybe 1 percent of the time you’ll find a trend or possibility that’s been missed. Another method is even simpler: Respect the masses. Nearly all futuristic movies and novels—even sober business forecasts—seem to wallow in the same smug assumption that most people are fools. This stereotype led content owners to envision the Internet as a delivery conduit to sell movies to passive couch potatoes. Even today, many of the social-net and virtual-world companies treat their users like giggling 13-year-olds incapable of expressing more than a sentence at a time of actual discourse.
A contrarian trick that has served me well is to ponder a coming technology and then imagine, What if everybody gets to use it? In really smart ways? Most of those imaginings have come true.
What’s the biggest trend you’ve failed to spot or the biggest prediction you think you got wrong?
Back in 1999, I forecast that people would shrug off future shock when the big millennium rolled around. At first it seemed that way, as people blithely went about their routines. Now I suspect there really was a 21st-century trauma. Romantic nostalgia is rampant. You find very little interest in the modernist agenda of confident problem solving. Robert Heinlein predicted this, but I didn’t. I also expected a few technologies that never came. For example, lie detection based on involuntary eye movements, a method that ought to work even during a televised interview or press conference. A potential nightmare for deceitful politicians! But I was misled by hope. Other forecasts that fell short include rapid understanding of the immune system and big advances in computerized teaching.
At the other end, some trends exceeded anticipation. I did not expect the “age of amateurs” to progress so far, so fast. Fifty million hobbyists are demanding that professionals, from doctors to scientists to movie directors, accept a new world where expertise is not limited to the licensed.
You have said that science’s ability to look beyond the familiar is subject to our psychological, as well as physical, ability to perceive. What do you mean?
There is a famous, but much-debated, anthropological myth that the Carib Indians were unable to perceive the first European ships offshore until one of their shamans sat and contemplated for a while and then explained it to them. I think people are smarter than that, but as an oversimplifying metaphor, it does point out that what we are able to see depends upon a variety of things. Let me give you a real way to put it into perspective. In the 15th century, we got the printing press. Printing is a way of augmenting human memory. Printing not only vastly expanded the ability to convey human knowledge and memory to other people but also made it more robust.
People tend to assume that when things like this happen, it automatically results in an improved humanity. This is what you’re hearing from the techno-transcendentalists on the Internet. It is a religious statement that what we’re seeing on the Internet today is improving discourse and improving democracy and improving markets. I’m very skeptical of that because at the beginning of any of these revolutions, always what is empowered is demagoguery. The immediate outcome of the printing press was the Thirty Years’ War. The immediate outcome of radio was the empowerment of demagogues like Huey Long and especially Adolf Hitler. It always takes a while for the people to learn how to use the new media critically, to be able to perceive the good from the bad.
Now we have computing and databases, expanded memory, television and mass media. We’re headed toward the day that databases become a knowledge mesh; we’re going to have super memory and super vision. But what is going to enable us to perceive better?
Are there some examples of how science is helping us perceive better right now?
There is the basic, ever-increasing power of instrumentation. You have electron microprobes, which are involved in the cutting edge of nanotechnology, for instance. You’re able to measure the field of individual atoms. You are able to come up with wonderful crackpot notions like Wil McCarthy’s concept of Programmable Matter—that if you were to adjust the electrons on the surface of a sheet of silicon and control them through simple voltages, you could effectively make that surface of silicon behave like iron. We wouldn’t have been able to imagine this concept without the ability to operate on an atom-by-atom level.
But the rate at which we are seeing better with telescopes and probing better with microscopes is not where the real action is. Sure, every year we can see smaller; sure, every year we can see farther, but the real breakthroughs are coming in our ability to make more of these observations and do it faster. For instance, look at the recent Cosmic Evolution Survey, which used the Hubble Space Telescope to study gravitational lensings [in which the gravitational pull of galaxies and dark matter bends the light from more distant objects] in an area of the sky nine times the apparent surface area of the full moon. To be able to take a patch of sky and unleash computers to find so many gravitational lensings that you can make a three-dimensional depth map billions of light-years deep, so you can find the patchiness of dark matter—that is very impressive. That’s the difference between seeing a pixel and deriving information about things that are far away from that pixel. That’s a matter of perception.
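[The bending Brin describes is a standard result of general relativity, not something stated in the interview: light passing a mass M at impact parameter b is deflected by an angle

```latex
\alpha = \frac{4 G M}{c^{2} b}
```

and the strength of a lens also depends on the distances involved, through the critical surface density

```latex
\Sigma_{\mathrm{cr}} = \frac{c^{2}}{4 \pi G} \, \frac{D_{s}}{D_{l}\, D_{ls}}
```

where D_l, D_s, and D_ls are the distances to the lens, to the source, and from lens to source. Because that distance ratio changes with redshift, measuring many lensing distortions for sources at many depths is what lets a survey reconstruct the three-dimensional distribution of dark matter.]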