If you’ve just been cornered by Martha Stewart at an interdisciplinary science conference and chastised for being a wimp, you could only be at one event: Sci Foo, an experimental, invitation-only, wiki-like annual conference that takes place at Google headquarters in Mountain View, California. There is almost no preplanned agenda. Instead, there’s a moment early on when the crowd of scientists rushes up to blank poster-size calendars and scrawls on them to reserve rooms and times for talks on whatever topic comes to mind. For instance, physicist Lee Smolin, sci-fi author Neal Stephenson, and I talked about the relationship between time and math (touching on ideas presented in my October 2006 column).
The wimp comment was directed at me, and Martha was right. I hadn’t stood up for myself in a group interaction. I’ve always been the shy one in the schoolyard. Back in the 1980s, I was drawn to the possibility that virtual reality would help extend the magical, creative qualities of childhood into adulthood. Indeed, the effect of digital technology on culture has been exactly that, but childhood is not entirely easy. If Lee hadn’t forged through the crowd to create our session, I never would have done it. What made Martha’s critique particularly memorable, though, was that her observation was directly relevant to what emerged from Sci Foo as the big idea about the future of science.
It wasn’t official, of course, but the big idea kept popping up: Science as a whole should consider adopting the ideals of “Web 2.0,” becoming more like the community process behind Wikipedia or the open-source operating system Linux. And that goes double for synthetic biology, the current buzzword for a superambitious type of biotechnology that draws on the techniques of computer science. There were more sessions devoted to ideas along these lines than to any other topic, and the presenters of those sessions tended to be the younger ones, indicating that the notion is ascendant.
It’s a trend that seems ill-founded to me, and to explain why, I’ll tell a story from my early twenties. Visualize, if you will, the most transcendentally messy, hirsute, and otherwise eccentric pair of young nerds on the planet. One was me; the other was Richard Stallman. Richard was distraught to the point of tears. He had poured his energies into a celebrated project to build a radically new kind of computer called the LISP Machine. It wasn’t just a regular computer running LISP, a programming language beloved by artificial intelligence researchers. Instead it was a machine patterned on LISP from the bottom up, making a radical statement about what computing could be like at every level, from the underlying architecture to the user interface. For a brief period, every hot computer-science department had to own some of these refrigerator-size gadgets.
It came to pass that a company called Symbolics became the sole seller of LISP machines. Richard realized that a whole experimental subculture of computer science risked being dragged into the toilet if anything happened to that little company—and of course everything bad happened to it in short order.
So Richard hatched a plan. Never again would computer code, and the culture that grew up with it, be trapped inside a wall of commerce and legality. He would instigate a free version of an ascendant, if rather dull, program: the Unix operating system. That simple act would blast apart the idea that lawyers and companies could control software culture. Eventually a kid named Linus Torvalds followed in Richard’s footsteps and did something related, but using the popular Intel chips instead. His effort yielded Linux, the basis for a vastly expanded open-software movement.
But back to that dingy bachelor pad near MIT. When Richard told me his plan, I was intrigued but sad. I thought that code mattered in ways that politics never could. If politically correct code was going to amount to endless replays of dull stuff like Unix instead of bold projects like the LISP Machine, what was the point? Would mere humans have enough energy to carry both kinds of idealism?
Twenty-five years later, that concern seems to have been justified. Open wisdom-of-crowds software movements have become influential, but they haven’t promoted the kind of radical creativity I love most in computer science. If anything, they’ve been hindrances. Some of the youngest, brightest minds have been trapped in a 1970s intellectual framework because they are hypnotized into accepting old software designs as if they were facts of nature. Linux is a superbly polished copy of an antique, shinier than the original, perhaps, but still defined by it.
Before you write me that angry e-mail, please know I’m not anti–open source. I frequently argue for it in various specific projects. But a politically correct dogma holds that open source is automatically the best path to creativity and innovation, and that claim is not borne out by the facts.