There’s an intriguing new book out called Dreaming in Code by Scott Rosenberg (a cofounder of the Salon Web site), which centers on a group of engineers struggling to create a piece of personal productivity software called Chandler. I make an appearance in the tale, although I wasn’t involved with this particular project. The book’s title refers to a problem I used to have after intense periods of programming: I remember waking up to find I had been dreaming in computer code—eight-bit machine language, no less.
There wouldn’t be a story worthy of a whole book if Chandler came together easily and everyone went home happy. Indeed, Dreaming is an examination of the stressful mysteries of software. Why do some software projects sail to completion while so many others seem cursed? Why must software development be so difficult to plan?
These questions should concern everyone interested in science, not just programmers, because computer code is increasingly the language we use to describe and explore the dynamic aspects of reality that are too complicated to solve with equations. A comprehensive model of a biological cell, for instance, could lead to major new insights in biology and drug design. But how will we ever make such a model if the engineering of a straightforward thing like a personal productivity package confounds us?
For decades I have been chasing another kind of dream, that there will eventually be a less dismal way to think about software, and therefore about complexity. I have been calling this dream “phenotropics,” a word that roughly translates to “surfaces relating to each other.”
One way to understand phenotropics is to start by noticing that computer science has been divided by a fundamental schism. One side is characterized by precisely defined software structures. This is the approach to computers that requires you to make up a boundless number of silly names for abstract things, like the files on your hard drive. This was the only kind of computer science that was possible on the slow computers we were stuck with until fairly recently.
Advances in “high-level” languages (ones that use specialized syntax to carry out a lot of smaller, fussier commands, such as JavaScript) have made this approach more human friendly over the years, but the core of what we do when we program has barely changed. It still echoes the mathematical constructions of John von Neumann and Alan Turing from more than half a century ago. Those mathematical ideas are fundamental, but there is no reason to believe they are the best framework for creating a complicated program like a video game or for modeling a complex scientific phenomenon.
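To make the contrast concrete, here is a small sketch of my own in Python (the essay’s example of a high-level language is JavaScript; the choice of Python here is only for illustration): a high-level one-liner and the fussier, step-by-step loop it abbreviates. The friendlier syntax sits on top of exactly the same kind of precise, sequential recipe.

```python
# Toy illustration: a "high-level" expression and the explicit, step-by-step
# version it stands in for. Both describe the same sequential, von Neumann-style
# computation; only the surface syntax differs.

values = [3, 1, 4, 1, 5, 9]

# High-level form: one expression hides the bookkeeping.
total_high_level = sum(v * v for v in values)

# Low-level form: every small, fussy step spelled out.
total_low_level = 0
index = 0
while index < len(values):
    total_low_level = total_low_level + values[index] * values[index]
    index = index + 1

assert total_high_level == total_low_level == 133
```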
This brings us to the other side of the schism: an emerging kind of programming practiced by a varied crowd of robot builders, experimental user-interface designers, and machine-vision experts. These people had to find ways for a computer to interface with the physical world, and it turns out that doing so demands a very different, more adaptable approach.
Back in the 1950s and 1960s, when computer science was young, it was not clear that the two types of programming would be so distinct. There were occasional quixotic attempts to bridge the gap—for instance, by finding a compact set of rules to define the English language or the way a human brain recognizes an object by sight. Unfortunately, such rules don’t exist (see "Sing a Song of Evolution"). Instead, computer scientists who confronted the outside world had to develop new techniques that perform statistical analysis on large streams of data. These techniques are based on fuzziness instead of perfection. This is the approach that allows a rover to navigate across the surface of Mars without an exact map of the terrain it is crossing. You don’t have to name the shape of every hump of dust on the ground the way you have to name every file on a hard disk.
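As a rough illustration of the fuzzy style (my own toy example, not an actual rover algorithm), a program can keep a handful of stored prototypes and accept whichever one a noisy reading most resembles, instead of demanding an exact name for every input:

```python
# Minimal sketch of statistical matching: classify a noisy sensor reading by
# its nearest stored prototype instead of requiring an exact lookup.

def distance(a, b):
    # Squared Euclidean distance between two equal-length readings.
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Prototype terrain profiles the system has seen before (made-up numbers).
prototypes = {
    "flat ground": [0.0, 0.1, 0.0, 0.1],
    "small rock":  [0.0, 0.6, 0.7, 0.1],
    "steep slope": [0.2, 0.5, 0.9, 1.3],
}

def classify(reading):
    # Pick the prototype with the smallest distance; the reading never has to
    # match any stored pattern exactly.
    return min(prototypes, key=lambda name: distance(prototypes[name], reading))

# A reading that matches nothing exactly still gets a sensible answer.
print(classify([0.05, 0.55, 0.65, 0.15]))   # -> small rock
```

An exact-lookup version of the same program would simply fail on any reading that deviates by the tiniest amount; here the answer merely becomes less certain as a reading drifts away from every prototype.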
The core idea of phenotropics is that it might be possible to apply statistical techniques not just to robot navigation or machine vision but also to computer architecture and general programming. Right now, however, computer interiors are made of a huge number of logical modules that connect through traditional, explicitly defined protocols, a precise but rigid approach. The dark side of formal precision is that tiny changes can have random, even catastrophic effects. Flipping just one bit in a program might cause it to crash.
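No one has built a phenotropic architecture yet, so any code can only gesture at the idea. The sketch below is a speculative toy, not a working phenotropic system: two modules bind by measuring how similar their exposed “surfaces” are, rather than by agreeing in advance on identical protocol names, so a drifted name weakens the match instead of breaking it outright.

```python
# Speculative sketch: connect two modules by similarity between the field
# names they expose, rather than by exact protocol agreement. The similarity
# measure here is a crude token overlap; the point is only that the binding
# degrades gradually when names drift instead of failing outright.

def similarity(a, b):
    # Fraction of shared underscore-separated tokens between two field names.
    ta, tb = set(a.lower().split("_")), set(b.lower().split("_"))
    return len(ta & tb) / max(len(ta | tb), 1)

def bind(provided, wanted):
    # For each field the consumer wants, pick the provider's most similar field.
    return {w: max(provided, key=lambda p: similarity(p, w)) for w in wanted}

provider_fields = ["customer_name", "customer_address", "order_total"]
consumer_wants  = ["name_of_customer", "total_for_order"]

print(bind(provider_fields, consumer_wants))
# -> {'name_of_customer': 'customer_name', 'total_for_order': 'order_total'}
```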
For a dramatic illustration of the limitations of current techniques, browse around on MySpace. You will see lots of pages with weird broken elements, like missing pictures or text and images wildly out of place. What happened is that the protocols connecting the different elements weren’t perfect. They probably looked right at first, but some small detail changed, and the resulting mistake was too hard for an amateur programmer to unwind.
The phenotropic approach would be closer to what happens in biological evolution. If tiny flips in an organism’s genetic code frequently resulted in huge, unpredictable physical changes, evolution would cease. Instead there is an essential smoothness in the way organisms are related to their genes: A small change in DNA yields a small change in a creature—not always, but often enough that gradual evolution is possible. No wonder software engineers have such a hard time. When they do an experiment by changing a piece of code, the results they get are shockingly random. As a result, they generally cannot use code itself as a tool for learning about code.
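The contrast can be shown in a few lines (again, a toy of my own, not drawn from any real project): a statistical model’s behavior varies smoothly when a parameter is nudged, while a rigidly precise structure offers no graceful degradation at all; one wrong character and it simply fails.

```python
# Toy contrast: smooth change versus brittle failure.

def predict(weights, features):
    # Simple linear predictor: a weighted sum of the features.
    return sum(w * x for w, x in zip(weights, features))

features = [1.0, 2.0, 3.0]

# Small change in, small change out: the behavior varies smoothly.
print(predict([0.50, 0.30, 0.20], features))   # about 1.70
print(predict([0.51, 0.30, 0.20], features))   # about 1.71

# Precise structures fail abruptly: one wrong character in a name and the
# lookup does not degrade gracefully, it simply breaks.
record = {"order_total": 1.70}
try:
    print(record["order_totle"])   # a single-character slip
except KeyError as error:
    print("no graceful degradation, just failure:", error)
```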