What is the value in creating an artificial intelligence that thinks like a 3-year-old?
The history of AI is sort of funny because the first real accomplishments were beautiful things, like a machine that could do proofs in logic or do well in a calculus course. But then we started to try to make machines that could answer questions about the simple kinds of stories that are in a first-grade reader. There's no machine today that can do that. So AI researchers looked primarily at problems that people called hard, like playing chess, but they didn't get very far on problems people found easy. It's a sort of backwards evolution. I expect with our commonsense reasoning systems we'll start to make progress pretty soon if we can get funding for it. One problem is people are very skeptical about this kind of work.
Usually AI refers to an exploration of the utilitarian uses of the brain, like understanding speech or solving problems. Yet so much of what humans do isn't clearly utilitarian, like watching TV, fantasizing, or joking. Why is all that behavior necessary?
Watching sports is my favorite. Pleasure, like pain, is thought of as being a sort of simple, absolute, innate, basic thing, but as far as I can see, pleasure is a piece of machinery for turning off various parts of the brain. It's like sleep. I suspect that pleasure is mainly used to turn off parts of the brain so you can keep fresh the memories of things you're trying to learn. It protects the short-term memory buffers. That's one theory of pleasure. However, it has a bug, which is, if you gain control of it, you'll keep doing it: If you can control your pleasure center, then you can turn off your brain. That's a very serious bug, and it causes addiction. That's what I think the football fans are doing—and the pop music fans and the television watchers, and so forth. They're suppressing their regular goals and doing something else. It can be a very serious bug, as we're starting to see in the young people who play computer games until they get fat.
Many people feel that the field of AI went bust in the 1980s after failing to deliver on its early promise. Do you agree?
Well, no. What happened is that it ran out of high-level thinkers. Nowadays everyone in this field is pushing some kind of logical deduction system, genetic algorithm system, statistical inference system, or a neural network—none of which are making much progress because they're fairly simple. When you build one, it'll do some things and not others. We need to recognize that a neural network can't do logical reasoning because, for example, if it calculates probabilities, it can't understand what those numbers really mean. And we haven't been able to get research support to build something entirely different, because government agencies want you to say exactly what you'll do each month of your contract. It's not like the old days when the National Science Foundation could fund people rather than proposals.
Why has the landscape changed for funding scientific research?
Funders want practical applications. There is no respect for basic science. In the 1960s General Electric had a great research laboratory; Bell Telephone's lab was legendary. I worked there one summer, and they said they wouldn't work on anything that would take less than 40 years to execute. CBS Laboratories, Stanford Research Lab—there were many great laboratories in the country, and there are none now.
The Emotion Machine reads like a book about understanding the human mind, but isn't your real intent to fabricate it?
The book is actually a plan for how to build a machine. I'd like to be able to hire a team of programmers to create the Emotion Machine architecture that's described in the book—a machine that can switch between all the different kinds of thinking I discuss. Nobody's ever built a system that either has or acquires knowledge about thinking itself, so that it can get better at problem solving over time. If I could get five good programmers, I think I could build it in three to five years.
It sounds like you could make a very smart computer, but is your ultimate goal to actually reproduce a human being?
Or better. We humans are not the end of evolution, so if we can make a machine that's as smart as a person, we can probably also make one that's much smarter. There's no point in making just another person. You want to make one that can do things we can't.
To what purpose?
Well, the birthrate is going down, but the population is still going up. Then we're going to have old people, and we'll need smart people to do their housework, and take care of things and grow the vegetables. So we need smart robots. There are also problems we can't solve. What if the sun dies out or we destroy the planet? Why not make better physicists, engineers, or mathematicians? We may need to be the architects of our own future. If we don't, our culture could disappear.
Has science fiction influenced your work?
It's about the only thing I read. General fiction is pretty much about ways that people get into problems and screw their lives up. Science fiction is about everything else.
What did you do as a consultant on 2001: A Space Odyssey?
I didn't consult about the plot but about what the [HAL 9000] computer would look like. They had a very fancy computer with all sorts of colored labels and so forth. Stanley Kubrick said, "What do you think of that?" I said, "It's very beautiful." And he said, "What do you really think?" I said, "Oh, I think this computer would actually just be lots of little black boxes, because the computer would know what's in them by sending signals through its pins." So he scrapped the whole set and made the simpler one, which is more beautiful. He wanted everything technological to be plausible. But he wouldn't tell me what HAL would do.
If we developed the perfect artificial brain, what would be the difference between that and the real thing?
Well, it wouldn't die. Some people believe that you should die, and some people think dying is a nuisance. I'm one of the latter. So I think we should get rid of death.