Discover Interview: Marvin Minsky

The legendary pioneer of artificial intelligence ponders the brain, bashes neuroscience, and lays out a plan for superhuman robot servants.

By Susan Kruglinski
Jan 13, 2007
Image courtesy of Donna Coveney/MIT
Marvin Minsky straddles the worlds of science and sci-fi. The MIT professor and artificial intelligence guru has influenced everyone from Isaac Asimov to the digital chess champ Deep Blue to computer movie star HAL of 2001: A Space Odyssey.

He may be known around campus as "Old Man Minsky," but the scientist is just as active in AI research today as he was when he helped pioneer the field as a young man in the 1950s.

Although educated in mathematics, Minsky has always thought in terms of mind and machine. For his dissertation at Princeton University in the 1950s, he analyzed a "learning machine," meant to simulate the brain's neural networks, that he had constructed as an undergrad. In his early career he was also an influential inventor, creating the first confocal scanning microscope, a version of which is now standard in labs worldwide. In 1959 Minsky cofounded the Artificial Intelligence Laboratory at MIT, where he designed and built robotic hands that could "feel" and "see" and manipulate objects, a watershed in the field.

Throughout, Minsky has written philosophically on the subject of AI, culminating in the 1985 book Society of Mind, which summarizes his theory of how the mind works. He postulates that the complex phenomenon of thinking can be broken down into simple, specialized processes that work together like individuals in a society.

His latest book, The Emotion Machine, continues ideas begun in Society of Mind, reflecting twenty-some additional years of thought. It is a blueprint for a thinking machine that Minsky would like to build, an artificial intelligence that can reflect on itself, taking us a step forward into a future that might seem straight out of an Asimov story.

What are your latest ideas about the mind, as set out in The Emotion Machine?

The theme of the book is that humans are uniquely resourceful because they have several ways to do everything. If you think about something, you might think about it in terms of language, or in logical terms, or in terms of diagrams, pictures, or structures. If one method doesn't work, you can quickly switch to another. That's why we're so good at dealing with so many situations. Animals can't imagine what the room would look like if you change that couch from black to red. But a person has ways of constructing mental images or sentences or bits of logic.

Neuroscientists' quest to understand consciousness is a hot topic right now, yet you often frame things in terms of psychology, which seems to be taken less seriously. Are you behind the curve?

I don't see neuroscience as serious. What they have are nutty little theories, and they do elaborate experiments to confirm them and don't know what to do if they don't work. This book presents a very elaborate theory of consciousness. Consciousness is a word that confuses possibly 16 different processes. Most neurologists think everything is either conscious or not. But even Freud had several grades of consciousness. When you talk to neuroscientists, they seem so unsophisticated; they major in biology and know about potassium and calcium channels, but they don't have sophisticated psychological ideas. Neuroscientists should be asking: What phenomenon should I try to explain? Can I make a theory of it? Then, can I design an experiment to see if one of those theories is better than the others? If you don't have two theories, then you can't do an experiment. And they usually don't even have one.

So as you see it, artificial intelligence is the lens through which to look at the mind and unlock the secrets of how it works?

Yes, through the lens of building a simulation. If a theory is very simple, you can use mathematics to predict what it'll do. If it's very complicated, you have to do a simulation. It seems to me that for anything as complicated as the mind or brain, the only way to test a theory is to simulate it and see what it does. One problem is that often researchers won't tell us what a simulation didn't do. Right now the most popular approach in artificial intelligence is making probabilistic models. The researchers say, "Oh, we got our machine to recognize handwritten characters with a reliability of 79 percent." They don't tell us what didn't work.
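
The fix Minsky is asking for here is easy to state in code: report the failure cases alongside the headline number. Below is a minimal Python sketch of that kind of evaluation; the toy classifier and data are invented placeholders, not any system he mentions.

```
# A small sketch of the reporting Minsky asks for: alongside the accuracy
# figure, keep the cases the model got wrong so they can be examined.
# The classifier and examples are placeholders.

def evaluate(classifier, examples):
    """Return accuracy plus the examples the classifier failed on."""
    failures = [(x, truth, classifier(x))
                for x, truth in examples
                if classifier(x) != truth]
    accuracy = 1 - len(failures) / len(examples)
    return accuracy, failures

examples = [("7", 7), ("4", 4), ("9", 9)]
accuracy, failures = evaluate(lambda s: 4 if s == "9" else int(s), examples)
print(f"accuracy: {accuracy:.0%}")    # the number papers report
print("what didn't work:", failures)  # the part Minsky says they omit
```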

Neuroscientists like Oliver Sacks and V. S. Ramachandran study people who have brain injuries; to them, what is not happening in the brain is more informative than what is happening. Is that similar to what you're saying?

Yes. In fact, those are just about the two best thinkers in that field. Antonio Damasio is pretty good, but Ramachandran and Sacks are more sophisticated than most. They consider alternative theories instead of trying to prove one particular theory.

Is there other work in neuroscience or AI that interests you?

Very little. There are 20,000 or 30,000 people working on neural networks, and there are 40,000 or 50,000 people working on statistical predictors. There are several thousand people trying to get logical systems to do commonsense thinking, but as far as I know, almost none of them can do much reasoning by analogy. This is important because the way people solve problems is first by having an enormous amount of commonsense knowledge, like maybe 50 million little anecdotes or entries, and then having some unknown system for finding among those 50 million old stories the 5 or 10 that seem most relevant to the situation. This is reasoning by analogy. I know of only three or four people looking at this, but they're not well-known because they don't make grandiose claims of looking for a theory of everything.
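
Minsky leaves the retrieval mechanism unspecified ("some unknown system"), but the shape of the problem is easy to sketch: given a large library of tagged anecdotes, rank them by overlap with the current situation and keep the top few. The Python sketch below is one illustrative guess at that step; the feature tags and the Jaccard-overlap metric are assumptions, not his mechanism.

```
# A minimal sketch of analogy retrieval: find, among many stored
# anecdotes, the few that best match the current situation. Tagging
# anecdotes with feature sets and ranking by Jaccard overlap are
# assumptions made for illustration; Minsky calls the real mechanism
# unknown.

def relevance(situation, anecdote):
    """Jaccard overlap between the situation's tags and an anecdote's."""
    if not situation | anecdote:
        return 0.0
    return len(situation & anecdote) / len(situation | anecdote)

def recall_analogies(situation, library, k=10):
    """Return the names of the k stored anecdotes most like the situation."""
    ranked = sorted(library, reverse=True,
                    key=lambda name: relevance(situation, library[name]))
    return ranked[:k]

library = {
    "umbrella in rain": {"rain", "wet", "avoid", "outdoors"},
    "spilled coffee":   {"liquid", "wet", "accident", "indoors"},
    "sunburn at beach": {"sun", "outdoors", "avoid", "skin"},
}
print(recall_analogies({"rain", "outdoors", "wet"}, library, k=2))
# ['umbrella in rain', 'spilled coffee']
```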

Can artificial intelligence have human-style common sense?

There are several large-scale projects exploring that issue. There's the one that Douglas Lenat in Texas has been pursuing since 1984. He has a couple of million items of commonsense knowledge, such as "People live in houses" or "When it rains, you get wet," which are very carefully classified. But what we don't have are the right kinds of answers to the questions a 3-year-old child is full of. So we're trying to collect those now. If you ask a childlike question like, "Why, when it rains, would somebody want to stay dry?" it's confusing to a computer, because people don't want to get wet when it rains but they do when they take a shower.
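
The rain-versus-shower confusion comes down to context: whether "getting wet" is desirable depends on the situation in which it happens. The toy rule format below is a made-up illustration of that point, not Cyc's actual representation.

```
# A toy illustration of why "when it rains, you get wet" is not enough:
# the desirability of getting wet depends on context. This three-column
# rule format is invented for the example, not Cyc's formalism.

rules = [
    # (situation, consequence, desirable in that situation?)
    ("rain",   "get wet", False),  # people avoid getting wet in the rain
    ("shower", "get wet", True),   # but getting wet is the point of a shower
]

def wants_to_get_wet(situation):
    for ctx, consequence, desirable in rules:
        if ctx == situation and consequence == "get wet":
            return desirable
    return None  # no commonsense knowledge about this situation

print(wants_to_get_wet("rain"))    # False
print(wants_to_get_wet("shower"))  # True
```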

What is the value in creating an artificial intelligence that thinks like a 3-year-old?

The history of AI is sort of funny, because the first real accomplishments were beautiful things, like a machine that could do proofs in logic or do well in a calculus course. But then we started to try to make machines that could answer questions about the simple kinds of stories found in a first-grade reader. There's no machine today that can do that. So AI researchers looked primarily at problems that people called hard, like playing chess, but they didn't get very far on problems people found easy. It's a sort of backwards evolution. I expect with our commonsense reasoning systems we'll start to make progress pretty soon, if we can get funding for it. One problem is that people are very skeptical about this kind of work.

Usually AI refers to an exploration of the utilitarian uses of the brain, like understanding speech or solving problems. Yet so much of what humans do isn't clearly utilitarian, like watching TV, fantasizing, or joking. Why is all that behavior necessary?

Watching sports is my favorite. Pleasure, like pain, is thought of as being a sort of simple, absolute, innate, basic thing, but as far as I can see, pleasure is a piece of machinery for turning off various parts of the brain. It's like sleep. I suspect that pleasure is mainly used to turn off parts of the brain so you can keep fresh the memories of things you're trying to learn. It protects the short-term memory buffers. That's one theory of pleasure. However, it has a bug, which is, if you gain control of it, you'll keep doing it: If you can control your pleasure center, then you can turn off your brain. That's a very serious bug, and it causes addiction. That's what I think the football fans are doing—and the pop music fans and the television watchers, and so forth. They're suppressing their regular goals and doing something else. It can be a very serious bug, as we're starting to see in the young people who play computer games until they get fat.

Many people feel that the field of AI went bust in the 1980s after failing to deliver on its early promise. Do you agree?

Well, no. What happened is that it ran out of high-level thinkers. Nowadays everyone in this field is pushing some kind of logical deduction system, genetic algorithm system, statistical inference system, or a neural network—none of which are making much progress because they're fairly simple. When you build one, it'll do some things and not others. We need to recognize that a neural network can't do logical reasoning because, for example, if it calculates probabilities, it can't understand what those numbers really mean. And we haven't been able to get research support to build something entirely different, because government agencies want you to say exactly what you'll do each month of your contract. It's not like the old days when the National Science Foundation could fund people rather than proposals.

Why has the landscape changed for funding scientific research?

Funders want practical applications. There is no respect for basic science. In the 1960s General Electric had a great research laboratory; Bell Telephone's lab was legendary. I worked there one summer, and they said they wouldn't work on anything that would take less than 40 years to execute. CBS Laboratories, Stanford Research Lab—there were many great laboratories in the country, and there are none now.

The Emotion Machine reads like a book about understanding the human mind, but isn't your real intent to fabricate it?

The book is actually a plan for how to build a machine. I'd like to be able to hire a team of programmers to create the Emotion Machine architecture that's described in the book—a machine that can switch between all the different kinds of thinking I discuss. Nobody's ever built a system that either has or acquires knowledge about thinking itself, so that it can get better at problem solving over time. If I could get five good programmers, I think I could build it in three to five years.
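
The control loop he describes, trying one way of thinking and switching when it fails, can be sketched in a few lines. The strategies and the failure signal below are invented placeholders, not the architecture from the book.

```
# A skeletal sketch of "several ways to do everything": run each way of
# thinking in turn and switch whenever one gets stuck. The strategies
# here are stubs invented for illustration.

def solve_by_logic(problem):
    return None  # pretend deduction got stuck on this problem

def solve_by_analogy(problem):
    return f"solved {problem!r} by recalling a similar past case"

def resourceful_solve(problem, strategies):
    """Try strategies in order; a None result means switch to the next."""
    for strategy in strategies:
        result = strategy(problem)
        if result is not None:
            return result
    return None  # every way of thinking failed

print(resourceful_solve("pack an oddly shaped suitcase",
                        [solve_by_logic, solve_by_analogy]))
```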

It sounds like you could make a very smart computer, but is your ultimate goal to actually reproduce a human being?

Or better. We humans are not the end of evolution, so if we can make a machine that's as smart as a person, we can probably also make one that's much smarter. There's no point in making just another person. You want to make one that can do things we can't.

To what purpose?

Well, the birthrate is going down, but the population is still going up. Then we're going to have old people, and we'll need smart people to do their housework, and take care of things and grow the vegetables. So we need smart robots. There are also problems we can't solve. What if the sun dies out or we destroy the planet? Why not make better physicists, engineers, or mathematicians? We may need to be the architects of our own future. If we don't, our culture could disappear.

Has science fiction influenced your work?

It's about the only thing I read. General fiction is pretty much about ways that people get into problems and screw their lives up. Science fiction is about everything else.

What did you do as a consultant on 2001: A Space Odyssey?

I didn't consult about the plot but about what the [HAL 9000] computer would look like. They had a very fancy computer with all sorts of colored labels and so forth. Stanley Kubrick said, "What do you think of that?" I said, "It's very beautiful." And he said, "What do you really think?" I said, "Oh, I think this computer would actually just be lots of little black boxes, because the computer would know what's in them by sending signals through its pins." So he scrapped the whole set and made the simpler one, which is more beautiful. He wanted everything technological to be plausible. But he wouldn't tell me what HAL would do.

If we developed the perfect artificial brain, what would be the difference between that and the real thing?

Well, it wouldn't die. Some people believe that you should die, and some people think dying is a nuisance. I'm one of the latter. So I think we should get rid of death.
