When discussing consciousness, it is easy to consider only the observable and measurable attributes that we associate with being conscious. But this approach misses the essence of this ineffable concept. Our abilities to express a loving sentiment, to get a joke, or to be sexy are simply types of performances—impressive and intelligent perhaps, but skills that can be observed and measured. Although it is difficult to figure out how the brain accomplishes these sorts of tasks, and what is going on in the brain when it does (indeed, that represents perhaps the most difficult and important scientific quest of our era), that still misses the true idea of consciousness.
My own view is that consciousness is an emergent property of a complex physical system. In this view, a dog is also conscious but somewhat less so than a human. An ant has some level of consciousness, but much less than that of a dog. The ant colony, on the other hand, could be considered to have a higher level of consciousness than the individual ant; it is certainly more intelligent.
By this reckoning, a sufficiently complex machine can also be conscious. A computer that successfully emulates the complexity of a human brain would also have the same emergent consciousness as a human.
My objective prediction is that machines in the not-so-distant future will appear to be conscious. They will be convincing when they speak of their qualia—the fundamental experiences of consciousness, like the sense of the color red or the feeling of diving into water. They will exhibit the full range of familiar emotional cues; they will make us laugh and cry; and they will get mad at us if we say we don’t believe they are conscious. (They will also be very smart, so we won’t want that to happen.) We will accept that they are conscious persons. My subjective leap of faith is this: Once machines succeed in being convincing when they speak of their conscious experiences, they will indeed be conscious persons.
I have come to my position via the following thought experiment. Imagine that you meet an entity in the future—a robot or an avatar—that is completely convincing in her emotional reactions. She convinces you of her sincerity when she speaks of her fears and longings. In every way, she seems conscious. She seems, in fact, like a person. Would you accept her as a conscious person? If this entity were threatened with destruction and responded, as a human would, with terror, would you react in the same empathetic way that you would if you witnessed such a scene involving a human? For me, the answer is yes, and I believe the answer would be the same for most if not virtually all other people, regardless of what they might assert in a philosophical debate.
There is certainly disagreement among scientists and philosophers on when, or even whether, we will encounter such a nonbiological entity. My own consistent prediction is that this will first take place by 2029 and become routine in the 2030s. I base that prediction on my “law of accelerating returns,” which I describe in my book The Singularity Is Near. In short, an evolutionary process, in biology or technology, inherently accelerates as a result of its increasing levels of abstraction, and its products grow exponentially in price-performance and capability.
But putting the time frame aside, I firmly believe that we will eventually come to regard such entities as conscious. Consider how we already treat them when exposed to them as characters in stories and movies: R2D2 from the Star Wars movies, Data from the TV series Star Trek: The Next Generation, WALL-E, and Rachael the Replicant from Blade Runner (who, by the way, is not aware that she is not human). We empathize with these characters even though we know they are nonbiological. We regard them as conscious persons, just as we do biological human characters. We share their feelings and fear for them when they get into trouble. If that is how we treat fictional nonbiological characters today, then that is how we will treat real-life intelligences in the future that don’t happen to have a biological substrate.
If you accept the leap of faith that a nonbiological entity that is convincing in its reactions to qualia is conscious, then you accept my conclusion that consciousness is an emergent property of the overall pattern of an entity, not the substrate it runs on.
There is a conceptual gap between science, which stands for objective measurement and the conclusions we draw from it, and consciousness, which is synonymous with subjective experience. Some observers question whether consciousness itself has any basis in reality. But we would be well advised not to dismiss the concept as a polite debate between philosophers.
The idea of consciousness underlies our moral system, and our legal system in turn is built on those moral beliefs. If a person extinguishes someone’s consciousness, as in the act of murder, we consider that to be immoral and, with some exceptions, a high crime. Those exceptions are also relevant to consciousness: We might authorize police or military forces to kill certain conscious people to protect a greater number of other conscious people. If I destroy my own property, it is probably acceptable. If I destroy your property without your permission, it is probably not acceptable—not because I am causing suffering to your property but rather to you as the owner of the property. If my property includes a conscious being such as an animal, then I as the owner of that animal do not necessarily have free moral or legal rein to do with it as I wish; there are laws against animal cruelty.