Evolving A Conscious Machine

Some computer scientists think that by letting chips build themselves, the chips will turn out to be stunningly efficient, complex, effective, and weird—kind of like our brains.

By Gary Taubes
Jun 1, 1998

There are some subjects of discourse, politics probably foremost among them, that are best discussed in a pub, where any seemingly definitive judgment is unlikely to be taken too seriously. One such subject is computer consciousness: whether a suitably intelligent computer can achieve a sense of I, as can, allegedly, a suitably intelligent human. This is the topic on a rainy Saturday afternoon in November, at a pub called The Swan in Falmer, a village outside Brighton, England, about an hour’s train ride south of London. The pub is a few hundred yards (or rather, meters) from the campus of the University of Sussex, home of the Center for Computational Neuroscience and Robotics.

My companions are two Sussex computer scientists, Inman Harvey (Inman is my first name and Harvey is my last, which confuses Americans quite a lot) and Adrian Thompson. Harvey is bearded, heavyset, and pushing 50. A former student of philosophy, he ran an import-export business out of Afghanistan for two decades before returning to academia to become a computer scientist, or, as he calls himself, an evolutionary roboticist. Thompson is in his late twenties and has been programming since he was an adolescent, although he was not the archetypal computer nerd, he says, because he wasn’t interested in computer games, only computers. He arrived at the pub in a flannel shirt and mud-splattered jeans, since his car is in disrepair and he had to walk over the South Downs. The three of us are sitting in a corner of the pub, eating lunch and drinking the local ale, while Harvey, who has just written a chapter for a textbook on evolving robot consciousness, holds forth on the computer sense of self.

The gist of Harvey’s argument seems to be that we cannot assess such concepts as consciousness free of our prejudices, our belief, based on past experience, that humans are conscious but machines and other inanimate objects are not. To support his point, Harvey suggests that if you were walking across a road and saw a car-size block of stone tumbling down a hillside in your direction, you would give it considerably more leeway, with greater urgency, than you would a car-size car driving toward you at the same speed. If you see a car coming toward you, he explains, you assume it has a human being in it who wants to avoid you. You may be a bit careful about stepping right in front of it, but you don’t worry about it veering toward you.

So it is with humans and machines. We think humans are conscious because we believe in our own sense of self. What happens in our brains is mysterious, even incomprehensible, and the mystery seems to leave room for the glamour of consciousness. However, machines or robots—Harvey doesn’t like the word computer, because its technical meaning is a machine that uses symbols to compute, which is nothing like the way human or animal brains work—are typically wired by design, with circuit diagrams that can be reduced to their individual logic elements and software programs that can be analyzed and understood, so it’s hard to imagine how a complex thinking machine could have the kind of conscious I we do. But what if you could evolve a robot, made of the usual silicon, wire, and transistors, that appeared to act consciously, with thought processes as unfathomable as our own? With evolved robots, says Harvey, you can’t analyze how they work, and so you’re forced much more easily toward taking the stance that will, at the end of the day, attribute consciousness to a robot. That’s the way it goes.

Thompson isn’t buying this. Indeed, he doesn’t even want to discuss it. He eats his fish and chips dutifully and seems to be wishing the conversation would meander elsewhere. His attitude is mildly ironic because, for the past few years, Thompson has been playing with computers in which the hardware evolves to solve problems, rather the way our own neurons evolved to solve problems and to contemplate ourselves. He is one of the founding members of a field of research known as evolvable hardware or evolutionary electronics. Thompson uses a type of silicon processor that can change its wiring in a few billionths of a second, taking on a new configuration. He gives the processor a task to solve: for instance, distinguishing between a human voice saying stop or go. Each configuration of the wiring is graded on how well it did, and then those configurations that scored high are mated together to form new circuit configurations. Since all this manipulation is carried out electronically, the wiring of the processor can evolve for thousands of generations, eventually becoming a circuit that Thompson describes as flabbergastingly efficient at solving the task.

How this circuit does what it does, however, borders on the incomprehensible. It just works. Listening to Thompson describe it is like listening to someone describe the emergence of consciousness in a primitive brain. But Thompson is an experimentalist, not a philosopher, and this is actual experimental data. If sometime over the next few hundred years computers should finally evolve consciousness, the people who achieve this work, or maybe the machines themselves, may look back at Thompson’s work as the glimmer of the beginning. Thompson won’t.

I find myself incapable of taking part in a discussion of consciousness, he says adamantly. I don’t think the work I’m doing says anything about it. I’m just thinking about how I can use evolution to explore new ways of computing. When you start trying to extrapolate to matters relating to humans, you end up in philosophically dodgy areas.

The University of Sussex, where Thompson does his work, is the kind of place that encourages thinking in such dodgy areas. It was founded in the 1960s, says Thompson, as an interdisciplinary institution. Rather than break up the curriculum into the usual departments of biology, philosophy, history, and so on, the founding fathers lumped them together into schools. The most interdisciplinary of these—the School of Cognitive and Computing Sciences, affectionately called COGS—includes computer science, philosophy, psychology, and linguistics. The Center for Computational Neuroscience and Robotics is even more interdisciplinary because it’s a joint venture of COGS and the biology school.

The CCNR shares a two-story building with a few start-up biotech companies and Internet consultants. The lab in which Thompson works is a loftlike space, scattered with contraptions that might enchant a 12-year-old for the whole of his adolescence. There are various primitive robots: Maggie, for instance, a purple lobsterlike creation that would crawl around the room and avoid obstacles if all its legs worked. Harvey refers to it dismissively as a horse designed by a committee. A few of the other robots look as if they might grow up in a few thousand years to show all the lifelike verisimilitude of R2-D2. The interdisciplinary nature of the place is emphasized by a large colony of wood ants living in one corner of the lab. A few of the CCNR researchers are studying the mechanisms of ant navigation. One of the resident researchers is studying the feeding mechanisms of pond snails, I’m told, but no snails are to be seen.

Thompson’s latest work can be found sitting on his workbench against one wall. Its heart is a green circuit board that looks suspiciously like the guts of a build-it-yourself transistor radio for beginners. The name, at least, is impressive: it is a Xilinx XC6216. This chip, known in the business as a Field Programmable Gate Array (FPGA), is the piece of silicon that makes it all possible.

Unlike the typical commercial processor—an Intel Pentium II, for instance—the Xilinx chip can be reconfigured by its users. The logic elements that constitute its most elementary workings can be changed at will by reprogramming the bits in the chip’s memory, known as configuration bits. OR gates, for example, can be changed to AND gates or NOT gates, input wires can be reprogrammed to be output wires, and so on.
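
To make that concrete, here is a toy model (in Python, not in any real Xilinx configuration format) of how a couple of configuration bits can rewire a single logic cell. The gate table and cell below are illustrative assumptions, far simpler than the real XC6216, but they show the principle: change the bits and you change the circuit.

```python
# Toy model of one reconfigurable logic cell: two configuration bits
# select which Boolean function the cell computes. A real FPGA cell,
# with its routing and multiplexers, is far richer than this.

GATE_TABLE = {
    0b00: lambda a, b: a and b,   # configured as an AND gate
    0b01: lambda a, b: a or b,    # configured as an OR gate
    0b10: lambda a, b: not a,     # configured as a NOT gate (ignores b)
    0b11: lambda a, b: a != b,    # configured as an XOR gate
}

def logic_cell(config_bits, a, b):
    """Compute the cell's output for inputs a and b under one configuration."""
    return GATE_TABLE[config_bits](a, b)

print(logic_cell(0b00, True, False))  # AND -> False
print(logic_cell(0b01, True, False))  # reprogrammed to OR -> True
```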

This gives an FPGA extraordinary flexibility, although at a cost in efficiency. An FPGA will never be as fast as a chip in which the circuit diagram is cast irrevocably in silicon, but its flexibility serves a purpose, which is why Xilinx, located in San Jose, California, invented the chips more than a decade ago. You’ll find them in the prototypes of telecommunications equipment and in the first few hundred products a company might ship out of its factory. Since these initial products are always likely to have a few bugs, a company can save a lot of money by using a chip that can be reconfigured once a bug is found; otherwise, the company would have to spend a few hundred thousand dollars making a whole new chip for every model of router or server every time an error cropped up. Once the bugs are ironed out and the product is selling well, the company can create a special-purpose chip to do the same job. They can also quickly modify the FPGA from one product to the next, which means getting new products to market quicker. Soon FPGAs will be used in new models of cell phones and digital pagers too.

The idea that hardware could be evolved in a robot or a computer the way nature evolved humans or other living creatures has been around since the 1960s. That FPGAs would do the trick was the intuition of a computer scientist named Hugo de Garis, in 1992. De Garis, the field’s controversial visionary, is Australian by birth and British by citizenship. He now works as a visiting scientist with the Advanced Telecommunications Research Institute in Kyoto, Japan, while keeping an affiliation with George Mason University in Virginia. Before embarking on his present career (he describes himself as a builder of brains, which is one of the reasons he is controversial), de Garis had dabbled in a programming technique called genetic algorithms. Just as evolution works blindly on living creatures in the wild, this technique attempts to do the same to problems that can be described by a computer program.

The blueprint for creatures—for man, bacteria, and everything else alive—is encoded in lengthy strings of DNA. Higher animals, like us, keep this DNA in compact units called chromosomes; we have 46 of them. When sexual creatures mate, the genetic information stored in the chromosomes of the two parents is intermingled. The offspring inherit a combination of genes from both, and nature throws in a few mutations that provide an opportunity for more advantageous characteristics to come along in the next generation. Species evolve because the offspring best suited to thrive in their environment are those most likely to breed successfully and pass on their genes to the next generation. After thousands or millions of years, the result will be creatures uniquely adapted for living in particular environments. In animal husbandry, by contrast, the breeders do the selecting, on the basis of their personal preferences. They mate those chosen, then select and mate the offspring, creating faster racehorses or beefier cattle or collies with snouts so long that their skulls no longer have enough room for brains.

Because evolution has found solutions to extremely difficult problems in the natural world (bats that use their hearing to navigate in the dark, for example), computer scientists have tried to enlist evolution to solve difficult computational problems, using genetic algorithms. These algorithms start by encoding a potential solution to a given problem as a string of 0’s and 1’s, the computer equivalent of describing the potential solution as a series of yes or no answers to tens or hundreds or thousands of simple questions. This bit string becomes the artificial chromosome of the solution to be evolved. The genetic algorithm generates numerous slight variations of the bit string, and then these individuals are tested to see which perform best under some fitness scale. The game is more like animal husbandry than evolution because the computer scientist running the genetic algorithm knows exactly what he or she wants to accomplish eventually. (For instance, if a genetic algorithm is used to solve a scheduling problem, the measure of fitness might be how quickly tasks are completed in each individual’s final version of the schedule.) The bit strings that score highest on the designated fitness test are mated in a way that is loosely inspired by how chromosomes combine in sexual reproduction, with parts of each bit string combining to produce the bit string of the offspring. Mutations are added for good luck in the next generation. These new offspring are tested and the best are mated, and on it goes. The process might be repeated for thousands of generations, until the problem is solved. Genetic algorithms have been used successfully in designing communication networks and better turbines, and even in solving some mathematical problems that seemed otherwise intractable.
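
In code, the cycle looks something like the following minimal sketch. The fitness function here is a deliberately trivial stand-in (count the 1s in the string, the textbook "OneMax" problem); a real application would score schedules, turbine designs, or network layouts instead, and the population size, mutation rate, and selection scheme are all illustrative choices.

```python
import random

BITS, POP_SIZE, GENERATIONS, MUTATION_RATE = 64, 50, 200, 0.01

def fitness(bits):
    return sum(bits)  # toy stand-in: a real GA scores the decoded solution

def crossover(mom, dad):
    cut = random.randrange(1, BITS)       # single-point crossover
    return mom[:cut] + dad[cut:]

def mutate(bits):
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in bits]

# Start from random bit strings, then breed from the fittest half.
population = [[random.randint(0, 1) for _ in range(BITS)]
              for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POP_SIZE // 2]      # the fittest are most likely to breed
    population = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(POP_SIZE)]

print("best fitness:", max(map(fitness, population)))
```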

Hugo de Garis was working with genetic algorithms in the summer of 1992 when he visited George Mason University and one of the resident electrical engineers told him about FPGAs. I’d never heard of them, says de Garis, so he started explaining that they were programmable hardware. You can send in software instruction and it tells this programmable hardware how to wire itself up. I had this flash of an idea, that if you could send in software instruction to a piece of hardware to wire it up, maybe you could look on that software instruction as a genetic algorithm chromosome. You could mutate it randomly and maybe evolve the hardware. And the idea started from there. It may have been the field’s eureka moment, although, says Thompson, who came up with the idea independently a few years later, applying evolution to FPGAs was no big idea: making it work is the tricky bit.

The gist of evolvable hardware is simple enough. The configuration bits that program the wiring and circuitry of the FPGA become the chromosomes of the individuals to undergo the trial of survival of the fittest. The chip is configured, then set to do some task, and its performance is measured. The bit streams representing the best-performing configurations are mated together, mutations are added, and new individuals are tested, measured, and either discarded or mated. Eventually a configuration should emerge that’s very good at accomplishing a specific task, although it might take thousands of generations to appear.
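
As a sketch, that cycle might look like the skeleton below. The two chip-facing calls are hypothetical placeholders for whatever interface a real FPGA board exposes; they are not part of any actual Xilinx API.

```python
# Skeleton of the evolvable-hardware cycle described above. The
# chip-facing functions are hypothetical stand-ins: download_configuration
# would write the chromosome into the FPGA's configuration memory, and
# measure_performance would run the task and grade the physical result.

def evolve_hardware(population, generations,
                    download_configuration, measure_performance, breed):
    for _ in range(generations):
        scores = []
        for chromosome in population:
            download_configuration(chromosome)    # configure the real chip
            scores.append(measure_performance())  # test it on the task
        population = breed(population, scores)    # select, mate, mutate
    return population
```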

Not everyone uses actual FPGAs to do this work. The entire process can be simulated on a computer—the FPGA cells and circuits and the breeding and testing and everything else—and when the ideal configuration is reached, it can be programmed onto the real FPGA. The evolvable hardware crowd calls such simulations extrinsic evolution, because the evolution is done off chip. In intrinsic evolution, the FPGA is reconfigured for each of the thousands of chromosomelike bit streams tested. This requires buying the chip and moving from a software realm to a hardware one, but it has the advantage of taking into account the reality of the chip itself, and whatever subtle phenomena might be happening therein that aren’t included in the software simulation. Unique among hardware evolutionists, Thompson is using those phenomena to his advantage.

Thompson says that when he embarked on this line of work, he decided he didn’t want to constrain himself by making assumptions about how evolution should work on a computer based on how it works in nature. The kind of resources you have in integrated circuits are very different from what you get in an animal’s head, he says. He decided to use FPGAs, which can be thought of as blank evolutionary slates, and let evolution fiddle around with the fine details.

Determined not to build any preconceptions into his experiment, Thompson resolved not even to tell his genetic algorithm that it was dealing with a digital device, in which the various elements of the circuitry behaved as though the only acceptable states were the equivalent of on and off, or true and false, or 1 and 0. In a digital computer (also known as a binary computer), an electronic signal on a wire represents a 1, while the absence of a signal represents a 0. Thompson, however, was willing to let evolution work on the circuitry of the FPGA as though it were an analog device, in which the signals that pass down a wire could take on any value between 0 and 1; they could be varying degrees of maybe, if that’s the way evolution wanted it.

I was looking for the best way of letting evolution loose on the electronics, he says. So it’s not told anything about what’s good and what’s bad or how it achieves the behavior. Evolution just plays around making changes, and if the changes produce an improvement, then fine. It doesn’t matter whether it’s changing the circuit design or using just about any weird, subtle bit of physics that might be going on. The only thing that matters to evolution is the overall behavior. This means you can explore all kinds of ways of building things that are completely beyond the scope of conventional methods. I allow evolution to write all the design rules.

With this laissez-faire philosophy, Thompson has evolved a circuit that distinguishes between two tones, two electric signals that, if fed into a stereo speaker, would produce two notes. One has a frequency of 1 kilohertz, the other 10 kilohertz. If you were to hear them, says Thompson, they would sound medium high pitched and very high pitched. To make it difficult, he gave evolution only a tiny part of the FPGA to play with—of the 4,096 cells (or logic elements) available on the chip, evolution would be allowed to use only 100, with no clock and no timing components. Thompson chose this particular problem because differentiating between two tones is a first step toward speech recognition, and because it is so difficult. The logic elements in the FPGA work very quickly, on the scale of a billionth of a second, while the tones come at a frequency a million times slower, making the task something like trying to evolve a human who could tell the difference between a year and a decade while using only the second hand of his watch and without counting to himself as he did it.

Thompson’s genetic algorithm created 50 random bit strings of artificial DNA, each 1,800 bits long, the number of configuration bits needed to describe fully the wiring in those 100 cells. These bit strings constituted the initial 50 individuals to be run through the evolution process. For each of the individuals in turn, Thompson explains, I take the bit string and download it onto the chip. Now that individual is physically in silicon, and I pump in five bursts of 1 kilohertz and five bursts of 10 kilohertz in random order. He then tests these individuals to see how well they do at producing an output that’s different for the different inputs, considering that unless he is extraordinarily lucky, all the bit strings in the first few generations will be as bad as possible at doing any task. For this reason, it’s not sufficient to say one individual works and another doesn’t. Thompson had to develop a fitness test that allows him to say one individual may perform slightly less abysmally than the next, which he did by looking for cases in which the average of the outputs during a 1 kilohertz burst was as different as possible from the average during a 10 kilohertz burst. After testing and scoring all 50, the genetic algorithm randomly chooses parents for the next generation, with a built-in preference for those that scored best on the fitness test. The single best individual on the test is also copied over unchanged to the next generation, a useful addition that computer scientists call elitism. These chosen parents are mated, their bit streams commingled, with a pinch of mutations thrown in (You don’t want to screw things up too much, he says) to make 50 new offspring. Then the process begins again. After 5,000 generations and two weeks of computer time, the computer was distinguishing between the two tones.
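
A rough sketch of that scoring-and-breeding step, under the assumptions just described: fitness is the gap between average outputs for the two tones, the best individual survives unchanged, and parents are chosen with a preference for high scorers. On Thompson’s rig the output samples came from the physical chip; here they are simply lists passed in by the caller.

```python
import random
import statistics

def tone_fitness(outputs_1khz, outputs_10khz):
    """Score a configuration by how far apart its average outputs are
    during the 1-kilohertz and 10-kilohertz bursts (bigger gap = fitter)."""
    return abs(statistics.mean(outputs_1khz) - statistics.mean(outputs_10khz))

def next_generation(population, scores, mutation_rate=0.001):
    # Elitism: the single best individual is copied over unchanged.
    best = max(range(len(population)), key=lambda i: scores[i])
    children = [list(population[best])]

    def pick_parent():
        # Fitness-proportionate choice; uniform if every score is still zero.
        weights = scores if any(scores) else None
        return random.choices(population, weights=weights, k=1)[0]

    while len(children) < len(population):
        mom, dad = pick_parent(), pick_parent()
        cut = random.randrange(1, len(mom))   # commingle the bit streams
        child = mom[:cut] + dad[cut:]
        child = [b ^ 1 if random.random() < mutation_rate else b
                 for b in child]              # a pinch of mutation
        children.append(child)
    return children
```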

Strangely, Thompson has been unable to pin down how the chip was accomplishing the task. When he checked to see how many of the 100 cells evolution had recruited for the task, he found no more than 32 in use. The voltage on the other 68 could be held constant without affecting the chip’s performance. A chip designed by a human, says Thompson, would have required 10 to 100 times as many logic elements—or at least access to a clock—to perform the same task. This is why Thompson describes the chip’s configuration as flabbergastingly efficient.

It wasn’t just efficient; the chip’s performance was downright weird. The current through the chip was feeding back and forth through the gates, swirling around, says Thompson, and then moving on. Nothing at all like the ordered path that current might take in a human-designed chip. And of the 32 cells being used, some seemed to be out of the loop. Although they weren’t directly tied to the main circuit, they were affecting the performance of the chip. This is what Thompson calls the crazy thing about it.

Thompson gradually narrowed the possible explanations down to a handful of phenomena. The most likely is known as electromagnetic coupling, which means the cells on the chip are so close to each other that they could, in effect, broadcast radio signals between themselves without sending current down the interconnecting wires. Chip designers, aware of the potential for electromagnetic coupling between adjacent components on their chips, go out of their way to design their circuits so that it won’t affect the performance. In Thompson’s case, evolution seems to have discovered the phenomenon and put it to work.

It was also possible that the cells were communicating through the power-supply wiring. Each cell was hooked independently to the power supply; a rapidly changing voltage in one cell would subtly affect the power supply, which might feed back to another cell. And the cells may have been communicating through the silicon substrate on which the circuit is laid down. The circuit is a very thin layer on top of a thicker piece of silicon, Thompson explains, where the transistors are diffused into just the top surface part. It’s just possible that there’s an interaction through the substrate, if they’re doing something very strange. But the point is, they are doing something really strange, and evolution is using all of it, all these weird effects as part of its system.

In some of Thompson’s creations, evolution even took advantage of the personal computer that’s hooked up to the system to run the genetic algorithm. The circuit somehow picked up on what the computer was doing when it was running the programs. When Thompson changed the program slightly, during a public demonstration, the circuit failed to work.

All the creations were equally idiosyncratic. Change the temperature a few degrees and they wouldn’t work. Download a circuit that had evolved on one chip onto a different, albeit apparently identical, chip and it wouldn’t work. Evolution had created an extraordinarily efficient, utterly enigmatic circuit for solving a problem, but one that would survive only in the environment in which it was born. Thompson describes the problem, or the evolutionary phenomenon, as one of overexploiting the physics of the chips. Because no two environments would ever be exactly alike, no two solutions would be, either.

They would be at least as different as any two species of animal that evolved in different neighborhoods, which brings us back to Harvey’s point about evolving thinking machines. If you were evolving these artificial intellects using billions or trillions of components instead of 100, the way nature has evolved human brains with trillions of neurons, you would end up with machines as different from each other as two humans, and whose thinking processes were equally inscrutable.

This brings us to the crucial question: Could you actually end up with a thinking machine as conscious as you or I?

It’s not so difficult to imagine how such a thing might happen. It would begin with the evolution of a series of circuits specialized to sift through and process more and more multimedia information. As the speed increased, along with the amount of information processed, evolution would happen upon and make use of sophisticated processing strategies, like anticipation, or an integrated control system that would ask, What should be done next? and coordinate all the computer’s thought processes to come up with an answer. In doing so, it would probably have to differentiate between all the myriad FPGAs and circuits that constituted its self and the external world to which it had to react. Now, with a sense of self forming, it might even evolve a higher level of autostimulation; it might begin to use the language in which it communicated with its programmers to communicate with itself. The result might be not just a sense of self but the inner voice to go with it.

Of course, as Harvey points out, it makes little sense to ask whether such a machine is really conscious, because the only thing that matters—or at least the only thing that can be observed or tested—is whether it acts conscious. Still, it could be argued that this path from simple circuits to computational cognition and consciousness seems almost as inevitable as the path from single-celled organisms to you and me. But that’s the catch. Only the rare student of evolution believes that human consciousness was inevitable and not some perverse accident of nature. Harvey and Thompson consider the computer version of consciousness along the same lines. Evolution comes up with a lot of special-purpose tricks, says Harvey, but the special-purpose tricks of these machines may have very little in common with the special-purpose tricks that humans picked up during our own evolutionary history. Thus he and Thompson set the betting line at a definitive maybe: maybe we’ll eventually evolve a machine that appears as conscious as a human and can’t be fooled into betraying its silicon soul. But even if we do, it won’t be soon. As Harvey says, Evolution isn’t any swift magic. The more components required and the more connections among them (making them more like real neurons), the longer it would take to run the genetic algorithms. So perhaps evolving a conscious machine would be possible, says Thompson, when he finally agrees to discuss it, but it would probably take a hell of a long time. More to the point, he adds, I don’t think we know how to do it, even if we didn’t have the problem of us dying during the experiment.

Because Thompson seems to be a pragmatist at heart, he has temporarily given up on evolving chips that could do anything fancier than distinguish between two sounds (in a follow-up experiment, he evolved a chip that could tell apart two spoken words, stop and go). Instead he’s trying to deal with what he calls the perverseness or the robustness problem. He has a grant from the British government to evolve circuits that can work in a wide range of environments and on more than a single FPGA. He’s doing this by evolving circuits on five FPGAs simultaneously. These five are all from Xilinx, but from different manufacturing plants, and a few are the equivalent of factory seconds. He is also drastically changing the temperature over those chips, so evolution will be encouraged to come up with a circuit that a chip designer would call extraordinarily robust. He’s trying for a circuit that works regardless of defects in the chips or damage incurred in shipping, a circuit that works in a tropical rain forest or in the depths of space. In other words, one that will run anywhere, maybe on anything.
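
One way to express that robustness pressure in code: score each candidate circuit by its worst performance over every board and temperature, so evolution cannot lean on the quirks of any one chip. (Worst-case scoring is an assumption on my part; averaging across environments would be another reasonable choice. The evaluation call is again a hypothetical stand-in for the physical measurement.)

```python
def robust_fitness(chromosome, boards, temperatures, evaluate_on):
    """Worst-case score across every board/temperature combination.
    evaluate_on(chromosome, board, temp) is a placeholder for configuring
    that physical chip at that temperature and measuring how well the
    resulting circuit performs the task."""
    return min(evaluate_on(chromosome, board, temp)
               for board in boards
               for temp in temperatures)
```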

This is the kind of design task, Thompson says, that troubles human designers. Evolution, though, should be good at creating chips that are extremely efficient, for instance, or ridiculously fault tolerant, chips that can run anywhere and maybe even adapt on the fly, if the application they’re dealing with—data communications, perhaps, or what computer scientists refer to as systems control—happens to change with time. It would be nice if the circuitry on the chip could evolve to handle the new version of the problem. I’m really exploring what evolution can do that humans can’t, he explains. There are properties that humans have great trouble designing into a system, like being very efficient, using small amounts of power, or being fault tolerant. Evolution can cope with them all. And then, of course, Thompson adds, a fellow has to make a living. If we could design something that is really defect tolerant, we could make a lot of money.
