Discover Interview: Thanks, Evolution, For Making the Great Building Material Called DNA

Electronic computers are great at what they do. But to accomplish really complicated physical tasks—like building an insect—Erik Winfree says you have to grow them from DNA.

By Stephen Cass
Aug 11, 2009 (updated Nov 12, 2019)
Photo: Spencer Lowell

The humblest amoeba performs feats of molecular manipulation that are the envy of any human engineer. Assembling complex biological structures quickly and with atomic precision, the amoeba is living proof of the power of nanotechnology to transmute inert matter into wondrous forms. Amoebas—and the cells in your body, for that matter—are expert at these skills because they have had billions of years to perfect their molecular tool kit. Erik Winfree, a professor of computer science and bioengineering at Caltech, is determined to harness all that evolution-honed machinery. He is seeking ways to exploit the methods of cellular biology to create a new type of molecular-scale engineering. Although still in its early days, this line of research could lead to revolutionary ways of treating illness or creating complicated machines by growing them rather than assembling them from parts.

Winfree, who in 2000 won a MacArthur “genius grant,” focuses his research particularly on DNA, the molecule that stores genetic information. Our cells use this information to build the proteins that form our bodies’ structure and do nearly all the work involved in being alive. But Winfree is going beyond biology. He wants to exploit DNA’s unique chemical properties to process information like a computer (using novel scientific disciplines known as molecular programming and DNA computing) and even appropriate the DNA molecule as a scaffold on which to build useful structures. Winfree spoke to DISCOVER senior editor Stephen Cass about his work, its implications for understanding the origin of life, and where this kind of research could lead in the far future.

You work in biomolecular computing. What exactly is that? It is different things to different people. For me, it means understanding that chemical systems can perform information processing and be designed to carry out various tasks. One way I look at it is by analogy: We can design computers to perform all sorts of information tasks, and they are particularly useful when you can hook up those computers to control electromechanical systems. For instance, you can get inputs from a video camera. You can send outputs to a motor. The goal for biomolecular computing is to develop similar controls for chemical and molecular-scale systems. How can you program a set of molecules to carry out instructions?

How did you get involved in this rather exotic field of research? I got interested in the connection between biology and computation before high school, in the early 1980s. I was just learning how to program an Apple II computer and at the same time was reading books like The Selfish Gene by Richard Dawkins. These things got merged in my mind. I was interested in programming biological systems—playing the games that evolution is playing. And I was interested in biological computation of all forms, particularly neural computation: How do brains work? At the same time I was developing a love for algorithms. I did mathematics and theoretical computer science as an undergraduate at the University of Chicago. I went to Caltech as a graduate student, interested in neural networks for robotics. Then I gave a presentation on [University of Southern California computer scientist] Leonard Adleman’s work on DNA computing. It was a whole new way of thinking about the connection between molecular systems and computation. It wasn’t just a theorist’s playground, but an area where you could actually start having ideas for molecular algorithms and testing them in the laboratory.

You’re not the first in your family to win a MacArthur fellowship—your father, Arthur Winfree, got one in 1984 for his work on applying mathematics to biology. How did his thinking influence you?

When I was growing up, he wasn’t a MacArthur fellow; he was just Dad. And eccentric, maybe. He loved showing things to us kids. I developed my habit of never really just believing anything because he would always try to catch us out and make us think for ourselves. A lot of his friends that I met as a kid eventually became fellows themselves, so I grew up thinking that their original way of thinking and being was normal.

Those MacArthur connections have continued to follow you throughout your life, haven’t they? Some of that has happened by accident, and some of it not. I worked for Stephen Wolfram [an independent mathematician who created the influential Mathematica software package] for a year after meeting him at a MacArthur conference with my dad. So that wasn’t by chance. But later, my Ph.D. adviser, John Hopfield, was a MacArthur fellow I met by chance, I guess because I was seeking out people I really respected. Then other people who I bumped into became fellows. I spent some time at Princeton University and met Michael Elowitz, who taught me about microscopy; he became a fellow in 2007. And there’s Paul Rothemund, who was a postdoc in my lab; he got a fellowship too.

Does that sense of freewheeling community reflect the way you run your lab at Caltech? I try to encourage a very independent attitude in my lab, partly because I know that my success is largely due to my adviser’s giving me a lot of free rein. Actually, his phrase was that he gave me enough rope to hang myself. I think back to the ancient Greek philosophers and how they would meet and have a discussion where everyone brought their own story and process to the table. So when a student comes into my lab, I like to say, “OK, so come up with a project and tell me next week what you will be doing.” Sometimes it’s an agonizing process for them. They will take not a week but a month or a year or two years before they really figure out what they are interested in. Although that might be painful, I think it’s a better process than telling people to carry out specific things where they get into a mode of not really knowing what they like.

Real biological systems use proteins to handle most jobs, but in your lab you focus on using DNA. Why?

Proteins are much more complicated than DNA. DNA is more predictable, yet it can carry out an enormous range of functions. It’s sort of like a Lego kit for building things at the nanoscale; it’s much easier to fit pieces together than with proteins. In a sense, we’re not doing anything new. Biologists have a hypothesis that there was once an RNA world [RNA is a single-stranded cousin of DNA that acts as a translator between DNA and the protein factories in living cells]. If you look at the history of life on this planet, there was probably a time before proteins evolved. Back then RNA was both an information storage system and an active element, performing a majority of the functions within the cell. That vision tells us we can do an awful lot with nucleic acids, be it RNA or DNA.

OK, so what tasks can you accomplish with engineered DNA? It’s really exciting. We see different kinds of molecular systems as models of computation. A model of computation, to a computer scientist, is a set of primitive operations and ways of putting those primitives together to get system-level behavior.

For example, digital circuit designers have simple logic gates, such as AND and OR, as primitives. You can wire them together into circuits to do complicated functions. [Your PC’s circuits are built from such gates, for instance.] But there are many different kinds of models of computation considered in computer science.
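To make the circuit analogy concrete, here is a minimal sketch in Python (the primitives are ordinary AND/OR logic; the three-input majority example and the function names are our illustration, not Winfree’s):

```python
# Primitive operations of the digital-circuit model of computation.
def AND(a: bool, b: bool) -> bool:
    return a and b

def OR(a: bool, b: bool) -> bool:
    return a or b

# Wiring primitives together yields system-level behavior: here, a
# three-input majority vote ("at least two inputs are true").
def majority(a: bool, b: bool, c: bool) -> bool:
    return OR(OR(AND(a, b), AND(b, c)), AND(a, c))

print(majority(True, False, True))  # True
```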

One of my main interests is in looking at what models of computation are appropriate for thinking about molecular systems. In the last four years we have gotten interested in chemical reaction networks, where you have a set of reactions: Molecule A plus molecule B reacts to form molecule C, and X plus C forms A. Traditionally, chemical reactions have been used as a descriptive language for explaining things that we see in nature. Instead, we are treating them as elements of a programming language, a way of expressing behaviors that we are trying to obtain. When you can move parts of a molecule from one place to another, it’s like a computer algorithm acting on data. In the molecular world, the data structure is actually a physical structure—for example, in DNA molecules. So growing something out of DNA can be thought of as modifying a data structure. The challenge is taking a program written in that language and implementing it with real molecules—we’ve had some demonstrations of that, and we’re very interested to see how far we can go. We also think about how to take a molecule and control it so it folds up into a very specific structure. Paul Rothemund developed that. [Rothemund made headlines in 2006 for building microscopic smiley faces out of programmed DNA.] And then there are molecular-scale motors. All of these things have been demonstrated in a primitive form with DNA systems.
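The two reactions Winfree names can in fact be run as a program. Here is a minimal sketch using Gillespie’s standard stochastic simulation algorithm; the rate constants and starting molecule counts are arbitrary assumptions chosen for illustration:

```python
import math
import random

# Gillespie simulation of the toy network from the interview:
#   A + B -> C   and   X + C -> A
# Rate constants and initial molecule counts are arbitrary assumptions.
reactions = [
    (("A", "B"), ("C",), 1.0),  # (reactants, products, rate constant)
    (("X", "C"), ("A",), 1.0),
]
state = {"A": 50, "B": 100, "C": 0, "X": 30}

t, t_end = 0.0, 10.0
while t < t_end:
    # Propensity of a reaction = rate constant * product of reactant counts.
    props = [k * math.prod(state[s] for s in reactants)
             for reactants, _, k in reactions]
    total = sum(props)
    if total == 0:  # nothing left that can react
        break
    t += random.expovariate(total)  # waiting time until the next event
    r = random.uniform(0, total)    # choose which reaction fires
    for (reactants, products, _), p in zip(reactions, props):
        if r < p:
            for s in reactants:
                state[s] -= 1
            for s in products:
                state[s] += 1
            break
        r -= p

print(state)  # final molecule counts after the "program" has run
```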

That sounds fascinating from a theoretical perspective, but what are the practical implications of being able to control molecules that way? There is a lot of excitement about intelligent therapeutics, where chemistry interfaces with biological systems to cure disease; a view based on computer science could play a role. For that kind of work, we need to distinguish among sensors, actuators, and information-processing units. At the macroscopic scale, we are familiar with the idea that sensors and actuators have to deal with the physical world, but the information-processing unit is isolated from the physical world. It’s completely symbolic: zeros and ones. It doesn’t care what the meaning of the zeros and ones is; it just processes them. With intelligent therapeutics there is going to be a lot of sensor and actuator work required to interface with biological systems in meaningful ways [such as detecting and manipulating molecules in order to cure disease]—that’s really difficult. But the hope is that one day we will be able to build a DNA processing unit that can connect to those sensors and actuators and make decisions about what cells to target or what chemicals to produce. This is fairly speculative. I’m a long way from biomedical research myself.

What about utilizing biomolecular computing to grow devices or machines—how might that work? Here again, the idea is that there’s a part of the job that can be done by the DNA—the programmable part. And then there’s a part where you need some chemically active substance that is linked to the DNA; that is the actuator part. There is a whole set of chemistries for attaching things like proteins, carbon nanotubes, or quantum dots [5- to 10-nanometer semiconductor crystals with interesting optical properties] to DNA in specific locations. That suggests that if you can build a scaffold out of DNA, you could then chemically process it to get something useful. For example, an arrangement of carbon nanotubes bound to DNA could be turned into an electrically conductive circuit. To build that DNA scaffold, you might have it self-assemble from “tiles” made from short lengths of DNA. The tiles are designed so that they have binding rules for how they stick to each other. That is basically a programmable crystal growth process. You could put in a feed crystal containing your program [placing it into a stew of DNA tiles and other raw materials]. The feed crystal would then grow whatever object you programmed it to create.
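A toy version of that programmable growth fits in a few lines. In the tile-assembly picture, each new tile attaches according to a purely local binding rule; the sketch below uses the rule “a tile’s value is the XOR of its west and north neighbors,” which grows a Sierpinski triangle, a pattern Winfree’s lab has in fact assembled from real DNA tiles. The grid size and seed are arbitrary choices:

```python
# Toy algorithmic self-assembly: a seed row and column start the growth,
# and each new "tile" binds by a local rule (its value is the XOR of its
# west and north neighbors), producing the Sierpinski triangle pattern.
N = 16
grid = [[0] * N for _ in range(N)]
for i in range(N):
    grid[i][0] = grid[0][i] = 1  # the seed structure

for row in range(1, N):
    for col in range(1, N):
        grid[row][col] = grid[row][col - 1] ^ grid[row - 1][col]

for row in grid:
    print("".join("#" if v else "." for v in row))
```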

On a philosophical level, this work is exciting because it is a purely nonbiological growth process that has many of the features we normally associate with biology. I’m so used to thinking of DNA as the ultimate biological molecule that it’s hard to imagine its being used in a nonbiological way, but there is actually a long tradition of using biological components for nonbiological purposes. Like I’m sitting at a wooden desk, but trees have no intention of making desks or boats or houses or any of the things that we use wood for. So using DNA this way is completely in the human tradition of technology. It seems strange only because all our associations with DNA are biological.

When you regard DNA as a form of technology, does that change the way you look at people or at life in general? Using DNA in this way certainly makes it possible to have a different perspective on what life is. This is a topic that philosophers often worry about, because you just can’t find a satisfactory definition of life. Biologists often don’t worry about it and just get on with studying it. But when you take the reductionist approach—that the phenomena we see can be explained in terms of components and how those components interact with each other—life is a mechanism, and what you look for are molecules that are capable of doing lots of interesting things. That is exactly what we found with DNA: It’s a kind of information-bearing molecule that is very programmable. We can design DNA molecules to act as gates, act as motors, act as catalysts. These findings make it more plausible to view living things as software in a chemical programming language.

What is the biggest obstacle you face in turning all your amazing concepts into a reality? I want to be able to make molecules that work the way I ask them to! For someone who is trained in theoretical computer science, it is difficult starting a career as an experimental lab researcher. We build and test systems, except the systems that we actually build and test are so much simpler than the systems we can write down on paper. It’s one thing to make a case on paper that we can implement a 5,000-line-long set of chemical reactions with DNA. It’s a different thing to build a system involving three or four reactions—and still not have it work quite the way we want. There are many interesting things to think about at the conceptual level of how to structure programs, but at the moment we are very concerned about the implementation issue and spending most of our time there. Several issues are limiting us. For example, when we design molecular components, there are all kinds of cross-talk. Our DNA-based components bump into each other. Some of the components that are not supposed to react with each other do anyway. Certain reactions that should happen don’t.

How do you plan to address those problems? We need to build in fault tolerance. It’s not clear how that will play out. One proposed reason that biological systems are constantly making and then destroying proteins is so that fresh molecules, rather than moldy ones, are always on hand; that kind of turnover is potentially part of the solution to the cross-talk issue. Another problem is that if you have many components, they all have to be at fairly low concentrations, and at low concentrations you have very slow operations.
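The concentration problem follows directly from mass-action kinetics: a bimolecular step’s rate is proportional to the product of the two reactants’ concentrations, so diluting both tenfold slows the step a hundredfold. A quick illustration (the rate constant is an assumed order-of-magnitude figure, not a measured one):

```python
# Why low concentrations mean slow operations: for a bimolecular
# reaction A + B -> C, mass-action kinetics gives rate = k[A][B].
# The rate constant is an assumed order-of-magnitude value.
k = 1e6  # per molar per second
for conc in (1e-6, 1e-7, 1e-8):  # [A] = [B], in molar
    print(f"[A] = [B] = {conc:.0e} M  ->  rate = {k * conc * conc:.1e} M/s")
# Each 10x dilution of both species slows the reaction 100x.
```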

Are there ways to make biomolecular computing happen at the brisk pace we associate with conventional computing? We are not going to compete with electronic computers. We’re doing different things. Think about manufacturing some new kind of instrument or device that’s as incredibly complicated and carefully orchestrated as an insect. To my mind, to manufacture things like that you need to grow them. Then the comparison is to biological development. If you look at the timescales in biological development, they are often hours or days. You need the right thing to happen at the right time to grow the different parts of a structure.

How long will it be before you can actually design complicated systems and therapeutic treatments with programmed DNA? I made a plot about a year ago where I looked through influential papers in DNA computing and nanotechnology. In 1980 Ned Seeman at NYU launched the field by making a system with roughly 32 nucleotides [molecules that link together to form DNA]. If you plot the number of nucleotides people have put together since then, the growth is roughly exponential. We have a new paper that describes a system of roughly 14,000 nucleotides. The number of nucleotides in designs is roughly doubling every three years. Six more doublings—roughly 20 years from now—and we are up to a million nucleotides, which is on the order of the size of a bacterial genome. That size is not necessarily a measure of what you can do with the system, but it does tell us that in order to keep increasing at that rate, we need to master complexity. We need to play the same games that computer science has been playing to handle systems that complicated. Getting these systems to work is going to be extremely challenging and will probably require real conceptual breakthroughs. Which is why I like the area.
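The projection is simple compounding under the interview’s own numbers (a roughly 14,000-nucleotide system today, doubling every three years):

```python
# The scaling projection from the interview, spelled out:
# start near 14,000 nucleotides and double every three years.
size = 14_000
for doubling in range(1, 7):  # six more doublings
    size *= 2
    print(f"year {3 * doubling:2d}: ~{size:,} nucleotides")
# After six doublings (about 18 years): ~896,000 nucleotides,
# on the order of a bacterial genome (~10^6).
```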
