Argument: Science can't ever come to an end because theories, by their very nature, keep being overturned. Many philosophers—and a surprising number of scientists—accept this line, which essentially means that all science is ironic. They adhere to the postmodern position that we do not discover truth so much as we invent it; all our knowledge is therefore provisional and subject to change. This view can be traced back to two influential philosophers: Karl Popper, who held that theories can never be proved but only disproved, or falsified; and Thomas Kuhn, who contended that theories are not true statements about reality but only temporarily convenient suppositions, or paradigms.
If all our scientific knowledge were really this flimsy and provisional, then of course science could continue forever, with theories changing as often as fads in clothing or music. But the postmodern stance is clearly wrong. We have not invented atoms, elements, gravity, evolution, the double helix, viruses, and galaxies; we have discovered them, just as we discovered that Earth is round and not flat.
When I spoke to him more than 10 years ago, philosopher Colin McGinn, now at the University of Miami, rejected the view that all of science is provisional, saying, "Some of it is, but some of it isn't!" He also suggested that, given the constraints of human cognition, science will eventually reach its limits; at that point, he said, "religion might start to appeal to people again." Today, McGinn stands by his assertion that science "must in principle be completable" but adds, "I don't, however, think that people will or should turn to religion if science comes to an end." Current events might suggest otherwise.
Argument: Reductionist science may be over, but a new kind of emergent science is just beginning. In his new book, A Different Universe, Robert Laughlin, a physicist and Nobel laureate at Stanford, concedes that science may in some ways have reached the "end of reductionism," the program of identifying the basic components and forces underpinning the physical realm. Nevertheless, he insists that scientists can discover profound new laws by investigating complex, emergent phenomena, which cannot be understood in terms of their individual components.
Physicist and software mogul Stephen Wolfram advances a similar argument from a more technological angle. He asserts that computer models called cellular automata represent the key to understanding all of nature's complexities, from quarks to economies. Wolfram found a wide audience for these ideas with his 1,200-page self-published opus A New Kind of Science, which he claims has been seen as "initiating a paradigm shift of historic importance in science, with new implications emerging at an increasing rate every year."
Actually, Wolfram and Laughlin are recycling ideas propounded in the 1980s and 1990s in the fields of chaos and complexity, which I regard as a single field—I call it chaoplexity. Chaoplexologists harp on the fact that simple rules, when followed by a computer, can generate extremely complicated patterns, which appear to vary randomly as a function of time or scale. In the same way, they argue, simple rules must underlie many apparently noisy, complicated aspects of nature.
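To see why the claim is seductive, here is a minimal sketch (my illustration, not anything drawn from Wolfram's book beyond the rule itself) of an elementary cellular automaton, Rule 30, one of Wolfram's favorite examples: a one-line update rule, applied repeatedly by a computer, produces a pattern that looks irreducibly random.

```python
# A minimal sketch of the chaoplexity claim: a trivial rule, iterated by a
# computer, yields an intricate, seemingly random pattern. The rule is
# Wolfram's elementary cellular automaton Rule 30; the grid size and
# rendering details are illustrative choices.

RULE = 30  # the update rule, encoded as an 8-bit lookup table

def step(cells):
    """Apply the rule to every cell, using its left and right neighbors
    (wrapping around at the edges)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31 + [1] + [0] * 31  # a single live cell in the middle
for _ in range(32):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

The center column of the resulting triangle passes standard statistical tests for randomness, which is the kind of observation that leads chaoplexologists to run the inference in reverse.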
So far, chaoplexologists have failed to find any profound new scientific laws. I recently asked Philip Anderson, a veteran of this field, to list major new developments. In response he cited work on self-organized criticality, a mathematical model that dates back almost two decades and that has proved to have limited applications. One reason for chaoplexity's lack of progress may be the notorious butterfly effect, the notion that tiny changes in initial conditions can eventually yield huge consequences in a chaotic system; the classic example is that the beating of a butterfly's wings could eventually trigger the formation of a tornado. The butterfly effect limits both prediction and retrodiction, and hence explanation, because specific events cannot be ascribed to specific causes with complete certainty.
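The butterfly effect is easy to demonstrate numerically. The sketch below uses the logistic map, a textbook chaotic system (my choice of illustration, not a model drawn from Anderson's work): two trajectories that start one part in a billion apart become completely uncorrelated within a few dozen steps, which is precisely what defeats long-range prediction and retrodiction.

```python
# A hedged illustration of sensitive dependence on initial conditions,
# using the logistic map x -> r * x * (1 - x) with r = 4.0, a standard
# fully chaotic regime. Two starting points differing by 1e-9 diverge
# until their gap is as large as the values themselves.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.400000000, 0.400000001  # initial conditions differing by one part in a billion
for step in range(60):
    x, y = logistic(x), logistic(y)
    if step % 10 == 9:  # report every 10 iterations
        print(f"step {step + 1:2d}: x={x:.6f} y={y:.6f} gap={abs(x - y):.2e}")
```

Because the gap roughly doubles with each iteration, even a measurement accurate to nine decimal places buys only about 30 steps of useful forecasting.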
Argument: Applied physics could still deliver revolutionary breakthroughs, like fusion energy. Some pundits insist that, although the quest to discover nature's basic laws might have ended, we are now embarking on a thrilling era of gaining control over these laws. "Every time I open the newspaper or search the Web," the physicist Michio Kaku of the City University of New York says, "I find more evidence that the foundations of science are largely over, and we are entering a new era of applications." Saying that science has ended because we understand nature's basic laws, Kaku contends, is like saying that chess ends once you understand the rules.
I see some validity in this point; that is why The End of Science focused on pure rather than applied research. But I disagree with techno-evangelists such as Kaku, Eric Drexler, and Ray Kurzweil, who claim that nanotechnology and other applied fields will soon allow us to manipulate the laws of nature in ways limited only by our imaginations. The history of science shows that basic knowledge does not always translate into the desired applications.
Nuclear fusion—a long-sought source of near-limitless energy and one of the key applications foreseen by Kaku—offers a prime example. In the 1930s, Hans Bethe and a handful of other physicists elucidated the basic rules governing fusion, the process that makes stars shine and thermonuclear bombs explode. Over the past 50 years, the United States alone has spent nearly $20 billion trying to control fusion in order to build a viable power plant. During that time, physicists repeatedly touted fusion as the energy source of the future.
The United States and other nations just agreed to invest another $13 billion to build the International Thermonuclear Experimental Reactor in France. Still, even optimists acknowledge that fusion energy faces formidable technical, economic, and political barriers. William Parkins, a nuclear physicist and veteran of the Manhattan Project, recently advocated abandoning fusion-energy research, which he called "as expensive as it is discouraging." If there are breakthroughs here, the current generation probably will not live to see them.
Argument: We are on the verge of a breakthrough in applied biology that will allow people to live essentially forever. The potential applications of biology are certainly more exciting these days than those of physics. The completion of the Human Genome Project and recent advances in cloning, stem cells, and other fields have emboldened some scientists to predict that we will soon conquer not only disease but aging itself. "The first person to live to 1,000 may have been born by 1945," declares computer scientist-turned-gerontologist Aubrey de Grey, a leader in the immortality movement (who was born in 1963).
Many of de Grey's colleagues beg to differ, however. His view "commands no respect at all within the informed scientific community," 28 senescence researchers declared in a 2005 journal article. Indeed, evolutionary biologists warn that immortality may be impossible to achieve because natural selection designed us to live only long enough to reproduce and raise our children. Senescence therefore does not stem from any single cause or even a suite of causes; it is woven inextricably into the fabric of our bodies. The track record of two fields of medical research—gene therapy and the war on cancer—should also give the immortalists pause.
In the early 1990s, the identification of specific genes underlying inherited disease—such as Huntington's chorea, early-onset breast cancer, and immune deficiency syndrome—inspired researchers to devise therapies to correct the genetic malformations. So far, scientists have carried out more than 350 clinical trials of gene therapy, and not one has been an unqualified success. One 18-year-old patient died in a trial in 1999, and a promising French trial of therapy for inherited immune deficiency was suspended last year after three patients developed leukemia, leading The Wall Street Journal to proclaim that "the field seems cursed."
The record of cancer treatment is also dismal. Since 1971, when President Richard Nixon declared a "war on cancer," the annual budget for the National Cancer Institute has increased from $250 million to $5 billion. Scientists have gained a much better understanding of the molecular and genetic underpinnings of cancer, but a cure looks as remote as ever. Cancer epidemiologist John Bailar of the University of Chicago points out that overall cancer mortality rates in the United States actually rose from 1971 until the early 1990s before declining slightly over the last decade, predominantly because of a decrease in the number of male smokers. No wonder, then, that experts like British gerontologist Tom Kirkwood call predictions of human immortality "nonsense."