Future Imperfect

By Robert Root-Bernstein | Monday, November 01, 1993
Brace Yourself for Another Ice Age! . . . The present episode of amiable climate is coming to an end. . . . On a day-to-day basis this global cooling will be imperceptible; more likely winter will lengthen year by year, century by century, until it’s 365 days long. Cities will be buried in snow, and an immense sheet of ice could cover North America as far south as Cincinnati. As the ice caps of Greenland and the Antarctic grow, so will the reflective quality of the snow and ice, bouncing back the warm rays of the sun, cooling the earth as they move. Eventually all the available moisture will have crystallized into ice and snow. . . . The world’s sea level will have dropped a few hundred feet.

As unbelievable as this scenario may sound to those of us harried by the Cassandras who are currently predicting our demise from runaway global warming, the Science Digest article proclaiming the forthcoming ice age was written a mere 20 years ago and was based on the best scientific information then available. Reports of galloping glaciers and worldwide drops in surface temperatures had led climatologists to begin speculating during the 1960s that Earth might be entering a new period of chill. At then-predicted rates, it would be only some 200 to 2,000 years before temperatures had dropped sufficiently to create ice age conditions. Measurable effects on glaciation, sea level, and precipitation could be expected well before that.

Climatologists, as we all know, are no longer predicting an impending ice age. On the contrary, their current worry is global warming. After all, 1990 was the warmest year on record, and it capped a decade-long warming trend that has been documented by weather stations in most major cities in the Northern Hemisphere. The possibility that this increase will continue unabated has stimulated fears that we might face as much as a 9-degree rise in average global temperature during the next 60 years. These temperature changes, fueled by uncontrolled emission of carbon dioxide and other compounds that tend to absorb and retain the heat of the sun, would result in drastic alterations in the lengths of growing seasons, in climatic zones, in the rate at which the polar caps melt, and in the movement of oceanic currents such as the Gulf Stream. Agriculture would suffer, forests would not be able to adapt to such quick environmental changes, the weather would become violent and unpredictable, the sea level would rise enough to cause coastal flooding, and according to some calculations the American Midwest would once again become a dust bowl of stupendous proportions.

Some scientists are not sure, however, that global warming is a reality. Physicist Philip Abelson, for example, points out that very sensitive satellite measurements demonstrate wide variability in recorded temperatures between 1979 and 1988, but no obvious temperature trend was noted during the ten-year period. Other evidence is also frustratingly contradictory; for example, researchers who examined 40 years of atmospheric temperature records over the North Pole announced in January that the Arctic shows no signs of greenhouse warming. So which way is global climate tending, and how can we find out for sure? Does it simply depend on which data one chooses to analyze? How do we know which of these predictions--if either--is accurate?

The seeming paradox in global warming predictions is at heart a problem of extrapolation. Extrapolation is the process of extending data or inferring values for any unobserved period or interval. For example, if we have data for the number of AIDS cases diagnosed for each year between 1981 and 1991 and we want to guess how many cases there will be in 2001, this prediction involves extrapolation. So does the determination of the age of an ancient rock by radiometric dating. Unfortunately, there is no science of extrapolation. It is, at best, an art, and a highly fallible art at that. The difficulties inherent in generating accurate extrapolations are immense; however, we all too often rely uncritically on extrapolations to assess everything from the future of AIDS to ecological disintegration, economic trends, population growth, and how fast the universe is expanding. If we do not understand extrapolation, we cannot understand its implications.
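To see the mechanics at their barest, consider a small sketch in Python (the yearly counts below are invented for illustration; they are not real AIDS statistics): a straight line is fitted to the observed years and then extended to a year no one has observed. Everything past the last data point is extrapolation.

```python
import numpy as np

# Invented yearly case counts, in thousands -- for illustration only.
years = np.arange(1981, 1992)                    # 1981 through 1991
cases = np.array([0.3, 1, 3, 6, 12, 19, 29, 36, 43, 49, 54], dtype=float)

# Fit a straight line to the observed interval...
slope, intercept = np.polyfit(years, cases, 1)

# ...and extend that same line beyond it. This step is the extrapolation.
def predict(year):
    return slope * year + intercept

print(round(predict(1992), 1))    # one year past the data
print(round(predict(2001), 1))    # ten years past the data
```

Whether the straight line, or any other curve, keeps describing reality after 1991 is precisely the question the fit itself cannot answer.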

There are many ways to extrapolate. Perhaps the most successful (if also the most difficult) is to develop a model of the system whose behavior is to be predicted. In some sciences, such as the astronomy of planetary motion or the physics of subatomic particles, our models are extremely good and our extrapolations are, too. Centuries of accumulated insights allow us to predict with great accuracy exactly where a planet will be many years in the future or how an electron will behave when its energy is increased by any specified amount. But models are representations or abstractions of real phenomena, not the phenomena themselves. They are valid only within strictly specified limits. We can, for example, solve the equations describing two interacting gravitational masses, such as the sun and Earth, exactly, so long as there are no other masses present. We cannot solve exactly the equations for three or more gravitational masses interacting. Modeling the solar system precisely is therefore beyond our ability. So is the application of quantum mechanics to complicated systems such as photosynthesis. We must therefore make approximations or ignore perturbing effects in our models. Over short periods of time, or under controlled physical conditions, these approximations allow us to predict accurately enough to satisfy our needs.

The apparent successes (apparent, because many of them have not in fact been tested) have led us to expect that we can always make such accurate predictions, if only we have sufficiently accurate data. This is itself an extrapolation from one part of physics to the rest of science, and as such is questionable.

Unfortunately, few areas of science have models as well-founded and precise as those encountered in some areas of astronomy and physics. Models of climate, ozone depletion, the course of epidemics, population dynamics, economic indicators, and many other important phenomena are still evolving. We do not yet have basic principles as fundamental and established as Newton’s laws or Schrödinger’s equation, and so we are still searching for what things should be in our models and what things we can ignore.

Consider again the global warming question as an example. In 1989 MIT professor Richard Lindzen made some unpopular criticisms of global warming predictions based on his analysis of where current climate models are particularly weak. He claimed that the computer models are rife with uncertainties, have not been adequately tested, and ignore feedback systems that will tend to counteract temperature increases--for example, clouds. Indeed, an independent study published in Nature in 1989 compared 14 climate models and found that some predicted that cloud formation would enhance the greenhouse effect, while others predicted that it would result in drastic cooling. More recently scientists found a correlation between sunspot activity and Earth’s temperature. The finding suggests that the amount of energy leaving the sun directly influences global climate--but no climate model had included variations in solar output as a variable.

The long and short of it is that we cannot accurately extrapolate from a model that does not accurately represent nature. All too often we do not understand the basic science well enough to make the necessary representations.

Faced with areas of science that are too young for accurate modeling, or with systems too complex for accurate description, scientists tend to simplify. Simplification is a necessary part of science, but as Einstein supposedly cautioned, “Make it as simple as possible, but no simpler.” If anything gives extrapolation a bad name it is oversimplification. Such oversimplification often takes the form of identifying a trend (usually described by a very simple mathematical function such as a straight line, a bell curve, or an S-shaped curve) and then assuming the trend will continue at the same rate indefinitely into the future (or past). Precisely because such predictions are so oversimplified, they are often the ones that get the most press. For example, Princeton economist Uwe Reinhardt recently produced what he calls “the mother of all health-care forecasts.” Starting with 1990, he draws a line through the year 2000, when, he predicts, 18 percent of the gross domestic product of the United States will go to health care. He then extrapolates into the future in a nearly linear fashion: by 2050, assuming current trends continue, 50 percent of gross domestic product will go to health care; by 2100, 81.5 percent. Biologist Paul Ehrlich did the same thing in his famous book The Population Bomb. Ehrlich argued that if population growth continued linearly at 1960s rates, there would hardly be room for everyone to stand up by the beginning of the twenty-first century.
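The arithmetic behind such a trend forecast is almost embarrassingly simple, which is precisely the point. The short sketch below reconstructs a Reinhardt-style projection under one loud assumption: a constant annual increase, back-calculated from the percentages quoted above rather than taken from Reinhardt’s own figures.

```python
# Straight-line trend extrapolation of health-care spending as a share of GDP.
# The annual increase (about 0.635 percentage points per year) is back-calculated
# from the numbers cited in the text; it is an assumption, not Reinhardt's data.
start_year, start_share = 2000, 18.0          # percent of GDP going to health care
rate = (81.5 - 18.0) / (2100 - 2000)          # assume the trend never bends

def share(year):
    return start_share + rate * (year - start_year)

for y in (2050, 2100):
    print(y, round(share(y), 1))              # 2050 -> ~49.8, 2100 -> 81.5
```

The model has no room for anything to push back on the trend, so in the model nothing ever does.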

Ehrlich’s predictions were grossly inaccurate (and he has since modified them drastically) for the same reason that Reinhardt’s will prove to be: few natural processes increase at a constant, linear rate. Most systems are too complex for that. What Ehrlich and Reinhardt both ignore (undoubtedly for rhetorical purposes) is that human beings (and indeed most natural processes) are adaptive. Population pressure and economic necessities lead to changes in reproductive strategies, agricultural productivity, ecological stability, rates of infection, medical care, insurance policies, and government regulations. To accurately predict the future of health-care costs or population figures, one must also be able to predict how all the attendant necessities of life will change. In other words, we must be able to predict inventiveness. That we cannot do.

Another common pitfall of extrapolation is an overreliance on curve fitting. Curve fitting is a process of finding a mathematical function that describes a given set of data to within a given margin of error. It is often considered to be a totally objective method, since it assumes no specific theory about the process being modeled, nor any particular increase or decrease in the rate at which the process occurs. It is induction at its purest. The data determine the answer. For example, predictions of the future course of the AIDS epidemic are made by curve fitting. The number of AIDS cases (or deaths) is plotted, and an equation describing the plotted points is generated by a computer. This equation is then used to predict how many cases (or deaths) there will be at any given time in the future.

Unfortunately, induction has never been a secure basis for science, and its offspring, curve fitting, is fraught with perils. Some of these perils have been graphically highlighted by pharmacologist Douglas S. Riggs in his book The Mathematical Approach to Physiological Problems. Riggs warns us of a fact that all mathematicians and logicians know: any set of data, no matter how complete, has more than one description. (In fact, in the case of AIDS, there have been several dozen different equations generated to describe the future of the epidemic.) To make his point concrete, he displays 21 (arbitrary) data points plotted with respect to time. He demonstrates that four very similar curves, defined by four rather different mathematical functions, describe the 21 points equally well. In other words, he finds that he can fit four different equations to his data. Each mathematical function yields a very different extrapolation, however. Curve A quickly levels off at a constant value. Curve B continues to decay in a perfect exponential function. Curve C decays at a slightly higher rate than curve B, suggesting an ever-increasing rate of decay. And curve D drops almost immediately to a value of zero. If these curves describe an epidemic, clearly each predicts a very different future. “These examples warn us not to take too seriously any particular set of coefficients and rate constants that we may get by plotting data,” comments Riggs.
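The flavor of Riggs’s demonstration is easy to reproduce. The sketch below does not use his data or his four equations; it simply fits two common functional forms, a straight line and a pure exponential, to the same 21 invented points from a gently decaying process, then pushes both fits well past the data.

```python
import numpy as np

# 21 invented observations of a slowly decaying quantity (not Riggs's data).
rng = np.random.default_rng(0)
t = np.arange(0, 21)
y = 10.0 * np.exp(-0.05 * t) + rng.normal(0, 0.15, t.size)

# Fit 1: straight line, y = m*t + b
m, b = np.polyfit(t, y, 1)

# Fit 2: pure exponential, y = A * exp(k*t), fitted as a line in log(y)
k, logA = np.polyfit(t, np.log(y), 1)
A = np.exp(logA)

for horizon in (20, 60, 100):                 # at the edge of the data, then far beyond
    line = m * horizon + b
    expo = A * np.exp(k * horizon)
    print(f"t={horizon:3d}  line: {line:7.2f}  exponential: {expo:7.2f}")
```

Over the observed interval the two fits are hard to tell apart. Five data-lengths out, the exponential has leveled off near zero while the straight line has plunged to large negative values, which for an epidemic would be meaningless.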

It’s too bad that AIDS researchers and policymakers did not pay attention to Riggs long ago. Everyone undoubtedly remembers the doomsday predictions that AIDS was to become the Black Death of the twentieth century. Perfectly respectable scientists predicted in 1986 that exponential growth rates of HIV infection would lead 1 in 70 Americans to be infected with the virus and 270,000 to have AIDS by 1991. In fact, fewer than 1 in 300 Americans is infected with HIV, and there were just over 200,000 cases of AIDS in the United States by the end of 1991. Even attempts made just three and four years ago by the Royal Society (London) and the Centers for Disease Control to predict AIDS rates based on curve fitting have proved to be terribly inaccurate. As Gordon T. Stewart, emeritus professor of public health at Glasgow University in Scotland, and American actuary Peter Plumley have noted, the vast majority of these studies have regularly overestimated observed rates of AIDS by 26 to 263 percent. Both point out the same error: curve fitting assumes that everyone in the population is at equal risk of acquiring HIV and AIDS, while the reality is that HIV and AIDS are remaining within very limited high-risk groups such as promiscuous homosexual men and intravenous drug users and their sexual partners. Since there are limited numbers of such people, there are limits on how many people will ever develop AIDS. Models that take these limitations into account have proved to be accurate within 10 percent over several years.

In sum, what we do not know is as important to evaluating extrapolations as what we do know. False assumptions can undermine extrapolations as easily as they undermine logic. In the case of AIDS, the epidemic is undoubtedly much more complex than just the dynamics of HIV infection. In global warming, we have very little notion of the kinds of hidden thermostats (like clouds or chemical reactions) that might counteract the greenhouse effects of CO2.

The problem, quite simply, is that we do not have a science of extrapolation--that is, a metascience that would allow us to evaluate the validity of various models and how far each can accurately predict. At present, we have no way to determine, save through trial and error, whether any particular model, trend, prediction, or data set is sufficient to our purposes. We pay the scientific, economic, and human costs of our ignorance every day.

Clearly, we need to develop such a science. But until we have one, we should be careful to distinguish between extrapolations based on verified scientific models, those based on still-evolving models of unknown accuracy, and those that are purely statistical inferences based on data trends. Only well-tested models are likely to prove trustworthy. All trend-derived extrapolations are highly suspect because we do not understand the scientific principles that underlie them. Extrapolations based on unverified scientific models should be considered a form of science fiction. I do not mean this caveat to be an insult. As award-winning science fiction novelist Ursula Le Guin has written:

Science fiction is often described, and even defined, as extrapolative. The science fiction writer is supposed to take a trend or phenomenon of the here and now, purify and intensify it for dramatic effect, and extend it into the future. If this goes on, this is what will happen. A prediction is made. Method and results much resemble those of a scientist who feeds large doses of a purified and concentrated food additive to mice in order to predict what may happen to people who eat it in small quantities for a long time. The outcome seems almost inevitably to be cancer. So does the outcome of extrapolation.

What thought experiments such as Ehrlich’s and Reinhardt’s or the AIDS extrapolations tell us is not what future populations will be, or how much health care will really cost, or how many people will really have AIDS, but rather that these are issues of such great magnitude that we need to understand them much more thoroughly than we do. Truth be told, we don’t know whether global warming is occurring or not, nor do we know to what extent human beings are driving the process or can alter it. The issue is cloudy because, in part, we don’t understand clouds. Instead of grandiose policies about CO2 emissions, we need a policy that would encourage more basic research into the effects of those emissions. Just imagine, for example, if policymakers two decades ago had taken the predictions of impending glaciation seriously and mandated policies to force more CO2 into the atmosphere to warm it up!

Until we understand the science underlying our extrapolative models, and until we have some means of evaluating the extrapolations themselves, we know too little to act rationally or with the necessary foresight to assure us that our actions will not have unfortunate, perhaps catastrophic, effects we never intended. Instead of acting to change things we do not understand, we should first act to understand them better. Extrapolation, for the present, must remain a tool for analyzing the state of our scientific knowledge rather than a science for directing the tools of state.