What You Don't Know Can Kill You

Humans have a tendency to fear rare threats like shark attacks but ignore far greater risks like unsafe sex or an unhealthy diet.

By Jason Daley
Oct 3, 2011 (updated Jul 11, 2023)


Last March, as the world watched the aftermath of the Japanese earthquake/tsunami/nuclear near-meltdown, a curious thing began happening in West Coast pharmacies. Bottles of potassium iodide pills, used to treat certain thyroid conditions, were flying off the shelves, creating a run on an otherwise obscure nutritional supplement. Online, prices jumped from $10 a bottle to upwards of $200. Some residents in California, unable to get the iodide pills, began bingeing on seaweed, which is known to have high iodine levels.

The Fukushima disaster was practically an infomercial for iodide therapy. The chemical is administered after nuclear exposure because it helps protect the thyroid from radioactive iodine, one of the most dangerous elements of nuclear fallout. Typically, iodide treatment is recommended for residents within a 10-mile radius of a radiation leak. But people in the United States who were popping pills were at least 5,000 miles away from the Japanese reactors. Experts at the Environmental Protection Agency estimated that the dose of radiation that reached the western United States was equivalent to 1/100,000 the exposure one would get from a round-trip international flight.

Although spending $200 on iodide pills for an almost nonexistent threat seems ridiculous (and could even be harmful—side effects include skin rashes, nausea, and possible allergic reactions), 40 years of research into the way people perceive risk shows that it is par for the course. Earthquakes? Tsunamis? Those things seem inevitable, accepted as acts of God. But an invisible, man-made threat associated with Godzilla and three-eyed fish? Now that’s something to keep you up at night. “There’s a lot of emotion that comes from the radiation in Japan,” says cognitive psychologist Paul Slovic, an expert on decision making and risk assessment at the University of Oregon. “Even though the earthquake and tsunami took all the lives, all of our attention was focused on the radiation.”

We like to think that humans are supremely logical, making decisions on the basis of hard data and not on whim. For a good part of the 19th and 20th centuries, economists and social scientists assumed this was true too. The public, they believed, would make rational decisions if only it had the right pie chart or statistical table. But in the late 1960s and early 1970s, that vision of homo economicus—a person who acts in his or her best interest when given accurate information—was kneecapped by researchers investigating the emerging field of risk perception. What they found, and what they have continued teasing out since the early 1970s, is that humans have a hell of a time accurately gauging risk. Not only do we have two different systems—logic and instinct, or the head and the gut—that sometimes give us conflicting advice, but we are also at the mercy of deep-seated emotional associations and mental shortcuts.

Even if a risk has an objectively measurable probability—like the chances of dying in a fire, which are 1 in 1,177—people will assess the risk subjectively, mentally calibrating it based on dozens of subconscious calculations. If you have been watching news coverage of wildfires in Texas nonstop, chances are you will rate the risk of dying in a fire higher than someone who has been floating in a pool all day will. If the day is cold and snowy, you are less likely to think global warming is a threat.

Our hardwired gut reactions developed in a world full of hungry beasts and warring clans, where they served important functions. Letting the amygdala (part of the brain’s emotional core) take over at the first sign of danger, milliseconds before the neocortex (the thinking part of the brain) was aware a spear was headed for our chest, was probably a very useful adaptation. Even today those nano-pauses and gut responses save us from getting flattened by buses or dropping a brick on our toes. But in a world where risks are presented in parts-per-billion statistics or as clicks on a Geiger counter, our amygdala is out of its depth.

A risk-perception apparatus permanently tuned for avoiding mountain lions makes it unlikely that we will ever run screaming from a plate of fatty mac ’n’ cheese. “People are likely to react with little fear to certain types of objectively dangerous risk that evolution has not prepared them for, such as guns, hamburgers, automobiles, smoking, and unsafe sex, even when they recognize the threat at a cognitive level,” says Carnegie Mellon University researcher George Loewenstein, whose seminal 2001 paper, “Risk as Feelings,” debunked theories that decision making in the face of risk or uncertainty relies largely on reason. “Types of stimuli that people are evolutionarily prepared to fear, such as caged spiders, snakes, or heights, evoke a visceral response even when, at a cognitive level, they are recognized to be harmless,” he says. Even Charles Darwin failed to break the amygdala’s iron grip on risk perception. As an experiment, he placed his face up against the puff adder enclosure at the London Zoo and tried to keep himself from flinching when the snake struck the plate glass. He failed.

The result is that we focus on the one-in-a-million bogeyman while virtually ignoring the true risks that inhabit our world. News coverage of a shark attack can clear beaches all over the country, even though sharks kill a grand total of about one American annually, on average. That is less than the death count from cattle, which gore or stomp 20 Americans per year. Drowning, on the other hand, takes 3,400 lives a year, without a single frenzied call for mandatory life vests to stop the carnage. A whole industry has boomed around conquering the fear of flying, but while we down beta-blockers in coach, praying not to be one of the 48 average annual airline casualties, we typically give little thought to driving to the grocery store, even though there are more than 30,000 automobile fatalities each year.
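To put those annual counts on a common footing with the lifetime odds quoted elsewhere in this story, here is a minimal back-of-the-envelope sketch in Python. It assumes the approximation commonly used for such tables (divide the population by the annual death count, then by life expectancy), along with an illustrative U.S. population of about 310 million and a 78-year life expectancy; none of these inputs come from the article’s sources, so the results only roughly track the published figures.

```python
# Back-of-the-envelope lifetime-odds sketch (illustrative only, not from the
# article's sources). It assumes the approximation commonly used for tables
# like the one at the end of this article:
#   one-year odds = population / annual deaths
#   lifetime odds = one-year odds / life expectancy
# The population and life-expectancy figures below are assumptions.

US_POPULATION = 310_000_000   # rough U.S. population, circa 2011 (assumed)
LIFE_EXPECTANCY_YEARS = 78    # rough U.S. life expectancy (assumed)

# Annual death counts taken from the paragraph above.
annual_deaths = {
    "shark attack": 1,
    "cattle": 20,
    "drowning": 3_400,
    "motor vehicle": 30_000,
}

def lifetime_odds(deaths_per_year: float) -> float:
    """Return N such that the lifetime risk is roughly 1 in N."""
    one_year_odds = US_POPULATION / deaths_per_year
    return one_year_odds / LIFE_EXPECTANCY_YEARS

for cause, deaths in annual_deaths.items():
    print(f"{cause:>14}: about 1 in {lifetime_odds(deaths):,.0f}")
```

Run as written, drowning lands near the 1-in-1,123 lifetime figure in the table at the end of this article, while a shark attack works out to roughly one in four million; the cattle and motor-vehicle estimates drift further from published tables because the underlying death counts and reference years differ, but the ordering, and the gulf between the feared risk and the real one, is the point.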

In short, our risk perception is often at direct odds with reality. All those people bidding up the cost of iodide? They would have been better off spending $10 on a radon testing kit. The colorless, odorless, radioactive gas, which forms as a by-product of natural uranium decay in rocks, builds up in homes, causing lung cancer. According to the Environmental Protection Agency, radon exposure kills 21,000 Americans annually.

David Ropeik, a consultant in risk communication and the author of How Risky Is It, Really? Why Our Fears Don’t Always Match the Facts, has dubbed this disconnect the perception gap. “Even perfect information perfectly provided that addresses people’s concerns will not convince everyone that vaccines don’t cause autism, or that global warming is real, or that fluoride in the drinking water is not a Commie plot,” he says. “Risk communication can’t totally close the perception gap, the difference between our fears and the facts.”

In the early 1970s, psychologists Daniel Kahneman, now at Princeton University, and Amos Tversky, who passed away in 1996, began investigating the way people make decisions, identifying a number of biases and mental shortcuts, or heuristics, on which the brain relies to make choices. Later, Paul Slovic and his colleagues Baruch Fischhoff, now a professor of social sciences at Carnegie Mellon University, and psychologist Sarah Lichtenstein began investigating how these leaps of logic come into play when people face risk. They developed a tool, called the psychometric paradigm, that describes all the little tricks our brain uses when staring down a bear or deciding to finish the 18th hole in a lightning storm.

Many of our personal biases are unsurprising. For instance, the optimism bias gives us a rosier view of the future than current facts might suggest. We assume we will be richer 10 years from now, so it is fine to blow our savings on a boat—we’ll pay it off then. Confirmation bias leads us to prefer information that backs up our current opinions and feelings and to discount information contradictory to those opinions. We also have tendencies to conform our opinions to those of the groups we identify with, to fear man-made risks more than we fear natural ones, and to believe that events causing dread—the technical term for risks that could result in particularly painful or gruesome deaths, like plane crashes and radiation burns—are inherently more risky than other events.

But it is heuristics—the subtle mental strategies that often give rise to such biases—that do much of the heavy lifting in risk perception. The “availability” heuristic says that the easier a scenario is to conjure, the more common it must be. It is easy to imagine a tornado ripping through a house; that is a scene we see every spring on the news, and all the time on reality TV and in movies. Now try imagining someone dying of heart disease. You probably cannot conjure many breaking-news images for that one, and the drawn-out process of atherosclerosis will most likely never be the subject of a summer thriller. The effect? Twisters feel like an immediate threat, although we have only a 1-in-46,000 chance of being killed by a cataclysmic storm. Even a terrible tornado season like the one last spring typically yields fewer than 500 tornado fatalities. Heart disease, on the other hand, which eventually kills 1 in every 6 people in this country, and 800,000 annually, hardly even rates with our gut.

The “representativeness” heuristic makes us think something is probable if it is part of a known set of characteristics. John wears glasses, is quiet, and carries a calculator. John is therefore . . . a mathematician? An engineer? His attributes taken together seem to fit the common stereotype.

But of all the mental rules of thumb and biases banging around in our brain, the most influential in assessing risk is the “affect” heuristic. Slovic calls affect a “faint whisper of emotion” that creeps into our decisions. Simply put, positive feelings associated with a choice tend to make us think it has more benefits. Negative feelings make us think an action is riskier. One study by Slovic showed that when people decide to start smoking despite years of exposure to antismoking campaigns, they hardly ever think about the risks. Instead, it’s all about the short-term “hedonic” pleasure. The good outweighs the bad, which they never fully expect to experience.

Our fixation on illusory threats at the expense of real ones influences more than just our personal lifestyle choices. Public policy and mass action are also at stake. The Office of National Drug Control Policy reports that prescription drug overdoses have killed more people than crack and heroin combined did in the 1970s and 1980s. Law enforcement and the media were obsessed with crack, yet it was only recently that prescription drug abuse merited even an after-school special.

Despite the many obviously irrational ways we behave, social scientists have only just begun to systematically document and understand this central aspect of our nature. In the 1960s and 1970s, many still clung to the homo economicus model. They argued that releasing detailed information about nuclear power and pesticides would convince the public that these industries were safe. But the information drop was an epic backfire and helped spawn opposition groups that exist to this day. Part of the resistance stemmed from a reasonable mistrust of industry spin. Horrific incidents like those at Love Canal and Three Mile Island did not help. Yet one of the biggest obstacles was that industry tried to frame risk purely in terms of data, without addressing the fear that is an instinctual reaction to their technologies.

The strategy persists even today. In the aftermath of Japan’s nuclear crisis, many nuclear-energy boosters were quick to cite a study commissioned by the Boston-based nonprofit Clean Air Task Force. The study showed that pollution from coal plants is responsible for 13,000 premature deaths and 20,000 heart attacks in the United States each year, while nuclear power has never been implicated in a single death in this country. True as that may be, numbers alone cannot explain away the cold dread caused by the specter of radiation. Just think of all those alarming images of workers clad in radiation suits waving Geiger counters over the anxious citizens of Japan. Seaweed, anyone?

At least a few technology promoters have become much more savvy in understanding the way the public perceives risk. The nanotechnology world in particular has taken a keen interest in this process, since even in its infancy it has faced high-profile fears. Nanotech, a field so broad that even its backers have trouble defining it, deals with materials and devices whose components are often smaller than 100 nanometers, or 1/10,000,000 of a meter. In 1986 the book Engines of Creation by the nanotechnologist K. Eric Drexler put forth the terrifying idea of nanoscale self-replicating robots that grow into clouds of “gray goo” and devour the world. Soon gray goo was turning up in video games, magazine stories, and delightfully bad Hollywood action flicks (see, for instance, the last G.I. Joe movie).

The odds of nanotechnology’s killing off humanity are extremely remote, but the science is obviously not without real risks. In 2008 a study led by researchers at the University of Edinburgh suggested that carbon nanotubes, a promising material that could be used in everything from bicycles to electrical circuits, might interact with the body the same way asbestos does. In another study, scientists at the University of Utah found that nanoscopic particles of silver used as an antimicrobial in hundreds of products, including jeans, baby bottles, and washing machines, can deform fish embryos.

The nanotech community is eager to put such risks in perspective. “In Europe, people made decisions about genetically modified food irrespective of the technology,” says Andrew Maynard, director of the Risk Science Center at the University of Michigan and an editor of the International Handbook on Regulating Nanotechnologies. “People felt they were being bullied into the technology by big corporations, and they didn’t like it. There have been very small hints of that in nanotechnology.” He points to incidents in which sunblock makers did not inform the public they were including zinc oxide nanoparticles in their products, stoking the skepticism and fears of some consumers.

For Maynard and his colleagues, influencing public perception has been an uphill battle. A 2007 study conducted by the Cultural Cognition Project at Yale Law School and coauthored by Paul Slovic surveyed 1,850 people about the risks and benefits of nanotech. Even though 81 percent of participants knew nothing or very little about nanotechnology before starting the survey, 89 percent of all respondents said they had an opinion on whether nanotech’s benefits outweighed its risks. In other words, people made a risk judgment based on factors that had little to do with any knowledge about the technology itself. And as with public reaction to nuclear power, more information did little to unite opinions. “Because people with different values are predisposed to draw different factual conclusions from the same information, it cannot be assumed that simply supplying accurate information will allow members of the public to reach a consensus on nanotechnology risks, much less a consensus that promotes their common welfare,” the study concluded.

It should come as no surprise that nanotech hits many of the fear buttons in the psychometric paradigm: It is a man-made risk; much of it is difficult to see or imagine; and the only available images we can associate with it are frightening movie scenes, such as a cloud of robots eating the Eiffel Tower. “In many ways, this has been a grand experiment in how to introduce a product to the market in a new way,” Maynard says. “Whether all the up-front effort has gotten us to a place where we can have a better conversation remains to be seen.”

That job will be immeasurably more difficult if the media—in particular cable news—ever decide to make nanotech their fear du jour. In the summer of 2001, if you switched on the television or picked up a news magazine, you might think the ocean’s top predators had banded together to take on humanity. After 8-year-old Jessie Arbogast’s arm was severed by a seven-foot bull shark on Fourth of July weekend while the child was playing in the surf of Santa Rosa Island, near Pensacola, Florida, cable news put all its muscle behind the story. Ten days later, a surfer was bitten just six miles from the beach where Jessie had been mauled. Then a lifeguard in New York claimed he had been attacked. There was almost round-the-clock coverage of the “Summer of the Shark,” as it came to be known. By August, according to an analysis by historian April Eisman of Iowa State University, it was the third-most-covered story of the summer until the September 11 attacks knocked sharks off the cable news channels.

All that media coverage created a sort of feedback loop. Because people were seeing so many sharks on television and reading about them, the “availability” heuristic was screaming at them that sharks were an imminent threat.

“Certainly anytime we have a situation like that where there’s such overwhelming media attention, it’s going to leave a memory in the population,” says George Burgess, curator of the International Shark Attack File at the Florida Museum of Natural History, who fielded 30 to 40 media calls a day that summer. “Perception problems have always been there with sharks, and there’s a continued media interest in vilifying them. It makes a situation where the risk perceptions of the populace have to be continually worked on to break down stereotypes. Anytime there’s a big shark event, you take a couple steps backward, which requires scientists and conservationists to get the real word out.”

Then again, getting out the real word comes with its own risks—like the risk of getting the real word wrong. Misinformation is especially toxic to risk perception because it can reinforce generalized confirmation biases and erode public trust in scientific data. As scientists studying the societal impact of the Chernobyl meltdown have learned, doubt is difficult to undo. In 2006, 20 years after reactor number 4 at the Chernobyl nuclear power plant was encased in cement, the World Health Organization (WHO) and the International Atomic Energy Agency released a report compiled by a panel of 100 scientists on the long-term health effects of the level 7 nuclear disaster and future risks for those exposed. Among the 600,000 recovery workers and local residents who received a significant dose of radiation, the WHO estimates that up to 4,000 of them, or 0.7 percent, will develop a fatal cancer related to Chernobyl. For the 5 million people living in less contaminated areas of Ukraine, Russia, and Belarus, radiation from the meltdown is expected to increase cancer rates by less than 1 percent.

Even though the percentages are low, the numbers are little comfort for the people living in the shadow of the reactor’s cement sarcophagus who are literally worrying themselves sick. In the same report, the WHO states that “the mental health impact of Chernobyl is the largest problem unleashed by the accident to date,” pointing out that fear of contamination and uncertainty about the future have led to widespread anxiety, depression, hypochondria, alcoholism, a sense of victimhood, and a fatalistic outlook that is extreme even by Russian standards. A recent study in the journal Radiology concludes that “the Chernobyl accident showed that overestimating radiation risks could be more detrimental than underestimating them. Misinformation partially led to traumatic evacuations of about 200,000 individuals, an estimated 1,250 suicides, and between 100,000 and 200,000 elective abortions.”

It is hard to fault the Chernobyl survivors for worrying, especially when it took 20 years for the scientific community to get a grip on the aftereffects of the disaster, and even those numbers are disputed. An analysis commissioned by Greenpeace in response to the WHO report predicts that the Chernobyl disaster will result in about 270,000 cancers and 93,000 fatal cases.

Chernobyl is far from the only chilling illustration of what can happen when we get risk wrong. During the year following the September 11 attacks, millions of Americans opted out of air travel and slipped behind the wheel instead. While they crisscrossed the country, listening to breathless news coverage of anthrax attacks, extremists, and Homeland Security, they faced a much more concrete risk. All those extra cars on the road increased traffic fatalities by nearly 1,600. Airlines, on the other hand, recorded no fatalities.

It is unlikely that our intellect can ever paper over our gut reactions to risk. But a fuller understanding of the science is beginning to percolate into society. Earlier this year, David Ropeik and others hosted a conference on risk in Washington, D.C., bringing together scientists, policy makers, and others to discuss how risk perception and communication impact society. “Risk perception is not emotion and reason, or facts and feelings. It’s both, inescapably, down at the very wiring of our brain,” says Ropeik. “We can’t undo this. What I heard at that meeting was people beginning to accept this and to realize that society needs to think more holistically about what risk means.”

Ropeik says policy makers need to stop issuing reams of statistics and start making policies that manipulate our risk perception system instead of trying to reason with it. Cass Sunstein, a Harvard law professor who is now the administrator of the White House Office of Information and Regulatory Affairs, suggests a few ways to do this in Nudge: Improving Decisions About Health, Wealth, and Happiness, the 2008 book he coauthored with economist Richard Thaler. He points to the organ donor crisis in which thousands of people die each year because others are too fearful or uncertain to donate organs. People tend to believe that doctors won’t work as hard to save them, or that they won’t be able to have an open-casket funeral (both false). And the gory mental images of organs being harvested from a body give a definite negative affect to the exchange. As a result, too few people focus on the lives that could be saved. Sunstein suggests—controversially—“mandated choice,” in which people must check “yes” or “no” to organ donation on their driver’s license application. Those with strong feelings can decline. Some lawmakers propose going one step further and presuming that people want to donate their organs unless they opt out.

In the end, Sunstein argues, by normalizing organ donation as a routine medical practice instead of a rare, important, and gruesome event, the policy would short-circuit our fear reactions and nudge us toward a positive societal goal. It is this type of policy that Ropeik is trying to get the administration to think about, and that is the next step in risk perception and risk communication. “Our risk perception is flawed enough to create harm,” he says, “but it’s something society can do something about.”

HOW YOU WILL DIE: LIFETIME RISK

  • Total, any cause: 1 in 1

  • Heart disease: 1 in 6

  • Cancer: 1 in 7

  • Stroke: 1 in 28

  • Motor vehicle accident: 1 in 88

  • Intentional self-harm: 1 in 112

  • Accidental poisoning by, or exposure to, noxious substance: 1 in 130

  • Fall: 1 in 171

  • Car occupant accident: 1 in 303

  • Assault by firearm: 1 in 306

  • Pedestrian accident: 1 in 649

  • Motorcycle accident: 1 in 770

  • Accidental drowning: 1 in 1,123

  • Fire: 1 in 1,177

  • Pedalcyclist accident: 1 in 4,717

  • Firearm discharge: 1 in 6,309

  • Air transport accident: 1 in 7,032

  • Electrocution: 1 in 9,943

  • Heat exposure: 1 in 12,517

  • Cataclysmic storm: 1 in 46,044

  • Bee, hornet, or wasp sting: 1 in 71,623

  • Legal execution: 1 in 96,691

  • Dog attack: 1 in 120,864

  • Earthquake or other earth movement: 1 in 148,756

  • Flood: 1 in 175,803

  • Fireworks: 1 in 386,766
