The Vexing Mental Tug-of-War Called Morality

Would you kill a crying baby to save yourself and others from hostile soldiers outside? Neuroscience offers new ways to approach such moral questions, allowing logic to triumph over deep-rooted instinct.

By Kristin Ohlson
Sep 16, 2011 (updated Oct 15, 2019)
Illustrations by Matt Mahurin

You arrive at the hill early, eager to cheer the cyclists racing past. The sun is bright, the people on both sides of the road are in high spirits, and speculation about the race passes through the crowd in waves. A hot dog vendor has positioned his cart up the hill, and the aroma of simmering meat wafts by, summoning your best memories of summer. Suddenly shouts erupt. The racers are approaching. You lean forward and see a blur of colors at the summit. Then you notice something wrong. The hot dog vendor has stepped away to make change, and someone has jostled his cart off its moorings. It is rolling downhill toward the road, gathering speed, and poised to kill dozens of cyclists unless someone shoves the cart across the road—but that would kill three spectators instead. What should you do?

When researchers presented this nightmarish dilemma to volunteers participating in an innovative neuropsychology study of morality at Harvard’s Moral Cognition Lab last year, the responses were evenly split. After moments of mental calculus, half the participants said the most moral decision was to push the cart into the bystanders; the other half disagreed, saying that killing for any reason was wrong, even if it meant saving more lives in the end.

One day last year, cognitive scientists Joshua Greene and Fiery Cushman, who designed the study, pulled up a series of brain scans taken as volunteers resolved the dilemma while inside an MRI machine. The scans were all marked by ghostly yellow blobs indicating areas of increased blood oxygen levels at the moment of judgment, Cushman explained. All decision-making takes mental energy, so no surprise there. More intriguing were the scans from the volunteers who opted to save more lives. These showed noticeably brighter regions of yellow, suggesting that their decisions demanded significantly more brain power. To Greene and Cushman, it appeared that reason was overriding an automatic, instinctual response.

“You have these gut reactions and they feel authoritative, like the voice of God or your conscience,” Greene says. But these powerful instincts are not commands from a higher power, they are just emotions hardwired into the brain. Our first reaction under pressure—the default response—is to go with our gut. It takes more time and far more brain power to reason the situation out.

“The reason we feel caught in moral dilemmas is that truly, our brain has two different solutions to the problem,” Cushman says. “Those processes can conflict because the brain is at war with itself.”

Brain-bending moral dilemmas like the hot dog vendor scenario have long been the province of philosophy. At times, judges and juries have also had to confront such ethically sticky questions: Is it right to kill one person to save many? Should convicted murderers be executed or kept alive behind bars? Should true intention be taken into account when evaluating the outcome—good or bad—of any given act? What is permissible? What is right? What is just?

Neuroscientists like Greene and Cushman are bringing a whole new perspective to the debate by revealing the underlying biology at work in the brain when it grapples with ethical decisions. Exposing the biological roots of moral choice, Greene believes, presents opportunities to make better choices. “Once we understand what’s happening in our brain,” he says, “we might change our opinions about some long-standing moral issues, challenging that inner voice we’ve listened to for tens of thousands of years.”

Josh Greene was on the high school debate team in Fort Lauderdale, Florida, when he was first introduced to the great moral philosophers John Stuart Mill and Immanuel Kant. Kant said moral truths were sacrosanct, determined by inviolable rights and duties, lines that could not be crossed. But Greene felt more simpatico with Mill, a utilitarian who argued that morality means serving the greater good. Then Greene found himself up against a crackerjack debater who threw out a withering question designed to hold utilitarian feet to the fire. 
“Tell me this,” she said. “Is it right for a doctor to kill a person and harvest the organs to save five critically ill patients? It must be OK if it serves the greater good, right?”
Greene was unable to respond. “I was stumped right there in the middle of cross-examination!” he recalls. “I intuitively felt that this was wrong. I lost that debate and for a while thought utilitarianism itself might be wrong.”

His views changed again in college when, first at the University of Pennsylvania’s Wharton School and later at Harvard, he studied philosophy and psychology, especially research on heuristics, the mental shortcuts the mind uses to make quick decisions. Greene realized that his instant aversion to killing someone to harvest organs seemed like just such a shortcut. From an evolutionary vantage, he theorized that an intuitive aversion to injuring another would maintain greater harmony within the group.

Right about that time, Greene heard about an ethical thought experiment called the trolley problem, developed in the 1960s by British philosopher Philippa Foot and expanded by American philosopher Judith Jarvis Thomson. Psychologists had adapted that problem into two morally challenging scenarios. In the “switch scenario,” the subject is asked to imagine a trolley hurtling down a track toward five people, similar to the dilemma presented in the hot dog cart scenario. Instead of pushing a cart, you can throw a switch to divert the trolley away from the five people, but it will kill one person standing on another track. Is it morally permissible to throw the switch?

In the second scenario, the trolley is again hurtling toward five people. On an overhead footbridge stands a man large enough to stop the trolley. Is it right to push the man onto the track below, killing him to save the five, or is the most moral move doing nothing at all?

“I was fascinated by the work of Foot and Thomson,” Greene says, “because trolley problems capture the central tension between the two most dominant ideas in moral philosophy.” On the one hand, the philosophy associated with Kant argues that morality is about the rights and duties that all individuals have and about certain lines that must not be crossed. Pushing the man from the footbridge seems to cross one of those lines. On the other hand, the utilitarianism of Mill suggests that morality requires making the hard choices to serve the greater good—even if, on rare occasions, it can literally mean throwing someone under the bus. Flipping the switch appears to be a choice like that.

By then a graduate student studying philosophy at Princeton, Greene wrote a paper called “The Two Moralities.” Inside each of us, he wrote, the theories of Kant and Mill are constantly competing. Our minds are not devotees of one moral code or the other. We must always choose.

Soon afterward, Greene found himself in Israel for his sister’s bat mitzvah. To pass the time in his Jerusalem hotel room, he picked up a copy of Descartes’ Error, neuroscientist Antonio Damasio's pioneering book about emotion in the brain. Damasio’s central narrative involved the strange case of Phineas Gage, a 19th-century railroad construction foreman whose skull was pierced by a metal spike during an explosion. Gage was returned to health by the ministrations of his doctor and seemed physically recovered. But he was no longer socially functional because his capacity to make well-reasoned decisions and future plans was deeply impaired. Damasio and his wife, Hanna, a neurologist, studied Gage’s skull and, on the basis of historical reports of his personality decline, concluded that his problems had resulted from damage to the ventromedial prefrontal cortex, an area of the brain near the center of the forehead that is associated with emotion. They also studied contemporary patients with brain damage causing similar disruptions to personality, implicating other centers of emotion. Damasio proposed that the decision-making process, long deemed rooted in reason, was guided by emotion as well.

“I bolted straight up and said, Aha! This is it,” Greene recalls. “I think I actually copied the pages from the book and faxed them to my adviser.” What people with this sort of brain damage were missing was the gut feeling that made other people cringe at the thought of throwing a man in front of a trolley, even as they felt it would be right to throw the switch. Not so for the brain-damaged patients, Greene surmised. “They would be OK with pushing the guy off the footbridge; but in real life, in general, when it came to feeling what was right rather than reasoning it out, they would be stumped.”

Suddenly Greene saw morality not just as a philosophical concept but as a neurological phenomenon. This was the beginning of what he calls his dual-process theory of moral judgment, in which instinct and reason collide in a battle for supremacy. The grand ethical tension between Kant and Mill, he hypothesized, grew out of a conflict between competing systems in the brain. “I was studying traditional philosophy, but I felt the real progress to be made in ethics was in neuroscience,” Greene says.

Charting a new course, Greene sought the help of Jonathan Cohen, a Princeton neuroscientist studying how the brain coordinates attention, thought, and action in pursuit of a goal. One of Cohen’s main tools was functional magnetic resonance imaging (fMRI), the same instrument Greene and Cushman used to observe blood oxygen levels in different regions of the brain. As an advanced grad student and then a postdoc in Cohen’s pioneering Neuroscience of Cognitive Control Laboratory, Greene first began using fMRI to scan volunteers as they considered trolley scenarios and other tough philosophical problems. His landmark paper, published in Science in 2001, was among the first to document the brain structures involved in moral choice. Subjects contemplating shoving a man to his death showed heightened activity in the medial frontal gyrus, the posterior cingulate gyrus, and the angular gyrus, all centers of emotion and social cognition in the brain. Subjects considering whether to pull a trolley switch showed more activity in the dorsolateral prefrontal cortex, a region tied to reasoning.

The balance between those brain systems could shift, Greene found, depending on an individual’s degree of participation in the intervention. When subjects imagined pushing the large man with their hands or with a pole, 30 percent found it acceptable to throw him in front of the trolley. Yet 60 percent said it was OK to pull a switch that would topple him through a trapdoor and onto the tracks. Two different actions, same outcome.

“The main factor here is whether or not we use personal force,” says Greene, who points to historical and observational data suggesting people have a reluctance to hurt each other, even in times of war. “We seem to have this general mechanism that makes us reluctant to engage in physical violence, and the mechanism is on autopilot. In this very unusual case, our emotions don’t distinguish between gratuitous violence and acts aimed at promoting the general good.”

Next Greene wondered if he could intensify the conflict between the brain systems simply by raising the stakes. The crying baby dilemma, a frightening wartime drama, was the perfect test. Greene asked volunteers to imagine this: You are hiding with fellow villagers in a basement while enemy soldiers search for you. Suddenly your baby starts to cry, and you cover its mouth to muffle the sound. If the soldiers hear the baby, they will find all the villagers, including you and your baby, and kill everyone. But if you don’t move your hand, the baby will smother to death. What is the morally acceptable action?

“A good dilemma is one that makes you go ugh,” Greene says. “If you ask if it’s OK to feed someone to a shark, that’s an easy negative. In the best dilemmas, you have a strong emotional response competing with a compelling utilitarian justification. They have to be nasty.”

The crying baby scenario hit Greene’s volunteers in the gut, changing the dynamic between the two competing systems in their brains. Here, refusing to act had such dire consequences that 53 percent ultimately endorsed an otherwise unimaginable infanticide: They concluded that the protagonist had to suffocate the baby to save the group. Those making this decision typically employed the dorsolateral prefrontal cortex, a brain region associated with cognitive control. Clearly the two systems in the brain were at odds, but for the utilitarians, reason overpowered emotion in the neural tug-of-war.

Greene then had subjects consider a variety of moral dilemmas while pushing a button in response to an unrelated cue. Both tasks relied on the same cognitive control networks needed to overrule emotion. When that neural system was occupied by the button-pressing task, he found, people took longer to make utilitarian decisions. But pushing the button did not interfere with decisions based on gut instinct, which volunteers rendered just as quickly whether or not they were handling a second cognitive task. The results suggest that making the utilitarian choice—killing the baby, tossing the man off the footbridge—requires a lot of cognitive override as we effortfully push against our instincts to hold back.

“For centuries philosophers have taken intuitions at face value and tried to find theories that conformed to those intuitions,” Greene says. “But as philosophers have played with more and more scenarios, it’s been increasingly difficult to find a single theory that fits. My approach is to say, forget the overriding theory. Our moral judgments are sensitive to kooky things, like whether you’re pushing someone with your hands or dropping him with a switch. There is no single moral faculty; there’s just a dynamic interplay between top-down control processes and automatic emotional control in the brain.”

Other scientists have reached similar conclusions. Philosopher and attorney John Mikhail, who was studying linguistic theory with Noam Chomsky at MIT, became intrigued by Chomsky’s argument that some grammatical rules are hardwired in our brains. Aware of the buzz over trolley problems, Mikhail began to suspect that the foundations of moral judgment were innate as well. To test the notion, he took the question beyond the walls of academia (where test subjects have generally been Ivy League college students) to friends and relatives in Ohio and Tennessee and children in the local schools.

“Even 8-year-olds were saying it was permissible to switch the train away from five people and onto one, but not permissible to throw someone in front of a train,” Mikhail says. (Studies now show that 90 percent will pull the switch to save the five, but 70 percent say it is wrong to push the large man toward the same end.) “Why would kids and adults from different contexts all have pretty much the same moral intuitions if it weren’t some expression of a shared conscience or moral faculty that’s natural, not something one learns exclusively at school or church or from some other external source?”

Researchers have also been studying everyday moral dilemmas such as doing a favor or engaging in petty theft. In one such study, Jordan Grafman, a cognitive neuroscientist at the National Institute of Neurological Disorders and Stroke in Bethesda, Maryland, and Jorge Moll, a neuroscientist at the D’Or Institute for Research and Education in Rio de Janeiro, dangled a pot of $128 in front of 19 subjects and gave them the opportunity to receive the money or to donate a portion to various social causes. Brain scans showed that donating money activated primitive areas like the ventral tegmentum, part of the brain’s reward circuit that lights up in response to food, sex, and other pleasurable activities necessary to our survival. Moll concluded that humans are hardwired with the neural architecture for such pro-social sentiments as generosity, guilt, and compassion. While the dollar amounts were modest, those who donated more ($80 versus $20) showed a small but significant bump of activity in the brain’s septal region, an area strongly associated with social affiliation and attachment.

“This region is very rich in oxytocin receptors,” Moll says. “I think these instincts evolved from nonhuman primates’ capacity to form social bonds and from mother-offspring attachment capacities. In our species, such capacities were probably extended to support parochialism, group cohesion, and our tendency to attach symbolic meanings to social values and religion.”

Back at MIT, cognitive neuroscientists Liane Young and Rebecca Saxe have been studying the right temporal parietal junction, a brain region used for reasoning about others’ intent. If we know someone means to do harm, they wanted to know, does that knowledge play a role in how moral or immoral we judge them to be? In one scenario, volunteers were told about someone who puts what she thinks is sugar into another person’s coffee; it turns out to be poison, and the person dies. In another scenario, someone puts what she thinks is poison into the coffee, but it turns out to be sugar and the person is unharmed. Volunteers overwhelmingly called the intent to poison more immoral than the accidental poisoning, no matter what the outcome. As subjects made this judgment, the right temporal parietal junction was especially active on fMRI scans.

In a second set of studies, the researchers temporarily disabled the right temporal parietal junction with pulses of magnetism delivered through transcranial magnetic stimulation, a technique used to treat Parkinson’s disease and some intractable cases of depression. When that key brain region was disabled, subjects placed more weight on outcome and less on intent and were more likely to judge a failed murder attempt as morally permissible. The researchers concluded that the right temporal parietal junction not only was activated during this kind of moral judgment but was pivotal in factoring intent into the moral equation and shaping the volunteers’ point of view.

Another moral quirk is the tendency to value human lives less when more of them are threatened. A few years ago, the nation was riveted by the plight of a little boy thought to have been carried away by a weather balloon, but often we barely register the many victims of foreign wars. Or to use the chilling words often attributed to Stalin (but probably apocryphal), “The death of one man is a tragedy; the death of a million is a statistic.” 
To understand the impulse, Greene and Amitai Shenhav, a doctoral student in his lab, asked volunteers to imagine piloting a rescue boat toward a drowning man when they get a call saying that another boat, in the opposite direction, has capsized and its passengers are also drowning. They are also told that another rescue boat is approaching the second group and may or may not reach the people in time. The first pilot cannot save the first drowning man and then turn around and save the second group. He must choose.

In this study, published in Neuron last year, Greene and Shenhav observed that as the subjects made their decisions, they tapped a fascinating selection of brain areas: the insula, normally used to manage probability and risk, and the ventral striatum, which tracks magnitude. Mammals generally rely on these regions to find food and sex. For instance, a squirrel might use them to consider how many nuts are lying on the ground and his odds of grabbing a bunch of them before being chased by a dog. 
“You’d like to think that when Truman was deciding to use nuclear weapons and thinking about how many people would be killed and whether the decision would make the war even worse, some special voice of conscience was informing that decision,” Greene says. “But it seems that for decisions involving numbers and probabilities, we default to systems for figuring out how to find the most nuts.”

This reliance may explain why humans give less weight to human life as the number of potential victims goes up. If we are using neural systems whose evolutionary purpose was to find things like food, we reach a point rather quickly where the numbers no longer matter. After all, squirrels can’t make use of more nuts than they can carry away. “This is just a hypothesis,” Greene says. “But maybe the reason the lives of the next 20 people aren’t worth as much as the first 20 is because we’re using valuation mechanisms designed to think about things like nuts!”

The more neuroscientists investigate, the quirkier our instinctive moral decisions seem. University of Virginia psychologist Jonathan Haidt has shown that moral judgments can be affected by disgust, a marvelously easy-to-prompt emotional response to things like bitter foods, open sores, vomit, and feces. Evolutionary biologists theorize that we were wired with disgust to avoid pathogens and that it became more generalized to make us suspicious of strangers who might inadvertently threaten us with their unfamiliar foods, habits, and germs. In one study, Haidt and psychologist Simone Schnall of Cambridge University showed that filthy surroundings led test subjects to judge others’ moral transgressions more harshly. Other research has shown that politically conservative people report greater sensitivity to disgust.

Against that backdrop, Cornell psychologist David Pizarro asked random students entering a campus building if they would answer a questionnaire. One group was asked to complete the questionnaire while standing next to a hand-sanitizer dispenser; the other was asked to stand in an empty hallway. Pizarro found that the students who completed the questionnaire next to the hand sanitizer reported more conservative moral, social, and fiscal attitudes than the other group did.

“What the hand sanitizers seemed to do was increase a sense of vigilance or concern over contamination,” Pizarro says of this study, which was published in Psychological Science this past April. “The hand sanitizers made people more sensitive to certain features of conservative thinking. Even though the disgust response arose for reasons that have little to do with morality, it seems to be pretty effective at shaping moral judgments.” Clearly, it can be tricky to rely on our emotional responses if they are triggered by something as seemingly value-neutral as a hand-sanitizer dispenser.

A bumper sticker reading “Don’t Believe Everything You Think” is poised on the edge of the whiteboard in Greene’s office. It encapsulates the underlying message of the book he is writing. An analogy carried throughout the book compares the moral brain to a camera with automatic settings for taking a picture of a mountain or an indoor portrait or a close-up of a flower, and manual settings for unusual conditions or when we want a nonstandard artistic effect. Greene believes emotions and intuitions are the auto settings for our morality while reasoning is the manual mode. We need our intuitions to make the millions of quick judgments that fill our lives from day to day or else we couldn’t function. But they are not always trustworthy moral indicators, since they were set to handle problems deep in our evolutionary past and are often useless for the newer complexities of the modern world. We need to rely on our manual settings, the reasoning sections of our brain, for more complex or novel situations, Greene says.

That is why this research matters. It helps us become conscious of our brain’s moral machinery. When the sirens of our emotions are sounding in unproductive ways, we can crank up the reasoning parts of our brain to make sound decisions. Often, Greene observes, we have made progress as individuals and a society when we have managed to override our automatic settings, even if we did not realize that was what we were doing.

“We have to be willing to put our feelings aside and think a little more,” Greene says. “A lot of people still have negative attitudes about gays, but an incredible amount has changed. It’s now possible to be out in high school, in a way that was never possible before. Congratulations to us for not being slaves to our auto settings, for being able to put ourselves in manual mode and override them.”
