The Neuroscience Behind Moral Instincts

Our moral instincts are deep-seated and sometimes completely irrational. Neuroscience shows where these biases come from, and how we can overcome them. 

By Kristin Ohlson | Wednesday, June 11, 2014

You arrive at the hill early, eager to cheer the cyclists racing past. The sun is bright, the people on both sides of the road are in high spirits, and speculation about the race passes through the crowd in waves. A hot dog vendor has positioned his cart up the hill, and the aroma of simmering meat wafts by, summoning your best memories of summer. Suddenly shouts erupt. The racers are approaching. You lean forward and see a blur of colors at the summit. Then you notice something wrong. The hot dog vendor has stepped away to make change, and someone has jostled his cart off its moorings. It is rolling downhill toward the road, gathering speed, and poised to kill dozens of cyclists unless someone shoves the cart across the road—but that would kill three spectators instead. What should you do?

When researchers presented this nightmarish dilemma to volunteers participating in an innovative neuropsychology study of morality at the Harvard University Moral Cognition Lab, the responses were evenly split. After moments of mental calculus, half the participants said the most moral decision was to push the cart into the bystanders; the other half disagreed, saying that killing for any reason was wrong, even if it meant saving more lives in the end.

During a visit to the lab, cognitive scientists Joshua Greene and Fiery Cushman, who designed the study, show me a series of brain scans taken while volunteers resolved the dilemma from inside an MRI machine. The scans were all marked by ghostly yellow blobs indicating areas of increased blood oxygen levels at the moment of judgment, Cushman explains. All decision making takes mental energy, so no surprise there. More intriguing were the scans from the volunteers who opted to save more lives. These showed noticeably brighter regions of yellow, suggesting that their decisions demanded significantly more brain power. To Greene and Cushman, it appeared that reason was overriding an automatic, instinctual response.

“You have these gut reactions and they feel authoritative, like the voice of God or your conscience,” Greene says. But these powerful instincts are not commands from a higher power; they are just emotions hardwired into the brain. Our first reaction under pressure—the default response—is to go with the gut. It takes more time and far more brain power to reason the situation out.

“The reason we feel caught in moral dilemmas is that truly, our brain has two different solutions to the problem,” Cushman says. “Those processes can conflict because the brain is at war with itself.”

Brain-bending moral dilemmas like the hot dog cart scenario have long been the province of philosophy. At times, judges and juries have also had to confront such sticky questions: Is it right to kill one person to save many? Should intentions be taken into account when evaluating the outcome—good or bad—of an act? What is right? What is just?

Neuroscientists like Greene and Cushman bring a new perspective to the debate by revealing the biology at work in the brain as it grapples with ethical decisions. Exploring the biological basis of morality presents opportunities to make better choices, Greene believes. “Once we understand what’s happening in our brain,” he says, “we might change our opinions about some long-standing moral issues, challenging that inner voice we’ve listened to for tens of thousands of years.”

Putting Utilitarianism to the Test 

Greene was on the high school debate team in Fort Lauderdale, Florida, when he was first introduced to the great moral philosophers John Stuart Mill and Immanuel Kant. Kant said moral truths were sacrosanct, determined by inviolable rights and duties, lines that could not be crossed. But Greene felt more simpatico with Mill, a utilitarian who argued that morality means serving the greater good. Then Greene found himself up against a crackerjack debater who threw out a withering question designed to hold utilitarian feet to the fire. “Tell me this,” she said. “Is it right for a doctor to kill a person and harvest the organs to save five critically ill patients? It must be OK if it serves the greater good, right?” Greene was unable to respond. “I was stumped right there in the middle of cross-examination!” he recalls. “I intuitively felt that this was wrong. I lost that debate and for a while thought utilitarianism itself might be wrong.”

His views changed again in college when, first at the University of Pennsylvania’s Wharton School and later at Harvard, he studied philosophy and psychology, especially heuristics, the mental shortcuts the mind uses to make quick decisions. Greene realized that his instant aversion to killing someone to harvest organs seemed like just such a shortcut. From an evolutionary vantage, he theorized that people who instinctively avoid killing one another would be able to maintain better harmony within the group.

Right about that time, Greene heard about an ethical thought experiment called the trolley problem, developed in the 1960s by British philosopher Philippa Foot and expanded by American philosopher Judith Jarvis Thomson. Psychologists had adapted that problem into two morally challenging scenarios. In the “switch scenario,” the subject is asked to imagine a trolley hurtling down a track toward five people, similar to the dilemma presented in the hot dog cart scenario. You can throw a switch to divert the trolley away from the five people, but it will kill one person standing on another track. Is it morally permissible to throw the switch?

In the second scenario, the trolley is again hurtling toward five people. On an overhead footbridge stands a man large enough to stop the trolley. Is it right to push the heavy man onto the track below, killing him to save the five, or is the most moral move doing nothing at all?

“I was fascinated by the work of Foot and Thomson,” Greene says, “because trolley problems capture the central tension between the two most dominant ideas in moral philosophy.” On the one hand, the philosophy associated with Kant argues that morality is about the rights and duties that all individuals have, identifying certain lines that must not be crossed. Pushing the man from the footbridge seems to cross one of those lines. On the other hand, the utilitarianism of Mill suggests that morality requires making the hard choices to serve the greater good—even if, on rare occasions, it can literally mean throwing someone under the bus. Flipping the switch appears to be such a choice.

Greene wrote a paper called “The Two Moralities.” Inside each of us, he wrote, the theories of Kant and Mill are constantly competing. Our minds are not devotees of one moral code or the other. We must always choose.

Soon afterward, Greene found himself in Israel for his sister’s bat mitzvah. To pass the time in his Jerusalem hotel room, he picked up a copy of Descartes’ Error, neuroscientist Antonio Damasio’s pioneering book about emotion in the brain. Damasio’s central narrative involves the strange case of Phineas Gage, a 19th-century railroad construction foreman whose skull was pierced by a metal spike during an explosion. Gage survived and physically recovered. But he was no longer socially functional. His capacity to make well-reasoned decisions and future plans was deeply impaired. Damasio and his wife, Hanna, a neurologist, studied Gage’s skull and, on the basis of historical reports of his personality decline, concluded that his problems had resulted from damage to the ventromedial prefrontal cortex, an area of the brain near the center of the forehead that is associated with emotion. They also studied contemporary patients with brain damage causing similar disruptions to personality, implicating other centers of emotion. Damasio proposed that the decision-making process, long deemed rooted in reason, was guided by emotion as well.

“I bolted straight up and said, Aha! This is it,” Greene recalls. “I think I actually copied the pages from the book and faxed them to my adviser.” People with these sorts of brain damage, Greene surmised, would be missing the gut feeling that makes other people cringe at the thought of throwing a man in front of a trolley. “They would be OK with pushing the guy off the footbridge; but in real life, in general, when it came to feeling what was right rather than reasoning it out, they would be stumped.”

Suddenly Greene saw morality not just as a philosophical concept but as a neurological phenomenon. This was the beginning of what he calls his dual-process theory of moral judgment, in which instinct and reason collide in a battle for supremacy. The ethical tension between Kant and Mill, he hypothesized, was based on the tensions between competing systems in the brain. “I was studying traditional philosophy, but I felt the real progress to be made in ethics was in neuroscience,” Greene says.

The Dual-Process Theory

To chart this new course, Greene sought the help of Jonathan Cohen, a Princeton neuroscientist studying how the brain coordinates attention, thought, and action in pursuit of a goal. One of Cohen’s main tools was functional magnetic resonance imaging (fMRI), the same instrument Greene and Cushman would later use to observe blood oxygen levels in different regions of the brain.

As an advanced grad student and then a postdoc in Cohen’s pioneering Neuroscience of Cognitive Control Laboratory, Greene used fMRI to scan volunteers as they considered trolley scenarios and other tough moral decisions. His landmark paper, published in Science in 2001, was among the first to document the brain structures involved in moral choice. Subjects contemplating shoving a man to his death showed heightened activity in the medial frontal gyrus, the posterior cingulate gyrus, and the angular gyrus, all centers of emotion and social cognition in the brain. Subjects considering whether to pull a trolley switch showed more activity in the dorsolateral prefrontal cortex, a region tied to reasoning.

The balance between those brain systems could shift, Greene found, depending on how directly the subject was to participate. When subjects imagined pushing the large man with their hands or a pole, 30 percent found it acceptable to throw him in front of the trolley. Yet 60 percent said it was OK to pull a switch that would topple him through a trapdoor and onto the tracks. Two different actions, same outcome.

“The main factor here is whether or not we use personal force,” says Greene, who points to historical and observational data suggesting people have a reluctance to hurt each other, even in times of war. “We seem to have this general mechanism that makes us reluctant to engage in physical violence, and the mechanism is on autopilot. In this very unusual case, our emotions don’t distinguish between gratuitous violence and acts aimed at promoting the general good.”

Next Greene tried to intensify the conflict between the brain systems by raising the stakes. The crying baby dilemma was the perfect test. Greene asked volunteers to imagine this: You are hiding with fellow villagers in a basement while enemy soldiers search for you. Suddenly your baby starts to cry, and you cover its mouth to muffle the sound. If the soldiers hear the baby, they will find all the villagers, including you and your baby, and kill everyone. But if you don’t move your hand, the baby will smother to death. What is the morally acceptable action?

“A good dilemma is one that makes you go ugh,” Greene says. “If you ask if it’s OK to feed someone to a shark, that’s an easy negative. In the best dilemmas, you have a strong emotional response competing with a compelling utilitarian justification. They have to be nasty.”

The crying baby scenario hit Greene’s volunteers in the gut, changing the dynamic between the two competing systems in their brains. Here, refusing to act had such dire consequences that 53 percent ultimately endorsed an otherwise unimaginable infanticide: They concluded that the protagonist had to suffocate the baby to save the group. Those making this decision typically employed the dorsolateral prefrontal cortex, a brain region associated with cognitive control. Clearly the two systems in the brain were at odds, but for the utilitarians, reason overpowered emotion in the neural tug-of-war.

Greene then had subjects consider a variety of moral dilemmas while pushing a button in response to an unrelated cue. Both tasks relied on the same cognitive control networks needed to overrule emotion. When that neural system was occupied by the button-pressing task, he found, people took longer to make utilitarian decisions. But responding to the button did not interfere with decisions based on gut instinct, which volunteers rendered just as quickly whether or not they were handling a second cognitive task. The results suggest that making the utilitarian choice—killing the baby, tossing the man off the footbridge—requires a lot of cognitive override as we strenuously push against our instincts to hold back.

“For centuries philosophers have taken intuitions at face value and tried to find theories that conformed to those intuitions,” Greene says. “But as philosophers have played with more and more scenarios, it’s been increasingly difficult to find a single theory that fits. My approach is to say, forget the overriding theory. Our moral judgments are sensitive to kooky things, like whether you’re pushing someone with your hands or dropping him with a switch. There is no single moral faculty; there’s just a dynamic interplay between top-down control processes and automatic emotional control in the brain.”

Hardwired Moral Judgment

Other scientists have reached similar conclusions. Philosopher and attorney John Mikhail, who was studying linguistic theory with Noam Chomsky at MIT, became intrigued by Chomsky’s argument that some grammatical rules are hardwired in our brains. Aware of the buzz over trolley problems, Mikhail suspected that the foundations of moral judgment were innate as well. To test the notion, he took the question beyond the walls of academia (where test subjects have generally been Ivy League college students) to friends and relatives in Ohio and Tennessee and children in the local schools.

“Even 8-year-olds were saying it was permissible to switch the train away from five people and onto one, but not permissible to throw someone in front of a train,” Mikhail says. (Studies now show that 90 percent will pull the switch to save the five, but 70 percent say it is wrong to push the large man toward the same end.) “Why would kids and adults from different contexts all have pretty much the same moral intuitions if it weren’t some expression of a shared conscience or moral faculty that’s natural, not something one learns exclusively at school or church or from some other external source?”

Researchers have also been studying everyday moral dilemmas such as doing a favor or engaging in petty theft. In one such study, Jordan Grafman, a cognitive neuroscientist at the National Institute of Neurological Disorders and Stroke in Bethesda, Maryland, and Jorge Moll, a neuroscientist at the D’Or Institute for Research and Education in Rio de Janeiro, offered $128 to 19 subjects, giving each the opportunity to take the money or to donate a portion to social causes. Brain scans showed that donating money activated primitive areas like the ventral tegmentum, part of the brain’s reward circuit that lights up in response to food, sex, and other pleasurable activities necessary to our survival. Moll concluded that human neural architecture promotes such pro-social sentiments as generosity, guilt, and compassion. Those who donated more ($80 versus $20) showed a small but significant bump of activity in the brain’s septal region, an area strongly associated with social affiliation and attachment.

“This region is very rich in oxytocin receptors,” Moll says. This neurochemical is involved in social relationships of all kinds, from family connections to sexual bonding. “I think these instincts evolved from nonhuman primates’ capacity to form social bonds and from mother-offspring attachment capacities. In our species, such capacities were probably extended to support parochialism, group cohesion, and our tendency to attach symbolic meanings to social values and religion.”

Back at MIT, cognitive neuroscientists Liane Young and Rebecca Saxe have been studying the right temporal parietal junction, a brain region used for reasoning about others’ intent. If we know someone intends to do harm, they asked, how does that knowledge affect the way we judge them? In one scenario, volunteers were told about someone who puts what she thinks is sugar into another person’s coffee; it turns out to be poison, and the person dies. In another scenario, someone puts what she thinks is poison into the coffee, but it turns out to be sugar and the person is unharmed. Volunteers overwhelmingly judged the intent to poison as more immoral than the accidental poisoning, no matter what the outcome. As subjects made this judgment, the right temporal parietal junction was especially active on fMRI scans.

In a second set of studies, the researchers temporarily disabled the right temporal parietal junction with magnetic field pulses delivered through transcranial magnetic stimulation, a technique used to treat Parkinson’s disease and some intractable cases of depression. With that key brain region disabled, subjects emphasized outcomes over intentions. If a murder attempt failed, they were more likely to judge it morally acceptable. The researchers concluded that the right temporal parietal junction is not only activated during this kind of moral judgment but pivotal in judging intent and adding that factor into the moral equation.

Another human moral quirk is the tendency to value human lives less when more of them are threatened. A few years ago, the nation was riveted by the plight of one little boy thought to have been carried away by a weather balloon, but often we barely register the many victims of foreign wars. Or, to use the chilling words often attributed to Stalin (but probably apocryphal), “The death of one man is a tragedy; the death of a million is a statistic.” To understand that surprising disconnect, Greene and Amitai Shenhav, a doctoral student in his lab, asked volunteers to imagine piloting a rescue boat toward a drowning man when a call comes in: another boat, in the opposite direction, has capsized, and its passengers are also drowning. A second rescue boat is approaching that group but may or may not reach the people in time. The pilot cannot save the drowning man and then turn around to save the group. He must choose.

In this study, published in Neuron in 2010, Greene and Shenhav observed that as the subjects made their decisions, they tapped a fascinating selection of brain areas: the insula, normally used to manage probability and risk, and the ventral striatum, which tracks magnitude. Mammals generally rely on these regions to find food and sex. For instance, a squirrel might use them to consider how many nuts are lying on the ground and his odds of grabbing a bunch of them before being chased by a dog. “You’d like to think that when Truman was deciding to use nuclear weapons and thinking about how many people would be killed and whether the decision would make the war even worse, some special voice of conscience was informing that decision,” Greene says. “But it seems that for decisions involving numbers and probabilities, we default to systems for figuring out how to find the most nuts.”

The finding may explain why humans discount human life as the number of potential victims goes up. If we are using neural systems whose evolutionary purpose was to find things like food, we reach a point rather quickly where the numbers no longer matter. After all, a squirrel can eat only so many nuts. “This is just a hypothesis,” Greene says. “But maybe the reason the lives of the next 20 people aren’t worth as much as the first 20 is because we’re using valuation mechanisms designed to think about things like nuts!” In his latest research, he and a postdoc have discovered that relying on visual imagery in particular encourages people to make these sorts of default judgments—further evidence that many systems in the brain affect moral decision making.

The more neuroscientists investigate, the quirkier our instinctive moral decisions seem. University of Virginia psychologist Jonathan Haidt has shown that moral judgments can be affected by disgust, a marvelously easy-to-prompt emotional response to things like bitter foods, open sores, vomit, and feces. Evolutionary biologists theorize that we were wired with disgust to avoid pathogens and that it became more generalized to make us suspicious of strangers who might inadvertently threaten us with their unfamiliar foods, habits, and germs. In one study, Haidt and psychologist Simone Schnall of Cambridge University showed that filthy surroundings caused test subjects to have harsher judgments of others’ approach to resolving moral dilemmas. Other research has shown that politically conservative people report greater sensitivity to disgust.

Against that backdrop, Cornell psychologist David Pizarro asked random students entering a campus building if they would answer a questionnaire. One group was asked to complete the questionnaire while standing next to a hand-sanitizer dispenser; the other was asked to stand in an empty hallway. Pizarro found that the students who completed the questionnaire next to the hand sanitizer reported more conservative moral, social, and fiscal attitudes than the other group did.

“What the hand sanitizers seemed to do was increase a sense of vigilance or concern over contamination,” Pizarro says of this study, which was published in Psychological Science last year. “The hand sanitizers made people more sensitive to certain features of conservative thinking. Even though the disgust response arose for reasons that have little to do with morality, it seems to be pretty effective at shaping moral judgments.” Clearly, it can be tricky to rely on our emotional responses if they are triggered by something as seemingly value-neutral as a hand-sanitizer dispenser.

Relying on Our Manual Settings 

A bumper sticker reading “Don’t Believe Everything You Think” is poised on the edge of the whiteboard in Greene’s office. It encapsulates the underlying message of the book he is writing. An analogy developed in the book compares the moral brain to a camera with automatic settings for taking a picture of a mountain or an indoor portrait or a close-up of a flower, and manual settings for unusual conditions or when we want an artistic effect. Greene believes emotions and intuitions are the auto settings for our morality, while reasoning is the manual mode. We need our intuitions to make the millions of quick judgments that fill our lives from day to day, or else we couldn’t function. But they are not always trustworthy moral indicators, since they were set to handle problems deep in our evolutionary past and are often useless for the complexities of the modern world. We need to rely on our manual settings, the reasoning sections of our brain, for more complex or novel situations, Greene says.

That is why this research matters. It helps us become conscious of our brain’s moral machinery. When the sirens of our emotions are sounding in unproductive ways, we can crank up the reasoning parts of our brain to make sound decisions. Often, Greene observes, we have made progress as individuals and a society when we have managed to override our automatic settings, even if we did not realize that was what we were doing.

“We have to be willing to put our feelings aside and think a little more,” Greene says. “A lot of people still have negative attitudes about gays, but an incredible amount has changed. It’s now possible to be out in high school, in a way that was never possible before. Congratulations to us for not being slaves to our auto settings, for being able to put ourselves in manual mode and override them.”

[This article originally appeared in print as "The Good, The Bad and The Brain."]
