“A good dilemma is one that makes you go ugh,” Greene says. “If you ask if it’s OK to feed someone to a shark, that’s an easy negative. In the best dilemmas, you have a strong emotional response competing with a compelling utilitarian justification. They have to be nasty.”
The crying baby scenario hit Greene’s volunteers in the gut, changing the dynamic between the two competing systems in their brains. Here, refusing to act had such dire consequences that 53 percent ultimately endorsed an otherwise unimaginable infanticide: They concluded that the protagonist had to suffocate the baby to save the group. Those making this decision typically employed the dorsolateral prefrontal cortex, a brain region associated with cognitive control. Clearly the two systems in the brain were at odds, but for the utilitarians, reason overpowered emotion in the neural tug-of-war.
Greene then had subjects consider a variety of moral dilemmas while pushing a button in response to an unrelated cue. Both tasks relied on the same cognitive control networks needed to overrule emotion. When that neural system was occupied by the button-pressing task, he found, people took longer to make utilitarian decisions. But pushing the button did not interfere with decisions based on gut instinct, which volunteers rendered just as quickly whether or not they were handling a second cognitive task. The results suggest that making the utilitarian choice—killing the baby, tossing the man off the footbridge—requires effortful cognitive override: we must actively push against our instinct to hold back.
“For centuries philosophers have taken intuitions at face value and tried to find theories that conformed to those intuitions,” Greene says. “But as philosophers have played with more and more scenarios, it’s been increasingly difficult to find a single theory that fits. My approach is to say, forget the overriding theory. Our moral judgments are sensitive to kooky things, like whether you’re pushing someone with your hands or dropping him with a switch. There is no single moral faculty; there’s just a dynamic interplay between top-down control processes and automatic emotional control in the brain.”
Other scientists have reached similar conclusions. Philosopher and attorney John Mikhail, who was studying linguistic theory with Noam Chomsky at MIT, became intrigued by Chomsky’s argument that some grammatical rules are hardwired in our brains. Aware of the buzz over trolley problems, Mikhail began to suspect that the foundations of moral judgment were innate as well. To test the notion, he took the question beyond the walls of academia (where test subjects have generally been Ivy League college students) to friends and relatives in Ohio and Tennessee and children in the local schools.
“Even 8-year-olds were saying it was permissible to switch the train away from five people and onto one, but not permissible to throw someone in front of a train,” Mikhail says. (Studies now show that 90 percent will pull the switch to save the five, but 70 percent say it is wrong to push the large man toward the same end.) “Why would kids and adults from different contexts all have pretty much the same moral intuitions if it weren’t some expression of a shared conscience or moral faculty that’s natural, not something one learns exclusively at school or church or from some other external source?”
Researchers have also been studying everyday moral dilemmas such as doing a favor or engaging in petty theft. In one such study, Jordan Grafman, a cognitive neuroscientist at the National Institute of Neurological Disorders and Stroke in Bethesda, Maryland, and Jorge Moll, a neuroscientist at the D’Or Institute for Research and Education in Rio de Janeiro, dangled a pot of $128 in front of 19 subjects and gave them the opportunity to receive the money or to donate a portion to various social causes. Brain scans showed that donating money activated primitive areas like the ventral tegmentum, part of the brain’s reward circuit that lights up in response to food, sex, and other pleasurable activities necessary to our survival. Moll concluded that humans are hardwired with the neural architecture for such pro-social sentiments as generosity, guilt, and compassion. While the dollar amounts were modest, those who donated more ($80 versus $20) showed a small but significant bump of activity in the brain’s septal region, an area strongly associated with social affiliation and attachment.
“This region is very rich in oxytocin receptors,” Moll says. “I think these instincts evolved from nonhuman primates’ capacity to form social bonds and from mother-offspring attachment capacities. In our species, such capacities were probably extended to support parochialism, group cohesion, and our tendency to attach symbolic meanings to social values and religion.”
Back at MIT, cognitive neuroscientists Liane Young and Rebecca Saxe have been studying the right temporal parietal junction, a brain region used for reasoning about others’ intent. If we know someone means to do harm, they wanted to know, does that knowledge play a role in how moral or immoral we judge them to be? In one scenario, volunteers were told about someone who puts what she thinks is sugar into another person’s coffee; it turns out to be poison, and the person dies. In another scenario, someone puts what she thinks is poison into the coffee, but it turns out to be sugar and the person is unharmed. Volunteers overwhelmingly judged the intent to poison as more immoral than the accidental poisoning; it was intent, not outcome, that drove their verdicts. As subjects made this judgment, the right temporal parietal junction was especially active on fMRI scans.
In a second set of studies, the researchers temporarily disabled the right temporal parietal junction with pulses of magnetism delivered through transcranial magnetic stimulation, a technique used to treat Parkinson’s disease and some intractable cases of depression. When that key brain region was disabled, subjects placed more weight on outcome and less on intent and were more likely to judge a bungled murder attempt morally permissible. The researchers concluded that the right temporal parietal junction is not only active during this kind of moral judgment but pivotal in bringing the actor’s intent, and the volunteer’s grasp of the actor’s point of view, into the moral equation.