What role, if any, do you believe that emotions should play in moral reasoning? Why?

Do Emotions and Morality Mix?

A philosopher explains how feelings influence right and wrong.



Daily life is peppered with moral decisions. Some are so automatic that they fail to register—like holding the door for a mother struggling with a stroller, or resisting a passing urge to elbow the guy who cut you in line at Starbucks. Others chafe a little more, like deciding whether or not to give money to a figure rattling a cup of coins on a darkening evening commute. A desire to help, a fear of danger, and a cost-benefit analysis of the contents of my wallet: these gut reactions and reasoned arguments all swirl beneath conscious awareness.

While society urges people towards morally commendable choices with laws and police, and religious traditions stipulate good and bad through divine commands, scriptures, and sermons, the final say lies within each of our heads. Rational thinking, of course, plays a role in how we make moral decisions. But our moral compasses are also powerfully influenced by the fleeting forces of disgust, fondness, or fear.

Should subjective feelings matter when deciding right and wrong? Philosophers have debated this question for thousands of years. Some say absolutely: Emotions, like our love for our friends and family, are a crucial part of what gives life meaning, and ought to play a guiding role in morality. Some say absolutely not: Cold, impartial, rational thinking is the only proper way to make a decision. Emotion versus reason—it’s one of the oldest and most epic standoffs we know.

Could using modern scientific tools to separate the soup of moral decision-making—peeking into the brain to see how emotion and reason really operate—shed light on these philosophical questions? The field of moral cognition, an interdisciplinary effort between researchers in social and cognitive psychology, behavioral economics, and neuroscience, has tried to do just that. Since the early 2000s, moral psychologists have been using experimental designs to assess people’s behavior and performance on certain tasks, along with fMRI scans to glimpse the brain’s hidden activity, to illuminate the structure of moral thinking.

One pioneer in this field, the philosopher and Harvard University psychology professor Joshua Greene, combined an iconic and thorny ethical thought experiment—the “trolley problem,” in which you must decide whether you’d flip a switch, or push a man off a footbridge, to cause one person to die instead of five—with brain imaging back in 2001. Those experiments, and subsequent ones, have helped to demystify the role that intuition plays in how we make ethical tradeoffs—and ultimately showed that moral decisions are subject to the same biases as any other type of decision.

I spoke with Greene about how moral-cognition research illuminates the role of emotion in morality—scientifically, but perhaps also philosophically. Below is a lightly edited and condensed transcript of our conversation.


Lauren Cassani Davis: Your research has revealed that people’s intuitions about right and wrong often influence their decisions in ways that seem irrational. If we know they have the potential to lead us astray, are our moral intuitions still useful?

Joshua Greene: Oh, absolutely. Our emotions, our gut reactions, evolved biologically, culturally, and through our own personal experiences because they have served us well in the past—at least, according to certain criteria, which we may or may not endorse. The idea is not that they’re all bad, but rather that they’re not necessarily up to the task of helping us work through modern moral problems, the kinds of problems that people disagree about arising from cultural differences and new opportunities or problems created by technology, and so on.

Davis: You describe moral decision-making as a process that combines two types of thinking: “manual” thinking that is slow, consciously controlled, and rule-based, and “automatic” mental processes that are fast, emotional, and effortless. How widespread is this “dual-process” theory of the human mind?

Greene: I haven’t taken a poll but it’s certainly—not just for morality but for decision-making in general—very hard to find a paper that doesn’t support, criticize, or otherwise engage with the dual-process perspective. Thanks primarily to Daniel Kahneman [the author of Thinking, Fast and Slow] and Amos Tversky, and everything that follows them, it’s the dominant perspective in judgment and decision making. But it does have its critics. There are some people, coming from neuroscience especially, who think that it’s oversimplified. They are starting with the brain and are very much aware of its complexity, aware that these processes are dynamic and interacting, aware that there aren’t just two circuits there, and as a result they say that the dual-process framework is wrong. But to me, it's just different levels of description, different levels of specificity. I haven't encountered any evidence that has caused me to rethink the basic idea that automatic and controlled processing make distinct contributions to judgment and decision making.

Davis: These neural mechanisms you describe are involved in making any kind of decision, right? The brain weighs an emotional response against a more calculated cost-benefit analysis whether you’re deciding whether to push a guy off a bridge to save people from a runaway train or trying not to impulse-buy a pair of shoes.

Greene: Right, it’s not specific to morality at all.

Davis: Does this have implications for how much we think about morality as special or unique?

Greene: Oh, absolutely. I think that's the clearest lesson of the last 10 to 15 years exploring morality from a neuroscientific perspective: There is, as far as we can tell, no distinctive moral faculty. Instead what we see are different parts of the brain doing all the same kinds of things that they do in other contexts. There’s no special moral circuitry, or moral part of the brain, or distinctive type of moral thinking. What makes moral thinking moral thinking is the function that it plays in society, not the mechanical processes that are taking place in the brain when people are doing it. I, among others, think that function is cooperation, allowing otherwise selfish individuals to reap the benefits of living and working together.

Davis: The idea that morality has no special place in the brain seems counterintuitive, especially when you think about the sacredness surrounding morality in religious contexts, and its association with the divine. Have you ever had pushback—people saying, this general-purpose mechanical explanation doesn’t feel right?

Greene: Yes, people often assume that morality has to be a special thing in the brain. And early on, there was—and to some extent there still is—a lot of research that compares thinking about a moral thing to thinking about a similar non-moral thing, and the researchers say, aha, here are the neural correlates of morality. But in retrospect it seems clear that when you compare a moral question to a non-moral question, if you see any differences there, it’s not because moral things engage a distinctive kind of cognition; instead, it’s something more basic about the content of what is being considered.

Davis: Professional ethicists often argue about whether we are more morally responsible for the harm caused by something we actively did than something we passively let happen—like in the medical setting, where doctors are legally allowed to let someone die, but not to actively end the life of a terminally ill patient, even if that’s their wish. You’ve argued that this “action-omission distinction” may draw a lot of its force from incidental features of our mental machinery. Have ideas like this trickled into the real world?

Greene: People have been making similar points for some time. Peter Singer, for example, says that we should be focused more on outcomes and less on what he views as incidental features of the action itself. He’s argued for a focus on quality of life over sanctity of life. Implicit in the sanctity-of-life idea is that it’s ok to allow someone to die, but it’s not ok to actively take someone’s life, even if it’s what they want, even if they have no quality of life. So certainly, the idea of being less mystical about these things and thinking more pragmatically about consequences, and letting people choose their own way—that, I think, has had a very big influence on bioethics. And I think I’m lending some additional support to those ideas.

Davis: Philosophers have long prided themselves on using reason—often worshipped as a glorious, infallible thing—not emotion, to solve moral problems. But at one point in your book, Moral Tribes, you effectively debunk the work of one of the most iconic proponents of reason, Immanuel Kant. You say that many of Kant’s arguments are just esoteric rationalizations of the emotions and intuitions he inherited from his culture. You’ve said that his most famous arguments are not fundamentally different from his other lesser-known arguments, whose conclusions we rarely take seriously today—like his argument that masturbation is morally wrong because it involves “using oneself as a means.” How have people reacted to that interpretation?

Greene: As you might guess, there are philosophers who really don’t like it. I like to think that I’ve changed some people's minds. What seems to happen more often is that people who are just starting out and confronting this whole debate and set of ideas for the first time, but who don’t already have a stake in one side or the other and who understand the science, read that and say, oh, right, that makes sense.

Davis: How can we know when we’re engaged in genuine moral reasoning and not mere rationalization of our emotions?

Greene: I think one way to tell is, do you find yourself taking seriously conclusions that on a gut level you don’t like? Are you putting up any kind of fight with your gut reactions? I think that’s the clearest indication that you are actually thinking it through as opposed to just justifying your gut reactions.

Davis: In the context of everything you’ve studied, from philosophy to psychology, what do you think wisdom means?

Greene: I would say that a wise person is someone who can operate his or her own mind in the same way that a skilled photographer can operate a camera. You need to not only be good with the automatic settings, and to be good with the manual mode, but also to have a good sense of when to use one and when to use the other. And which automatic settings to rely on, specifically, in which kinds of circumstances.

Over the course of your life you build up intuitions about how to act, but then circumstances may change, and what worked at one point no longer works at another. And so you can build up these higher-order intuitions about when to let go and try something new. There really is no perfect algorithm, but I would say that a wise mind is one that has the right levels of rigidity and flexibility at multiple levels of abstraction.

Davis: What do you think about the potential for specific introspective techniques—I’m thinking about meditation or mindfulness techniques from the Buddhist tradition—to act as a means of improving our own moral self-awareness?

Greene: That’s an interesting connection—you’re exploring your own mental machinery in meditation. You’re learning to handle your own mind in the same way that an experienced photographer learns to handle her camera. And so you’re building these higher-order skills, where you’re not only thinking, but you’re thinking about how to think, and monitoring your own lower-level thinking from a higher level—you have this integrated hierarchical thinking.

And from what I hear from the people who study it, certain kinds of meditation really do encourage compassion and willingness to help others. It sounds very plausible to me. Tania Singer, for example, has been doing some work on this recently that has been interesting and very compelling. This isn’t something I can speak on as an expert, but based on what I’ve heard from scientists I respect, it sounds plausible to me that meditation of the right kind can change you in a way that most people would consider a moral improvement.

Should emotions play a role in moral reasoning?

Emotions – that is to say, feelings and intuitions – play a major role in most of the ethical decisions people make. Most people do not realize how much their emotions direct their moral choices, but many researchers who study moral cognition argue that it is impossible to make important moral judgments without emotion.

What is the role of moral reasoning?

Moral reasoning is not only an essential part of how humans develop but also a fundamental aspect of how human societies change over time. It helps people recognize when change is needed, by drawing attention to inconsistencies in principles or to the unequal treatment of others.

Is morality based on reason or emotion?

According to Greene, reason and emotion are independent systems for coming to a moral judgment: reason produces characteristically utilitarian moral judgments, and emotion produces characteristically deontological judgments (Greene 2008).

What is the role of emotion in forming our values and decision making?

Emotions shape decisions via the depth of thought: in addition to influencing the content of thought, they also influence the depth of information processing involved in decision making.