Preface. Although I liked this book, I found other books on this topic even more profound about the human mind and how it relates to morality, politics, and more. Some of them are posted here, and others include:
M. Shermer. The Science of Good and Evil: Why People Cheat, Gossip, Care, Share, and Follow the Golden Rule
H. Garcia. Sex, Power, and Partisanship: How Evolutionary Science Makes Sense of Our Political Divide
R. Conniff. The Natural History of the Rich: A Field Guide
E. O. Wilson. The Social Conquest of Earth
Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (Springer, 2015) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report
Jonathan Haidt. 2013. The Righteous Mind: Why Good People Are Divided by Politics and Religion.
Haidt states his reasons for writing this book as:
- To understand why we are so easily divided into hostile groups, each one certain of its righteousness.
- To show why human nature is intrinsically moralistic, critical, and judgmental.
- Our obsession with righteousness is part of our evolution, and enables us to cooperate in groups from tribes to nations, unlike any other animal on earth.
- To show that we’re also doomed to conflict, which is fine, since it keeps competing ideas in balance.
- To show that we reason morally to argue strategically for ourselves, to justify our actions, and to defend our group, not to find the truth.
- To change how you think about morality, politics, religion, and each other, whether you’re liberal or conservative, secular or religious.
This book is one of several I’ve read lately on cognitive bias. Daniel Kahneman’s book “Thinking Fast and Slow” is one of the best introductions to this topic, Chris Mooney’s book “The Republican Brain” the most fun (see my review from April 2013), and this book the best description of how we’re ruled by the 99% of mental processes that occur outside of our conscious awareness.
Another central premise is that humans have six basic moral spheres, and which ones you subscribe to predicts a lot about whether you have a liberal or a conservative mind (as these terms are used in America). Haidt is not writing just about Democrats and Republicans in America, as Mooney is in “The Republican Brain”; he seeks to make the idea of the liberal and conservative mind a more universal concept across cultures and history.
In evolutionary terms, there’s our individual “selfish” nature, as well as a “higher” moral nature to cooperate with others in our group, fostering altruism and heroism within the group (as well as war and genocide towards other groups). Darwin predicted that the most cohesive groups would triumph over groups of selfish individuals.
Haidt thinks that religion probably evolved to help bind groups together in communities that shared the same morals. And once you join a group (such as the Democrats or Republicans), you become blind to the alternative moral worlds.
Haidt is a liberal who is interested in applying this understanding to helping Democrats win elections. He believes that Democratic candidates don’t appeal enough to people’s moral values, unlike Republicans, who know exactly how to push those buttons in the electorate.
Haidt has extensively studied thousands of responses to questionnaires at his website and come up with six kinds of basic morals that apply to any human society:
- Concern about harm and suffering (care)
- Fairness and injustice
- Loyalty to the group, and betrayal
- Authority and respect, and subversion
- Sanctity and purity, and degradation
- Liberty and oppression
A key argument of his is that some liberals are “blind” to some of these morals: it’s as if they can only taste sweet and salty substances, while conservatives can taste all of them. When I heard him interviewed on radio shows, I didn’t get this, but reading the book helps, since he offers many examples. If you can’t understand this point, you won’t be able to understand your own righteous mind, because it’s essential to realize that morality differs within societies.
The United States and Western Europe are extraordinary and unique in history because they’re oriented around individual rights, not communities. They’ve done away with the rules that you’ll find in all other societies on earth – rules that anthropologists call “purity” and “pollution” – by which he means taboos about what you can eat, how boys become men, the large percentage of the Hebrew Bible devoted to “food, menstruation, sex, skin, and handling of corpses”.
Societies above all must be ordered and stable, and there are only so many ways of doing this. By far the majority now and in the past have put the needs of groups and institutions above individual rights.
Much of what Haidt writes about is how Westerners can see certain situations as permissible while most other societies see the same situations as completely and unalterably wrong, because for them social conventions are moral issues, so violating them is wrong even if no one is harmed. For example, widows in India aren’t allowed to eat fish. If one does, Americans would side with her as an individual: she should be able to eat anything she wants, no one was harmed, and a culture that prevents it is wrong. But in India, Hindus believe that fish is a “hot” food that will stimulate a widow’s sexual appetite, leading to sex with another man, which will offend the spirit of her dead husband and prevent her from reincarnating at a higher level.
Since trying to help liberals understand where conservatives are coming from is such a large part of his book, here are some cross-cultural examples between India & America:
Examples both Indians and Americans agree are wrong:
- While walking, a man saw a dog sleeping on the road. He walked up to it and kicked it.
- A father said to his son, “If you do well on the exam, I will buy you a pen.” The son did well on the exam, but the father did not give him anything.
Americans: wrong. Indians: acceptable.
- A young married woman went alone to see a movie without informing her husband. When she returned home her husband said, “If you do it again, I will beat you black and blue.” She did it again; he beat her black and blue. (Judge the husband.)
- A man had a married son and a married daughter. After his death his son claimed most of the property. His daughter got little. (Judge the son.)
Americans: acceptable. Indians: wrong.
- In a family, a twenty-five-year-old son addresses his father by his first name.
- A woman cooked rice and wanted to eat with her husband and his elder brother. Then she ate with them. (Judge the woman.)
To further test this theory of differences in morality, Haidt made up stories in which no one was harmed but that offended people’s sense of disgust and disrespect (such as a family eating a dog, but no one saw them do it). Haidt found it interesting that the 38% who insisted wrong had been done justified it by inventing victims. When challenged, they said they knew it was wrong but couldn’t think of a reason why.
They were reasoning not to find the truth but to justify their emotional reactions.
To Haidt, this implies morality doesn’t come from reasoning. It comes from what culture you grow up in through social learning and some innateness. The range of moral issues is unusually narrow in Western individualistic cultures. More social cultures have broad moral domains that encompass many more aspects of life, with a lot more rules. We’re all born to be righteous, but we learn what to be righteous about.
Haidt calls the Western philosophy of worshiping reason and denying the passions “the rationalist delusion,” because once a group considers something sacred, its members become almost cult-like in their inability to think clearly about it anymore.
Further proof of the emotional basis of reasoning came from patients with brain damage to the prefrontal cortex who hadn’t lost any IQ or moral-reasoning ability but felt almost no emotions. This led to alienation from friends and family and an inability to make decisions; when they did make decisions, they were often foolish ones. We need emotional feelings to help us make conscious choices; otherwise all options seem equally good. Reasoning requires passion, and passion is the master, not reasoning. When passion goes away, we don’t cope well anymore.
We don’t reason to make our moral choices; we reason to convince others that we were right to make them.
Emotions used to be thought of as visceral but gradually scientists discovered they were full of cognition as well. Emotions first decide whether what just happened helped or hindered your goal and prepares you to respond. So if the event was hearing someone running up behind you in the dark, your nervous system is instantly fired up to fight or flee, your heart pounds, and your pupils widen to be better able to see what’s going on.
The vast majority of your emotions aren’t that dramatic; they’re so subtle you wouldn’t think of them as emotions. But watch yourself closely the next time you’re driving and you’ll notice flashes of annoyance at other drivers, just as you do when you read the newspaper.
The hundreds of effortless judgments and decisions we make every day might better be labeled intuitions than emotions, with only a few intuitions growing into fully felt emotions.
A summary of what Haidt has to say in his own words is that he calls reasoning the rider, and automatic processes, including emotion, intuition, and all forms of “seeing-that” the elephant. “I chose an elephant rather than a horse because elephants are so much bigger—and smarter—than horses. Automatic processes run the human mind, just as they have been running animal minds for 500 million years, so they’re very good at what they do, like software that has been improved through thousands of product cycles. When human beings evolved the capacity for language and reasoning at some point in the last million years, the brain did not rewire itself to hand over the reins to a new and inexperienced charioteer. Rather, the rider (language-based reasoning) evolved because it did something useful for the elephant. The rider can do several useful things. It can see further into the future (because we can examine alternative scenarios in our heads) and therefore it can help the elephant make better decisions in the present. It can learn new skills and master new technologies, which can be deployed to help the elephant reach its goals and sidestep disasters. And, most important, the rider acts as the spokesman for the elephant, even though it doesn’t necessarily know what the elephant is really thinking. The rider is skilled at fabricating post hoc explanations for whatever the elephant has just done, and it is good at finding reasons to justify whatever the elephant wants to do next. Once human beings developed language and began to use it to gossip about each other, it became extremely valuable for elephants to carry around on their backs a full-time public relations firm”.
“I also wanted to capture the social nature of moral judgment. Moral talk serves a variety of strategic purposes such as managing your reputation, building alliances, and recruiting bystanders to support your side in the disputes that are so common in daily life. I wanted to go beyond the first judgments people make when they hear some juicy gossip or witness some surprising event. I wanted my model to capture the give-and-take, the round after round of discussion and argumentation that sometimes leads people to change their minds.”
“We make our first judgments rapidly, and we are dreadful at seeking out evidence that might disconfirm those initial judgments. Friends can do for us what we cannot do for ourselves: they can challenge us, giving us reasons and arguments (link 3) that sometimes trigger new intuitions, thereby making it possible for us to change our minds. We occasionally do this when mulling a problem by ourselves, suddenly seeing things in a new light or from a new perspective”.
“Far more common than such private mind changing is social influence. Other people influence us constantly just by revealing that they like or dislike somebody. That form of influence is the social persuasion link. Many of us believe that we follow an inner moral compass, but the history of social psychology richly demonstrates that other people exert a powerful force, able to make cruelty seem acceptable and altruism seem embarrassing, without giving us any reasons or arguments.”
The social intuitionist model offers an explanation of why moral and political arguments are so frustrating: because moral reasons are the tail wagged by the intuitive dog. A dog’s tail wags to communicate. You can’t make a dog happy by forcibly wagging its tail. And you can’t change people’s minds by utterly refuting their arguments.
Since reasoning is not the source of either person’s beliefs, no logic that doesn’t speak to the emotions will get someone to change their mind.
In his classic book How to Win Friends and Influence People, Carnegie repeatedly urged readers to avoid direct confrontations. Instead he advised people to “begin in a friendly way,” to “smile,” to “be a good listener,” and to “never say ‘you’re wrong.’ ” The persuader’s goal should be to convey respect, warmth, and an openness to dialogue before stating one’s own case. You might think his techniques are superficial and manipulative, appropriate only for salespeople. But Carnegie was in fact a brilliant moral psychologist who grasped one of the deepest truths about conflict. He used a quotation from Henry Ford to express it: “If there is any one secret of success it lies in the ability to get the other person’s point of view and see things from their angle as well as your own.”
It’s such an obvious point, yet few of us apply it in moral and political arguments because our righteous minds so readily shift into combat mode. The rider and the elephant work together smoothly to fend off attacks and lob rhetorical grenades of our own. The performance may impress our friends and show allies that we are committed members of the team, but no matter how good our logic, it’s not going to change the minds of our opponents if they are in combat mode too. If you really want to change someone’s mind on a moral or political matter, you’ll need to see things from that person’s angle as well as your own. And if you do truly see it the other person’s way—deeply and intuitively—you might even find your own mind opening in response. Empathy is an antidote to righteousness, although it’s very difficult to empathize across a moral divide.
Animals assess the world thousands of times a day to decide whether to approach or avoid something without reasoning about it. We do too, but these perceptions are so fleeting and subtle that they don’t deserve the word emotion; they’re more like flashes of liking or disliking something the instant we see it.
So it makes sense that reasoning, which evolved later, is not the master and leader of our emotions, but merely a useful second check on reality that can override a bad emotional decision at times. But most of the time we anticipate that our emotions are steering in a certain direction and ignore the other possibilities as our thoughts jump in to rationalize the emotion we’re feeling.
Experiments have shown we tend to like familiar things. I’ve heard the music industry gets radio stations to sandwich a new song they want to turn into a hit between two popular familiar songs, in addition to playing the new song as often as possible.
This all operates at such fast speeds we’re often unaware of our biases. Haidt writes that most people have negative associations with many social groups, such as black people, immigrants, obese people, and the elderly.
We’re also positively biased towards pretty people: we think they’re smarter, and they’re more likely to be acquitted by a jury.
Here’s a scary experiment. Hundreds of pairs of photos of winners and losers of Senate and House elections were shown to people who were asked to pick out the face that looked most competent. It turned out that the person judged more competent had actually won the election two-thirds of the time. Being attractive or likeable-looking did not predict who won as well, so the judgment of competence wasn’t just a snap positive opinion. Even when people had only one-tenth of a second to decide between photos, the results were the same.
Our brains work awfully fast. Within a second of meeting someone, we’ve already made snap judgments about them.
Immorality makes people want to get clean. People who are asked to recall their own moral transgressions, or merely to copy by hand an account of someone else’s moral transgression, find themselves thinking about cleanliness more often, and wanting more strongly to cleanse themselves. They are more likely to select hand wipes and other cleaning products when given a choice of consumer products to take home with them after the experiment.
In one of the most bizarre demonstrations of this effect, Eric Helzer and David Pizarro asked students at Cornell University to fill out surveys about their political attitudes while standing near (or far from) a hand sanitizer dispenser. Those told to stand near the sanitizer became temporarily more conservative. Moral judgment is not a purely cerebral affair in which we weigh concerns about harm, rights, and justice. It’s a kind of rapid, automatic process more akin to the judgments animals make as they move through the world, feeling themselves drawn toward or away from various things. Moral judgment is mostly done by the elephant.
Roughly one in a hundred men (and many fewer women) are psychopaths. Most are not violent, but the ones who are commit nearly half of the most serious crimes, such as serial murder, serial rape, and the killing of police officers. Robert Hare, a leading researcher, defines psychopathy by two sets of features. There’s the unusual stuff that psychopaths do—impulsive antisocial behavior, beginning in childhood—and there are the moral emotions that psychopaths lack. They feel no compassion, guilt, shame, or even embarrassment, which makes it easy for them to lie, and to hurt family, friends, and animals. Psychopaths do have some emotions. When Hare asked one man if he ever felt his heart pound or stomach churn, he responded: “Of course! I’m not a robot. I really get pumped up when I have sex or when I get into a fight.” But psychopaths don’t show emotions that indicate that they care about other people. Psychopaths seem to live in a world of objects, some of which happen to walk around on two legs.
The ability to reason combined with a lack of moral emotions is dangerous. Psychopaths learn to say whatever gets them what they want. The serial killer Ted Bundy, for example, was a psychology major in college, where he volunteered on a crisis hotline. On those phone calls he learned how to speak to women and gain their trust. Then he raped, mutilated, and murdered at least thirty young women before being captured in 1978. Psychopathy does not appear to be caused by poor mothering or early trauma, or to have any other nurture-based explanation. It’s a genetically heritable condition that creates brains that are unmoved by the needs, suffering, or dignity of others. The elephant doesn’t respond with the slightest lean to the gravest injustice. The rider is perfectly normal—he does strategic reasoning quite well. But the rider’s job is to serve the elephant, not to act as a moral compass.
Infants as young as two months old will look longer at an event that surprises them than at an event they were expecting. If everything is a buzzing confusion, then everything should be equally surprising. But if the infant’s mind comes already wired to interpret events in certain ways, then infants can be surprised when the world violates their expectations.
Infants come equipped with innate abilities to understand their social world as well. They understand things like harming and helping.
By six months of age, infants are watching how people behave toward other people, and they are developing a preference for those who are nice rather than those who are mean. In other words, the elephant begins making something like moral judgments during infancy, long before language and reasoning arrive.
The results were clear and compelling. When people read stories involving personal harm, they showed greater activity in several regions of the brain related to emotional processing. Across many stories, the relative strength of these emotional reactions predicted the average moral judgment.
With few exceptions, the results tell a consistent story: the areas of the brain involved in emotional processing activate almost immediately, and high activity in these areas correlates with the kinds of moral judgments or decisions that people ultimately make.
In an article titled “The Secret Joke of Kant’s Soul,” Greene summed up what he and many others had found. Greene did not know what E. O. Wilson had said about philosophers consulting their “emotive centers” when he wrote the article, but his conclusion was the same as Wilson’s: We have strong feelings that tell us in clear and uncertain terms that some things simply cannot be done and that other things simply must be done. But it’s not obvious how to make sense of these feelings, and so we, with the help of some especially creative philosophers, make up a rationally appealing story [about rights]. This is a stunning example of consilience. Wilson had prophesied in 1975 that ethics would soon be “biologicized” and refounded as the interpretation of the activity of the “emotive centers” of the brain. When he made that prophecy he was going against the dominant views of his time. Psychologists such as Kohlberg said that the action in ethics was in reasoning, not emotion.
In the 33 years between the Wilson and Greene quotes, everything changed. Scientists in many fields began recognizing the power and intelligence of automatic processes, including emotion.
A slave is never supposed to question his master, but most of us can think of times when we questioned and revised our first intuitive judgment. The rider-and-elephant metaphor works well here. The rider evolved to serve the elephant, but it’s a dignified partnership, more like a lawyer serving a client than a slave serving a master. Good lawyers do what they can to help their clients, but they sometimes refuse to go along with requests. Perhaps the request is impossible (such as finding a reason to condemn Dan, the student council president—at least for most of the people in my hypnosis experiment). Perhaps the request is self-destructive (as when the elephant wants a third piece of cake, and the rider refuses to go along and find an excuse). The elephant is far more powerful than the rider, but it is not an absolute dictator. When does the elephant listen to reason? The main way that we change our minds on moral issues is by interacting with other people. We are terrible at seeking evidence that challenges our own beliefs, but other people do us this favor, just as we are quite good at finding errors in other people’s beliefs. When discussions are hostile, the odds of change are slight. The elephant leans away from the opponent, and the rider works frantically to rebut the opponent’s charges. But if there is affection, admiration, or a desire to please the other person, then the elephant leans toward that person and the rider tries to find the truth in the other person’s arguments. The elephant may not often change its direction in response to objections from its own rider, but it is easily steered by the mere presence of friendly elephants (that’s the social persuasion link in the social intuitionist model) or by good arguments given to it by the riders of those friendly elephants (that’s the reasoned persuasion link).
In other words, under normal circumstances the rider takes its cue from the elephant, just as a lawyer takes instructions from a client. But if you force the two to sit around and chat for a few minutes, the elephant actually opens up to advice from the rider and arguments from outside sources. Intuitions come first, and under normal circumstances they cause us to engage in socially strategic reasoning, but there are ways to make the relationship more of a two-way street.
Elephants rule, but they are neither dumb nor despotic. Intuitions can be shaped by reasoning, especially when reasons are embedded in a friendly conversation or an emotionally compelling novel, movie, or news story.
Why do we have this weird mental architecture? As hominid brains tripled in size over the last 5 million years, developing language and a vastly improved ability to reason, why did we evolve an inner lawyer, rather than an inner judge or scientist? Wouldn’t it have been most adaptive for our ancestors to figure out the truth, the real truth about who did what and why, rather than using all that brainpower just to find evidence in support of what they wanted to believe? That depends on which you think was more important for our ancestors’ survival: truth or reputation.
In this chapter I’ll show that reason is not fit to rule; it was designed to seek justification, not truth. I’ll show that Glaucon was right: people care a great deal more about appearance and reputation than about reality. In fact, I’ll praise Glaucon for the rest of the book as the guy who got it right—the guy who realized that the most important principle for designing an ethical society is to make sure that everyone’s reputation is on the line all the time, so that bad behavior will always bring bad consequences.
Human beings are the world champions of cooperation beyond kinship, and we do it in large part by creating systems of formal and informal accountability. We’re really good at holding others accountable for their actions, and we’re really skilled at navigating through a world in which others hold us accountable for our own. Phil Tetlock, a leading researcher in the study of accountability, defines accountability as the “explicit expectation that one will be called upon to justify one’s beliefs, feelings, or actions to others,” coupled with an expectation that people will reward or punish us based on how well we justify ourselves. When nobody is answerable to anybody, when slackers and cheaters go unpunished, everything falls apart.
We act like intuitive politicians striving to maintain appealing moral identities in front of our multiple constituencies.
In Tetlock’s research, subjects are asked to solve problems and make decisions. For example, they’re given information about a legal case and then asked to infer guilt or innocence. Some subjects are told that they’ll have to explain their decisions to someone else. Other subjects know that they won’t be held accountable by anyone. Tetlock found that when left to their own devices, people show the usual catalogue of errors, laziness, and reliance on gut feelings that has been documented in so much decision-making research. But when people know in advance that they’ll have to explain themselves, they think more systematically and self-critically. They are less likely to jump to premature conclusions and more likely to revise their beliefs in response to evidence.
Tetlock concludes that conscious reasoning is carried out largely for the purpose of persuasion, rather than discovery. But Tetlock adds that we are also trying to persuade ourselves. We want to believe the things we are about to say to others.
Our moral thinking is much more like a politician searching for votes than a scientist searching for truth.
Leary’s conclusion was that “the sociometer operates at a nonconscious and preattentive level to scan the social environment for any and all indications that one’s relational value is low or declining.” The sociometer is part of the elephant. Because appearing concerned about other people’s opinions makes us look weak, we (like politicians) often deny that we care about public opinion polls. But the fact is that we care a lot about what others think of us. The only people known to have no sociometer are psychopaths.
If you want to see post hoc reasoning in action, just watch the press secretary of a president or prime minister take questions from reporters. No matter how bad the policy, the secretary will find some way to praise or defend it. Reporters then challenge assertions and bring up contradictory quotes from the politician, or even quotes straight from the press secretary on previous days. Sometimes you’ll hear an awkward pause as the secretary searches for the right words, but what you’ll never hear is: “Hey, that’s a great point! Maybe we should rethink this policy.” Press secretaries can’t say that because they have no power to make or revise policy. They’re told what the policy is, and their job is to find evidence and arguments that will justify the policy to the public. And that’s one of the rider’s main jobs: to be the full-time in-house press secretary for the elephant.
In Wason’s classic “2–4–6” experiment, subjects were told that the triplet 2–4–6 conforms to a rule the experimenter has in mind, and were asked to propose triplets of their own to figure out the rule. “What about 35–37–39?” “Yes.” “OK, so the rule must be any series of numbers that rises by two?” “No.” People had little trouble generating new hypotheses about the rule, sometimes quite complex ones. But what they hardly ever did was test their hypotheses by offering triplets that did not conform to them. For example, proposing 2–4–5 (yes) and 2–4–3 (no) would have helped people zero in on the actual rule: any series of ascending numbers. Wason called this phenomenon the confirmation bias, the tendency to seek out and interpret new evidence in ways that confirm what you already think. People are quite good at challenging statements made by other people, but if it’s your belief, then it’s your possession—your child, almost—and you want to protect it, not challenge it and risk losing it.
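The logic of the 2–4–6 task can be sketched in a few lines of Python (the function names and specific triplets are my own illustration, not Wason’s actual materials). The point it shows: any triplet that fits the narrow guess also fits the broader true rule, so confirming probes can never tell the two apart; only a triplet that violates the guess is informative.

```python
def true_rule(triplet):
    # The experimenter's hidden rule: any strictly ascending series.
    a, b, c = triplet
    return a < b < c

def hypothesis(triplet):
    # A typical subject's narrower guess: each number rises by two.
    a, b, c = triplet
    return b == a + 2 and c == b + 2

# Confirming probes: every triplet that fits the guess also fits the
# true rule, so a "yes" answer can never distinguish them.
confirming = [(2, 4, 6), (8, 10, 12), (35, 37, 39)]
assert all(true_rule(t) and hypothesis(t) for t in confirming)

# A disconfirming probe: it violates the guess, so the experimenter's
# "yes" falsifies the guess immediately.
probe = (2, 4, 5)
print(true_rule(probe), hypothesis(probe))  # True False
```

Subjects who offered only confirming triplets left the two rules observationally identical, which is why they never discovered how broad the real rule was.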
Deanna Kuhn, a leading researcher of everyday reasoning, found evidence of the confirmation bias even when people solve a problem that is important for survival: knowing what foods make us sick. To bring this question into the lab she created sets of eight index cards, each of which showed a cartoon image of a child eating something—chocolate cake versus carrot cake, for example—and then showed what happened to the child afterward: the child is smiling, or else is frowning and looking sick. She showed the cards one at a time, to children and to adults, and asked them to say whether the “evidence” (the 8 cards) suggested that either kind of food makes kids sick. The kids as well as the adults usually started off with a hunch—in this case, that chocolate cake is the more likely culprit. They usually concluded that the evidence proved them right. Even when the cards showed a stronger association between carrot cake and sickness, people still pointed to the one or two cards with sick chocolate cake eaters as evidence for their theory, and they ignored the larger number of cards that incriminated carrot cake. As Kuhn puts it, people seemed to say to themselves: “Here is some evidence I can point to as supporting my theory, and therefore the theory is right.”
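As a toy sketch of why pointing at one or two confirming cards misleads, here is a minimal tally over a hypothetical eight-card deck in the style of Kuhn’s experiment (the specific card counts are my own illustration, not her data). Counting only the cards that fit the hunch “proves” it; comparing rates across all the cards reverses the conclusion.

```python
# Each card pairs a food with the child's outcome afterward.
cards = [
    ("chocolate", "sick"), ("chocolate", "fine"), ("chocolate", "fine"),
    ("carrot", "sick"), ("carrot", "sick"), ("carrot", "sick"),
    ("carrot", "fine"), ("chocolate", "fine"),
]

# Cherry-picking: count only the cards that confirm the chocolate hunch.
chocolate_sick = sum(1 for food, outcome in cards
                     if food == "chocolate" and outcome == "sick")
print(chocolate_sick)  # 1 -- "evidence I can point to", counted alone

# Weighing all the evidence: compare sickness rates for each food.
for food in ("chocolate", "carrot"):
    total = sum(1 for f, _ in cards if f == food)
    sick = sum(1 for f, o in cards if f == food and o == "sick")
    print(food, sick, "of", total, "sick")
# chocolate 1 of 4 sick
# carrot 3 of 4 sick
```

With these made-up counts, the one sick chocolate-cake eater is real evidence, but the full comparison incriminates carrot cake three to one.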
Perkins found that IQ was by far the biggest predictor of how well people argued, but it predicted only the number of my-side arguments. Smart people make really good lawyers and press secretaries, but they are no better than others at finding reasons on the other side. Perkins concluded that “people invest their IQ in buttressing their own case rather than in exploring the entire issue more fully and evenhandedly.”
Research on everyday reasoning offers little hope for moral rationalists. In the studies I’ve described, there is no self-interest at stake. When you ask people about strings of digits, cakes and illnesses, and school funding, people have rapid, automatic intuitive reactions. One side looks a bit more attractive than the other. The elephant leans, ever so slightly, and the rider gets right to work looking for supporting evidence—and invariably succeeds.
If thinking is confirmatory rather than exploratory in these dry and easy cases, then what chance is there that people will think in an open-minded, exploratory way when self-interest, social identity, and strong emotions make them want or even need to reach a preordained conclusion?
Many psychologists have studied the effects of having “plausible deniability.” In one such study, subjects performed a task and were then given a slip of paper and a verbal confirmation of how much they were to be paid. But when they took the slip to another room to get their money, the cashier misread one digit and handed them too much money. Only 20 percent spoke up and corrected the mistake. But the story changed when the cashier asked them if the payment was correct. In that case, 60 percent said no and returned the extra money. Being asked directly removes plausible deniability; it would take a direct lie to keep the money. As a result, people are three times more likely to be honest. You can’t predict who will return the money based on how people rate their own honesty, or how well they are able to give the high-minded answer on a moral dilemma of the sort used by Kohlberg. If the rider were in charge of ethical behavior, then there would be a big correlation between people’s moral reasoning and their moral behavior. But he’s not, so there isn’t.
When given the opportunity, many honest people will cheat. In Dan Ariely’s experiments, rather than a few bad apples weighting the averages, the majority of people cheated, and they cheated just a little bit.
People didn’t try to get away with as much as they could. Rather, when Ariely gave them anything like the invisibility of the ring of Gyges, they cheated only up to the point where they themselves could no longer find a justification that would preserve their belief in their own honesty. The bottom line is that in lab experiments that give people invisibility combined with plausible deniability, most people cheat. The press secretary (also known as the inner lawyer) is so good at finding justifications that most of these cheaters leave the experiment as convinced of their own virtue as they were when they walked in.
The difference between can and must is the key to understanding the profound effects of self-interest on reasoning. It’s also the key to understanding many of the strangest beliefs—in UFO abductions, quack medical treatments, and conspiracy theories.
The social psychologist Tom Gilovich studies the cognitive mechanisms of strange beliefs. His simple formulation is that when we want to believe something, we ask ourselves, “Can I believe it?” Then (as Kuhn and Perkins found), we search for supporting evidence, and if we find even a single piece of pseudo-evidence, we can stop thinking. We now have permission to believe. We have a justification, in case anyone asks. In contrast, when we don’t want to believe something, we ask ourselves, “Must I believe it?” Then we search for contrary evidence, and if we find a single reason to doubt the claim, we can dismiss it. You only need one key to unlock the handcuffs of must. Psychologists now have file cabinets full of findings on “motivated reasoning,” showing the many tricks people use to reach the conclusions they want to reach. When subjects are told that an intelligence test gave them a low score, they choose to read articles criticizing (rather than supporting) the validity of IQ tests. When people read a (fictitious) scientific study that reports a link between caffeine consumption and breast cancer, women who are heavy coffee drinkers find more flaws in the study than do men and less caffeinated women.
If people can literally see what they want to see—given a bit of ambiguity—is it any wonder that scientific studies often fail to persuade the general public? Scientists are really good at finding flaws in studies that contradict their own views, but it sometimes happens that evidence accumulates across many studies to the point where scientists must change their minds. I’ve seen this happen in my colleagues (and myself) many times, and it’s part of the accountability system of science—you’d look foolish clinging to discredited theories. But for nonscientists, there is no such thing as a study you must believe. It’s always possible to question the methods, find an alternative interpretation of the data, or, if all else fails, question the honesty or ideology of the researchers.
And now that we all have access to search engines on our cell phones, we can call up a team of supportive scientists for almost any conclusion twenty-four hours a day. Whatever you want to believe about the causes of global warming or whether a fetus can feel pain, just Google your belief. You’ll find partisan websites summarizing and sometimes distorting relevant scientific studies. Science is a smorgasbord, and Google will guide you to the study that’s right for you.
Many political scientists used to assume that people vote selfishly, choosing the candidate or policy that will benefit them the most. But decades of research on public opinion have led to the conclusion that self-interest is a weak predictor of policy preferences. Parents of children in public school are not more supportive of government aid to schools than other citizens; young men subject to the draft are not more opposed to military escalation than men too old to be drafted; and people who lack health insurance are not more likely to support government-issued health insurance than people covered by insurance.
Rather, people care about their groups, whether those be racial, regional, religious, or political. The political scientist Don Kinder summarizes the findings like this: “In matters of public opinion, citizens seem to be asking themselves not ‘What’s in it for me?’ but rather ‘What’s in it for my group?’” Political opinions function as “badges of social membership.” They’re like the array of bumper stickers people put on their cars showing the political causes, universities, and sports teams they support. Our politics is groupish, not selfish.
Studies have documented the “attitude polarization” effect that happens when you give a single body of information to people with differing partisan leanings. Liberals and conservatives actually move further apart when they read about research on whether the death penalty deters crime, or when they rate the quality of arguments made by candidates in a presidential debate, or when they evaluate arguments about affirmative action or gun control.
The psychologist Drew Westen used fMRI to scan the brains of committed partisans as they viewed slides showing their own candidate contradicting himself, followed by a final slide that resolved the contradiction. The threatening information (their own candidate’s hypocrisy) immediately activated a network of emotion-related brain areas—areas associated with negative emotion and responses to punishment. The handcuffs (of “Must I believe it?”) hurt. Some of these areas are known to play a role in reasoning, but there was no increase in activity in the dorsolateral prefrontal cortex (dlPFC). The dlPFC is the main area for cool reasoning tasks. Whatever thinking partisans were doing, it was not the kind of objective weighing or calculating that the dlPFC is known for. Once Westen released them from the threat, the ventral striatum started humming—that’s one of the brain’s major reward centers. All animal brains are designed to create flashes of pleasure when the animal does something important for its survival, and small pulses of the neurotransmitter dopamine in the ventral striatum (and a few other places) are where these good feelings are manufactured. Heroin and cocaine are addictive because they artificially trigger this dopamine response. Rats who can press a button to deliver electrical stimulation to their reward centers will continue pressing until they collapse from starvation. Westen found that partisans escaping from handcuffs (by thinking about the final slide, which restored their confidence in their candidate) got a little hit of that dopamine. And if this is true, then it would explain why extreme partisans are so stubborn, closed-minded, and committed to beliefs that often seem bizarre or paranoid. Like rats that cannot stop pressing a button, partisans may be simply unable to stop believing weird things. The partisan brain has been reinforced so many times for performing mental contortions that free it from unwanted beliefs. Extreme partisanship may be literally addictive.
From Plato through Kant and Kohlberg, many rationalists have asserted that the ability to reason well about ethical issues causes good behavior. They believe that reasoning is the royal road to moral truth, and they believe that people who reason well are more likely to act morally. But if that were the case, then moral philosophers—who reason about ethical principles all day long—should be more virtuous than other people. Are they? The philosopher Eric Schwitzgebel tried to find out. He used surveys and more surreptitious methods to measure how often moral philosophers give to charity, vote, call their mothers, donate blood, donate organs, clean up after themselves at philosophy conferences, and respond to emails purportedly from students. And in none of these ways are moral philosophers better than other philosophers or professors in other fields. Schwitzgebel even scrounged up the missing-book lists from dozens of libraries and found that academic books on ethics, which are presumably borrowed mostly by ethicists, are more likely to be stolen or just never returned than books in other areas of philosophy. In other words, expertise in moral reasoning does not seem to improve moral behavior, and it might even make it worse (perhaps by making the rider more skilled at post hoc justification). Schwitzgebel has yet to find a single measure on which moral philosophers behave better than other philosophers. Anyone who values truth should stop worshipping reason. We all need to take a cold hard look at the evidence and see reasoning for what it is.
Most of the bizarre and depressing research findings make perfect sense once you see reasoning as Hugo Mercier and Dan Sperber do: as having evolved not to help us find truth but to help us engage in arguments, persuasion, and manipulation in the context of discussions with other people. As they put it, “skilled arguers … are not after the truth but after arguments supporting their views.” This explains why the confirmation bias is so powerful, and so ineradicable. How hard could it be to teach students to look on the other side, to look for evidence against their favored view? Yet, in fact, it’s very hard, and nobody has yet found a way to do it. It’s hard because the confirmation bias is a built-in feature (of an argumentative mind), not a bug that can be removed (from a platonic mind). I’m not saying we should all stop reasoning and go with our gut feelings. Gut feelings are sometimes better guides than reasoning for making consumer choices and interpersonal judgments, but they are often disastrous as a basis for public policy, science, and law. Rather, what I’m saying is that we must be wary of any individual’s ability to reason.
In the same way, each individual reasoner is really good at one thing: finding evidence to support the position he or she already holds, usually for intuitive reasons. We should not expect individuals to produce good, open-minded, truth-seeking reasoning, particularly when self-interest or reputational concerns are in play. But if you put individuals together in the right way, such that some individuals can use their reasoning powers to disconfirm the claims of others, and all individuals feel some common bond or shared fate that allows them to interact civilly, you can create a group that ends up producing good reasoning as an emergent property of the social system. This is why it’s so important to have intellectual and ideological diversity within any group or institution whose goal is to find truth (such as an intelligence agency or a community of scientists) or to produce good public policy (such as a legislature or advisory board).
Miscellaneous insights of Haidt:
Religion. “Groups create supernatural beings not to explain the universe but to order their societies”
Evolution of morality in children: For a long time, moral psychology believed in rationalism—the idea that kids figure out morality for themselves when their minds are ready and when they have the right kinds of experiences. Piaget and others did experiments showing that children grow in their ability to understand and apply rules, and resolve arguments with increasing sophistication as their minds mature. Piaget thought kids learned morality by playing with other kids, not from adults and not from hard-wired genetics. Kohlberg and others unintentionally slanted their experiments by using a framework that was secular, questioning, and egalitarian; Haidt sets out to show that there is a lot more going on in the development of childhood morality.
More reading: 28 May 2012. Dan Jones. The argumentative ape: Why we’re wired to persuade. New Scientist.