By IESE Insight
When Donald J. Trump was elected president of the United States, many people fixated on what could have motivated voters to choose him. Was it the promise to build a border wall to keep Mexicans out? Mass deportations of undocumented migrant workers? A ban on Muslims entering the country? Voters must have been swayed by his extreme views on immigration, people speculated, and therefore all Trump supporters must be racist xenophobes.
Is this generalized inference about Trump supporters accurate? Or, putting political leanings aside, might we be seeing a more basic cognitive heuristic at work?
This is essentially what prompted a study by IESE’s Kate Barasz — working with Tami Kim and Ioannis Evangelidis — to test cognitive biases in both political and nonpolitical contexts.
In psychology, heuristics are the simple, efficient rules we lean on to form judgments and make decisions about complex matters. Research has found that these mental shortcuts can produce erroneous inferences and shape broader beliefs, which makes understanding them important.
For their paper, Barasz et al. designed seven studies to learn more about what they call the “value-weight heuristic” — that is, a tendency to overweight more extreme positions to arrive at quick judgments.
In their first study, conducted just five days after the presidential election, they looked at the assumptions that supporters of Hillary Clinton made about Trump voters, and vice versa, and considered their accuracy. Screening participants through Amazon's Mechanical Turk crowdsourcing site, the researchers asked 300 Clinton and Trump voters about their own reasons for voting and their assumptions about why others voted the way they did.
The results offered initial evidence of a value-weight heuristic and its possible consequences. Specifically, Clinton voters tended to believe that Trump's extreme immigration policy was important in his supporters' decision to cast their votes, yet Trump voters themselves put more weight on his economic policies, a less extreme issue.
What’s more, Clinton voters who surmised that Trump voters put more weight on immigration viewed Trump voters less favorably overall.
Consider: if someone believes in the border wall and a Muslim ban, they may well have voted for Trump. But it is logically weaker to run the inference in reverse: to assume that because they voted for Trump, they must support his extreme views on immigration. That reasoning confuses cause and effect. Trump voters may, in fact, care more about his infrastructure spending promises. Yet when extreme views are in the mix, the co-authors find, people have inferential blind spots.
From political climate to actual climate
To remove politics from the equation, Barasz et al. turned to a more neutral topic: the weather. Their second study asked more than 200 participants about the climate in Fort Lauderdale, Florida, and Fort Worth, Texas, and how important that weather was to a hypothetical person’s decision to move there.
As expected, participants who considered the weather more extreme in either location were also likely to give it more weight in someone’s decision to move there.
Does that mean the availability of jobs, family ties or other factors matter less to people moving to Florida? Once again, the authors found a tendency to conflate value and weight, supporting their central claim that a value-weight heuristic is at play.
Five other studies, involving more than 2,000 participants, corroborated the main finding: whether the feature in question is a political stance or the weather, the more extreme it is, the more readily and confidently we assume we know what motivated the choice.
Why is this important? For one thing, political polarization seems to be growing, not just in the United States but all around the world. How we come to perceive others' attitudes, and what we believe they prioritize, may itself fuel further polarization.
This research suggests that the value-weight heuristic may be especially relevant and consequential where extremity is involved: intense policy stances and conspicuous platform issues can distort observers' perceptions, regardless of party affiliation. That can render observers insensitive to other factors, whether in addition to or instead of the extreme one, that could have motivated a choice.
What business positions or people have I dismissed for their apparent “extremity”?
Might I have made erroneous inferences about them?
What additional research could I do to gain a deeper understanding?
In sum, if people infer that an entire group is singularly motivated by an especially extreme or divisive policy issue, perceptions of political polarization are only likely to grow. And the more that happens, the harder it becomes to understand where the other side is coming from.
Making people aware of their inferential blind spots and over-inference tendencies could actually help reduce political polarization. Now wouldn't that be nice?