Debunking False Beliefs Requires Tackling Belief Systems

Study finds biased prior beliefs affected how people across both political parties updated their beliefs about fraud in the 2020 U.S. presidential election

Understanding how beliefs are formed and why they can be resistant to counterevidence is important in today’s polarized world, where views sharply diverge on issues ranging from vaccines to climate change.

To debunk a false belief, it may be better to target a person’s system of beliefs rather than trying to change the false belief itself, according to a new Dartmouth-led study published in Nature Human Behaviour analyzing how people update their beliefs about fraud following the 2020 U.S. presidential election.

“People don’t just have one single belief but a system of interrelated beliefs that depend on each other,” says lead author Rotem Botvinik-Nezer, a postdoctoral researcher in the Cognitive and Affective Neuroscience Lab at Dartmouth.

“This helps explain why it’s really hard to change people’s beliefs about election fraud just by showing them evidence against fraud, as you may need to convince them that the majority did not prefer their candidate and address the other beliefs anchoring their system,” says Botvinik-Nezer.

For a long time, members of the research team had been studying placebo effects, in which treatments with no active therapeutic ingredients can still produce healing outcomes through the power of the mind, and they became interested more broadly in how beliefs are formed and updated in high-stakes situations.

The researchers decided to analyze fraud beliefs during the 2020 U.S. presidential election. They surveyed more than 1,600 Americans on November 4, 2020, while the votes were still being counted in six key states.

Respondents reported their partisan preferences and were asked about their fraud beliefs based on hypothetical outcomes of the election. They were asked to indicate which presidential candidate, Joe Biden or Donald Trump, they wanted to win and how strongly they preferred that candidate; how likely their candidate was to win the true vote in the absence of fraud; and how likely they thought it was that fraud would affect the actual outcome.

The respondents were then randomly assigned to view one of two U.S. maps showing hypothetical winners in the remaining states, depicting either a Biden or a Trump win for president, and were asked again about their fraud beliefs. This gave the researchers an opportunity to examine how respondents updated their beliefs about election fraud after receiving new information.

Approximately three months after the initial survey, a subset of respondents completed a follow-up survey reporting their beliefs about the true vote winner and who had benefited from purported election fraud.

The results showed that both Democrats and Republicans increased their beliefs in election fraud when their candidate lost but decreased them when their candidate won. In addition, the stronger the preference for a candidate, the stronger the bias, or “desirability effect,” as the researchers dubbed it.

To better understand the cognitive mechanisms of such desirability effects and predict them quantitatively, the researchers developed a probability-based computational model. “We wanted to determine if this phenomenon was irrational, where people just believe what they want to believe, or if the process of updating beliefs may be rational,” says Botvinik-Nezer.

The team created a Bayesian model, which is commonly used to model how people make rational inferences. Using the survey data, they based their model on a system of three key beliefs: whether respondents thought, before seeing the outcome, that there was fraud in the election; who they thought would win the true vote; and who they thought would benefit from fraud.

The model contained no information about people’s preferences for a Biden or a Trump win; nevertheless, the team found that it accurately predicted how people would update their beliefs given their system of prior beliefs.
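The paper specifies the model precisely; as a rough illustration of the general approach only, the Python sketch below (with made-up probabilities and simplifying assumptions such as independent prior beliefs and a deterministic reported outcome) shows how Bayesian updating over a small system of beliefs, with no preference information at all, can raise the probability of fraud after an unexpected loss.

```python
# Minimal illustrative sketch (not the study's actual model): Bayesian updating
# over a small system of beliefs about an election outcome.
# Hypothesis space: (true_winner, fraud_occurred, fraud_beneficiary).
# Observation: the reported winner.

from itertools import product

CANDIDATES = ("Biden", "Trump")

def joint_prior(p_win_true, p_fraud, p_benefit):
    """Build a prior over (true_winner, fraud?, beneficiary) from three marginal beliefs.
    p_win_true: P(Biden wins the true vote)
    p_fraud:    P(fraud occurs)
    p_benefit:  P(fraud, if any, benefits Biden)
    Treating the three beliefs as independent is an assumption of this sketch."""
    prior = {}
    for winner, fraud, beneficiary in product(CANDIDATES, (True, False), CANDIDATES):
        p = p_win_true if winner == "Biden" else 1 - p_win_true
        p *= p_fraud if fraud else 1 - p_fraud
        p *= p_benefit if beneficiary == "Biden" else 1 - p_benefit
        prior[(winner, fraud, beneficiary)] = p
    return prior

def likelihood(reported_winner, winner, fraud, beneficiary):
    """P(reported outcome | hypothesis). If fraud occurs, assume the reported
    winner is whoever fraud benefits; otherwise it is the true winner."""
    expected = beneficiary if fraud else winner
    return 1.0 if reported_winner == expected else 0.0

def posterior(reported_winner, prior):
    unnorm = {h: p * likelihood(reported_winner, *h) for h, p in prior.items()}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Example: a respondent who strongly expects Trump to win the true vote,
# thinks fraud is somewhat plausible, and thinks fraud would benefit Biden.
prior = joint_prior(p_win_true=0.2, p_fraud=0.3, p_benefit=0.9)
post = posterior("Biden", prior)
p_fraud_post = sum(p for (_, fraud, _), p in post.items() if fraud)
print("P(fraud) before seeing the outcome: 0.30")
print(f"P(fraud) after seeing a Biden win:  {p_fraud_post:.2f}")
```

In this toy setup, the posterior probability of fraud rises from 0.30 to roughly 0.66 after the unexpected result, without any appeal to wishful thinking: the update falls out of the prior beliefs alone.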

The team then compared their model to other models of irrational belief updating (believing what you want to believe) and found that their rational model best explained the patterns of updating beliefs. The key was that Democrats and Republicans tended to believe that their candidate was supposed to win and that if there was any fraud, it was committed by the opposing partisan group.

The psychological idea in the model is that as people get new information, they update their beliefs based on their existing belief system, which is a rational process involving causal attribution of new evidence across competing explanations. “For respondents who strongly believed that Trump was supposed to win the 2020 election, it didn’t make sense to them that not enough people voted for him, so for some people, it might have been rational to infer that people from the other partisan group must have either cheated or committed fraud,” says Botvinik-Nezer.
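To make that attribution step concrete, the short continuation below (again with illustrative numbers hard-coded from the toy posterior above, not the study’s estimates) splits the posterior between the two competing explanations of the observed outcome: the reported winner genuinely won the true vote, or fraud flipped the result.

```python
# Continuation of the sketch above. Each key is (true_winner, fraud_occurred,
# fraud_beneficiary); the observed outcome is a reported Biden win.
# Posterior values are illustrative only.
post = {
    ("Biden", True,  "Biden"): 0.13,  # Biden won the true vote; fraud also occurred
    ("Trump", True,  "Biden"): 0.53,  # Trump won the true vote; fraud flipped the result
    ("Biden", False, "Biden"): 0.31,  # Biden won the true vote; no fraud
    ("Biden", False, "Trump"): 0.03,  # Biden won the true vote; no fraud
}
true_vote = sum(p for (winner, _, _), p in post.items() if winner == "Biden")
fraud_only = sum(p for (winner, fraud, _), p in post.items() if fraud and winner != "Biden")
print(f"Outcome attributed to the true vote:  {true_vote:.2f}")   # ~0.47
print(f"Outcome attributed entirely to fraud: {fraud_only:.2f}")  # ~0.53
```

With a stronger prior that the preferred candidate was supposed to win, more of the posterior mass shifts toward the fraud explanation, mirroring the pattern described below for part of the sample.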

The results demonstrated that about one-third of the sample attributed a hypothetical loss in the election almost entirely to fraud and not to the true vote.

“Our results show that if you have this other explanation for an election outcome, where fraud is a potential reality, then it becomes more plausible that fraud gets credit for the election,” says Tor Wager, the Diana L. Taylor Distinguished Professor in Neuroscience and director of the Dartmouth Brain Imaging Center. “When election fraud is considered plausible, this short-circuits the link between the belief in the true election winner and the evidence,” says Wager. “So, to change the false belief, you have to focus on the auxiliary beliefs that are supporting that short circuit.”
