The neuroscience of misinformation

In 2025, disinformation remains a major social challenge with serious consequences for people's health, well-being, and democratic rights. From election interference in the United States to science denialism during the pandemic, to the search for scapegoats in Spain in the wake of COVID-19, disinformation campaigns are continually used to skew public opinion, polarize the electorate, and erode any sense of shared reality.
Not everyone is affected equally. For example, according to studies conducted in the United States, far-right voters are the most susceptible to misinformation: they are several times more exposed to it and more likely to share it on social media than centrist or left-wing voters.
Given this situation, it's urgent to understand why we share misinformation. Only then can we find solutions to a phenomenon that continues to rise.
In the psychological field, several theories have been proposed to explain our susceptibility to misinformation. On the one hand, the cognitive model argues that people believe misinformation because they lack the capacity or motivation to analyze whether the information is true.
This perspective implies that raising public awareness about the importance of checking or evaluating the veracity of information may be sufficient to mitigate its effects.
On the other hand, the sociocognitive model proposes that people tend to blindly believe any information that reinforces their ideological positions, especially when it benefits the groups with which they identify. If this is the case, cultivating a critical mind may not be enough to mitigate the effects of misinformation.
To examine these two theoretical propositions, my research team and I decided to study the psychological and neurobiological bases of our susceptibility to misinformation, focusing on the population that currently seems most susceptible to it: far-right voters in Spain and the United States. We designed a collection of fake posts for the social network X (formerly Twitter) in which different political leaders criticized the government on current issues such as immigration, women's rights, and national unity. The posts included false information on all of these topics.
For example, in one of the posts, a far-right leader stated: “This year alone, more than 100,000 immigrants have stormed our shores because of the government.”
Our goal was to see whether mentioning group-identifying values, such as attitudes toward immigration, made voters more likely to share the posts, and to test which brain mechanisms underlie the decision to share a post with others on social media.
To do this, we conducted an online experiment with 400 far-right voters and 400 center-right voters (whom we used as a control group) in Spain, and 800 Republican voters in the United States, of whom more than 100 fully identified with Donald Trump.
This first comparative study between Spain and the United States allowed us to verify that mentioning identity values in the Twitter posts we designed increased the willingness to share misinformation in all samples, regardless of whether the language used was more or less inflammatory.
In addition, Republicans who strongly identified with Trump, as well as far-right voters in Spain, were more likely to share the posts than other participants.

US President Donald Trump. Photo: AFP
Another interesting result was that participants with greater analytical skills were more resistant to misinformation, but only when the posts did not mention identity-based values. Ultimately, we saw that mentions of group values, such as those related to immigration, motivated the most extreme voters to share misinformation even when they had strong analytical skills.
Following these results, we wanted to investigate the brain processes involved in making decisions about whether or not to share misinformation. To do so, we recruited a sample of 36 far-right voters for a functional neuroimaging study.
This technique allows us to obtain images of brain activity while participants perform a task such as problem-solving. The brain activity signal we obtain reflects blood oxygenation levels, allowing us to assess which brain regions are most metabolically active. Once recruited, participants filled out a questionnaire and completed a task inside an MRI scanner while we obtained images of their brains. The task was very similar to the one we used in the online experiment: they had to decide to what extent they would share on their social networks a series of Twitter posts containing misinformation on key topics for the group, such as immigration and gender issues.
Brain imaging analysis revealed significant neural activity in circuits related to social cognition—our ability to navigate social environments. Some of this activity was found in circuits associated with our ability to attribute mental states to other people, such as intentions or desires, a skill known as theory of mind. Another portion was found in brain regions that allow us to adapt to norms.
Most interesting of all, activity in these brain regions spiked when posts mentioned values that defined the group's identity, but not when they only included criticism of the government on less relevant issues, such as the state of the roads, for example.
Identity values

Our results, and those of other similar studies conducted in the United States, suggest that our online behavior responds to a need to connect with our audience. Furthermore, our research suggests that we invest significantly more cognitive resources in decisions that involve identity values.
This may be because mentioning these values creates a critical situation that forces us to take a position for or against them, one in which it is important to predict the response our audience will consider appropriate.
Sharing a post with a clear stance on immigration shows others that we are fully aligned with the group. Therefore, it serves a social function: it's a way to reaffirm our membership in a group.
Furthermore, any member of a group with clear identity values, not just one with far-right ideology, might feel equally compelled to invest resources in assessing the appropriate response to their group. Everything suggests that people have partisan motivations for sharing disinformation. And that's something that interventions designed to stop the spread of disinformation should take into account.
Cultivating a critical spirit and contrasting information can help combat disinformation in general, as emphasized by institutions such as the European Commission. However, for those with extreme ideological positions, it is necessary to seek solutions that take into account their ties to their group and address their distrust in society.
(*) Neuroscientist and professor of methods in behavioral sciences, UAB.
(**) This article was produced by a non-profit organization that seeks to share ideas and academic knowledge with the public, and is reproduced here under a Creative Commons license.
eltiempo