Why do people believe in ‘fake news’ and share it on social media?
On March 9, 2018, MIT researchers Soroush Vosoughi, Deb Roy and Sinan Aral published their work in Science – praised as the “largest-ever study on fake news” – confirming what many of us already suspected: misinformation, lies and fabricated stories on Twitter penetrate deeper, travel farther and spread faster than truth and accurate information. The idea was sparked right after the Boston Marathon bombing in 2013. As Vosoughi notes, “I realized that […] a good chunk of what I was reading on social media was rumors; it was false news”.
The study examined a massive volume of tweets spanning a 12-year period. A distinction was made between truth/accurate information and lies/false information, using six different fact-checking websites (factcheck.org, hoax-slayer.com, politifact.com, snopes.org, truthorfiction.com, and urbanlegends.about.com). The researchers analyzed more than 126,000 stories shared by 3 million users approximately 4.5 million times. They found that “falsehood diffused significantly farther, faster, deeper and more broadly than the truth in all categories of information”. True information took about six times as long as false information to reach an audience of 1,500 Twitter users, while ‘fake news’ stories were 70% more likely to be retweeted.
One of the most striking results concerned the role of bots. Despite the widespread impression that bots play the dominant role, data showed that bots amplified true stories as much as they amplified false ones. It was humans who were the primary culprits for how quickly and how far ‘fake news’ spread on Twitter.
It’s our pleasure…
Psychiatrist and psychoanalyst Dr. Luba Kessler provides an interesting psychoanalytic perspective. Drawing on Freud’s pleasure principle (the fulfillment of desires, the driving force behind the id) and its constant tension with the reality principle (the norms and rules of the environment we live in), Kessler notes that “perhaps made-up stories simply entertain us. Like gossip, they are usually titillating and sensational. The stimulation of ‘fake news’ fulfills a craving to relieve the burdens of our lived realities”. On a deeper subconscious level, recent research suggests that people are cognitively biased, agreeing with false information as long as it confirms pre-existing opinions and beliefs.
What is new, and what creates significant risks, is the unprecedented speed and scale of diffusion. That is why data manipulation strategies like those employed in the Cambridge Analytica scandal should alarm us: with massive amounts of personal information, public trends can be “shaped”, structurally influenced and directed accordingly.
The believability of ‘fake news’
While true and verified stories have a normal flow in everyday reality, misinformation and fabricated stories are unusual, unexpected and can trigger emotions of high intensity. Research also indicates that this emotional intensity makes a tweet more likely to spread among people of the same ideological group, providing further direct evidence of how confirmation bias operates in conjunction with the “echo chamber” or “filter bubble” effect.
There is nevertheless a paradox connected with the “illusory truth effect”, the phenomenon whereby the mere repetition of a piece of information makes it more likely to be believed. Nothing more and nothing less than the good old golden rule of propaganda: repeat the lie and it will be believed. The effect operates in the social media environment as well.
Who is more likely to spread ‘fake news’?
The science of Social Network Analysis has contributed significantly to our understanding of why networked structures matter for the flow of information online. Applying graph theory to analyze nodes (people or objects in the network) and ties/links (the connections and interactions between them) has brought the properties of networks themselves to the fore and has provided clear visualizations of the diffusion patterns of false stories and misinformation online.
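To make this concrete, here is a minimal sketch (with invented data, not the study’s) of how a retweet cascade can be represented as a directed graph and summarized with the diffusion metrics discussed above: depth (the longest retweet chain), size (total users reached) and breadth (the widest single level of the cascade).

```python
# A hypothetical retweet cascade as an adjacency list: edges point from the
# account that was retweeted to the account that retweeted it.
from collections import Counter, deque

cascade = {
    "a": ["b", "c"],   # b and c retweet a's original post
    "b": ["d", "e"],   # d and e retweet b
    "d": ["f"],        # f retweets d
}

def cascade_metrics(graph, root):
    """Breadth-first walk from the original poster, recording each
    user's depth (length of the retweet chain that reached them)."""
    depth_of = {root: 0}
    queue = deque([root])
    while queue:
        user = queue.popleft()
        for retweeter in graph.get(user, []):
            if retweeter not in depth_of:
                depth_of[retweeter] = depth_of[user] + 1
                queue.append(retweeter)
    levels = Counter(depth_of.values())
    return {
        "depth": max(depth_of.values()),   # longest retweet chain
        "size": len(depth_of),             # total users reached
        "breadth": max(levels.values()),   # widest single cascade level
    }

print(cascade_metrics(cascade, "a"))  # {'depth': 3, 'size': 6, 'breadth': 2}
```

Comparing these metrics across many cascades is, in essence, how falsehoods were found to diffuse “farther, faster, deeper and more broadly” than the truth.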
Researchers Goyanes and Lavin focused on the demographic factors and situational predictors that influence the probability of sharing political misinformation. They used a representative sample of 1,002 US adults from the Pew Research Center, and the results of their logistic regression analysis revealed some interesting relationships between the factors under examination. In their own words, “…those more likely to share fake political news are males, older people, individuals with lower incomes, people with a high perception of the frequency of online ‘fake news’ and people who grant greater responsibility to the public in preventing ‘fake news’ stories. People less likely to share are democrat voters, people who inadvertently have shared ‘fake news’, people who grant great responsibility to the public in preventing ‘fake news’ stories and democrat female voters”.
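As a toy illustration of the method behind such findings (the coefficients and predictors below are invented, not the study’s estimates), logistic regression turns a weighted sum of demographic predictors into a probability of sharing:

```python
# A toy logistic regression: P(share) is the logistic (sigmoid) function
# applied to an intercept plus a weighted sum of predictor values.
# All weights and features here are hypothetical, for illustration only.
import math

def sharing_probability(features, weights, intercept):
    """P(share) = 1 / (1 + exp(-(intercept + sum(w_i * x_i))))"""
    z = intercept + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical predictors: [is_male, age_in_decades, income_scale,
# perceived_fake_news_frequency]; positive weights raise the odds of sharing.
weights = [0.4, 0.3, -0.5, 0.6]
intercept = -2.0

p = sharing_probability([1, 6, 2, 4], weights, intercept)
print(round(p, 3))  # a probability between 0 and 1
```

In the actual study, each fitted weight indicates whether a factor (gender, age, income, perceptions of ‘fake news’) raises or lowers the odds of sharing, holding the others constant.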
Although such statistics provide a useful new set of assumptions and hypotheses, they are not easily testable, and one should caution against overgeneralisation. Each user’s decision on how to treat ‘fake news’, the propensity to believe it and the drive to share it online, is highly contingent on how a variety of individual, situational and medium-related factors interrelate.
Not so simple…
Network Analysis and machine learning systems using complex, state-of-the-art algorithms provide good insight into “how” falsehood circulates along digital social networks by tracking the visible behaviour of users. They identify, trace, map and measure information flows. However, they seem unable to provide adequate answers to “why”. Extensive behavioural, cognitive and sociological research is needed to shed light on people’s drives and motives, beliefs and values, knowledge and literacy levels, moods and responses to media messages.
Misinformation is an extremely complex and multifactorial issue with possible negative consequences in almost every aspect of human activity. Nor is it a new phenomenon: misinformation and ‘fake news’ existed in the traditional media environment too. What makes the crucial difference is the speed and scale of the online diffusion of false information. Network structures and digitalisation have the power to amplify its negative consequences.
The Co-Inform Research Project brings together a multidisciplinary team of scientists and practitioners, to combat misinformation. Beyond mapping disinformation dissemination patterns, it also aspires to shed light on motives and mindsets. After all, there’s a single common factor: humans.
Co-inform project is co-funded by Horizon 2020 – the Framework Programme for Research and Innovation (2014-2020)
H2020-SC6-CO-CREATION-2016-2017 (CO-CREATION FOR GROWTH AND INCLUSION)
Type of action: RIA (Research and Innovation action)
Proposal number: 770302