Content Management Policies for Combating Misinformation

Author: Akram Sadat Hosseini, Researcher 

Misinformation on social media is a broad term for deceptive content: either manipulated material or genuine content taken out of context. In any form, misinformation poses an undeniable threat to society, as it can influence people's beliefs and decisions.

Despite the vast amount of misinformation present across media outlets, there are currently no clear policies for managing misinformation on social media platforms: how to deal with it once detected, which legal frameworks and ethical issues to consider, how to disseminate corrective information, and how to encourage citizens to read and share that information as soon as it becomes available.


Psychology research shows that simply presenting people with corrective information does not change their fundamental beliefs and opinions, and may even reinforce them.

Dr. Emily A. Thorson’s research showed that people often struggle to change their beliefs even after learning that information they believed to be reliable was incorrect. Some strategies appear more effective at correcting misperceptions: providing an explanation rather than a simple refutation, exposing readers to related but disconfirming stories, and revealing the demographic similarity of the opposing group. Recent work by Cambridge University is also exploring a “fake news vaccine” that immunises users against the problem by pre-emptively exposing readers to a small dose of the misinformation.


Online platforms such as Twitter and Facebook provide the perfect environment for misinformation to thrive.

Social media platforms have started to adapt and improve their content management policies after repeatedly facing harsh criticism and accusations of contributing to the spread of misinformation. For example, Facebook’s community standards lay out its content management policies, including a dedicated section on how it plans to combat the spread of false news on its platform and a list of methods it will employ for better regulation. These methods include disrupting the economic incentives for spreading misinformation, using machine learning to detect false news, and integrating third-party fact-checkers. Other platforms lay out their content management and misinformation policies in a similar format. However, these policies are the result of centralized policy-making by small groups of experts, platform managers, and developers.
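To give a flavour of what machine-learning false-news detection involves at its simplest, the toy sketch below trains a naive Bayes bag-of-words classifier on a handful of labelled headlines. This is purely illustrative and is not the method used by Facebook or any platform named above; the example texts, labels, and function names are invented for demonstration, and production systems use far richer features and models.

```python
# Illustrative sketch only: a toy naive Bayes text classifier for
# "false" vs. "credible" headlines. Not any platform's actual system.
from collections import Counter, defaultdict
import math

def train(examples):
    """examples: list of (text, label) pairs; returns a simple model."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()             # label -> number of documents
    vocab = set()
    for text, label in examples:
        words = text.lower().split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return {"words": word_counts, "labels": label_counts, "vocab": vocab}

def predict(model, text):
    """Return the most probable label, using add-one smoothing."""
    words = text.lower().split()
    total_docs = sum(model["labels"].values())
    best_label, best_score = None, float("-inf")
    for label, doc_count in model["labels"].items():
        score = math.log(doc_count / total_docs)  # log prior
        total_words = sum(model["words"][label].values())
        for w in words:
            count = model["words"][label][w]
            score += math.log((count + 1) / (total_words + len(model["vocab"])))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented training data for illustration only.
examples = [
    ("miracle cure doctors hate this trick", "false"),
    ("shocking secret they don't want you to know", "false"),
    ("study published in peer reviewed journal", "credible"),
    ("official statistics released by the ministry", "credible"),
]
model = train(examples)
print(predict(model, "shocking miracle cure secret"))  # prints "false"
```

Real detectors also draw on network signals (who shares what, and how fast) and on fact-checker verdicts, not just the text of a post.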


This top-down approach is the Achilles’ heel of these policies.

To manage the diverse types of misinformation effectively, the policy design process should be decentralized and collaborative: it should allow for democratic participation and the open inclusion of platform users, instead of being opaquely determined by a small group of experts in the public or private sector.

To test and generate policies for its tools, Co-Inform adopts a co-creation approach.

Co-Inform will apply opinion mining and behaviour-change monitoring techniques to measure the impact of various misinformation management policies and persuasion strategies. Stakeholders will be actively involved in the development of the Co-Inform tools to ensure that they fit their needs and benefit all parties involved.
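As a rough illustration of the idea behind opinion mining for impact measurement, the sketch below scores posts with a tiny hand-written sentiment lexicon and compares the average opinion before and after a corrective intervention. This is a minimal sketch under invented assumptions, not the Co-Inform toolchain; the lexicons, function names, and sample posts are all hypothetical.

```python
# Hedged illustrative sketch (not Co-Inform's actual tooling):
# lexicon-based opinion scoring to track how audience sentiment
# toward a corrected claim shifts over time.
POSITIVE = {"agree", "true", "helpful", "accurate", "trust"}
NEGATIVE = {"fake", "lie", "wrong", "misleading", "hoax"}

def opinion_score(post):
    """Score a post in [-1, 1]: +1 if all opinion words are positive,
    -1 if all are negative, 0 if none are found."""
    words = post.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos + neg == 0:
        return 0.0
    return (pos - neg) / (pos + neg)

def average_shift(before, after):
    """Mean opinion after a corrective intervention minus mean before."""
    mean = lambda posts: sum(map(opinion_score, posts)) / len(posts)
    return mean(after) - mean(before)
```

In practice, opinion mining uses trained models rather than fixed word lists, but the underlying measurement loop (score reactions, intervene, score again) is the same.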


Co-Inform Copyright 2021

The Co-Inform project is co-funded by Horizon 2020, the EU Framework Programme for Research and Innovation (2014-2020).
Type of action: RIA (Research and Innovation action)
Proposal number: 770302
