
Fixing misinformation for the sake of democracy

Author: Francesca Zuccotti, Digital Marketing & Communications Lead at Scytl

You are what you eat, as they say.  Just as too much junk food can lead to obesity, an overload of junk information can lead to a lack of trust in democratic institutions. So how do we ensure that the information we consume on the Internet every day is accurate and from reliable sources?

According to the Reuters Institute’s 2018 Digital News Report, even though the use of Facebook for news has declined since 2016, the platform remains the most widely used social network for news in almost all of the 40 countries covered by the report. While Facebook likes to consider itself ‘only a platform for its users’, not a news publisher, the extent of its global reach as a de facto news outlet frustrates traditional, regulated media companies.

Facebook and other user-generated content platforms like YouTube and Twitter have their own rules and community standards about acceptable practices and content.

However, these standards are an expression of their own corporate values, not of the public good. Facebook and Google treat these decisions as theirs alone, and their line of defense is that they only show people ‘what they want to see’. While they present themselves as defenders of free speech, their focus is clearly on business: more users, more engagement, more profit. Commercial pressures shape how these companies operate, and the problem is that every time they update their algorithms and content policies, in a non-transparent way, they shape our society by significantly influencing the information that we consume.

Even former employees of these tech giants have raised concerns about their operating methods. Nicole Wong, a former lawyer for Google, recently explained how the search engine’s priorities changed over time, “from getting people all the right information quickly to showing them personalised search results”, meaning content that would keep them on the platform for longer. Former Facebook Vice President Chamath Palihapitiya labeled the mechanisms the company created to hold users’ attention as “short-term, dopamine-driven feedback loops [that] are destroying how society works”. We are right to be concerned.

What are these tech companies doing to limit the spread of misinformation on their platforms? Are these countermeasures working?

Should they be granted self-regulatory status or should they be regulated, as concluded by the House of Commons Digital, Culture, Media and Sport Committee’s final report on disinformation and ‘fake news’? Let’s take a step back. Progress never comes without a price. As some experts have argued, while the Internet has been enormously beneficial to society, we cannot afford to ignore the corrosive effects that social media have on democracy and on the quality of our lives. This is a complex and very pressing issue, since misinformation has real-life consequences: it generates division in society and can influence our opinions and votes. There is great concern, and a real danger, that misinformation and disinformation campaigns may also influence the results of the upcoming European Parliament elections.

To prevent this from happening, the European Commission launched the Code of Practice on Disinformation in September 2018. The Code’s main aim is to improve the transparency, trustworthiness and accountability of the online ecosystem. The Code is less than comprehensive, but it constitutes a first step in trying to stop misinformation online. Voluntary signatories, including Facebook, Google and Twitter, presented their latest reports on February 28th. After the assessment, the EU’s security commissioner, Sir Julian King, and digital economy commissioner, Mariya Gabriel, stated that the companies had only “fallen further behind” since their January reports and asked online platforms to provide more details on the progress made, to prove that new policies and tools are being deployed in a timely manner and with sufficient resources across all Member States ahead of the elections in May. Facebook was identified as the ‘worst offender’.

Zuckerberg’s company, recently labeled a ‘digital gangster’ by the DCMS Committee’s report, has been under scrutiny and subject to harsh criticism for the past two years.

The company has been plagued by a series of scandals and incidents: Cambridge Analytica’s illegal harvesting of 87 million users’ data, Russian involvement in the 2016 US elections, defending Holocaust denial as ‘users’ right to be wrong’, allowing advertisers to target users interested in Nazis, and restricting the ability of third-party tools such as those from ProPublica and Who Targets Me to check ads for political transparency. Facebook has also been linked to episodes of violence and hate crimes in Germany, India, Myanmar, Libya, and the Philippines. Facebook’s CEO has admitted some ‘mistakes’ and promised to make combating misinformation on the platform his main objective for 2018. Since then, the company has doubled its content review team, started to work with fact-checking organizations, and prohibited foreign-funded ads ahead of recent elections.

As a global platform, the main challenge for Facebook is to establish one set of consistent content management rules worldwide.

Moderating billions of posts a week in more than a hundred languages is a very big task, an impossible one according to many experts. And while artificial intelligence works well for identifying porn, spam, and fake accounts, it is far less effective at detecting hate speech and misinformation. The amount of content shared on Facebook every day is overwhelming, and it is objectively impossible for the current workforce to moderate everything. Mistakes are inevitably made every day, because 30,000 moderators have to review more than 10 million potentially rule-breaking posts per week, with about 30 seconds to make each decision, often while working in inhumane conditions.

Facebook is facing an undeniable content management problem.

And while some experts have suggested ways for Facebook to improve, other commentators have made it clear that the tech giant is still far from fixing the issue. John Oliver recently called Facebook ‘a Fetid Swamp of Mistruths and Outright Lies’; Courtney Tenz of Deutsche Welle has argued that it is time to delete Facebook; and the New York Times made a not-so-subtle friendship anniversary video to mark Facebook’s 15th birthday.

Facebook’s solution to the problem is immensely important for the future of global free expression and democracy.

“Executive decisions made by Facebook have major political, social, and cultural consequences around the world” writes Timothy Garton Ash, Professor of European Studies at the University of Oxford, and adds that “while industry-wide self-regulation should be actively pursued, attaining it will be a long and complex task”. However, Zuckerberg himself has recently come to the conclusion that Facebook should not make such decisions, but rather defer to an independent body of technology and human rights experts, free of commercial influences.

Facebook’s CEO thinks that the best solution would be to give users control of their own online experience, essentially suggesting that everyone should be able to set their own content policy through their individual settings. Facebook already allows users to choose what kinds of ads they want to see, and a similar system could be put in place for content. What Zuckerberg’s suggested approach is missing, though, is transparency about the funding of ads and the trustworthiness of news sources. To fix that, the platform should give third-party tools full access to its APIs, allowing scraping and ensuring interoperability.


Tech companies need to understand whether the way they operate in our society is sustainable in the long term, and if not, they need to make it sustainable, even if that may imply reducing their profits.

Censorship is not the ideal solution either. Citizens want more information, but they deserve to have access to accurate information, from trustworthy sources: no one likes to be deceived. Society cannot just wait for these platforms to fix the issue for us or for governments to regulate them. We all need to act and improve our digital literacy, be skeptical and think critically. We need to be responsible citizens and start sharing accurate information for the good of society, for the good of our democracy.

As with any complicated issue, there is no simple solution. Co-Inform verification tools will be co-created with citizens, journalists, and policymakers by a multidisciplinary team of scientists, fact-checkers and developers with a deep understanding of digital phenomena, as well as of sociology, anthropology, and psychology. Co-Inform tools will allow people to identify the source of a news story and help them develop critical thinking.

