Political Ads and Microtargeting on Social Media: Is Big Tech Breaking Democracy?
The regulation of political advertising on social media has received a lot of attention over the last three years. Given the links between such advertising and the growing polarisation of society, disinformation, and foreign election interference, that attention seems entirely appropriate.
It is no longer a secret that digital platforms have a keen interest in maximising user engagement, as that makes them more attractive to marketers.
These platforms live on the collection of users’ personal data and have their feeds curated by AI. A political party can, therefore, tailor its message by microtargeting groups of individuals. However, these individuals might never see any counter-arguments due to the effect of filter bubbles and echo chambers, which are intrinsic to the AI-curated feeds of social media platforms. This increases polarisation in society. Such weaponisation of our personal data is the reason why many experts argue that political ads on social media endanger healthy democratic deliberation and electoral integrity. Democracy is based on the idea that people make informed choices between political opinions. Unfortunately, disinformation makes it harder for people to make considered and well-informed choices.
Let’s simplify the concept.
Social media platforms like Facebook know almost everything about us. We gave up our personal data freely to Facebook when signing up (gender, age, where we live, where we study or work, religion, political affiliation, etc.), and the platform also knows what we like, what we share, what we comment on, and what we are passionate about. Marketers can reach us on Facebook through paid advertising; that is how the platform makes money. They can target specific groups of people thanks to social media profiling: Facebook feeds us specific ads based on all the information it has gathered about us. On Facebook, however, marketers include political parties. A political party can pay Facebook to reach you with its messages just as a cosmetic brand can pay Facebook to persuade you to buy its shampoo. And thanks to the vast number of entry points these platforms have for each of us, very personalised, tailored messages can be created: messages that generate a strong emotional response and can influence our opinions, our beliefs, and ultimately win our votes. Cambridge Analytica is a remarkable example of how such operations work.
Ellen Weintraub, a member and three-time chairperson of the Federal Election Commission, said “the American people are entitled to have access to a variety of perspectives on the important issues of the day. But they are also entitled not to have their personal data used to manipulate them.”
Political advertising and microtargeting on social media constitute a threat to public discourse, to the integrity of elections and ultimately to our democracies. The question to be answered is: are big tech companies breaking democracy and harming vulnerable communities for the sake of engagement and profit? There is an ethical aspect of the issue that needs addressing and some tech companies have started to do so.
Social media platforms are taking opposing stances on the matter.
Back in October 2019, Facebook’s CEO Mark Zuckerberg announced his company’s stance in favour of freedom of speech and against censorship in the political context. That includes any political content, true or false, disseminated by political parties on his platform. There is, of course, a lot of commercial convenience in such a stance, which drew harsh criticism from tech experts and regulators. A few days later, in what was undoubtedly a great PR move, Twitter’s CEO Jack Dorsey took the opposite stance, announcing that the platform was going to ban all political advertising. Shortly after, Google announced a new policy limiting political ads’ targeting options, increasing the pressure on Facebook even further.
It’s difficult to say which strategy is best. Some experts argue that banning political ads on social media won’t solve the issue and might end up doing little good, stopping legitimate political advertising instead. The first challenge is defining what constitutes a political ad. The second is that such a strict measure would favour candidates with a bigger following on social media and make it harder for smaller and progressive parties to reach undecided voters.
Platforms have started to take steps to increase the transparency of political advertising.
Facebook now requires advertisers to go through an authorisation process. It displays who is paying for the ads and has created an ad library, so that everyone can see how many ads, and which ads, a political party has run. Other measures include removing fraudulent accounts, disrupting economic incentives, and providing users with the tools to identify ‘fake news’ by partnering with fact-checking organisations. Experts in the field of technology have made further suggestions for steps that social media platforms could take to safeguard the democratic process and limit their weaponisation by bad actors.
This is a good start, but it is far from an optimal solution. “It is simply impossible for any citizen to hear and assess the unfiltered messages that Facebook lets politicians pump out at a truly astonishing scale,” says Guardian reporter Julia Carrie Wong, adding that “[…] it’s deeply disingenuous for Facebook to suggest that even the most informed citizen should be able to sort through all this information and understand what various politicians are saying.” One thing is certain: this complex issue will continue to attract attention in the coming years. More commitment from the digital platforms and proper regulation are necessary.
As things stand, there seems to be no cure. But there is a very strong antidote: education.
The 2019 Media Literacy Index shows that some countries are better equipped to withstand the impact of ‘fake news’ thanks to the high quality of their education systems, the freedom of their media, and their low levels of corruption. Arming people with media literacy and critical thinking helps them become better at distinguishing fact from fiction on social media. They can make better-informed decisions and be better-informed citizens in a democratic society.
Co-Inform’s mission is to foster critical thinking and digital literacy.
Academic surveys have shown that online misinformation is becoming more difficult to identify and has the potential to deceive even readers with strong literacy skills. Our goal is to provide citizens, journalists, and policymakers with tools to spot ‘fake news’ online, understand how it spreads, and gain access to verified information.
Co-inform project is co-funded by Horizon 2020 – the Framework Programme for Research and Innovation (2014-2020)
H2020-SC6-CO-CREATION-2016-2017 (CO-CREATION FOR GROWTH AND INCLUSION)
Type of action: RIA (Research and Innovation action)
Proposal number: 770302