The last decade has seen increasing attention paid to the problem of online disinformation. Veronika Datzer and Luigi Lonardo chart how the EU’s anti-disinformation policy has evolved in the face of shifting threats.
Digital platforms have become vital conduits for information, services and the exchange of goods, creating significant social value and transforming economies. But while facilitating these interactions, they can also be misused, for example through the spread of disinformation – defined by the European Commission as the deliberate dissemination of false or misleading information.
This tension – between economic benefits and negative societal effects – stems in part from a classic collective action problem. Social media platforms are companies acting in their own interests, and these interests may run counter to the long-term interests of society. Inflammatory content, for example, generates high levels of user engagement, which can be highly profitable for a platform, at least in the short term, but is detrimental to society more broadly.
Each individual platform therefore has a “selfish” incentive to avoid investing resources in preventing the misuse of its channels, even though, arguably, all social media platforms would be better off if they all invested in such measures. In essence, every anti-disinformation policy is an attempt by regulators to blunt these self-interested incentives.
Responding to this issue is a delicate balancing act between safeguarding individual freedom of expression and combating the potentially harmful effects of disinformation. The challenge lies in distinguishing genuine interpretations from intentionally false narratives. This is no easy task.
EU anti-disinformation policy
Recognising the growing threat of disinformation, the European Commission has taken the lead in crafting an EU-level response spanning both legislative and non-legislative avenues. It was not the Commission, however, but the European External Action Service (EEAS) that played a pivotal role in the early stages of EU anti-disinformation policy.
This was particularly the case when disinformation was perceived as a foreign threat. Russia’s deliberate use of disinformation in 2014 – following the Maidan protests in Ukraine and the subsequent annexation of Crimea – was a wake-up call for the EU’s institutions in this respect. But because the Commission’s purview encompasses a broader range of policy areas, it later took on a more prominent role in addressing the internal challenges posed by disinformation.
The Cambridge Analytica scandal and Donald Trump’s victory in the 2016 US presidential election marked a turning point, prompting the EU to acknowledge disinformation as a systemic issue. Until then, platform regulation had focused primarily on data protection rather than disinformation itself. In 2018, several significant anti-disinformation initiatives were launched, driven by heightened awareness of the impending European Parliament elections, which were seen as a potential target for foreign disinformation campaigns.
These efforts centred on non-legislative policy tools. The budget for countering disinformation doubled, while strategic communications and capabilities for tackling hybrid threats were bolstered through joint communications. The establishment of the High-Level Expert Group on Fake News and Online Disinformation (HLEG) was another key step, engaging academia, platforms, media and civil society to address the various facets of disinformation.
An Action Plan against Disinformation was introduced in December 2018, aimed at consolidating EU initiatives against disinformation and safeguarding the European Parliament elections. The subsequent Code of Practice on Disinformation, developed through the work of the HLEG and strengthened in 2022, encouraged the monitoring of online disinformation campaigns. The code evolved over time to integrate industry, academia and investigative communities, and it now commits its signatories to countering disinformation on their platforms. Following the European elections, the Commission continued to introduce initiatives, recognising that disinformation could affect fundamental rights.
A landmark development in December 2020 was the proposal of the Digital Services Act (DSA), addressing broader digital economy concerns. While not solely focused on disinformation, the act sought to enhance the transparency, accountability and oversight of social networks in curbing disinformation.
On 25 August 2023, the DSA, which incorporates the abovementioned Code of Practice, became legally enforceable for the largest platforms, which must now abide by it. The European Commission can now sanction platforms for violating the act. Amid Russia’s invasion of Ukraine in 2022, the EU also adopted restrictive measures (sanctions) against the European branches of Russian media outlets, diverging from previous instruments by treating disinformation as a quasi-criminal offence.
The way forward
The EU’s journey in combating disinformation is a testament to the complex interplay of digital technologies, political motivations, the perceived salience of issues and institutional dynamics. While challenges persist, the EU’s multifaceted response reflects a commitment to striking a balance between safeguarding democratic processes, preserving individual freedoms and harnessing the potential of digital platforms.
As the digital landscape continues to evolve, the EU’s experience offers valuable insights for other democracies grappling with the same issues. It underscores the need for collaborative efforts among governmental bodies, tech platforms, civil society and citizens to create a resilient defence against the proliferation of disinformation.
In an era when information and misinformation spread at unprecedented speed, the EU’s proactive approach serves as a case study in adapting to the challenges of the digital age. By understanding the motivations, complexities and nuances of this response, we can better equip ourselves to navigate the turbulent waters of disinformation and preserve the integrity of democratic processes.
About the Authors
Veronika Datzer is a Consultant with the German Agency for International Cooperation, where she works on international digital policy.
Luigi Lonardo is a Lecturer at University College Cork and a Visiting Lecturer at Sciences Po Paris.