The EU has announced plans to make Europe a hub for artificial intelligence (AI). Daniel Mügge writes that while this sounds like a noble cause, the EU’s strategy is limited by its focus on enhancing the European AI sector against competition from other countries, rather than empowering citizens against Big Tech.
For years now, EU policy has emphasised the need for digital sovereignty, including in relation to artificial intelligence. If this line of thinking needed any additional boost, Donald Trump’s open disdain for NATO has provided it. Even before then, though, the ambition to have Europe stand on its own AI feet – a policy goal that could be called “AI sovereignty” – already pervaded EU Commission strategy.
At the same time, as is true for digital sovereignty, AI sovereignty as a lodestar is attractive because it is open to many different and even conflicting interpretations. At that level of generality, it allows everyone to project their hopes for the digital world of tomorrow onto the ambition. “AI made in Europe” can function as a rallying flag precisely because of its vagueness. That vagueness, however, obscures the fact that acting on the AI sovereignty agenda entails tough political choices, in which conflicting versions of the project must be traded off against one another. “AI made in Europe” sounds attractive, but what is actually the point of it?
Three trade-offs
In a new study, I have dug into EU Commission documents to find an answer to this question. Before getting to that answer, though, it is worth flagging three central trade-offs inherent in EU AI strategy, which surfaced as soon as I tried to pin down AI sovereignty in practice.
First, does it pit the EU against other major AI powers, or rather citizens against large tech companies? A jurisdictional sovereignty perspective would envision a happy marriage between European public authorities and tech companies in Europe to confront the US and China as the current leading AI powers. A citizen sovereignty perspective, in contrast, would take seriously Europe’s claim that, contrary to authorities elsewhere, its goal of “human-centric AI” would put citizen interests first in tech policy – including by pushing back against corporate interests.
Second, is AI sovereignty meant to boost the EU’s position in a putative AI race, or is it instead a means to defy this competitiveness logic? If you accept the idea that the future of prosperity hinges on AI and add to that a winner-takes-all logic, trying to outrun other AI powers becomes a logical choice. If, in contrast, you think that the main challenges to societal thriving are rooted in harmful economic dynamics – creating inequality, dumbing down jobs, damaging the environment and so on – AI sovereignty could be used to extract Europe from a global “AI race” and allow it to prioritise other kinds of societal challenges.
Finally, is EU AI sovereignty primarily meant to benefit European citizens, or does it embrace a global responsibility? Europeans frequently betray a sense of moral superiority, invoking their fabled “European values”, which at least on paper entail a commitment to helping not only those with an EU passport but also people around the world. If that ambition were taken seriously, AI sovereignty could allow the EU to chart a course that heeds AI’s environmental footprint elsewhere in the world, that does not drain labour markets in developing countries of AI talent, that fosters AI innovation that genuinely benefits poor people elsewhere, and that restricts the outsourcing of the most degrading tasks in the AI supply chain beyond EU borders.
Embracing the AI race
On these three dimensions, EU strategy documents speak a clear language. “AI made in Europe” returns time and again, both in the 2018 Coordinated Plan on Artificial Intelligence and its 2021 update, Fostering a European Approach to Artificial Intelligence. The strategy envisions public authorities and European companies joining forces, given that, as the Commission wrote in its 2020 White Paper on AI, “the race for global leadership [in AI] is ongoing”. Criticism of the corporate rationales driving AI development hardly figures. If anything, Europe needs to accelerate the rollout of AI throughout society to make up lost ground vis-à-vis the US and China.
On the global dimension, we find little concern for interests or perspectives beyond those of European citizens. Societies are seen as locked into an AI race and can hardly afford to look over their shoulders to those who may be falling even further behind, losing out on opportunities to improve their lot, or bearing the costs of our digitalisation. The Commission appreciates the environmental impact of rushing into the AI age – think raw material extraction or electricity use. But there is little appetite to slow or redirect digitalisation in Europe to limit these impacts.
This orientation of EU AI strategy has been carried over into the AI Act negotiations. The European Parliament had introduced amendments to the Commission draft that at least mentioned some broader concerns, from environmental damage to helping the socio-economic losers of the digital transformation. Little of that made it into the final version of the AI Act, however, and where it did, it lacks concreteness and teeth.
So what is “AI made in Europe” good for? In practice, AI sovereignty has a strong jurisdictional, rather than citizen-emancipatory, bent. It embraces an AI race rather than trying to extract the EU from that dubious logic. And it prioritises European interests, with little attention to the interests of people elsewhere. It may not be surprising that this approach has carried the day so far. But to assume that it flowed naturally from technological transformations, rather than from political choices, would be to underestimate the role of public policy.
About the Author
Daniel Mügge is a Professor of Political Arithmetic at the University of Amsterdam.