The EU’s new Artificial Intelligence (AI) Act identifies unfair biases in AI systems as a key risk. Yet as Sergio Scandizzo argues, we should be equally concerned about our faith in the neutrality of technology.
The Artificial Intelligence Act, the European Union regulation covering AI that came into force on 1 August 2024, provides, among other things, that AI systems should avoid “discriminatory impacts and unfair biases that are prohibited by Union or national law”.
AI bias occurs when artificial intelligence systems exhibit systematic prejudice, typically arising from training data, algorithm design, historical inequities or feedback loops, leading to unfair treatment and exacerbating social inequalities. This effect can take various forms, such as selection, measurement, exclusion and confirmation biases, and it affects areas like hiring, lending, law enforcement and healthcare, thereby eroding public trust.
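To see how historical inequities can seep into a model, consider a minimal sketch, a hypothetical hiring scenario with invented numbers rather than a description of any real system: a classifier trained on past decisions that penalised one group will learn to reproduce that penalty, even when the penalised attribute has no bearing on actual performance.

```python
# Minimal sketch of historical bias propagating through training data.
# All data are synthetic and the scenario is purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)      # true ability, independent of group

# Historical hiring decisions: same skill threshold for everyone, but an
# arbitrary penalty applied to group B candidates.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.3, n)) > 0

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# The learned coefficient on `group` comes out markedly negative: the
# model has absorbed the historical penalty, so group membership now
# "predicts" hiring even though only skill drives performance.
print(model.coef_)
```

Note that no amount of additional data fixes this on its own: the bias sits in the labels, not in the sample size.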
The meaning of bias
But what do we mean by the word “bias”? According to the Oxford dictionary, bias is “an inclination or prejudice for or against one person or group, especially in a way considered to be unfair”. The Merriam-Webster dictionary more soberly starts with “an inclination of temperament or outlook”.
Both definitions start with the word “inclination”, not incidentally reminiscent of the Latin word “clinamen”, famously used by the Roman poet Lucretius to explain why atoms can randomly deviate from their set trajectories, thus allowing for uncertainty and free will. Lucretius thereby christened what is in many ways still the fundamental model of bias in Western thought: a deviation from a supposedly “straight” course of action, or a disturbance with respect to an unfettered state of mind, human or artificial as it may be.
An interesting alternative to this view comes from Baruch Spinoza, who argues that our beliefs are not, as Descartes would have it, the product of a deliberate selection amongst often directly competing ideas, but are on the contrary almost indistinguishable from the act of contemplating such ideas.
According to Spinoza, as soon as we entertain a proposition, we automatically believe it, and it takes a conscious intellectual effort to weigh the rational arguments for and against it before eventually confirming or withdrawing our belief. As this effortful process does not happen systematically for every concept that comes to our attention, biases emerge as a direct consequence of the mere process of acquiring information and knowledge. Fairness, in other words, is hard work.
A survival tool
However, having inclinations, or even prejudices, is not always a defect. In many instances, it is a behaviour that gives us a clear evolutionary advantage when dealing, for instance, with a potentially dangerous situation in which there is little time for reflection.
Although it is entirely possible that the lion we encounter on our path is not hungry and will leave us in peace, assuming we are about to be eaten is the prudent approach and the one that statistically provides the best outcome. If children are offered candies by a stranger, they are well advised to assume they may be in danger and refuse, even if in many cases this worry will prove unfounded.
The reason these biases are helpful is that in such situations a thorough analysis of the alternatives would take too long, leaving us exposed to the worst outcome whenever the danger is not just potential but clear and present.
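A back-of-the-envelope expected-cost calculation (with entirely made-up numbers) shows why the “assume danger” heuristic wins whenever one error is far costlier than the other:

```python
# Toy expected-cost comparison for the lion encounter.
# All numbers are hypothetical and chosen only for illustration.
p_danger = 0.05              # chance the lion is actually hungry
cost_flee = 1                # wasted effort if we flee needlessly
cost_eaten = 1_000           # catastrophic cost of staying when it is

expected_cost_flee = cost_flee                 # we always pay the small cost
expected_cost_stay = p_danger * cost_eaten     # we gamble on the lion's mood

print(expected_cost_flee, expected_cost_stay)  # 1 vs 50.0: fleeing dominates
```

The bias pays off not because the danger is likely, but because the costs of the two possible errors are wildly asymmetric.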
In other words, bias is a key survival tool, which often makes our lives easier. What makes bias dangerous is our lack of awareness and our tendency to rely on biases even when we can afford, and should afford, the time to conduct a thorough analysis of the problem. Even in those cases, however, identifying biases is not as straightforward as finding the clinamen disturbing the straight path.
Biases in action
As an example, let us look at two real-life cases in which the concept of bias works in different ways. The Test-Achats case originated in Belgium, where a consumer organisation and two private individuals brought a legal action seeking to have declared unlawful a domestic law that allowed insurers to take a person’s gender into account when calculating premiums and benefits in life insurance.
On 1 March 2011, the Court of Justice of the European Union declared invalid the exemption in the EU’s equal treatment legislation (the Gender Directive) that allowed member states to maintain differences between men and women in individuals’ premiums and benefits. The Court held that the exemption was incompatible with the principle of equal treatment between men and women and invalidated it with effect from 21 December 2012.
The issue in the Test-Achats case is not whether men are higher risk than women, but whether gender may be used as a criterion for pricing at all. The court was ruling on the meaning of an EU directive, and the question before it was not statistical but political.
Regardless of what the empirical evidence might be, the EU, through its legislative process, established that using gender as a pricing criterion is discriminatory and as such unlawful. However, the legislation in question does not remove a gender bias – in fact one might argue that it rather introduces one – but forces a desirable result on a business process (pricing).
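A stylised premium calculation (all figures invented) makes the mechanics concrete: unisex pricing does not make the statistical signal disappear; it deliberately discards it, replacing group-specific expected costs with a pooled one.

```python
# Stylised life-insurance premium calculation; every figure is hypothetical.
mortality = {"men": 0.004, "women": 0.003}   # assumed annual death rates
exposure  = {"men": 600,   "women": 400}     # policyholders per group
payout = 100_000                             # benefit paid per death

# Gender-based pricing: each group pays its own expected cost.
fair = {g: mortality[g] * payout for g in mortality}

# Unisex pricing (post-Test-Achats): one premium covers the pooled cost.
total_cost = sum(mortality[g] * exposure[g] * payout for g in mortality)
unisex = total_cost / sum(exposure.values())

print(fair)    # {'men': 400.0, 'women': 300.0}
print(unisex)  # 360.0 -- the lower-risk group now cross-subsidises the other
```

Nothing in the data changed; what changed is the collectively chosen rule about which differences may be priced.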
A mirror-image case of gender bias is provided by the practice of microfinance, an industry that lends to individuals, mostly in developing countries, who would not otherwise have access to financial services. Several empirical studies have concluded that having more women as clients is associated with lower portfolio-at-risk, lower write-offs and lower credit-loss provisions, all other things being equal, confirming the common belief that women are, in general, better credit risks for microfinance institutions.
Consequently, in micro-lending the notion that men are higher risk than women is not only widely acknowledged but also well established as one of the key lending criteria. Here the desirable objective of maximising the efficacy and outreach of lending to the poor has taken precedence over the equally desirable objective of gender non-discrimination, and the business process has therefore been allowed to work unimpeded.
The neutrality of technology
The main lesson we can draw from these two examples is that what constitutes a bias to be corrected depends on our political priorities, where I use the word “political” to indicate that such priorities are the result of the same decision-making process that ultimately produces our laws and regulations.
In other words, the term “unbiased” does not refer to a “neutral” or technically correct decision process, but rather to a result in line with a state of the world that we have collectively identified as desirable. In the insurance case mentioned above, the use of gender in pricing is a form of bias (although from a statistical point of view, gender is indeed a relevant risk indicator) while in the microfinance case it is not. The difference is not technical or statistical, but rather ethical (and reminiscent of Richard Rorty’s distinction between solidarity and objectivity).
For these reasons, it is equally misleading to portray artificial intelligence applications as objective and impartial simply because their data or algorithms are free from “bias”. In fact, while we readily accept that human thought is inevitably subject to bias, we tend to expect anything technological, be it mechanical or electronic, to behave as a paragon of rationality and impartiality, often helping us justify difficult decisions by appeal to a supposedly neutral science.
This faith in the neutrality of technology closely resembles the equally disastrous faith in market efficiency. Alas, it turns out not only that markets, especially when left unfettered, are not efficient, in the sense that they do not automatically yield the most efficient allocation of resources, but also that, even if they were, the result would not necessarily be the most equitable or, in general, the most desirable.
Likewise, information, no matter how complete, does not by itself eliminate bias and, most importantly, a lack of bias, even if it were achievable, would be no guarantee of obtaining the desired results. Yes, AI systems should provide results that are in line with our values and objectives, but we should not delude ourselves that such alignment can be automatically ensured by the removal of “bias”, whatever that might mean. As Ronald Coase reminded us back in 1960, problems of welfare economics must ultimately dissolve into a study of aesthetics and morals.
About the Author
Sergio Scandizzo is Head of Internal Modelling at the European Investment Bank.