
Human vulnerability in the EU Artificial Intelligence Act

Vulnerability is an intrinsic characteristic of human beings. We depend on others (families, social structures, and the state) to meet our essential needs and to flourish as human beings. In specific contexts and relationships, this dependency exposes us to power imbalances and to higher risks of harm. In other words, it increases our vulnerability.

The digital revolution has amplified this phenomenon, exposing our lives to addictive social media architectures, mentally manipulative commercial practices, the exploitative and abusive collection of behavioural data, more sophisticated and hidden forms of discrimination, and much more. The EU General Data Protection Regulation (GDPR) is the first “digital law” that seems to recognize the enhanced vulnerability of online users. The notion of “vulnerable consumers” (introduced into EU law in 2005 via the Unfair Commercial Practices Directive) has some similarities with the idea of “vulnerable data subjects”. But who are the vulnerable data subjects in the digital ecosystem? Or, better, what might they be vulnerable to? In which contexts? Towards whom?

These questions have become much more urgent with the EU Artificial Intelligence Act (hereinafter AIA), the most discussed EU law of our age. The final text was approved by the European Parliament on 13 March 2024 and will be fully applicable in two years. There are 16 references to the notion of human vulnerability in the final text of the AIA. For example, AI systems exploiting certain human vulnerabilities are now officially forbidden (Article 5(1)(b)). In addition, human vulnerabilities are a parameter for updating the list of “high-risk AI systems” in the future (Article 7(h)). Such vulnerabilities must be analyzed and mitigated in the new Fundamental Rights Impact Assessment by deployers of high-risk AI systems (Article 27) and considered with “particular attention” by market surveillance authorities when dealing with AI systems presenting a risk (Article 79(2)). The AIA “codes of conduct” will need to assess and prevent the negative impact of AI systems on vulnerable persons (Article 95). Within the context of regulatory sandboxes in the AIA, data subjects in a condition of vulnerability due to their age or disability must be “appropriately protected” (Article 60).

Despite all these references to human vulnerability, there is still considerable uncertainty about the concept. Article 3 of the AIA contains 68 definitions of concepts and terms—none of which are about human vulnerability. In addition, the language and the semantics referring to vulnerability vary greatly throughout the text of the law.

Article 7(h)

Despite the lack of a definition, the most exhaustive and fruitful conceptualization of vulnerability appears in Article 7(h). If the European Commission wants to update the list of high-risk AI systems in the future, it has to consider, among other parameters, “the extent to which there is an imbalance of power, or the persons who are potentially harmed or suffer an adverse impact are in a vulnerable position in relation to the deployer of an AI system, in particular due to status, authority, knowledge, economic or social circumstances, or age”. Here, the AIA explicitly treats vulnerability as a gradable, contextual condition. The language “persons… in a vulnerable position” conveys the idea that vulnerability is an accessory condition and not a label that defines people. It can also be inferred that vulnerability is relational (“vulnerable position in relation to”) and grounded in an imbalance of power, which might be generated by personal characteristics of the powerless person (“knowledge, age, status”), by characteristics of the powerful party (“authority”), or by social factors (“economic or social circumstances”). Nor are these the only possible sources of vulnerability admitted in the AIA: since Article 7(h) says “in particular”, it leaves room for other structural, external or internal conditions that might generate vulnerability.

The AIA’s preamble contains three more specific cases of human vulnerability. In particular, it mentions: children’s vulnerabilities online (recital 48); people applying for or receiving essential public benefits or services, because of their typical “dependency” on those benefits (recital 58); and people subject to AI systems in migration, asylum and border control management (recital 60), since they are “dependent on the outcome of the actions of the competent public authorities”. Interestingly, these last two cases explicitly relate vulnerability to dependency, echoing the traditional legal literature on vulnerability (see, e.g., Martha Fineman).

Article 5(1)(b)

Article 5(1)(b) AIA prohibits the commercialization or use of an “AI system that exploits any of the vulnerabilities of a person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm”.

Although the provision seems to refer to “any of the vulnerabilities of a person or a group”, it then specifies particular sources of vulnerability (“their age, disability or a specific social or economic situation”). While cases of vulnerability based on age and disability are relatively easy to interpret, the notion of a “specific social or economic situation” seems vaguer. Recital 29 mentions some non-exhaustive examples, i.e. “persons living in extreme poverty, ethnic or religious minorities”. However, even a simple textual analysis of the AIA pushes towards a more comprehensive list of cases, including people with lower incomes or who belong to specific marginalized groups, e.g. political, linguistic or racial minorities, migrants, or LGBTIA+ people.

Some striking examples of vulnerability situations excluded from the wording of Article 5 AIA are victims of gender-based violence, employees, patients (without disabilities), gamblers, and people addicted to social media or with other specific addictions. That is, unless we accept an extensive understanding of a “specific social situation” or of disability (which is not possible, given the explicit reference to Directive (EU) 2019/882, which defines disability in terms of long-term impairments that, in interaction with various barriers, may hinder full and equal participation in society).

Conclusions

We observe that the references to vulnerability factors in the AIA are much broader in Article 7(2)(h) than in Article 5(1)(b), where there is no specific reference to power imbalance, authority, or knowledge asymmetry. The reason for this discrepancy is probably that Article 5 strictly prohibits AI practices, while Article 7 merely provides the regulator with instructions; accordingly, the list in Article 5 should be clear and foreseeable. However, the “specific social or economic situation” in Article 5(1)(b) is anything but.

In conclusion, although the AIA is one of the world’s most advanced pieces of legislation regarding the recognition and protection of human vulnerabilities, the concept remains complex and problematic. The interpersonal, relational and contextual notion of vulnerability, rooted in power imbalances and highlighted in the data protection literature, still seems extremely pertinent and meaningful. However, the prohibition of vulnerability exploitation in the AIA shows many gaps, e.g. regarding certain sources of vulnerability (employees, patients without disabilities, addicted consumers, or victims of gender-based violence). Furthermore, the broad concepts of power imbalance in Article 7 and of social or economic situation in Article 5 will need authoritative interpretation in the coming months.

Featured image by Sam Erwin via Unsplash, public domain.

