Artificial intelligence (AI) and discrimination
Artificial intelligence (AI) poses risks of structural discrimination at the societal level. AI can also result in discrimination at the individual level, which is often difficult to prove. Unia is doing all it can to make policymakers aware of these dangers. We also sit as experts on many bodies and encourage people to report any discriminatory effects of AI they notice.
Unia is convinced that AI can help solve societal challenges. With sufficient safeguards, it can also be used to promote equality and non-discrimination. For example, labour inspectorates could detect structural discriminatory practices using statistical data, specifically by comparing aggregated data from different public databases (so-called 'data mining' and 'data matching').
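As a purely illustrative sketch, the snippet below shows the mechanics of such an aggregate comparison: selection rates for two groups, drawn from matched data sources, compared using a disparate impact ratio. All group names and figures are invented, and a real inspection service would of course work with pseudonymised register data and proper statistical tests.

```python
# Illustrative sketch (hypothetical data): detecting a structural pattern by
# matching aggregated counts from two public data sources, e.g. a register of
# job applications and a register of hires, broken down by group.

# Aggregated counts per group. All figures are invented.
applications = {"group_a": 1200, "group_b": 800}
hires = {"group_a": 180, "group_b": 60}

def selection_rate(group: str) -> float:
    """Share of applicants from a group who were hired."""
    return hires[group] / applications[group]

rate_a = selection_rate("group_a")  # 0.15
rate_b = selection_rate("group_b")  # 0.075

# Disparate impact ratio: rate of the disadvantaged group divided by the rate
# of the advantaged group. A ratio well below 1 (e.g. under the informal
# "four-fifths" threshold of 0.8) flags a pattern worth investigating.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"selection rates: {rate_a:.3f} vs {rate_b:.3f}, ratio = {ratio:.2f}")
if ratio < 0.8:
    print("Possible structural disparity: follow-up inspection warranted.")
```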
Discrimination and AI: a few examples
- If AI is used to screen CVs, the system may be biased against people based on, for example, their name, gender, ethnicity, social position or a combination of these characteristics.
- If AI decides on loan applications based on predictive models, this may result in discrimination based on, among other things, national or ethnic origin, gender, age, or a combination of these characteristics.
- Using AI systems for facial recognition could result in discrimination in security or access controls, because those systems appear to be less accurate in identifying people of colour or women. If the system misidentifies a person as a suspect based on bias, it could lead to a wrongful arrest. (A sketch of how such accuracy gaps can be measured follows this list.)
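To illustrate how such accuracy gaps can be made visible, here is a minimal sketch of a per-group error-rate audit. The evaluation records and group labels are hypothetical; real audits rely on large benchmark datasets, but the per-group comparison works the same way.

```python
# Illustrative sketch (hypothetical data): auditing a face-recognition system
# by comparing its error rates across demographic groups. Each record holds
# the group label, whether the system declared a match, and the ground truth.
records = [
    # (group, predicted_match, true_match) -- invented evaluation results
    ("group_a", True, True), ("group_a", False, False),
    ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", False, False),
    ("group_b", True, False), ("group_b", True, True),
]

def false_match_rate(group: str) -> float:
    """Share of non-matching faces the system wrongly declared a match for."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    false_matches = [r for r in negatives if r[1]]
    return len(false_matches) / len(negatives) if negatives else 0.0

for group in ("group_a", "group_b"):
    print(f"{group}: false match rate = {false_match_rate(group):.2f}")
# A markedly higher false match rate for one group is exactly the kind of
# accuracy gap that can translate into wrongful identifications.
```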
Risks of AI in terms of discrimination
AI is trained on historical data that often carry prejudice (bias) and discrimination. Existing inequalities in society are thereby perpetuated and even institutionalised.
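A minimal sketch, using entirely synthetic records, of how this perpetuation happens: a system that merely imitates past hiring decisions inherits whatever bias those decisions contained.

```python
# Illustrative sketch (synthetic data): a model trained on biased historical
# decisions learns to reproduce them. Suppose past recruiters hired equally
# qualified candidates at different rates depending on group membership.

# Historical records: (group, qualified, hired) -- all synthetic.
history = [
    ("group_a", True, True), ("group_a", True, True), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, True), ("group_b", False, False),
]

def learned_hire_rate(group: str) -> float:
    """Historical hire rate among qualified candidates of a group,
    used naively as the future hiring probability."""
    qualified = [r for r in history if r[0] == group and r[1]]
    return sum(r[2] for r in qualified) / len(qualified)

print(learned_hire_rate("group_a"))  # 1.0 -- qualified group_a always hired
print(learned_hire_rate("group_b"))  # 0.5 -- equally qualified, hired half as often

# Nothing in the data corrects for the past bias, so a system optimised to
# imitate these decisions perpetuates (and automates) the inequality.
```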
The harm occurs at different levels:
- Individual harm: a person is harmed by inherent bias in AI systems.
- Collective harm: groups of individuals may be systematically excluded from certain opportunities due to bias.
- Social harm: we all have an interest in living in a society that does not discriminate against people and treats its citizens equally.
There are also challenges in terms of transparency. Harm often goes unnoticed because of the opacity of the way AI systems are designed and operate. As a result, not only is it difficult to become aware of the harm, it can be even more difficult to prove the harm and establish causality.
Moreover, the cases handled at Unia show that even when an individual is aware of the harm, the individual harm may be perceived as insignificant, or at least too small in relation to the costs of challenging it: financial costs, but also the psychosocial burden, the time investment and so on. Individuals are therefore less likely to challenge the problematic practice.
Legal framework for AI
Belgian anti-discrimination legislation provides a legislative framework for taking action against discrimination. Unfortunately, it is not always easy to apply, because it relies on an exhaustive list of protected characteristics (such as racial characteristics, sexual orientation, etc.), whereas AI systems may discriminate on the basis of arbitrary correlations that do not map neatly onto those characteristics.
There are also several European legislative initiatives.
- The AI Act (Artificial Intelligence Regulation) is the European Union regulation to protect Belgian and European citizens against the harmful impact of AI. The AI Act was adopted by the Council of the European Union on 21 May 2024. The regulation links specific rules to categories of AI applications:
  - AI practices with unacceptable risks are banned. Examples include social scoring (an AI system determines a score based on your social behaviour that affects your access to services or the prices you pay for them), subliminal, manipulative or misleading techniques, and emotion-recognition systems in schools and workplaces.
  - High-risk systems are subject to strict rules, such as reporting obligations, registration in an EU database, human oversight and, in some cases, a fundamental rights impact assessment. This includes certain systems used in education, the workplace, essential private and public services, biometrics, etc.
  - Most applications will present limited or minimal risk. In that case, mainly transparency obligations apply, so that users know they are dealing with AI.
The regulation is directly applicable in each Member State, and its rules enter into application in stages.
- The Council of Europe's AI Convention (Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law) emphasises the importance of protecting human rights in the development and implementation of AI systems. It provides a framework for ensuring equal treatment, non-discrimination and respect for human dignity in all AI applications. It was adopted by the Committee of Ministers of the Council of Europe on 17 May 2024.
AI and intersectionality
AI systems will often make a decision based on a combination of different protected characteristics. For example, consider a female wheelchair user who belongs to an ethnic minority, or an elderly gay man.
With AI systems, it is often unclear which characteristics weighed most heavily in a decision, and so intersectional discrimination may occur. Different discrimination criteria then interact simultaneously and become inseparable: their interaction with a particular context makes someone more vulnerable than others in the same context. For example, a woman from a particular ethnic minority may face a different kind of discrimination than a man from the same ethnic minority, or a different kind of sexism than a white woman.
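A minimal sketch with invented figures shows why this matters for monitoring: approval rates that look roughly balanced for each characteristic taken separately can hide a stark disadvantage for one intersectional subgroup.

```python
# Illustrative sketch (invented figures): an intersectional disparity that
# single-axis statistics hide. Approval rates look close per gender and per
# origin separately, yet one combined subgroup is clearly disadvantaged.
from itertools import product

# (gender, origin) -> (applications, approvals) -- all figures invented.
data = {
    ("woman", "minority"): (100, 20),
    ("woman", "majority"): (100, 60),
    ("man",   "minority"): (100, 60),
    ("man",   "majority"): (100, 40),
}

def rate(groups) -> float:
    """Approval rate pooled over the given (gender, origin) combinations."""
    apps = sum(data[g][0] for g in groups)
    approvals = sum(data[g][1] for g in groups)
    return approvals / apps

# Single-axis view: every group sits at 40-50%, nothing looks alarming...
print("women:   ", rate([g for g in data if g[0] == "woman"]))     # 0.40
print("men:     ", rate([g for g in data if g[0] == "man"]))       # 0.50
print("minority:", rate([g for g in data if g[1] == "minority"]))  # 0.40
print("majority:", rate([g for g in data if g[1] == "majority"]))  # 0.50

# ...but the intersectional view exposes the gap for minority women.
for combo in product(("woman", "man"), ("minority", "majority")):
    print(combo, rate([combo]))  # ("woman", "minority") -> 0.20
```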
What is Unia doing in the area of AI and discrimination?
In the context of its mandate as an equality and human rights body, Unia is closely monitoring developments surrounding the creation of European regulations on artificial intelligence (AI). Combating discriminatory effects of AI and developing AI into a tool for greater equality are central to Unia's strategic plan (only available in Dutch or French) for the coming years. This is why we are very active in policy work and awareness-raising:
In Belgium
- Unia is committed to AI literacy: in 2023, we organised an online training course on AI and discrimination, in cooperation with the Council of Europe, FPS BOSA (AI4Belgium) and FPS Justice (Equal Opportunities Unit), for policymakers, staff of supervisory authorities and civil society organisations. Sixty-five participants obtained a certificate of successful completion.
- Unia is committed to monitoring and redress: together with the Council of Europe and the European Commission (DG Reform) and equality bodies from Finland and Portugal, we launched a comprehensive project in 2024 to strengthen our capacities to monitor the use of AI systems in public administrations, including providing redress to those discriminated against by AI technologies. This project includes the development of an internal non-discrimination tool and the continuation of online training on AI and discrimination.
- Unia is committed to partnerships and multi-stakeholder participation:
  - Unia organises a roundtable with civil society twice a year to better understand national challenges.
  - We mentor KU Leuven students through the Legal Clinic ‘AI and human rights’.
  - Unia is a member of AI4Belgium.
Within ENNHRI (European Network of National Human Rights Institutions)
Unia chairs the AI Working Group within ENNHRI and represented ENNHRI in the negotiations on the AI Convention.
Within Equinet (European Network of Equality Bodies)
Unia is a member of the AI working group.
Within the Council of Europe (the main European human rights organisation)
Unia sits as an independent expert on the Committee of Experts on Artificial Intelligence, Equality and Discrimination (GEC/ADI-AI) and was also in charge of the training on AI and discrimination.
What are Unia's recommendations for AI?
The main recommendations to prevent discrimination through AI are to:
- Introduce transparency requirements at macro and micro level:
  - Provide a national registry with mandatory reporting requirements for actors (private and public).
  - Introduce legal transparency requirements for all algorithmic systems: all stages of software design should be traceable, from data collection to production.
  - Shift the burden of proof for systems that are not transparent.
- Ensure appropriate oversight by independent government agencies in collaboration with existing human rights protection institutions.
- Ensure AI literacy: train AI developers, users and stakeholders.
- Ensure public debate and multi-stakeholder participation.
What if you face the discriminatory effects of AI?
If you feel you are being discriminated against by an AI application, or if you witness possible discriminatory effects, we encourage you to report it to Unia. Our staff will do all they can to help you.
Report discrimination
Do you feel you have experienced or witnessed discrimination? Report it online or call the toll-free number 0800 12 800 on weekdays between 9.30 a.m. and 1 p.m.