CE39 - Sécurité globale, résilience et gestion de crise, cybersécurité 2025

Holistic and Generalizable Solutions for Bias Mitigation in Face Recognition – GOOD-BIAS

Submission summary

Bias in AI refers to systematic discrimination produced by artificial intelligence systems. It often stems from non-representative training data or from societal biases reflected even within diverse datasets. Such biases can grant unfair advantages to certain groups, with consequences ranging from hiring decisions to law enforcement. Combating them requires ethical guidelines and transparent methodologies. Reports indicate that face recognition accuracy varies across demographic groups, with some groups facing higher false match rates. This became evident in 2020 with wrongful arrests of African American men caused by face recognition errors. In 2021, the EU introduced the Artificial Intelligence Act, mandating audits of biometric systems for discrimination.

GOOD-BIAS aims to address challenges related to bias in face recognition systems, with the dual goal of pushing the boundaries of current knowledge and bolstering trust in the deployment of secure and equitable solutions.

Beyond demographic disparities, AI bias arises from other factors such as poor data quality, collection methods, or algorithmic design. Addressing it requires a comprehensive approach that considers all sources of bias. The project's non-technical objectives focus on enhancing trust in biometric systems: addressing bias ensures these systems remain secure, equitable, and aligned with European directives, emphasising that technology serves European citizens.

Technical objectives include:
- Holistic Metrics Development: Design adaptable metrics considering bias and utility in biometrics.
- Robust Algorithm Design: Develop bias-resistant algorithms using advanced architectures.
- Bias Disentanglement: Identify biases beyond demographic groups, including sample quality and environment.
- Generalisation: Extend the newly developed approaches and metrics so they generalise across diverse scenarios.
- Results Dissemination: Share findings with a broad audience, aiding in standardisation.
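As a concrete illustration of the kind of measure the first objective targets, the sketch below computes a per-group false match rate (FMR) and a simple disparity ratio. This is not the project's actual metric; the group names, similarity scores, and decision threshold are hypothetical example values.

```python
# Illustrative sketch (hypothetical data, not the GOOD-BIAS metric):
# compute the false match rate per demographic group and a simple
# max/min disparity ratio across groups.

def fmr_by_group(impostor_scores, threshold):
    """Fraction of impostor comparisons wrongly accepted, per group."""
    return {
        group: sum(s >= threshold for s in scores) / len(scores)
        for group, scores in impostor_scores.items()
    }

def fmr_disparity(fmrs):
    """Max/min FMR ratio across groups; 1.0 means perfectly balanced."""
    rates = list(fmrs.values())
    return max(rates) / min(rates) if min(rates) > 0 else float("inf")

# Hypothetical impostor similarity scores for two demographic groups.
impostor_scores = {
    "group_a": [0.10, 0.35, 0.62, 0.20, 0.15],
    "group_b": [0.30, 0.55, 0.70, 0.65, 0.40],
}
threshold = 0.6  # accept a match when similarity >= threshold

fmrs = fmr_by_group(impostor_scores, threshold)
print(fmrs)
print(fmr_disparity(fmrs))
```

With these example scores, group_a has an FMR of 0.2 and group_b of 0.4, giving a disparity ratio of 2.0; a holistic metric would weigh such disparities together with overall recognition utility.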

Project coordination

Chiara Galdi (EURECOM)

The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility for its contents.

Partnership

EURECOM

ANR grant: 249,049 euros
Beginning and duration of the scientific project: October 2025 - 48 months

Useful links

Explore our database of funded projects

ANR makes its datasets on funded projects available online.
