How can we ensure that AI decisions are justified and do not perpetuate unintended biases or discrimination? Within the eFAIR project, Hogeschool Utrecht, MavenBlue, and the Dutch Association of Insurers are developing a framework that enables organizations to link quantitative technical fairness scores to the decisions they need to make regarding AI. 

Objective

The eFAIR project explores how Explainable AI (XAI) can contribute to making fairness metrics and principles transparent and understandable for different user groups. The project develops a framework that presents fairness metrics dynamically, taking into account the knowledge and needs of specific user groups. The framework will be interactive and modular, allowing it to be adapted to different users, from technical experts to policymakers and end users. 

AI systems are increasingly used to make automated decisions, but their lack of transparency raises ethical and legal concerns. Although the scientific literature offers various ways to measure fairness, many users find it difficult to understand what these measures mean and how they can be applied in practice. 
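To illustrate what such a fairness metric looks like in practice, the sketch below computes demographic parity difference, one common metric from the literature: the gap in positive-decision rates between two groups. This is a minimal, self-contained illustration, not code from the eFAIR framework; the function name and group labels are assumptions for the example.

```python
def demographic_parity_difference(decisions, groups):
    """Absolute difference in positive-decision rates between groups 'a' and 'b'.

    decisions: list of 0/1 outcomes (1 = positive decision, e.g. application approved)
    groups:    list of group labels ('a' or 'b'), same length as decisions
    """
    rate = {}
    for g in ("a", "b"):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return abs(rate["a"] - rate["b"])

# Group 'a' receives a positive decision 3 out of 4 times, group 'b' only 1 out of 4:
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(decisions, groups))  # 0.5
```

A score of 0 indicates equal positive-decision rates; exactly this kind of number is hard for non-technical users to interpret, which is the gap the eFAIR framework aims to close.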

Results

  • Practical publication: An open-access publication on user needs regarding the explanation of fairness  
  • Open-source tool: A set of visualization techniques for fairness metrics  
  • Scientific publication: A literature review on how fairness metrics can be explained  
  • Open-source software: An online demonstration tool that allows fairness metrics to be explored interactively  
  • Practical publication 2: Guidelines and best practices  
  • Scientific publication 2: A report on the test results of the eFAIR tool 

Approach

Through literature reviews, interviews, and surveys, we identify needs and potential solutions. We develop prototypes of an eFAIR framework. The framework is validated using concrete case studies within the insurance sector and other financial sectors. 

Impact on practice and education

For professional practice, the eFAIR framework provides a way to use technical quantitative fairness metrics in day-to-day decision-making, such as determining an insurance premium. 

For education, this project offers a case study that brings together state-of-the-art AI and ethics in a practical problem, in collaboration with insurers and other financial organizations. 

HU researchers involved

  • Rijk Mercuur
    • Researcher
    • Research group: Artificial Intelligence
  • Huib Aldewereld
    • Senior lecturer
    • Research group: Artificial Intelligence
  • Danielle Sent
    • Senior lecturer
    • Research group: Artificial Intelligence

Co-funding

SIDN Project Fonds 
