Explainable AI in the financial sector

AI in the financial sector
The way AI is used in finance is becoming increasingly complex, and this complexity often obscures how a decision is reached. However, the parties with a stake in financial services, such as customers and regulators, either require or are entitled to an explanation, for instance of how personal data is used in lending. This raises the question of what kind of explanation of an AI process each stakeholder requires in a given situation.

Explainable AI (XAI) is the research field that strives to open the black box of algorithms. In our approach, that process starts with getting a clear picture of the kind of explanation required in each specific situation for the various types of stakeholders impacted by AI. Subsequent questions explore which forms of AI (models) lend themselves well to explanation, and which XAI solution is best suited to generate that explanation. In our research, we define XAI as: a set of methods and techniques that provide stakeholders with an appropriate explanation of the functioning and/or results of an AI solution, in such a way that the explanation is understandable and addresses the concerns of those stakeholders.


The aim of the project is to conduct practice-oriented research into explainability, in collaboration with organizations in the financial sector, thereby identifying the preconditions for successfully applying explainability. This process consists of first clearly identifying the relevant stakeholders, then mapping the concerns they have, the information they need to address those concerns, and how that explanation can best be conveyed. Organizations we currently work with include financial service providers and regulators in the sector.
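The stakeholder-analysis steps described above (identify stakeholders, map their concerns, the information needed, and the form of explanation) can be sketched as a simple data structure. The stakeholder types, concerns, and explanation forms below are illustrative assumptions for a consumer-credit scenario, not findings from the project.

```python
from dataclasses import dataclass

@dataclass
class StakeholderExplanation:
    """Links one stakeholder to their concern and the explanation that addresses it."""
    stakeholder: str          # e.g. "customer", "regulator"
    concern: str              # what the stakeholder wants to understand
    information_needed: str   # which information resolves that concern
    form: str                 # how the explanation is best conveyed

# Illustrative entries for a consumer-credit use case (assumed, not project results)
analysis = [
    StakeholderExplanation(
        stakeholder="customer",
        concern="Why was my loan application rejected?",
        information_needed="the decisive features of the application",
        form="plain-language letter listing the main reasons",
    ),
    StakeholderExplanation(
        stakeholder="regulator",
        concern="Is the model compliant and non-discriminatory?",
        information_needed="global model behaviour and fairness metrics",
        form="technical audit report",
    ),
]

for entry in analysis:
    print(f"{entry.stakeholder}: {entry.form}")
```

Structuring the analysis this way makes explicit that different stakeholders can require entirely different explanations of the same AI solution.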


  • A framework for explainable AI with types of stakeholders and types of explanations, geared specifically toward the financial sector. The framework is outlined in the whitepaper "XAI in the financial sector: a conceptual framework for explainable AI".
  • Examples of different use cases analysed to validate the framework.
  • Principles and guidelines for the development of responsible and explainable AI applications within the financial sector.


Duration: 01 June 2020 - 30 June 2022


At Hogeschool Utrecht, we strive for practice-oriented research and therefore focus our XAI research at the level of use cases. For each use case, we aim to identify which stakeholders need which explanation. This approach allows us to link the literature to practice and to report novel insights. One use case under investigation is lending to consumers (consumer credit). Ultimately, we will work towards a validated framework with accompanying principles and guidelines for XAI geared to the entire financial sector.
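For a transparent model class such as logistic regression, a local explanation of a single credit decision can be read directly from the weighted feature contributions, which is the standard attribution for linear models. The sketch below is a minimal illustration with a hand-set toy model; the weights, feature names, and applicant values are assumptions, not a real scoring model.

```python
import math

# Toy logistic credit-scoring model: hand-set weights, not fitted on real data
WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "payment_arrears": -2.0}
BIAS = 0.5

def score(applicant):
    """Approval probability under the toy logistic model."""
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def explain(applicant):
    """Per-feature contribution to the decision (weight * feature value),
    sorted by absolute impact so the most decisive reasons come first."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

applicant = {"income": 1.2, "debt_ratio": 0.9, "payment_arrears": 1.0}
print(f"approval probability: {score(applicant):.2f}")
for feature, contribution in explain(applicant):
    print(f"{feature:>16}: {contribution:+.2f}")
```

For this applicant, the payment arrears term dominates the negative decision, so a customer-facing explanation would name arrears as the main reason; a regulator would instead inspect the weights themselves. Such direct readouts are only possible for inherently interpretable models, which is one reason the choice of model class matters for explainability.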

Financial service providers or other parties in the financial ecosystem that are interested in working with us are invited to contact us.
