Explainable AI in the financial sector
Explainable AI (XAI) is the research field that strives to open the black box of algorithms. In our approach, that process starts with getting a clear picture of the kind of explanation that is required in each specific situation for the various types of stakeholders impacted by AI. Subsequent questions explore which forms of AI (models) lend themselves well to explanation, and which XAI solution is best suited to generate that explanation. In our research, we have defined XAI as: a set of methods and techniques to provide stakeholders with an appropriate explanation of the functioning and/or the results of an AI solution, in such a way that the explanation is understandable and addresses the concerns of those stakeholders.
The aim of the project is to conduct practice-oriented research into explainability, in collaboration with organizations in the financial sector, and thereby to identify the preconditions for successfully applying explainability. This process consists of first clearly identifying the relevant stakeholders, then mapping which concerns they have, which information they require to address those concerns, and how that explanation can best be conveyed. Organizations we currently work with include financial service providers and regulators in the sector.
- A framework for explainable AI with types of stakeholders and types of explanations, specifically geared toward the financial sector. This framework is outlined in the whitepaper 'XAI in the financial sector: a conceptual framework for explainable AI'.
- Examples of different use cases analysed to validate the framework.
- Principles and guidelines for the development of responsible and explainable AI applications within the financial sector.
- We collaborated in explorative research on explainability with DNB, AFM, the Dutch Banking Association and three Dutch banks. The framework developed by the HU was applied in this research. Results of this research can be found here. Furthermore, an academic paper was submitted to and accepted at the 33rd Benelux Conference on Artificial Intelligence.
- Together with consortium partners Floryn, Researchable and de Volksbank, we conducted a one-year research project into the aspects that need consideration in the implementation of explainable AI. As a result, a checklist has been published, along with a whitepaper in which the checklist is explained. Furthermore, an academic paper has been submitted to the HHAI2023 conference. More information on this project can be found on this page.
- A subsidy application for a two-year RAAK-SME project has been granted. This project, called FIN-X, aims to develop tools that give internal users of AI applications more and better insight into their operation and outcomes. More information about this project can be found via the project link below.
01 June 2020 - 31 March 2025
At Hogeschool Utrecht, we strive for practice-oriented research and therefore focus our XAI research at the level of use cases. For each use case, we aim to identify which stakeholders need which explanation. This approach allows us to establish a link from the literature to practice and to report novel insights. An example of a use case under investigation is lending to consumers (consumer credit). Ultimately, we will work towards a validated framework with accompanying principles and guidelines for XAI geared to the entire financial sector.
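To make the consumer-credit use case concrete, the sketch below shows one common form such a stakeholder explanation can take: a local, per-applicant breakdown of which features pushed a credit decision up or down. The model, weights and feature names are purely illustrative assumptions for this sketch, not the models studied in the project; the same idea underlies feature-attribution methods used in practice.

```python
import math

# Illustrative logistic credit-scoring model. Weights, bias and feature
# names are hypothetical, chosen only to demonstrate the explanation.
WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "late_payments": -0.9}
BIAS = 0.2

def score(applicant):
    """Predicted probability of repayment for one applicant."""
    z = BIAS + sum(WEIGHTS[f] * v for f, v in applicant.items())
    return 1.0 / (1.0 + math.exp(-z))

def explain(applicant):
    """Per-feature contribution to the log-odds, largest impact first.

    For a linear model this decomposition is exact, which is why such
    models are often preferred when explainability is a hard requirement.
    """
    contributions = {f: WEIGHTS[f] * v for f, v in applicant.items()}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

if __name__ == "__main__":
    applicant = {"income": 1.2, "debt_ratio": 0.6, "late_payments": 1.0}
    print(f"repayment probability: {score(applicant):.2f}")
    for feature, contribution in explain(applicant):
        print(f"  {feature:>14}: {contribution:+.2f}")
```

A loan officer (internal user) might receive the sorted contribution list, while a consumer-facing explanation would translate the top negative contributions into plain language; which form fits which stakeholder is exactly the kind of question the framework addresses.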
Financial service providers or other parties in the financial ecosystem that are interested in working with us are invited to contact us.
This project is also linked to the KIEM research project Explainable AI in the Financial Sector and to FIN-X.