Developing Meaningful Explanations for Machine Learning Models in the Telecom Domain

As companies, including telecom providers, increasingly rely on complex AI systems, the lack of interpretability these systems often introduce makes their decision-making processes hard to understand. Trust in AI systems is important because it facilitates acceptance and adoption among users. The field of Explainable AI (XAI) plays a crucial role here by providing users with transparency and explanations of how such systems operate and reach their decisions.
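As a toy illustration of one common XAI technique, the sketch below computes permutation feature importance for a hypothetical telecom churn scorer. The model, feature names, and data are all invented for illustration; real projects would typically apply such methods to trained models via established libraries.

```python
import random

# Hypothetical "model": a hand-written churn-risk scoring rule based on
# two features, monthly_bill and support_calls (purely illustrative).
def churn_score(monthly_bill, support_calls):
    return 0.7 * (monthly_bill / 100) + 0.3 * (support_calls / 10)

# Small synthetic dataset: (monthly_bill, support_calls) pairs.
data = [(80, 2), (30, 0), (120, 5), (60, 1), (95, 8)]

def permutation_importance(data, feature_idx, seed=0):
    """Proxy for a feature's importance: how much the model's scores
    change when that feature's values are shuffled across the dataset."""
    rng = random.Random(seed)
    baseline = [churn_score(b, c) for b, c in data]
    shuffled_col = [row[feature_idx] for row in data]
    rng.shuffle(shuffled_col)
    perturbed = []
    for row, new_val in zip(data, shuffled_col):
        vals = list(row)
        vals[feature_idx] = new_val
        perturbed.append(churn_score(*vals))
    # Mean absolute change in the score serves as the importance proxy.
    return sum(abs(p - b) for p, b in zip(perturbed, baseline)) / len(data)

bill_importance = permutation_importance(data, 0)
calls_importance = permutation_importance(data, 1)
print(f"monthly_bill importance:  {bill_importance:.3f}")
print(f"support_calls importance: {calls_importance:.3f}")
```

Such scores give a simple, global explanation of which inputs drive a model's output; stakeholder-tailored explanations would present them differently for, say, data scientists versus customer-facing staff.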


AI systems commonly involve a variety of stakeholders, each playing a unique role in relation to these systems. Consequently, explanations regarding system outputs should be customized to cater to the diverse stakeholders' needs.


Results include identifying current best practices for generating meaningful explanations and developing novel stakeholder-tailored explanations for telecom use-cases.



01 September 2023 - 30 August 2027


The research will begin with a literature study, followed by the identification of potential use-cases and stakeholder needs. Prototypes will then be developed, and their ability to provide meaningful explanations will be evaluated.


HU researchers involved in the research

Related research groups

"The expected research output will allow us to fully implement the cornerstone of our AI Governance Policy, namely the Transparency, thus improving the way we do Responsible AI at KPN."

Collaboration with knowledge partners

JADS

Any questions or want to collaborate?

Henry Maathuis

  • Researcher
  • Research group: Artificial Intelligence