Developing Meaningful Explanations for Machine Learning Models in the Telecom Domain
AI systems commonly involve a variety of stakeholders, each playing a unique role in relation to these systems. Consequently, explanations of system outputs should be tailored to the needs of these diverse stakeholders.
Expected results include identifying current best practices for generating meaningful explanations and developing novel stakeholder-tailored explanations for telecom use cases.
01 September 2023 - 30 August 2027
The research will begin with a literature study, followed by the identification of potential use cases and stakeholder needs. Prototypes will then be developed and evaluated on their ability to provide meaningful explanations.
"The expected research output will allow us to fully implement the cornerstone of our AI Governance Policy, namely the Transparency, thus improving the way we do Responsible AI at KPN."