AI systems excel at making predictions in a wide variety of settings, but in many cases this comes at the expense of the interpretability of the underlying models. In this module, we will see how to interpret the decisions made by these systems so that they are accessible to humans. We will provide an overview of different methods for interpreting machine learning predictions, looking both at models that are interpretable by nature and at model-agnostic methods for interpreting the predictions of black-box models.
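To make the distinction in the abstract concrete, here is a minimal sketch (not part of the session materials) contrasting the two families of methods with scikit-learn: a logistic regression, interpretable by nature because its coefficients can be read directly, and permutation feature importance, a model-agnostic method that treats any fitted model as a black box. The dataset and parameter choices are illustrative assumptions.

```python
# Minimal sketch: interpretable-by-nature model vs. a model-agnostic method.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable by nature: a linear model whose coefficients can be read
# directly as signed feature contributions on the standardised inputs.
linear = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
linear.fit(X_train, y_train)
coefs = linear.named_steps["logisticregression"].coef_[0]
for name, w in sorted(zip(X.columns, coefs), key=lambda t: -abs(t[1]))[:5]:
    print(f"{name:30s} coefficient = {w:+.3f}")

# Model-agnostic: permutation importance treats the model as a black box,
# measuring how much test accuracy drops when each feature is shuffled.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, imp in sorted(zip(X.columns, result.importances_mean),
                        key=lambda t: -t[1])[:5]:
    print(f"{name:30s} importance  = {imp:.3f}")
```

Note the trade-off the abstract alludes to: the coefficients come for free with the linear model, while the permutation approach works for any predictor but only quantifies how much each feature matters, not how it is used.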
Explainable AI (XAI)
12 July 2021
2:15 pm - 5:30 pm
This event is part of the Safe and Trusted AI Summer School 2021. The Summer School is core for STAI CDT PhD students and open to a limited number of other students by invitation.
About the speaker
Oana Cocarascu is a Lecturer in Artificial Intelligence at King’s College London. Her work focuses on applied research, specifically on how AI can be deployed to support real-world applications. She received her PhD from Imperial College London, where she worked at the intersection of natural language processing and machine learning for argument mining. She has also worked on the automatic extraction of argumentation frameworks from data to provide user-centric explanations in a variety of settings. Her application areas span recommender systems, the interpretation of classifiers, and safe and trusted AI systems.