Please note that this event has now passed.
In this talk we outline several ground-breaking directions for quantifying explainable artificial intelligence (XAI) and applying it to improve AI safety and trust. We focus on an open challenge: properly matching the need that motivates creating an XAI solution with the algorithm used to provide it. To facilitate comparison between candidate XAI solutions, we developed four metrics that quantify their algorithmic differences: the number of features in a solution's input, the number of rules it outputs, the performance difference between algorithms, and the stability of a solution across different settings. We demonstrate that these metrics can objectively quantify XAI without user studies and thus offer a potentially better way to measure its effectiveness. We present a use case showing how these metrics can be applied to better quantify AI safety and trust in a medical application.
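The abstract does not define the four metrics formally. Purely as an illustration, the minimal Python sketch below shows how such quantities might be computed for a rule-based explainer; every name, signature, and the Jaccard-based stability measure are assumptions for illustration, not the speakers' actual formulations.

```python
def mean_jaccard(feature_sets):
    """Average pairwise Jaccard similarity of the feature sets an
    explainer selects across repeated runs (a common stability proxy;
    assumed here, not taken from the talk)."""
    sims = []
    for i in range(len(feature_sets)):
        for j in range(i + 1, len(feature_sets)):
            a, b = set(feature_sets[i]), set(feature_sets[j])
            union = a | b
            sims.append(len(a & b) / len(union) if union else 1.0)
    return sum(sims) / len(sims) if sims else 1.0


def xai_comparison_metrics(n_input_features, n_output_rules,
                           base_model_score, xai_model_score,
                           feature_sets_per_run):
    """Return four illustrative quantities mirroring the abstract's
    metrics: input size, output size, performance gap, stability."""
    return {
        "num_features": n_input_features,              # features in the input
        "num_rules": n_output_rules,                   # rules in the output
        "performance_gap": base_model_score - xai_model_score,
        "stability": mean_jaccard(feature_sets_per_run),
    }


# Hypothetical usage: compare an explainer's runs on a medical model.
metrics = xai_comparison_metrics(
    n_input_features=12,
    n_output_rules=5,
    base_model_score=0.91,
    xai_model_score=0.87,
    feature_sets_per_run=[{"age", "bp"}, {"age", "bp", "bmi"}, {"age", "bp"}],
)
```

Under this reading, smaller input/output sizes, a smaller performance gap, and higher stability would favour one candidate XAI solution over another, though the talk's actual criteria may differ.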
This event is being organised by the King’s Institute for Artificial Intelligence and the UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence.
It is a hybrid event, held at King’s College London and online via Microsoft Teams. Please register via Eventbrite, indicating your preferred attendance method. Students on the STAI CDT do not need to register via Eventbrite.