Israel Shitta

My research project is in the field of Explainable AI (XAI). As AI models become more powerful and widely adopted, providing explanations for the decisions they make becomes essential for engendering trust in AI. I'm interested in the nature of explanation itself: what constitutes an explanation, and what makes one explanation better than another? Intuitively, an explanation of a model output is a high-level understanding of the inner workings of that model. A good explanation is one that is faithful to the model yet easily understandable to the user. How do we formalise these notions? I am also interested in how computational argumentation can be applied to provide explanations for models.

The STAI CDT is a great place to pursue this line of research because its whole mission is focused on AI safety and trust, and I am surrounded by peers working on related issues.

Masters Qualification: MSc in Mathematics

Undergraduate Qualification: BA in Mathematics