The field of explainable AI (XAI) is a particularly active area of research whose goal is to provide transparency into the decisions of traditionally opaque machine learning techniques. Being able to assess the decisions of an AI system in a human-readable format can be key to building trust in the system and to ensuring it is safe to use and free of obvious bias. However, the most effective techniques (e.g., deep learning of various kinds) are often also the least explainable, and even the outputs of interpretable methods (e.g., decision trees and random forests) may need explaining to non-expert, lay users. Recent work has identified argumentation frameworks, extracted from the data from which machine learning classifiers are built, as a suitable basis for constructing explanations.
This project aims to investigate the extraction of explanations from defeasible rules (i.e., rules admitting exceptions), themselves extracted from these argumentation frameworks and/or directly from the data. The rules will take the form of normal logic programs, and their extraction may benefit from work in logic program transformation.
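To make the notion of a defeasible rule concrete, a normal logic program expresses exceptions via negation as failure: a rule fires when its positive conditions are derivable and its negated conditions are not. The following Python sketch is purely illustrative; the toy evaluator and the classic bird/penguin example are assumptions for exposition, not part of the project's actual method or data.

```python
# Illustrative sketch: a defeasible rule encoded as a normal logic program,
# evaluated with negation as failure. Atoms are ground tuples like
# ("bird", "tweety"); a rule is (head, positive_body, negated_body).

def evaluate(facts, rules):
    """Naively iterate the rules to a fixpoint over a set of ground facts.
    Assumes the program is stratified and rules are listed with lower
    strata first, so this simple evaluation is sound for the example."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if (all(p in derived for p in pos)
                    and all(n not in derived for n in neg)
                    and head not in derived):
                derived.add(head)
                changed = True
    return derived

# Defeasible rule: birds fly, unless they are abnormal (e.g., penguins).
facts = {("bird", "tweety"), ("bird", "pingu"), ("penguin", "pingu")}
rules = [
    (("abnormal", "pingu"), [("penguin", "pingu")], []),
    (("flies", "tweety"), [("bird", "tweety")], [("abnormal", "tweety")]),
    (("flies", "pingu"), [("bird", "pingu")], [("abnormal", "pingu")]),
]
result = evaluate(facts, rules)
# flies(tweety) is derived; flies(pingu) is blocked by the exception.
```

In logic-programming notation the defeasible rule is `flies(X) :- bird(X), not abnormal(X)`: the conclusion holds by default and is retracted exactly when the exception becomes derivable.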
This project brings together expertise in data analysis and in logic programming, marrying statistical and symbolic AI.