Explainable AI by defeasible rules

The field of explainable AI (XAI) is a particularly active area of research at the moment, aiming to make the decisions of traditionally opaque machine learning techniques transparent. Being able to assess the decisions of an AI system in a human-readable format can be key to building trust in the system and to ensuring that it is safe to use and free of obvious bias. However, the most effective techniques (e.g., deep learning of various kinds) are often also the least explainable, and even the outputs of interpretable methods (e.g., decision trees and random forests) may need explaining to non-expert, lay users. Recent work [1] has identified argumentation frameworks, extracted from the data from which machine learning classifiers are built, as a suitable basis for constructing explanations.
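For illustration only, the following is a minimal sketch of an abstract (Dung-style) argumentation framework and the computation of its grounded extension; the arguments, attacks and names used here are hypothetical assumptions, and the frameworks in [1] are mined from data and carry richer structure than this toy example.

```python
# Minimal sketch of an abstract argumentation framework (arguments + attacks)
# and its grounded extension. All names here are illustrative only.

arguments = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}  # a attacks b, b attacks c


def attackers(x):
    """All arguments that attack x."""
    return {y for (y, z) in attacks if z == x}


def defended(x, s):
    """x is defended by the set s if every attacker of x is attacked by some member of s."""
    return all(any((d, y) in attacks for d in s) for y in attackers(x))


def grounded_extension():
    """Iterate the characteristic function from the empty set until a fixed point."""
    current = set()
    while True:
        new = {x for x in arguments if defended(x, current)}
        if new == current:
            return current
        current = new


print(sorted(grounded_extension()))  # ['a', 'c']: a is unattacked and defends c against b
```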

This project aims to investigate the extraction of explanations from defeasible rules (i.e., rules admitting exceptions), in turn extracted from these argumentation frameworks and/or directly from the data. The rules will take the form of normal logic programs, and their extraction may benefit from work in logic program transformation [2].
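As a purely illustrative sketch of what such a defeasible rule might look like, the example below encodes one rule in the style of a normal logic program, with its exception handled via negation as failure; the predicates (bird, penguin, flies) and the simple ground evaluation are hypothetical and not drawn from the project or from [1].

```python
# Sketch of a defeasible rule as in a normal logic program:
#   flies(X) :- bird(X), not ab(X).
#   ab(X)    :- penguin(X).
# "not" is negation as failure: the conclusion holds unless the exception is derivable.
# All predicate and constant names are illustrative assumptions.

facts = {("bird", "tweety"), ("bird", "tux"), ("penguin", "tux")}


def holds(pred, arg):
    """True if the ground atom pred(arg) is in the fact base."""
    return (pred, arg) in facts


def abnormal(x):
    # Exception clause: ab(X) :- penguin(X).
    return holds("penguin", x)


def flies(x):
    # Defeasible rule: flies(X) :- bird(X), not ab(X).
    return holds("bird", x) and not abnormal(x)


for individual in ("tweety", "tux"):
    print(individual, "flies:", flies(individual))
# tweety flies: True   (a bird with no applicable exception)
# tux flies: False     (the penguin exception defeats the rule)
```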

This project brings together expertise in data analysis and in logic programming, marrying statistical and symbolic AI. 

[1] Oana Cocarascu, Andria Stylianou, Kristijonas Cyras, Francesca Toni: Data-Empowered Argumentation for Dialectically Explainable Predictions. ECAI 2020: 2449-2456 [http://ebooks.iospress.nl/publication/55172]

[2] Alberto Pettorossi, Maurizio Proietti: Transformation of Logic Programs: Foundations and Techniques. J. Log. Program. 19/20: 261-320 (1994) [https://www.sciencedirect.com/science/article/pii/0743106694900280?via%3Dihub]

Project ID

STAI-CDT-2021-IC-17

Supervisor

Category

Argumentation, Logic