Explanation-empowered feedback via argumentation

Today’s AI landscape is permeated by plentiful data and dominated by powerful methods with the potential to impact a wide range of human sectors, including healthcare and the practice of law. Yet this potential is hindered by the opacity of most data-centric AI methods, and it is widely acknowledged that AI cannot fully benefit society without addressing its widespread inability to engage with humans, an inability that causes mistrust and doubts regarding AI’s regulatory and ethical compliance. Extensive research efforts are currently devoted to explainable AI, but existing approaches are one-way and static, delivering one-off explanations that cannot benefit from human input.

This PhD project aims to define a novel notion of argumentative wrapper for a variety of data-centric AI methods, integrating human feedback so as to improve a method’s outputs without necessarily modifying the method itself. The wrappers will be realised using computational argumentation as the underpinning, unifying theoretical foundation: argumentation will provide abstractions for a chosen form of data-centric AI (labelled data, recommender systems, black-box methods or white-box methods), from which various explanation types, providing argumentative grounds for outputs, can be drawn; these explanations in turn enable suitable human feedback to be incorporated within the argumentative wrappers. The novel paradigm will be theoretically defined, then informed and tested by experiments and empirical evaluation, and it will lead to a radical re-thinking of explainable AI that can work in synergy with humans within a human-centred but AI-supported society.
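To make the paradigm concrete, the following is a minimal, illustrative Python sketch of what such a wrapper could look like. It is not the project’s method, merely one plausible instantiation under simplifying assumptions: the wrapped predictor is assumed to return a label together with supporting reasons, acceptance is computed under Dung’s grounded semantics for abstract argumentation, and all names (ArgumentativeWrapper, add_feedback, toy_model) are hypothetical.

class ArgumentationFramework:
    # A Dung-style abstract argumentation framework: a set of arguments
    # plus a binary attack relation between them.
    def __init__(self):
        self.arguments = set()
        self.attacks = set()   # pairs (attacker, attacked)

    def add_argument(self, a):
        self.arguments.add(a)

    def add_attack(self, attacker, attacked):
        self.arguments.update({attacker, attacked})
        self.attacks.add((attacker, attacked))

    def grounded_extension(self):
        # Grounded semantics: least fixed point of the characteristic
        # function, computed by iteration from the empty set. An argument
        # is defended if each of its attackers is attacked by an accepted argument.
        accepted = set()
        while True:
            defended = {
                a for a in self.arguments
                if all(any((d, b) in self.attacks for d in accepted)
                       for (b, t) in self.attacks if t == a)
            }
            if defended == accepted:
                return accepted
            accepted = defended

class ArgumentativeWrapper:
    # Hypothetical wrapper around an unmodified black-box predictor: the
    # prediction and its supporting reasons become arguments, and human
    # feedback arrives as counter-arguments attacking them.
    def __init__(self, predict):
        self.predict = predict
        self.af = ArgumentationFramework()

    def explain(self, x):
        # Assumes the predictor returns (label, reasons); both are cast
        # as arguments providing argumentative grounds for the output.
        label, reasons = self.predict(x)
        self.af.add_argument(label)
        for r in reasons:
            self.af.add_argument(r)
        return label, reasons

    def add_feedback(self, counter_argument, target):
        # Human feedback: a new argument attacking an output or a reason.
        self.af.add_attack(counter_argument, target)

    def accepts(self, label):
        # The output stands only if accepted under grounded semantics.
        return label in self.af.grounded_extension()

A toy interaction shows the intended dynamics, assuming a made-up loan-approval predictor:

def toy_model(x):
    return "approve", ["high_income", "no_defaults"]

w = ArgumentativeWrapper(toy_model)
w.explain({"income": 80000})
w.add_feedback("income_unverified", "approve")             # a human challenges the output
print(w.accepts("approve"))                                # False: the challenge stands
w.add_feedback("payslips_provided", "income_unverified")   # the challenge is itself rebutted
print(w.accepts("approve"))                                # True: the output is reinstated

Note how the second piece of feedback reinstates the output without any retraining of the underlying model: this dynamics of challenge and reinstatement is the kind of human-AI interaction the argumentative wrappers are meant to support.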

Project ID

STAI-CDT-2020-IC-16

Supervisor