Human-in-the-Loop Debugging Deep Models for Image Classification via Argumentation-based Explanations

Deep learning has become the dominant approach to most image processing tasks. However, deep learning models are mostly black boxes whose outputs are difficult to understand and verify. Recent work proposes Deep Argumentative Explanations (DAXs) [1] as a principled mechanism for opening up black-box deep models across a variety of tasks and data types. Other recent work [2] uses (forms of) DAXs to debug neural models for classification from textual data. This is especially useful when the training data is not ideal, e.g., because it is small, biased, or out-of-distribution with respect to the test data.

This project will look at deploying, adapting and extending the methods of [1,2] to the complex setting of image processing, using neural models built from synthetic data (e.g., the medically-inspired synthetic dataset of [3]) and potentially real medical data (subject to approval). The project will aim to address challenges of domain shift, out-of-distribution detection and model self-awareness.

Project ID

STAI-CDT-2021-IC-18

Supervisor

Francesca Toni (https://www.doc.ic.ac.uk/~ft/)

Category

Argumentation