Explanations of Medical Images

We developed a framework for causal explanations of image classifiers based on the principled approach of actual causality [1] and responsibility [2], the latter pioneered by Dr Chockler.

Our framework has already resulted in a number of publications at top-ranked conferences [3,4]. The framework answers the question “why is this image classified the way it is?” (e.g., “why is this image classified as a panda?”). It treats the AI image classifier (a neural network) as a black box, hence making it applicable to any AI system.
Explanations are crucial for the interaction of humans with AI systems and components, and are the subject of upcoming US, EU, and UK regulations (the right to explanation, the EU AI Act).

Unfortunately, our study revealed that existing explanation techniques for image classifiers are inadequate for the medical domain [5].

This project focuses on generating explanations of classifiers for MRI scans of the brain. At the beginning, the student will work in close collaboration with the TAS Node on Governance and Regulation, in which Dr Chockler is a co-investigator (co-I).

There is a need to identify the unique requirements for explanations of image classifiers in this setting, stemming from the role of classifiers in the diagnosis process, from clinicians’ feedback, and from the specific domain of brain MRI images. These requirements and constraints will be encoded in symbolic AI – an SMT solver – and used to direct the explanation techniques towards better-quality explanations.
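To make the idea concrete, the sketch below shows how clinical constraints on an explanation (e.g., “lies inside the anatomical region of interest” and “is no larger than a size bound”) could be checked against candidate explanations. The pixel indices, mask, saliency scores, and size bound are all hypothetical, and the brute-force enumeration is an illustrative stand-in for the symbolic search an SMT solver would perform.

```python
from itertools import combinations

# Toy 4x4 "scan": pixels are indices 0..15. The mask, saliency
# ranking, and size bound below are invented for illustration.
BRAIN_MASK = {5, 6, 9, 10}                             # pixels inside the brain region
saliency = {5: 0.9, 6: 0.7, 2: 0.6, 9: 0.4, 10: 0.2}  # an explanation tool's ranking
MAX_SIZE = 2                                           # clinician-imposed size bound

def satisfies(candidate):
    """Check the two hard constraints an SMT solver would encode:
    (1) every pixel lies inside the anatomical mask,
    (2) the explanation is no larger than MAX_SIZE."""
    return set(candidate) <= BRAIN_MASK and len(candidate) <= MAX_SIZE

# Enumerate candidate explanations and keep the admissible one with
# the highest total saliency (a solver performs this search
# symbolically rather than by enumeration).
candidates = [c for r in range(1, MAX_SIZE + 1)
              for c in combinations(saliency, r)]
best = max(filter(satisfies, candidates),
           key=lambda c: sum(saliency[p] for p in c))
print(sorted(best))  # → [5, 6]
```

Note that the unconstrained top-2 pixels by saliency would be {5, 6} here anyway, but had pixel 2 (outside the mask) ranked higher, the constraints would redirect the explanation into the anatomically meaningful region.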

The starting point is the explanation tool DeepCover, developed by the supervisor [3,4], which will be combined with symbolic AI tools. The student will also be expected to analyse other explanation tools, such as SHAP and LIME, for their suitability for the medical domain.
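The common mechanism these black-box tools share can be illustrated with a minimal occlusion probe: mask each region of the input, observe the change in the classifier’s score, and rank regions by their effect. The classifier and region names below are hypothetical stand-ins; this is not DeepCover’s algorithm, only the black-box principle it and tools like SHAP and LIME build on.

```python
def classifier(image):
    # Hypothetical black-box score: a weighted sum of two
    # "diagnostic" regions (stands in for a neural network).
    return 0.8 * image.get("lesion", 0) + 0.2 * image.get("ventricle", 0)

def rank_regions(image, baseline=0.0):
    """Occlude each region in turn and rank regions by how much
    occluding them lowers the black-box classifier's score."""
    base_score = classifier(image)
    effects = {}
    for region in image:
        occluded = dict(image, **{region: baseline})
        effects[region] = base_score - classifier(occluded)
    return sorted(effects, key=effects.get, reverse=True)

scan = {"lesion": 1.0, "ventricle": 1.0, "skull": 1.0}
print(rank_regions(scan))  # → ['lesion', 'ventricle', 'skull']
```

Because the probe only queries the classifier’s outputs, it applies to any model; the project’s contribution is in constraining and directing such probes with domain knowledge rather than in the probing itself.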

The TAS node will provide a connection to radiologists from NHS hospitals, who are involved in the TAS node’s case studies.

The results of the project will be submitted to top conferences such as AAAI, NeurIPS, and MICCAI.

While the project starts with a specific case – brain MRI images – the techniques developed in this project will be extendable to other domains, such as mammograms (detection of breast cancer) and eye scans (detection of diabetic retinopathy).

[1] J.Y. Halpern and J. Pearl: Causes and Explanations: A Structural-Model Approach. Part I: Causes. The British Journal for the Philosophy of Science, 2005.
[2] H. Chockler and J.Y. Halpern: Responsibility and Blame: A Structural-Model Approach. JAIR 2004.
[3] Y. Sun, H. Chockler, X. Huang, D. Kroening: Explaining Image Classifiers Using Statistical Fault Localization. Proceedings of ECCV (28) 2020.
[4] H. Chockler, Y. Sun, D. Kroening: Explanations of Occluded Images. Proceedings of ICCV 2021.
[5] S. Ioannou, H. Chockler, A. Hammers, A.P. King: A Study of Demographic Bias in CNN-Based Brain MR Segmentation. MLCN@MICCAI 2022.

Hana Chockler: https://www.hanachockler.com/


Logic, Verification