Explainability of Agent-based Models as a Tool for Validation and Exploration

Agent-based models (ABMs) are an AI technique for improving our understanding of complex real-world interactions and the “emergent behaviours” that arise from them. ABMs are used to develop and test theories, and to explore how interventions might change behaviour. For example, we are working on a model of staff and patient interaction in emergency medicine, exploring how interventions affect efficiency and safety. With the Francis Crick Institute, we use ABMs to study how cells coordinate during the growth of blood vessels and how changes to their behaviour and interactions could affect vascular disease outcomes.
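
To make the idea of an ABM concrete, here is a deliberately minimal sketch in Python (an illustrative toy only, unrelated to the emergency-medicine or vascular models mentioned above): each agent follows a simple local rule – adopt the majority state among itself and its neighbours on a ring – and the population-level pattern that results is emergent, in the sense that no individual rule mentions it.

    import random

    class Agent:
        """A minimal agent holding a single binary state (e.g. an opinion)."""
        def __init__(self, state):
            self.state = state  # 0 or 1

    def step(agents):
        """Synchronously update every agent to the majority state among
        itself and its two neighbours on a ring."""
        n = len(agents)
        new_states = []
        for i, agent in enumerate(agents):
            votes = [agents[(i - 1) % n].state, agent.state, agents[(i + 1) % n].state]
            new_states.append(1 if sum(votes) >= 2 else 0)
        for agent, s in zip(agents, new_states):
            agent.state = s

    def run(n_agents=100, n_steps=50, seed=42):
        rng = random.Random(seed)
        agents = [Agent(rng.randint(0, 1)) for _ in range(n_agents)]
        for _ in range(n_steps):
            step(agents)
        # An emergent, population-level observable: the fraction of agents
        # that ended up in state 1 through purely local interactions.
        return sum(a.state for a in agents) / n_agents

    if __name__ == "__main__":
        print(run())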

To create trust in the results of ABM simulations, assurances are needed about their correctness. For example, in the biomedical domain, simulations are used to drive in vitro experiments. Such experiments are costly and carry risks, so it is important that decisions about which experiments to undertake are made robustly. Without clear correctness assurances, simulation models often face scepticism as to their ability to support such decisions. This calls for a systematic approach to the validation and verification of ABMs that can be clearly documented as part of an overall fitness-for-purpose argument: an argument that explains why the ABM is a sufficiently accurate representation of reality, why its results should be trusted, and why they can form a meaningful basis for expert conclusions and real-world interventions.

Using ABMs to understand real-world phenomena requires experimentation and exploration with the simulation software. To be successful, this requires a systematic approach that is well documented as exploration progresses. At the same time, every time a simulation produces an unexpected result, there is a need to understand how that result came about: is it indicative of an interesting emergent behaviour – and thus a result that should be explored further, possibly through real-world experiments – or is it indicative of an error or artefact in the agent-based model design or the simulation infrastructure? This is further complicated where aspects of the system behaviour go beyond simple rule systems and include heuristics and other forms of higher-level decision making.

This PhD project will tackle these challenges from the perspective of process and software tool support, drawing on the following areas of prior research:

1. Model-driven engineering (MDE) and domain-specific modelling languages (DSMLs) provide a framework for structuring the systematic experimentation and exploration of ABMs, for documenting such experimentation and exploration as a fitness-for-purpose argument [1], and for defining a library of commonly used experimentation and exploration queries;

2. Tools for systematic statistical analysis of ABMs (e.g., SPARTAN [2] and MC2MABS [3]) enable the structured, statistically sound analysis of results produced by ABMs, but are difficult to integrate into a well-documented systematic approach and do not, by themselves, provide explanations for unexpected results (a minimal illustration of this kind of analysis follows this list);

3. Some approaches to explainable AI (e.g., inverted optimisation [4]) search for changes to a model that make unexpected behaviours disappear or expected behaviours appear – this has the potential to provide explanations that pinpoint the source of a behaviour (see the second sketch after this list).
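
To illustrate the kind of analysis meant in point 2, the sketch below hand-rolls a Vargha–Delaney A-test comparison of an output measure across two groups of replicate runs of a toy stochastic “model”. The A-test is the effect-size measure used in SPARTAN-style consistency analysis, but the code is a plain-Python illustration, not SPARTAN’s actual API, and the model and its parameter are invented for the example.

    import random

    def vargha_delaney_a(sample_a, sample_b):
        """Vargha-Delaney A statistic: probability that a value drawn from
        sample_a exceeds one drawn from sample_b (0.5 means no difference)."""
        greater = sum(1 for x in sample_a for y in sample_b if x > y)
        equal = sum(1 for x in sample_a for y in sample_b if x == y)
        return (greater + 0.5 * equal) / (len(sample_a) * len(sample_b))

    def run_replicate(event_rate, seed):
        """Toy stochastic stand-in for an ABM run: returns one output measure.
        (The parameter name is hypothetical.)"""
        rng = random.Random(seed)
        return sum(rng.random() < event_rate for _ in range(1000))

    def replicate_group(event_rate, n_runs, seed0):
        return [run_replicate(event_rate, seed0 + i) for i in range(n_runs)]

    if __name__ == "__main__":
        baseline = replicate_group(0.10, n_runs=30, seed0=0)
        perturbed = replicate_group(0.12, n_runs=30, seed0=1000)
        a = vargha_delaney_a(perturbed, baseline)
        # Conventional thresholds: |A - 0.5| >= 0.21 is a "large" effect;
        # smaller deviations may just be aleatory (run-to-run) noise.
        effect = "large" if abs(a - 0.5) >= 0.21 else "small/medium"
        print(f"A = {a:.3f} ({effect} effect)")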
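
Point 3 can likewise be illustrated with a minimal sketch: given a behaviour an analyst regards as unexpected, search for the smallest change to a model parameter that makes the behaviour disappear, and report that change as (part of) an explanation. The toy model, its parameter and the brute-force search below are all invented for illustration; inverse-optimisation methods such as [4] address the same question far more efficiently and for richer model classes. In this project, the same idea would be applied to the structured experimentation and exploration framework of point 1, rather than to a single numeric parameter.

    import random

    def mean_output(param, n_runs=20, seed0=0):
        """Toy stochastic stand-in for an ABM: mean output over replicate runs.
        The output 'blows up' once the (hypothetical) parameter exceeds 0.6."""
        outputs = []
        for i in range(n_runs):
            rng = random.Random(seed0 + i)
            boost = 5.0 if param > 0.6 else 0.0
            outputs.append(param * 10 + boost + rng.gauss(0, 0.5))
        return sum(outputs) / n_runs

    def explain_by_inversion(observed_param, is_unexpected, step=0.05):
        """Return the smallest parameter change (within [0, 1]) under which
        the unexpected behaviour, as judged by the predicate, disappears."""
        candidates = [round(observed_param + sign * k * step, 2)
                      for k in range(1, 21) for sign in (-1, 1)]
        feasible = [p for p in candidates
                    if 0.0 <= p <= 1.0 and not is_unexpected(mean_output(p))]
        return min(feasible, key=lambda p: abs(p - observed_param)) if feasible else None

    if __name__ == "__main__":
        def is_unexpected(output):
            return output > 10.0  # the analyst's definition of "surprising"

        observed = 0.65
        print("mean output at observed parameter:", round(mean_output(observed), 2))
        print("nearest parameter value removing the behaviour:",
              explain_by_inversion(observed, is_unexpected))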

The hypothesis of this PhD project is, thus, that a more systematic approach to experimentation and exploration using ABMs is possible by combining techniques from the three areas above, and that such an approach will make ABMs better understood, safer, and more trusted as a basis for decision making. We have access to case studies from computational biology and organisational management, on which the approach developed can be evaluated.

[1] Steffen Zschaler and Fiona Polack: A Family of Languages for Trustworthy Agent-Based Simulation. 13th International Conference on Software Language Engineering, 2020.
[2] Alden K, Read M, Timmis J, Andrews PS, Veiga-Fernandes H, et al. (2013) Spartan: A Comprehensive Tool for Understanding Uncertainty in Simulations of Biological Systems. PLOS Computational Biology 9(2): e1002916. https://doi.org/10.1371/journal.pcbi.1002916
[3] Herd, B., Miles, S., McBurney, P., Luck, M. (2014). Verification and Validation of Agent-Based Simulations Using Approximate Model Checking. In: Alam, S., Parunak, H. (eds) Multi-Agent-Based Simulation XIV. MABS 2013. Lecture Notes in Computer Science, vol 8235. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-54783-6_4
[4] Martim Brandão, Amanda Coles, and Daniele Magazzeni. Explaining Path Plan Optimality: Fast Explanation Methods for Navigation Meshes Using Full and Incremental Inverse Optimization. Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS), 31(1):56–64, May 2021.

Project ID

STAI-CDT-2023-KCL-11

Supervisor

Dr Steffen Zschaler (www.steffen-zschaler.de)

Dr Katie Bentley

Category

Argumentation, Verification