The main objective of this project is to develop AI techniques to analyse behaviour recorded in past Stop and Search (S&S) operations. The AI system will be used to inform future operations, avoid unnecessary escalations that jeopardise the safety of officers (and of the people being searched), and increase trust in S&S operations.
To this end, we will develop our techniques based on the causal analysis framework built on the causal theories elaborated by Halpern and Pearl, and later extensions by Mousavi. Our theories will build upon behavioural models provided by the Mayor’s Office for Policing And Crime (MOPAC). These models are the result of coding a rich dataset of videos of past Stop and Search operations by the Metropolitan Police; coding enables analysis and extraction of rules of behaviour to inform model development. We will develop intuitive visualisations of the identified rules and causal relations, explaining the role of different events in a potential (de-)escalation of a Stop and Search operation. We will quantify the responsibility of these events, building upon the theory of responsibility developed by Chockler and Halpern [4,5].
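To illustrate the kind of quantification involved, the Chockler–Halpern degree of responsibility assigns an event responsibility 1/(k+1), where k is the size of the smallest contingency (set of other changes) under which flipping that event alone would change the outcome. The sketch below is a minimal, self-contained illustration on a toy majority-vote model, not on MOPAC's behavioural models; the function names and the brute-force search are our own illustrative assumptions.

```python
from itertools import combinations

def majority(votes):
    # Outcome is True iff a strict majority of binary votes are 1.
    return sum(votes) > len(votes) // 2

def responsibility(votes, i):
    """Degree of responsibility of voter i for the outcome, in the
    spirit of Chockler & Halpern: 1/(k+1), where k is the size of the
    smallest set of OTHER voters whose votes can be flipped (without
    changing the outcome) so that flipping voter i's vote then changes
    the outcome. Returns 0.0 if no such contingency exists."""
    outcome = majority(votes)
    others = [j for j in range(len(votes)) if j != i]
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            v = list(votes)
            for j in subset:
                v[j] = 1 - v[j]
            if majority(v) != outcome:
                continue  # contingency itself must preserve the outcome
            v[i] = 1 - v[i]
            if majority(v) != outcome:
                return 1 / (k + 1)  # flipping i is now decisive
    return 0.0
```

For example, in a 6–5 vote each winning-side voter is decisive on their own (responsibility 1), while in an 11–0 vote five other votes must first be flipped before any single voter becomes decisive (responsibility 1/6). In the project, the same principle would be applied to events in an S&S interaction rather than votes, with the behavioural model supplying the causal structure.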
The theories and tools will initially be developed and evaluated for the specific abstractions (e.g., events and their durations) of the Stop and Search domain. However, we will ensure that they are transferable to other systems, e.g., using the case studies from the TAS Hub and Verifiability Node, securing the transformative value of the research for the broader domain of Safe and Trusted AI. In particular, our project will lead to a set of novel techniques for detecting causation and generating explanations. This will both enhance the general understanding of explainability and lead to new algorithms for generating explanations for AI models.