Project ID: STAI-CDT-2023-KCL-19
Themes: Logic, Verification
Supervisor: Mohammad Abdulaziz
Matching is a fundamental problem in combinatorial optimisation with many applications in AI, such as belief propagation [10], multi-agent resource allocation algorithms [6], and constraint solving [16], and in...
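To make the matching problem concrete, here is a minimal, self-contained Python sketch of maximum bipartite matching via augmenting paths (Kuhn's algorithm); the example graph and all names are illustrative only and are not taken from the project description.

```python
# Minimal sketch: maximum bipartite matching via augmenting paths (Kuhn's algorithm).
# The adjacency map is a hypothetical instance, e.g. agents on the left matched to
# resources on the right.

def max_bipartite_matching(adjacency):
    """Return a dict mapping each matched right vertex to its left partner."""
    match_of_right = {}

    def try_augment(left, visited):
        # Search for an augmenting path starting from `left`.
        for right in adjacency[left]:
            if right in visited:
                continue
            visited.add(right)
            # `right` is free, or its current partner can be re-matched elsewhere.
            if right not in match_of_right or try_augment(match_of_right[right], visited):
                match_of_right[right] = left
                return True
        return False

    for left in adjacency:
        try_augment(left, set())
    return match_of_right


if __name__ == "__main__":
    # Hypothetical instance: three agents, three resources.
    graph = {"a1": ["r1", "r2"], "a2": ["r1"], "a3": ["r2", "r3"]}
    print(max_bipartite_matching(graph))  # a maximum matching of size 3
```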
Project ID: STAI-CDT-2023-IC-2
Themes: Verification
Supervisor: Tolga Birdal
As part of model-based approaches to safe and trusted AI, this project aims to shed light on the phenomenon of robust generalisation as a trade-off in geometric deep networks. Unfortunately, classical learning theory...
Project ID: STAI-CDT-2023-KCL-24
Themes: Logic, Verification
Supervisor: Hana Chockler
We have developed a framework for causal explanations of image classifiers based on the principled approach of actual causality [1] and responsibility [2], the latter pioneered by Dr Chockler. Our framework has already resulted in a...
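For readers new to explanation methods for image classifiers, the sketch below shows the generic idea of masking input regions and measuring the effect on the classifier's confidence. It is emphatically not the causality- and responsibility-based framework described above, only a common baseline; the `classify` stub and all names are hypothetical.

```python
# Generic illustration only: occlusion-based sensitivity for an image classifier.
# NOT the actual-causality/responsibility framework referred to above; it merely
# shows the idea of masking regions and observing the change in confidence.

import numpy as np

def classify(image: np.ndarray) -> float:
    """Hypothetical classifier stub: 'confidence' rewards brightness in the centre,
    standing in for a real model."""
    h, w = image.shape
    return float(image[h // 4: 3 * h // 4, w // 4: 3 * w // 4].mean())

def occlusion_map(image: np.ndarray, patch: int = 4) -> np.ndarray:
    """Score each patch by how much occluding it lowers the classifier's confidence."""
    base = classify(image)
    scores = np.zeros_like(image, dtype=float)
    for i in range(0, image.shape[0], patch):
        for j in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # mask one region
            scores[i:i + patch, j:j + patch] = base - classify(occluded)
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((16, 16))
    print(occlusion_map(img, patch=4).round(3))  # higher values = more influential regions
```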
Project ID: STAI-CDT-2023-KCL-23
Themes: Logic, Verification
Supervisor: Hana Chockler
We have developed a framework for causal explanations of image classifiers based on the principled approach of actual causality [1] and responsibility [2], the latter pioneered by Dr Chockler. Our framework has already resulted in a...
Project ID: STAI-CDT-2023-KCL-20
Themes: AI Planning, Logic, Verification
Supervisor: David Watson
The current state of the art in generative modelling is dominated by neural networks. Despite their impressive performance on many benchmark tasks, these algorithms do not provide tractable inference for common and...
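As a toy illustration of the kind of queries at stake, the sketch below computes exact marginal and conditional probabilities by enumerating a small, explicitly specified joint distribution; the variables and numbers are made up for illustration and have no connection to the project's models.

```python
# Toy illustration of tractable probabilistic inference: exact marginals and
# conditionals computed by enumerating a small joint distribution.

from itertools import product

p_rain = {True: 0.2, False: 0.8}
p_sprinkler = {True: 0.3, False: 0.7}

def p_wet(rain, sprinkler):
    if rain and sprinkler:
        return 0.99
    if rain or sprinkler:
        return 0.9
    return 0.05

# Joint distribution P(Rain, Sprinkler, WetGrass) over binary variables.
joint = {}
for r, s, w in product([True, False], repeat=3):
    pw = p_wet(r, s)
    joint[(r, s, w)] = p_rain[r] * p_sprinkler[s] * (pw if w else 1 - pw)

def marginal(wet=None, rain=None, sprinkler=None):
    """Sum the joint over all assignments consistent with the given evidence."""
    total = 0.0
    for (r, s, w), p in joint.items():
        if (rain is None or r == rain) and (sprinkler is None or s == sprinkler) \
                and (wet is None or w == wet):
            total += p
    return total

if __name__ == "__main__":
    print("P(WetGrass=True) =", round(marginal(wet=True), 4))
    # Conditional query: P(Rain=True | WetGrass=True) = P(Rain, Wet) / P(Wet)
    print("P(Rain=True | WetGrass=True) =",
          round(marginal(wet=True, rain=True) / marginal(wet=True), 4))
```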
Project ID: STAI-CDT-2023-IC-1
Themes: Verification
Supervisor: Dario Paccagnan
In recent years, AI has achieved tremendous success in many complex decision-making tasks. However, when these systems are deployed in the real world, safety concerns often severely restrict their adoption. One concrete...
Project ID: STAI-CDT-2023-KCL-10
Themes: Norms, Verification
Supervisor: Jie Zhang, Mohammad Mousavi
Background: Learning-based conversational agents can generate conversations that violate basic logical rules and common sense, which can seriously affect user experience and lead to mistrust and frustration. To create...
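By way of illustration, here is a minimal sketch of one kind of logical-consistency check: ask a yes/no question and its negated form, and flag contradictory answers. The `agent_reply` stub and the test questions are hypothetical stand-ins, not material from the project.

```python
# Minimal sketch of a logical-consistency check for a conversational agent:
# an agent that answers "yes" to both a question and its negation is flagged.
# `agent_reply` is a hypothetical stub standing in for a learning-based agent.

def agent_reply(question: str) -> str:
    """Hypothetical agent: canned answers, deliberately inconsistent on one pair."""
    canned = {
        "Is Paris the capital of France?": "yes",
        "Is Paris not the capital of France?": "no",
        "Is 7 a prime number?": "yes",
        "Is 7 not a prime number?": "yes",   # contradiction the check should catch
    }
    return canned.get(question, "unknown")

def check_consistency(question: str, negated_question: str) -> bool:
    """Return True if the two answers are logically compatible."""
    a, b = agent_reply(question), agent_reply(negated_question)
    return not (a == "yes" and b == "yes") and not (a == "no" and b == "no")

if __name__ == "__main__":
    pairs = [
        ("Is Paris the capital of France?", "Is Paris not the capital of France?"),
        ("Is 7 a prime number?", "Is 7 not a prime number?"),
    ]
    for q, nq in pairs:
        status = "consistent" if check_consistency(q, nq) else "INCONSISTENT"
        print(f"{status}: {q!r} / {nq!r}")
```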
Project ID: STAI-CDT-2023-KCL-11
Themes: Argumentation, Verification
Supervisor: Dr Steffen Zschaler, Dr Katie Bentley
Agent-based models (ABMs) are an AI technique to help improve our understanding of complex real-world interactions and their “emergent behaviours”. ABMs are used to develop and test theories or to explore how interventions...
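For readers unfamiliar with ABMs, a minimal sketch follows: agents applying a simple local rule produce an emergent population-level curve. The infection-style rule and all parameters are illustrative only and do not describe the project's models.

```python
# Minimal agent-based model: each agent is either susceptible or "active"
# (e.g. informed/infected); at each step, susceptible agents meet a few random
# partners and may become active. The resulting population-level curve is an
# example of emergent behaviour. All rules and parameters are illustrative.

import random

def run_abm(n_agents=200, n_steps=30, meetings=5, p_spread=0.05, seed=1):
    """Run the model and return the number of active agents after each step."""
    random.seed(seed)
    active = [False] * n_agents
    active[0] = True                       # a single initially active agent
    history = []
    for _ in range(n_steps):
        new_active = active[:]
        for i in range(n_agents):
            if active[i]:
                continue
            # each susceptible agent meets a few random partners this step
            for partner in random.sample(range(n_agents), meetings):
                if active[partner] and random.random() < p_spread:
                    new_active[i] = True
                    break
        active = new_active
        history.append(sum(active))
    return history

if __name__ == "__main__":
    print(run_abm())  # emergent S-shaped curve of active agents over time
```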
Project ID: STAI-CDT-2023-KCL-4
Themes: Verification
Supervisor: Yali Du
Reinforcement learning (RL) has become a new paradigm for solving complex decision-making problems. However, it raises numerous safety concerns in real-world decision making, such as unsafe exploration, unrealistic reward...
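As one concrete illustration of "unsafe exploration", the sketch below wraps a toy environment with a safety shield that vetoes actions leading to a known-unsafe state. The environment, shield, and random policy are hypothetical stand-ins, not the project's method.

```python
# Minimal sketch of a "safety shield" around exploration: a random policy acts in
# a toy 1-D corridor, and the shield vetoes any action that would enter a known
# unsafe cell. Environment, shield, and policy are illustrative stand-ins only.

import random

UNSAFE = {0}                 # cell 0 is a cliff that must never be entered
GOAL = 9

def step(state, action):
    """Toy dynamics: action -1 moves left, +1 moves right on cells 0..9."""
    return max(0, min(9, state + action))

def shielded_action(state, proposed):
    """Replace the proposed action if it would lead to an unsafe cell."""
    if step(state, proposed) in UNSAFE:
        return -proposed      # fall back to the opposite move (a very crude shield)
    return proposed

def run_episode(use_shield=True, seed=0):
    random.seed(seed)
    state, visited_unsafe = 3, False
    for _ in range(50):
        action = random.choice([-1, 1])           # purely exploratory policy
        if use_shield:
            action = shielded_action(state, action)
        state = step(state, action)
        visited_unsafe |= state in UNSAFE
        if state == GOAL:
            break
    return state, visited_unsafe

if __name__ == "__main__":
    print("with shield   :", run_episode(use_shield=True))
    print("without shield:", run_episode(use_shield=False))
```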
Project ID: STAI-CDT-2022-KCL-10
Themes: AI Planning, Logic, Verification
Supervisor: Mohammad Abdulaziz
Constructing a world-model is a fundamental part of model-based AI, e.g. planning. Usually, such a model is constructed by a human modeller and it should capture the modeller’s intuitive understanding of the world dynamics...
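To fix intuitions about what a hand-constructed world model looks like in planning, here is a minimal Python sketch: states are sets of facts, actions have preconditions, add effects and delete effects, and a breadth-first search finds a plan. The tiny domain and all names are illustrative, not the project's formalism.

```python
# Minimal sketch of a hand-written world model for planning, plus a breadth-first
# planner over it. The domain ("walk to the door, open it, enter the room") is a
# made-up illustration.

from collections import deque

# Each action: (name, preconditions, add effects, delete effects)
ACTIONS = [
    ("walk_to_door", {"at_start"},              {"at_door"},   {"at_start"}),
    ("open_door",    {"at_door", "door_closed"}, {"door_open"}, {"door_closed"}),
    ("enter_room",   {"at_door", "door_open"},   {"in_room"},   {"at_door"}),
]

def successors(state):
    for name, pre, add, delete in ACTIONS:
        if pre <= state:                         # preconditions hold in this state
            yield name, frozenset((state - delete) | add)

def plan(initial, goal):
    """Breadth-first search over the model; returns a list of action names."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, actions = frontier.popleft()
        if goal <= state:
            return actions
        for name, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [name]))
    return None

if __name__ == "__main__":
    print(plan({"at_start", "door_closed"}, {"in_room"}))
    # -> ['walk_to_door', 'open_door', 'enter_room']
```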
Project ID: STAI-CDT-2022-KCL-9
Themes: Reasoning, Verification
Supervisor: Mohammad Mousavi, Rita Borgo
The main objective of this project is to develop AI techniques to analyse the behaviour recorded in past Stop and Search (S&S) operations. The AI system will be used to inform future operations, avoid unnecessary...
Project ID: STAI-CDT-2022-IC-5
Themes: Verification
Supervisor: Nicolas Wu, Matthew Williams
Deep learning has shown huge potential for delivering AI with real-world impact. Most current projects are built with PyTorch, TensorFlow, or similar platforms. These tend to be written in languages where the...