Project ID: STAI-CDT-2023-KCL-26
Themes: Logic, Verification
Supervisor: David Watson
Causal reasoning is essential to decision-making in real-world problems. However, observational data is rarely sufficient to infer causal relationships or estimate treatment effects due to confounding signals. Pearl (2009)...
Project ID: STAI-CDT-2023-KCL-19
Themes: Logic, Verification
Supervisor: Mohammad Abdulaziz
Matching is a fundamental problem in combinatorial optimisation with many applications in AI, such as belief propagation [10], multi-agent resource allocation algorithms [6], and constraint solving [16], and in...
Project ID: STAI-CDT-2023-IC-4
Themes: Logic, Norms, Reasoning
Supervisor: Pedro Mediano
Neurosymbolic systems seek to combine the strengths of two major classes of AI algorithms: neural networks, able to recognise patterns in unstructured data, and logic-based systems, capable of powerful reasoning. One of the...
Project ID: STAI-CDT-2023-KCL-25
Themes: Logic, Norms, Reasoning
Supervisor: Yulan He
Recent efforts in Natural Language Understanding (NLU) have largely been exemplified by tasks such as natural language inference, reading comprehension and question answering. We have witnessed a shift of paradigms in NLP...
Project ID: STAI-CDT-2023-KCL-18
Themes: Logic
Supervisor: Nicola Paoletti
Temporal logic (TL) is arguably the primary language for formal specification of, and reasoning about, system correctness and safety. It enables the specification and verification of properties such as “will the agent...
Project ID: STAI-CDT-2023-KCL-24
Themes: Logic, Verification
Supervisor: Hana Chockler
We have developed a framework for causal explanations of image classifiers based on the principled approach of actual causality [1] and responsibility [2], the latter pioneered by Dr Chockler. Our framework has already resulted in a...
Project ID: STAI-CDT-2023-KCL-23
Themes: Logic, Verification
Supervisor: Hana Chockler
We have developed a framework for causal explanations of image classifiers based on the principled approach of actual causality [1] and responsibility [2], the latter pioneered by Dr Chockler. Our framework has already resulted in a...
Project ID: STAI-CDT-2023-KCL-22
Themes: Logic
Supervisor: Sanjay Modgil, Odinaldo Rodrigues
An important long-term concern regarding the ethical impact of AI is the so-called ‘value alignment problem’; that is, how to ensure that the decisions of autonomous AIs are aligned with human values. Addressing...
Project ID: STAI-CDT-2023-KCL-21
Themes: AI Provenance, Logic
Supervisor: Albert Meroño Peñuela, Luc Moreau
Our modern lives are increasingly governed by ubiquitous AI systems and an abundance of digital data. More and more products and services are providing us with better tools and recommendations for our professional,...
Project ID: STAI-CDT-2023-KCL-20
Themes: AI Planning, Logic, Verification
Supervisor: David Watson
The current state of the art in generative modelling is dominated by neural networks. Despite their impressive performance on many benchmark tasks, these algorithms do not provide tractable inference for common and...
Project ID: STAI-CDT-2023-KCL-17
Themes: Logic, Reasoning
Supervisor: Fabio Pierazzi
Data-driven approaches have proven powerful in a variety of domains, from computer vision to NLP. However, in some domains, such as attack detection in security, the arms race between...
Project ID: STAI-CDT-2022-KCL-10
Themes: AI Planning, Logic, Verification
Supervisor: Mohammad Abdulaziz
Constructing a world-model is a fundamental part of model-based AI, e.g. planning. Usually, such a model is constructed by a human modeller, and it should capture the modeller’s intuitive understanding of the world dynamics...