  • Neurosymbolic approaches to causal representation learning

    Project ID: STAI-CDT-2023-KCL-26
    Themes: Logic, Verification
    Supervisor: David Watson

    Causal reasoning is essential to decision-making in real-world problems. However, observational data is rarely sufficient to infer causal relationships or estimate treatment effects due to confounding signals. Pearl (2009)...
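As a hedged illustration of the confounding problem mentioned above (an invented toy simulation, not part of the project): when an unobserved variable Z drives both a treatment X and an outcome Y, observational data shows a strong X–Y association even though X has no causal effect on Y.

```python
import random

# Hypothetical toy example: Z confounds X and Y; X has ZERO causal effect on Y,
# yet the observed X-Y correlation is strongly positive.
random.seed(0)
data = []
for _ in range(10_000):
    z = random.gauss(0, 1)          # unobserved confounder
    x = z + random.gauss(0, 1)      # treatment influenced by z
    y = 2 * z + random.gauss(0, 1)  # outcome influenced only by z, not by x
    data.append((x, y))

def corr(pairs):
    """Pearson correlation of a list of (a, b) pairs."""
    n = len(pairs)
    ma = sum(a for a, _ in pairs) / n
    mb = sum(b for _, b in pairs) / n
    cov = sum((a - ma) * (b - mb) for a, b in pairs) / n
    va = sum((a - ma) ** 2 for a, _ in pairs) / n
    vb = sum((b - mb) ** 2 for _, b in pairs) / n
    return cov / (va * vb) ** 0.5

r = corr(data)
print(round(r, 2))  # strongly positive despite no causal link from x to y
```

Here the theoretical correlation is 2/√10 ≈ 0.63, all of it induced by Z, which is why naive association estimates cannot substitute for causal inference.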

  • Verification of Matching Algorithms for Social Welfare

    Project ID: STAI-CDT-2023-KCL-19
    Themes: Logic, Verification
    Supervisor: Mohammad Abdulaziz

    Matching is a fundamental problem in combinatorial optimisation with multiple applications in AI, such as belief propagation [10], multi-agent resource allocation algorithms [6], and constraint solving [16], and in...
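As a small hedged sketch (an illustrative toy, not the algorithms studied in the project): maximum bipartite matching, the core problem behind many resource-allocation settings, can be computed with the classic augmenting-path (Kuhn's) algorithm.

```python
def max_bipartite_matching(adj, n_right):
    """Size of a maximum matching in a bipartite graph via augmenting paths.

    adj[u] lists the right-side vertices adjacent to left vertex u.
    """
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if match_right[v] == -1 or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))

# Example: 3 agents, 3 resources; a perfect matching exists.
print(max_bipartite_matching([[0, 1], [0], [1, 2]], 3))  # -> 3
```

Verifying such algorithms formally (e.g. proving the augmenting-path invariant) is exactly the kind of task an interactive theorem prover is suited to.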

  • Extracting interpretable symbolic representations from neural networks using information theory and causal abstraction

    Project ID: STAI-CDT-2023-IC-4
    Themes: Logic, Norms, Reasoning
    Supervisor: Pedro Mediano

    Neurosymbolic systems seek to combine the strengths of two major classes of AI algorithms: neural networks, able to recognise patterns in unstructured data, and logic-based systems, capable of powerful reasoning. One of the...

  • Improving Robustness of Pre-Trained Language Models

    Project ID: STAI-CDT-2023-KCL-25
    Themes: Logic, Norms, Reasoning
    Supervisor: Yulan He

    Recent efforts in Natural Language Understanding (NLU) have largely been exemplified in tasks such as natural language inference, reading comprehension and question answering. We have witnessed the shift of paradigms in NLP...

  • Causal Temporal Logic

    Project ID: STAI-CDT-2023-KCL-18
    Themes: Logic
    Supervisor: Nicola Paoletti

    Temporal logic (TL) is arguably the primary language for formal specification and reasoning about system correctness and safety. It enables the specification and verification of properties such as “will the agent...
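As a hedged illustration of the kind of property TL can express (the formulas below are invented examples, not taken from the project description), in linear temporal logic a safety requirement and a liveness requirement might read:

```latex
% Hypothetical LTL formulas; atomic propositions unsafe/request/grant are
% illustrative names, not from the project.
\mathbf{G}\,\neg\mathit{unsafe}
\qquad\text{(the agent never enters an unsafe state)}
\\[4pt]
\mathbf{G}\,(\mathit{request} \rightarrow \mathbf{F}\,\mathit{grant})
\qquad\text{(every request is eventually granted)}
```

Here $\mathbf{G}$ ("globally") and $\mathbf{F}$ ("finally") are the standard always/eventually modalities.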

  • Explanations of Medical Images

    Project ID: STAI-CDT-2023-KCL-24
    Themes: Logic, Verification
    Supervisor: Hana Chockler

    We developed a framework for causal explanations of image classifiers based on the principled approach of actual causality [1] and responsibility [2], the latter pioneered by Dr Chockler. Our framework already resulted in a...

  • Multiple Explanations of AI image classifiers

    Project ID: STAI-CDT-2023-KCL-23
    Themes: Logic, Verification
    Supervisor: Hana Chockler

    We developed a framework for causal explanations of image classifiers based on the principled approach of actual causality [1] and responsibility [2], the latter pioneered by Dr Chockler. Our framework already resulted in a...

  • Integrating Sub-symbolic and Symbolic Reasoning for Value Alignment

    Project ID: STAI-CDT-2023-KCL-22
    Themes: Logic
    Supervisor: Sanjay Modgil, Odinaldo Rodrigues

    An important long-term concern regarding the ethical impact of AI is the so-called ‘value alignment problem’; that is, how to ensure that the decisions of autonomous AIs are aligned with human values. Addressing...

  • Learning and deploying safe and trustworthy models of data provenance

    Project ID: STAI-CDT-2023-KCL-21
    Themes: AI Provenance, Logic
    Supervisor: Albert Meroño Peñuela, Luc Moreau

    Our modern lives are increasingly governed by ubiquitous AI systems and an abundance of digital data. More and more products and services are providing us with better tools and recommendations for our professional,...

  • Generative modelling with neural probabilistic circuits

    Project ID: STAI-CDT-2023-KCL-20
    Themes: AI Planning, Logic, Verification
    Supervisor: David Watson

    The current state of the art in generative modelling is dominated by neural networks. Despite their impressive performance on many benchmark tasks, these algorithms do not provide tractable inference for common and...

  • Understanding Distribution Shift with Logic-based Reasoning and Verification

    Project ID: STAI-CDT-2023-KCL-17
    Themes: Logic, Reasoning
    Supervisor: Fabio Pierazzi

    Data-driven approaches have been proven powerful in a variety of domains, from computer vision to NLP. However, in some domains – such as attack detection in security – the arms race between...

  • Formal Reasoning about Golog Programs

    Project ID: STAI-CDT-2022-KCL-10
    Themes: AI Planning, Logic, Verification
    Supervisor: Mohammad Abdulaziz

    Constructing a world-model is a fundamental part of model-based AI, e.g. planning. Usually, such a model is constructed by a human modeller, and it should capture the modeller’s intuitive understanding of the world dynamics...
