  • Verification of Matching Algorithms for Social Welfare

    Project ID: STAI-CDT-2023-KCL-19
    Themes: Logic, Verification
    Supervisor: Mohammad Abdulaziz

    Matching is a fundamental problem in combinatorial optimisation with many applications in AI, such as belief propagation [10], multi-agent resource allocation [6], and constraint solving [16], and in...


  • Verifying Geometric Learning Machines for Generalisation, Robustness and Compression

    Project ID: STAI-CDT-2023-IC-2
    Themes: Verification
    Supervisor: Tolga Birdal

    As part of model-based approaches to safe and trusted AI, this project aims to shed light on the phenomenon of robust generalisation as a trade-off in geometric deep networks. Unfortunately, classical learning theory...


  • Explanations of Medical Images

    Project ID: STAI-CDT-2023-KCL-24
    Themes: Logic, Verification
    Supervisor: Hana Chockler

    We developed a framework for causal explanations of image classifiers based on the principled approach of actual causality [1] and responsibility [2], the latter pioneered by Dr Chockler. Our framework has already resulted in a...


  • Multiple Explanations of AI image classifiers

    Project ID: STAI-CDT-2023-KCL-23
    Themes: Logic, Verification
    Supervisor: Hana Chockler

    We developed a framework for causal explanations of image classifiers based on the principled approach of actual causality [1] and responsibility [2], the latter pioneered by Dr Chockler. Our framework has already resulted in a...


  • Generative modelling with neural probabilistic circuits

    Project ID: STAI-CDT-2023-KCL-20
    Themes: AI Planning, Logic, Verification
    Supervisor: David Watson

    The current state of the art in generative modelling is dominated by neural networks. Despite their impressive performance on many benchmark tasks, these algorithms do not provide tractable inference for common and...


  • Towards Sharp Generalization Guarantees for All-data Training through Scenario Approach

    Project ID: STAI-CDT-2023-IC-1
    Themes: Verification
    Supervisor: Dario Paccagnan

    In recent years, AI has achieved tremendous success in many complex decision making tasks. However, when deploying these systems in the real world, safety concerns restrict — often severely — their adoption. One concrete...


  • Automatic Testing and Fixing Learning-based Conversational Agents with Knowledge Graphs

    Project ID: STAI-CDT-2023-KCL-10
    Themes: Norms, Verification
    Supervisor: Jie Zhang, Mohammad Mousavi

    Background: Learning-based conversational agents can generate conversations that violate basic logical rules and common sense, which can seriously affect user experience and lead to mistrust and frustration. To create...


  • Explainability of Agent-based Models as a Tool for Validation and Exploration

    Project ID: STAI-CDT-2023-KCL-11
    Themes: Argumentation, Verification
    Supervisor: Steffen Zschaler, Katie Bentley

    Agent-based models (ABMs) are an AI technique to help improve our understanding of complex real-world interactions and their “emergent behaviours”. ABMs are used to develop and test theories or to explore how interventions...


  • Safe Reinforcement Learning from Human Feedback

    Project ID: STAI-CDT-2023-KCL-4
    Themes: Verification
    Supervisor: Yali Du

    Reinforcement learning (RL) has become a new paradigm for solving complex decision-making problems. However, it presents numerous safety concerns in real-world decision making, such as unsafe exploration, unrealistic reward...


  • Formal Reasoning about Golog Programs

    Project ID: STAI-CDT-2022-KCL-10
    Themes: AI Planning, Logic, Verification
    Supervisor: Mohammad Abdulaziz

    Constructing a world-model is a fundamental part of model-based AI, e.g. planning. Usually, such a model is constructed by a human modeller, and it should capture the modeller’s intuitive understanding of the world dynamics...


  • Trusted AI for Safe Stop and Search

    Project ID: STAI-CDT-2022-KCL-9
    Themes: Reasoning, Verification
    Supervisor: Mohammad Mousavi, Rita Borgo

    The main objective of this project is to develop AI techniques to analyse the behaviour recorded in past Stop and Search (S&S) operations. The AI system will be used to inform future operations, avoid unnecessary...


  • Composable Neural Networks

    Project ID: STAI-CDT-2022-IC-5
    Themes: Verification
    Supervisor: Nicolas Wu, Matthew Williams

    Deep learning has shown huge potential for delivering AI with real-world impact. Most current projects are built in PyTorch, TensorFlow, or similar frameworks. These tend to be written in languages where the...
