  • Ensuring Trustworthy AI through Verification and Validation in ML Implementations: Compilers and Libraries

    Project ID: STAI-CDT-2023-KCL-30
    Themes: Logic, Verification
    Supervisor: Dr Hector Menendez Benito, Dr Karine Even Mendoza

    Trust in machine learning is a pressing concern that has brought multiple communities together to tackle it. With the increasing use of tools such as ChatGPT and the identification of fairness issues, ensuring the...

  • Automated verification and robustification of tree-based models for safe and robust decision making

    Project ID: STAI-CDT-2023-IC-7
    Themes: Verification
    Supervisor: Prof Alessio Lomuscio

    Advances in machine learning have enabled the development of numerous applications requiring the automation of tasks, such as computer vision, that were previously thought impossible to tackle. Although the success was...

  • Reasoning about Stochastic Games of Imperfect Information

    Project ID: STAI-CDT-2023-IC-6
    Themes: Logic, Verification
    Supervisor: Dr Francesco Belardinelli

    In many games the outcome of the players’ actions is given stochastically rather than deterministically, e.g., in card games, board games with dice (Risk!), etc. However, the literature on logic-based languages for...
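
    As a minimal sketch of what a stochastic outcome means formally, the transition function below maps a state-action pair to a distribution over successor states rather than to a single successor; the Risk-style states, action, and probabilities are invented purely for illustration.

        import random

        # A stochastic transition function: the same action from the same
        # state can lead to different successors, each with a fixed
        # probability. States and numbers are hypothetical (a dice battle).
        TRANSITIONS = {
            ("attacker_3_vs_defender_2", "attack"): [
                ("attacker_3_vs_defender_1", 0.45),  # defender loses an army
                ("attacker_2_vs_defender_2", 0.55),  # attacker loses an army
            ],
        }

        def step(state, action):
            """Sample a successor state from the transition distribution."""
            successors, probs = zip(*TRANSITIONS[(state, action)])
            return random.choices(successors, weights=probs, k=1)[0]

        print(step("attacker_3_vs_defender_2", "attack"))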

  • Multi-Task Reinforcement Learning with Imagination-based Agents

    Project ID: STAI-CDT-2023-IC-5
    Themes: Logic, Verification
    Supervisor: Dr Francesco Belardinelli

    Deep Reinforcement Learning (DRL) has proved to be a powerful technique that allows autonomous agents to learn optimal behaviours (aka policies) in unknown and complex environments through models of rewards and...
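
    As a hedged illustration of the learn-from-rewards loop that DRL scales up, here is tabular Q-learning on a toy chain environment; the environment and hyperparameters are invented for the example, and deep RL replaces the table with a neural network.

        import random
        from collections import defaultdict

        # Toy chain environment: states 0..4, actions 0 = left, 1 = right,
        # reward 1.0 for reaching state 4. Invented for illustration only.
        GOAL = 4

        def env_step(state, action):
            nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
            return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

        Q = defaultdict(float)              # Q[(state, action)] -> value
        alpha, gamma, eps = 0.5, 0.9, 0.2

        def greedy(s):                      # best action, random tie-break
            if Q[(s, 0)] == Q[(s, 1)]:
                return random.randrange(2)
            return max((0, 1), key=lambda a: Q[(s, a)])

        for _ in range(300):
            s, done = 0, False
            while not done:
                a = random.randrange(2) if random.random() < eps else greedy(s)
                s2, r, done = env_step(s, a)
                # TD update: move Q(s,a) toward reward + discounted best
                # next value, the core of reward-driven policy learning
                Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
                s = s2

        print("policy (1 = right):", [greedy(s) for s in range(GOAL)])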

  • From Verification to Mitigation: Managing Critical Phase Transitions in Multi-Agent Systems

    Project ID: STAI-CDT-2023-KCL-29
    Themes: AI Planning, Verification
    Supervisor: Dr Stefanos Leonardos, Dr William Knottenbelt (Imperial College London)

    Background: With recent technological advancements, multi-agent interactions have become increasingly complex, ranging from deep learning models and powerful neural networks to blockchain-based cryptoeconomies. However, as...

  • Neurosymbolic approaches to causal representation learning

    Project ID: STAI-CDT-2023-KCL-26
    Themes: Logic, Verification
    Supervisor: David Watson

    Causal reasoning is essential to decision-making in real-world problems. However, observational data is rarely sufficient to infer causal relationships or estimate treatment effects due to confounding signals. Pearl (2009)...
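
    A small synthetic illustration of why observational data can mislead: a confounder drives both the treatment and the outcome, so the raw association overstates the causal effect, while adjusting for the confounder recovers it. All numbers below are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # Synthetic data: Z confounds both treatment X and outcome Y.
        # The true causal effect of X on Y is 1.0.
        Z = rng.normal(size=n)
        X = 2.0 * Z + rng.normal(size=n)
        Y = 1.0 * X + 2.0 * Z + rng.normal(size=n)

        # Naive regression of Y on X is biased upward by the confounder.
        naive = np.polyfit(X, Y, 1)[0]

        # Adjusting for Z (regress Y on [X, Z]) recovers the causal slope.
        A = np.column_stack([X, Z, np.ones(n)])
        adjusted = np.linalg.lstsq(A, Y, rcond=None)[0][0]

        print(f"naive slope:    {naive:.2f}")     # ~1.8 (confounded)
        print(f"adjusted slope: {adjusted:.2f}")  # ~1.0 (causal)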

  • Verification of Neuro-Symbolic Multi-Agent Systems in Uncertain Environments

    Project ID: STAI-CDT-2023-KCL-28
    Themes: Multi-agent systems, Verification
    Supervisor: Nicola Paoletti

    The field of neuro-symbolic systems is an exciting area of research that combines the power of machine learning with the rigour of symbolic reasoning. Neural systems have shown great promise in a wide range of applications,...

  • Verification of Matching Algorithms for Social Welfare

    Project ID: STAI-CDT-2023-KCL-19
    Themes: Logic, Verification
    Supervisor: Mohammad Abdulaziz

    Matching is a fundamental problem in combinatorial optimisation with multiple applications in AI, such as belief propagation [10], multi-agent resource allocation algorithms [6], and constraint solving [16], and in...
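
    As a rough sketch of the kind of algorithm such a verification effort would target, here is the classical augmenting-path method (Kuhn's algorithm) for maximum bipartite matching; the toy instance is invented, and this is not necessarily the specific algorithm the project would study.

        # Maximum bipartite matching via augmenting paths (Kuhn's algorithm).
        # adj[u] lists the right-side vertices adjacent to left vertex u.
        def max_matching(adj, n_right):
            match_right = [-1] * n_right    # right vertex -> matched left vertex

            def try_augment(u, seen):
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        # v is free, or its partner can be re-matched elsewhere
                        if match_right[v] == -1 or try_augment(match_right[v], seen):
                            match_right[v] = u
                            return True
                return False

            return sum(try_augment(u, set()) for u in range(len(adj)))

        # Toy instance: 3 agents, 3 resources (hypothetical data).
        adj = [[0, 1], [0], [1, 2]]
        print("matching size:", max_matching(adj, 3))  # -> 3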

  • Verifying Geometric Learning Machines for Generalisation, Robustness and Compression

    Project ID: STAI-CDT-2023-IC-2
    Themes: Verification
    Supervisor: Tolga Birdal

    As part of model-based approaches to safe and trusted AI, this project aims to shed light on the phenomenon of robust generalisation as a trade-off in geometric deep networks. Unfortunately, classical learning theory...

  • Explanations of Medical Images

    Project ID: STAI-CDT-2023-KCL-24
    Themes: Logic, Verification
    Supervisor: Hana Chockler

    We developed a framework for causal explanations of image classifiers based on the principled approach of actual causality [1] and responsibility [2], the latter pioneered by Dr Chockler. Our framework already resulted in a...
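
    To give a flavour of the underlying intuition only (not of the framework described here, whose definitions come from actual causality [1] and responsibility [2]), a generic occlusion test asks which image regions change the classifier's output when masked; `classify` below is a stand-in for any black-box image classifier, and the toy classifier is invented.

        import numpy as np

        def occlusion_map(image, classify, patch=8, baseline=0.0):
            """Score each patch by how much masking it drops the score.

            `classify` is any black-box function image -> score for the
            target class; this brute-force probe is an intuition pump,
            not the causality-based framework described above.
            """
            h, w = image.shape[:2]
            base_score = classify(image)
            heat = np.zeros((h // patch, w // patch))
            for i in range(0, h - patch + 1, patch):
                for j in range(0, w - patch + 1, patch):
                    masked = image.copy()
                    masked[i:i + patch, j:j + patch] = baseline
                    # large drop => the patch mattered for the prediction
                    heat[i // patch, j // patch] = base_score - classify(masked)
            return heat

        # Toy classifier: mean brightness of the centre region (hypothetical).
        img = np.zeros((32, 32)); img[12:20, 12:20] = 1.0
        print(occlusion_map(img, lambda x: x[12:20, 12:20].mean()).round(2))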

  • Multiple Explanations of AI image classifiers

    Project ID: STAI-CDT-2023-KCL-23
    Themes: Logic, Verification
    Supervisor: Hana Chockler

    We developed a framework for causal explanations of image classifiers based on the principled approach of actual causality [1] and responsibility [2], the latter pioneered by Dr Chockler. Our framework already resulted in a...

  • Generative modelling with neural probabilistic circuits

    Project ID: STAI-CDT-2023-KCL-20
    Themes: AI Planning, Logic, Verification
    Supervisor: David Watson

    The current state of the art in generative modelling is dominated by neural networks. Despite their impressive performance on many benchmark tasks, these algorithms do not provide tractable inference for common and...
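
    A hedged toy of what tractable inference means in a probabilistic circuit: marginals come from a single bottom-up evaluation, with marginalised variables set to the constant 1 at their leaves. The two-variable circuit and its weights below are invented for illustration.

        # A tiny smooth, decomposable sum-product circuit over two binary
        # variables X1, X2 (structure and weights invented):
        #   p(x1, x2) = 0.4 * p1(x1) q1(x2) + 0.6 * p2(x1) q2(x2)
        LEAVES = {
            "p1": {0: 0.8, 1: 0.2}, "q1": {0: 0.3, 1: 0.7},
            "p2": {0: 0.1, 1: 0.9}, "q2": {0: 0.5, 1: 0.5},
        }

        def leaf(name, value):
            # Marginalising a variable = replacing its leaf with 1.
            return 1.0 if value is None else LEAVES[name][value]

        def circuit(x1, x2):
            # One bottom-up pass evaluates any marginal exactly.
            return (0.4 * leaf("p1", x1) * leaf("q1", x2)
                    + 0.6 * leaf("p2", x1) * leaf("q2", x2))

        print(circuit(1, 0))     # joint p(X1=1, X2=0)
        print(circuit(1, None))  # marginal p(X1=1), no explicit sum over X2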
