  • Formal Reasoning about Golog Programs

    Project ID: STAI-CDT-2022-KCL-10
    Themes: AI Planning, Logic, Verification
    Supervisor: Mohammad Abdulaziz

    Constructing a world-model is a fundamental part of model-based AI, e.g. planning. Usually, such a model is constructed by a human modeller and it should capture the modeller’s intuitive understanding of the world dynamics...

  • Trusted AI for Safe Stop and Search

    Project ID: STAI-CDT-2022-KCL-9
    Themes: Reasoning, Verification
    Supervisor: Mohammad Mousavi, Rita Borgo

The main objective of this project is to develop AI techniques to analyse the behaviour recorded in past Stop and Search (S&S) operations. The AI system will be used to inform future operations, avoid unnecessary...

  • Composable Neural Networks

    Project ID: STAI-CDT-2022-IC-5
    Themes: Verification
    Supervisor: Nicolas Wu, Matthew Williams

Deep learning has shown huge potential for delivering AI with real-world impact. Most current projects are built with PyTorch, TensorFlow, or similar platforms. These tend to be written in languages where the...

  • Co-Evolution of Symbolic AI with Data and Specification

    Project ID: STAI-CDT-2022-KCL-6
    Themes: Verification
    Supervisor: Jan Oliver Ringert, Mohammad Mousavi

Trusted autonomous systems (TAS) rely on AI components that perform critical tasks for stakeholders who depend on the services provided by the system, e.g., self-driving cars or intelligent robotic systems. Two...

  • Causal Decentralised Finance

    Project ID: STAI-CDT-2022-KCL-3
    Themes: Logic, Norms, Verification
    Supervisor: Hana Chockler

    The goal of this project is to develop a causality-based framework for the analysis of decentralised finance (DeFi), based on the principled approach of actual causality [1] and responsibility [2], the latter pioneered by...

  • Building Abstract Representations to Check Multi-Agent Deep Reinforcement-Learning Behaviors

    Project ID: STAI-CDT-2022-IC-3
    Themes: Logic, Verification
    Supervisor: Francesco Belardinelli

Reinforcement Learning and its extension, Deep Reinforcement Learning (DRL), are Machine Learning (ML) techniques that allow autonomous agents to learn optimal behaviours (called policies) in unknown and complex...

  • Explainable Reinforcement Learning with Causality

    Project ID: STAI-CDT-2022-IC-4
    Themes: Logic, Verification
    Supervisor: Francesco Belardinelli

    Reinforcement Learning (RL) is a technique widely used to allow agents to learn behaviours based on a reward/punishment mechanism [1]. In combination with methods from deep learning, RL is currently applied in a number of...

  • Verified Multi-Agent Programming with Actor Models

    Project ID: STAI-CDT-2022-ICL-1
    Themes: Logic, Verification
    Supervisor: Prof Nobuko Yoshida

Today, most computer applications are developed as ensembles of concurrent agents (or components) that communicate via message passing across some network. Modern programming languages and toolkits provide...

  • Creating and evolving knowledge graphs at scale for explainable AI

    Project ID: STAI-CDT-2022-KCL-1
    Themes: AI Provenance, Argumentation, Verification
    Supervisor: Prof Elena Simperl

Knowledge graphs and knowledge bases are forms of symbolic knowledge representation used across AI applications. Both refer to a set of technologies that organise data for easier access, capture information about people,...

  • Neuro-Symbolic Policy Learning and Representation for Interpretable and Formally-Verifiable Reinforcement Learning

    Project ID: STAI-CDT-2021-IC-24
    Themes: AI Planning, Logic, Verification
    Supervisor: Francesco Belardinelli

The growing societal impact of AI-based systems has brought with it a set of risks and concerns [1, 2]. Indeed, unintended and harmful behaviours may emerge from the application of machine learning (ML) algorithms, including...

  • Run-time Verification for Safe and Verifiable AI

    Project ID: STAI-CDT-2021-IC-23
    Themes: AI Planning, Logic, Verification
    Supervisor: Francesco Belardinelli

The growing societal impact of AI-based systems has brought with it a set of risks and concerns [1, 2]. Indeed, unintended and harmful behaviours may emerge from the application of machine learning (ML) algorithms, including...

  • Reward Synthesis from Logical Specifications

    Project ID: STAI-CDT-2021-IC-22
    Themes: AI Planning, Logic, Verification
    Supervisor: Francesco Belardinelli

The growing societal impact of AI-based systems has brought with it a set of risks and concerns [1, 2]. Indeed, unintended and harmful behaviours may emerge from the application of machine learning (ML) algorithms, including...
