  • Neuro-Symbolic Policy Learning and Representation for Interpretable and Formally-Verifiable Reinforcement Learning

    Project ID: STAI-CDT-2021-IC-24
    Themes: AI Planning, Logic, Verification
    Supervisor: Francesco Belardinelli

    The growing societal impact of AI-based systems has brought with it a set of risks and concerns [1, 2]. Indeed, unintended and harmful behaviours may emerge from the application of machine learning (ML) algorithms, including...

  • Run-time Verification for Safe and Verifiable AI

    Project ID: STAI-CDT-2021-IC-23
    Themes: AI Planning, Logic, Verification
    Supervisor: Francesco Belardinelli

    The growing societal impact of AI-based systems has brought with it a set of risks and concerns [1, 2]. Indeed, unintended and harmful behaviours may emerge from the application of machine learning (ML) algorithms, including...

  • Reward Synthesis from Logical Specifications

    Project ID: STAI-CDT-2021-IC-22
    Themes: AI Planning, Logic, Verification
    Supervisor: Francesco Belardinelli

    The growing societal impact of AI-based systems has brought with it a set of risks and concerns [1, 2]. Indeed, unintended and harmful behaviours may emerge from the application of machine learning (ML) algorithms, including...

  • Synthesizing and revising plans for autonomous robot adaptation

    Project ID: STAI-CDT-2021-IC-20
    Themes: AI Planning, Logic, Verification
    Supervisor: Dalal Alrajeh

    AI Planning is concerned with producing plans that are guaranteed to achieve a robot’s goals, provided that the pre-specified assumptions about the environment in which it operates hold. However, no matter how detailed these...

  • Symbolic machine learning techniques for explainable AI

    Project ID: STAI-CDT-2021-KCL-15
    Themes: AI Planning, Verification
    Supervisor: Kevin Lano

    Machine learning (ML) approaches such as encoder-decoder networks and LSTMs have been successfully used for numerous tasks involving translation or prediction of information (Otter et al., 2020). However, the knowledge...

  • Neural-symbolic Reinforcement Learning

    Project ID: STAI-CDT-2021-IC-2
    Themes: AI Planning, Logic
    Supervisor: Alessandra Russo

    Recent advances in deep reinforcement learning (DRL) have allowed computer programs to beat humans at complex games like Chess and Go, years before the original projections. However, the state of the art in DRL misses out on some of the...

  • Data-Driven and Explainable Discrete Optimization for Effective Transportation in Healthcare

    Project ID: STAI-CDT-2021-KCL-3
    Themes: AI Planning, Argumentation, Scheduling
    Supervisor: Dimitrios Letsios

    This project aims to contribute to the development of safe and trusted, artificially intelligent transportation in healthcare. The London Ambulance Service (LAS) operates more than 1100 ambulances to respond to medical...

  • Enhancing Scale and Performance of Safe and Trusted Multi-Agent Planning

    Project ID: STAI-CDT-2021-IC-5
    Themes: AI Planning
    Supervisor: Wayne Luk

    Cooperative Multi-Agent Planning (MAP) is a topic in symbolic artificial intelligence (AI). In a cooperative MAP system, multiple agents collaborate to achieve a common goal. A cooperative MAP solver produces...

  • Safe Rational Interactions in Data-driven Control

    Project ID: STAI-CDT-2021-IC-8
    Themes: AI Planning, Logic, Verification
    Supervisor: Alessio Lomuscio, David Angeli

    In autonomous and multi-agent systems, players are normally assumed to be rational, cooperating or competing in groups to achieve their overall objectives. Useful methods to study the resulting interactions come from game...

  • Correct-by-construction domain-specific AI planners

    Project ID: STAI-CDT-2021-KCL-4
    Themes: AI Planning, Verification
    Supervisor: Steffen Zschaler

    When using complex algorithms to make decisions within autonomous systems, the weak link is the abstract model used by the algorithms: any errors in the model may lead to unanticipated behaviour, potentially risking...
