  • Neuro-Symbolic Policy Learning and Representation for Interpretable and Formally-Verifiable Reinforcement Learning

    Project ID: STAI-CDT-2021-IC-24
    Themes: AI Planning, Logic, Verification
    Supervisor: Francesco Belardinelli

    The growing societal impact of AI-based systems has brought with it a set of risks and concerns [1, 2]. Indeed, unintended and harmful behaviours may emerge from the application of machine learning (ML) algorithms, including...

  • Run-time Verification for Safe and Verifiable AI

    Project ID: STAI-CDT-2021-IC-23
    Themes: AI Planning, Logic, Verification
    Supervisor: Francesco Belardinelli

    The growing societal impact of AI-based systems has brought with it a set of risks and concerns [1, 2]. Indeed, unintended and harmful behaviours may emerge from the application of machine learning (ML) algorithms, including...

  • Reward Synthesis from Logical Specifications

    Project ID: STAI-CDT-2021-IC-22
    Themes: AI Planning, Logic, Verification
    Supervisor: Francesco Belardinelli

    The growing societal impact of AI-based systems has brought with it a set of risks and concerns [1, 2]. Indeed, unintended and harmful behaviours may emerge from the application of machine learning (ML) algorithms, including...

  • Specification, diagnosis and repair for deep learning systems

    Project ID: STAI-CDT-2021-IC-21
    Themes: Logic, Verification
    Supervisor: Dalal Alrajeh

    Recent times have witnessed a flurry of advancements in ML, enabling its widespread application in domains such as healthcare, security and autonomous vehicles. However, this deployment has also come at a cost, resulting in...

  • Synthesizing and revising plans for autonomous robot adaptation

    Project ID: STAI-CDT-2021-IC-20
    Themes: AI Planning, Logic, Verification
    Supervisor: Dalal Alrajeh

    AI Planning is concerned with producing plans that are guaranteed to achieve a robot’s goals, provided that the pre-specified assumptions about the environment in which it operates hold. However, no matter how detailed these...

  • Explainable Safety, Security and Trust in Human-AI Systems

    Project ID: STAI-CDT-2021-KCL-13
    Themes: AI Planning, AI Provenance, Logic, Norms, Verification
    Supervisor: Luca Vigano

    Explanations can help all the stakeholders of AI systems and Cybersystems to make better choices. In particular, they can help human users to understand and trust the choices of autonomous systems or to interact in a safe...

  • Trustful Ontology Engineering and Reasoning through Provenance

    Project ID: STAI-CDT-2021-KCL-12
    Themes: AI Provenance, Logic
    Supervisor: Albert Meroño Peñuela

    Ontologies have become fundamental AI artifacts in providing knowledge to intelligent systems. The concepts and relationships formalised in these ontologies are frequently used to semantically annotate data, helping...

  • Explainable AI by defeasible rules

    Project ID: STAI-CDT-2021-IC-17
    Themes: Argumentation, Logic
    Supervisor: Francesca Toni

    The field of explainable AI (XAI) is a particularly active area of research at the moment whose goal is to provide transparency to the decisions of traditionally more opaque machine learning techniques. Being able to assess...

  • Trustworthy AI for DNA Sequencing

    Project ID: STAI-CDT-2021-IC-16
    Themes: Logic, Reasoning
    Supervisor: Thomas Heinis

    DNA sequencing is becoming ever more important for medical applications, be it for predictive medicine or precision/personalised medicine. At the same time, DNA sequencing is starting to use AI to map signals (from the...

  • Neural-symbolic Reinforcement Learning

    Project ID: STAI-CDT-2021-IC-2
    Themes: AI Planning, Logic
    Supervisor: Alessandra Russo

    Recent advances in deep reinforcement learning (DRL) have allowed computer programs to beat humans at complex games like Chess or Go years before the original projections. However, the SOTA in DRL misses out on some of the...

  • Verifying Safety and Reliability of Robotic Swarms

    Project ID: STAI-CDT-2021-IC-6
    Themes: Logic, Verification
    Supervisor: Alessio Lomuscio

    The effective development and deployment of single-robot systems is known to be increasingly problematic in a variety of application domains including search and rescue, remote exploration, de-mining, etc. These and other...

  • Verification of AI-based perception systems

    Project ID: STAI-CDT-2021-IC-7
    Themes: Logic, Verification
    Supervisor: Alessio Lomuscio

    State-of-the-art perception systems, including those based on Lidar or cameras, are increasingly being used in a range of critical applications including security and autonomous vehicles. While the present deep...
