  • Hypothesis Knowledge Graphs

    Project ID: STAI-CDT-2021-KCL-7
    Themes: AI Provenance, Logic, Reasoning
    Supervisor: Albert Meroño Peñuela

    Generating hypotheses is a fundamental step in the scientific method, but it is also increasingly challenging due to the ever-growing volume of observational data from which hypotheses are derived. Papers are published at an unmanageable...


  • Explainable AI by defeasible rules

    Project ID: STAI-CDT-2021-IC-17
    Themes: Argumentation, Logic
    Supervisor: Francesca Toni

    The field of explainable AI (XAI) is a particularly active area of research whose goal is to provide transparency for the decisions of traditionally more opaque machine learning techniques. Being able to assess...


  • Trustworthy AI for DNA Sequencing

    Project ID: STAI-CDT-2021-IC-16
    Themes: Logic, Reasoning
    Supervisor: Thomas Heinis

    DNA sequencing is becoming ever more important for medical applications, be it for predictive medicine or precision/personalised medicine. At the same time, DNA sequencing is starting to use AI to map signals (from the...


  • Neural-symbolic Reinforcement Learning

    Project ID: STAI-CDT-2021-IC-2
    Themes: AI Planning, Logic
    Supervisor: Alessandra Russo

    Recent advances in deep reinforcement learning (DRL) have allowed computer programs to beat humans at complex games such as Chess and Go, years ahead of the original projections. However, the state of the art in DRL misses out on some of the...


  • Verifying Safety and Reliability of Robotic Swarms

    Project ID: STAI-CDT-2021-IC-6
    Themes: Logic, Verification
    Supervisor: Alessio Lomuscio

    The effective development and deployment of single-robot systems is known to be increasingly problematic in a variety of application domains, including search and rescue, remote exploration, and de-mining. These and other...


  • Verification of AI-based perception systems

    Project ID: STAI-CDT-2021-IC-7
    Themes: Logic, Verification
    Supervisor: Alessio Lomuscio

    State-of-the-art perception systems, including those based on Lidar or cameras, are increasingly being used in a range of critical applications, including security and autonomous vehicles. While the present deep...


  • Safe Rational Interactions in Data-driven Control

    Project ID: STAI-CDT-2021-IC-8
    Themes: AI Planning, Logic, Verification
    Supervisor: Alessio Lomuscio

    In autonomous and multi-agent systems, players are normally assumed to be rational, cooperating or competing in groups to achieve their overall objectives. Useful methods for studying the resulting interactions come from game...


  • Verification of neural-symbolic agent-based systems

    Project ID: STAI-CDT-2021-IC-9
    Themes: Logic, Verification
    Supervisor: Alessio Lomuscio

    Considerable work has been carried out over the past two decades on the verification of multi-agent systems. Various methods based on binary decision diagrams, bounded model checking, abstraction, and symmetry reduction have been...


  • Abstract Interpretation for Safe Machine Learning

    Project ID: STAI-CDT-2021-IC-10
    Themes: Logic, Verification
    Supervisor: Sergio Maffeis

    Machine learning (ML) techniques such as Support Vector Machines, Random Forests and Neural Networks are being applied with great success to a wide range of complex and sometimes safety-critical tasks. Recent research in...


  • A Novel Model-driven AI Paradigm for Intrusion Detection

    Project ID: STAI-CDT-2021-KCL-6
    Themes: Logic, Verification
    Supervisor: Fabio Pierazzi

    This project aims to investigate, design, and develop new model-driven methods for AI-based network intrusion detection systems. The emphasis is on designing an AI model that is able to verify and explain its safety...
