  • Trusted Test Suites for Safe Agent-Based Simulations

    Project ID: STAI-CDT-2021-KCL-8
    Themes: Verification
    Supervisor: Steffen Zschaler

    Agent-based models (ABMs) are an AI technique that helps improve our understanding of complex real-world interactions and their “emergent behaviours”. ABMs are used to develop and test theories or to explore how interventions...

    Read more

  • Discrete-continuous hybrid planning for adaptive systems

    Project ID: STAI-CDT-2021-IC-15
    Themes: AI Planning, Reasoning, Verification
    Supervisor: Antonio Filieri

    Adaptive cyber-physical systems rely on the composition and coordinated interaction of different decision-making procedures, each typically realized with specific AI methods. Cyber components' capabilities and semantics are...

    Read more

  • Model Checking Agents that Learn

    Project ID: STAI-CDT-2021-IC-3
    Themes: AI Planning, Verification
    Supervisor: Francesco Belardinelli

    In Reinforcement Learning (RL), autonomous agents typically have to choose their actions in order to maximise some notion of cumulative reward [1]. Tools and techniques for RL have been applied successfully to domains as...

    Read more
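
    As a purely hypothetical illustration of the cumulative-reward objective mentioned in this project blurb (not code from the project itself), the discounted return an RL agent maximises, G_t = Σ_k γ^k · r_{t+k}, can be computed for a finite reward sequence as:

    ```python
    def discounted_return(rewards, gamma=0.9):
        """Discounted cumulative reward of a finite reward sequence.

        Accumulates from the last reward backwards, so each earlier
        reward picks up one more factor of gamma.
        """
        g = 0.0
        for r in reversed(rewards):
            g = r + gamma * g
        return g

    # 1 + 0.5 * 0 + 0.25 * 2 = 1.5
    print(discounted_return([1.0, 0.0, 2.0], gamma=0.5))
    ```

    The backward accumulation avoids computing powers of gamma explicitly; the names here are illustrative, not taken from the project.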

  • Towards Trusted Epidemic Simulation

    Project ID: STAI-CDT-2021-IC-4
    Themes: Verification
    Supervisor: Wayne Luk

    Agent-based models (ABMs) are powerful methods to describe the spread of epidemics. An ABM treats each susceptible individual as an agent in a simulated world. The simulation algorithm of the model tracks the health status...

    Read more
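
    The mechanism this blurb describes — each individual modelled as an agent whose health status the simulation tracks — can be sketched as a toy SIR-style ABM. This is an assumed minimal example for illustration, not the project's actual simulator; all names and parameters are hypothetical:

    ```python
    import random

    def step(statuses, p_infect=0.3, p_recover=0.1, rng=random):
        """One simulation step over a list of agent statuses
        ('S' susceptible, 'I' infected, 'R' recovered): each infected
        agent meets one random contact, may infect them, and may recover.
        """
        n = len(statuses)
        new = list(statuses)
        for i, s in enumerate(statuses):
            if s == "I":
                j = rng.randrange(n)  # pick a random contact
                if statuses[j] == "S" and rng.random() < p_infect:
                    new[j] = "I"
                if rng.random() < p_recover:
                    new[i] = "R"
        return new

    rng = random.Random(0)            # fixed seed for reproducibility
    pop = ["I"] + ["S"] * 99          # one initially infected agent
    for _ in range(50):
        pop = step(pop, rng=rng)
    print(pop.count("S"), pop.count("I"), pop.count("R"))
    ```

    Even this crude sketch exhibits the emergent epidemic curve the blurb alludes to; the verification challenge in the project is presumably establishing trust in far richer models of this kind.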

  • Verifying Safety and Reliability of Robotic Swarms

    Project ID: STAI-CDT-2021-IC-6
    Themes: Logic, Verification
    Supervisor: Alessio Lomuscio

    The effective development and deployment of single-robot systems are known to be increasingly problematic in a variety of application domains including search and rescue, remote exploration, de-mining, etc. These and other...

    Read more

  • Verification of AI-based perception systems

    Project ID: STAI-CDT-2021-IC-7
    Themes: Logic, Verification
    Supervisor: Alessio Lomuscio

    State-of-the-art perception systems, including those based on Lidar or cameras, are increasingly being used in a range of critical applications including security and autonomous vehicles. While the present deep...

    Read more

  • Safe Rational Interactions in Data-driven Control

    Project ID: STAI-CDT-2021-IC-8
    Themes: AI Planning, Logic, Verification
    Supervisor: Alessio Lomuscio

    In autonomous and multi-agent systems, players are normally assumed to be rational, cooperating or competing in groups to achieve their overall objectives. Useful methods to study the resulting interactions come from game...

    Read more

  • Verification of neural-symbolic agent-based systems

    Project ID: STAI-CDT-2021-IC-9
    Themes: Logic, Verification
    Supervisor: Alessio Lomuscio

    Considerable work has been carried out in the past two decades on the verification of multi-agent systems. Various methods based on binary-decision diagrams, bounded model checking, abstraction, and symmetry reduction have been...

    Read more

  • Abstract Interpretation for Safe Machine Learning

    Project ID: STAI-CDT-2021-IC-10
    Themes: Logic, Verification
    Supervisor: Sergio Maffeis

    Machine learning (ML) techniques such as Support Vector Machines, Random Forests and Neural Networks are being applied with great success to a wide range of complex and sometimes safety-critical tasks. Recent research in...

    Read more

  • Digital twins for the verification of learning and adaptive software

    Project ID: STAI-CDT-2021-IC-13
    Themes: Verification
    Supervisor: Antonio Filieri

    Learning and decision-making AI components are gaining popularity as enablers of modern adaptive software. Common uses include, for example, the classification or regression of incoming data (e.g., face recognition), the...

    Read more

  • Correct-by-construction domain-specific AI planners

    Project ID: STAI-CDT-2021-KCL-4
    Themes: AI Planning, Verification
    Supervisor: Steffen Zschaler

    When using complex algorithms to make decisions within autonomous systems, the weak link is the abstract model used by the algorithms: any errors in the model may lead to unanticipated behaviour potentially risking...

    Read more

  • A Novel Model-driven AI Paradigm for Intrusion Detection

    Project ID: STAI-CDT-2021-KCL-6
    Themes: Logic, Verification
    Supervisor: Fabio Pierazzi

    This project aims to investigate, design and develop new model-driven methods for AI-based network intrusion detection systems. The emphasis is on designing an AI model that is able to verify and explain its safety...

    Read more