  • Explainable Safety, Security and Trust in Human-AI Systems

    Project ID: STAI-CDT-2021-KCL-13
    Themes: AI Planning, AI Provenance, Logic, Norms, Verification
    Supervisor: Luca Vigano

    Explanations can help all the stakeholders of AI systems and Cybersystems to make better choices. In particular, they can help human users to understand and trust the choices of autonomous systems or to interact in a safe...

    Read more

  • A model-based verification for safe and trusted concurrent robotics systems

    Project ID: STAI-CDT-2021-IC-19
    Themes: Logic, Verification
    Supervisor: Nobuko Yoshida

    Robotics applications involve programming concurrent components synchronising through messages while simultaneously executing motion actions that control the state of the physical world. Today, these applications are... (an illustrative message-passing sketch follows this entry)

    Read more
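
    The blurb above describes components that synchronise through messages while executing motion actions. The following is a minimal, hypothetical Python sketch of that programming pattern, using only the standard library; the component names and the toy protocol are illustrative and not taken from the project.

      import queue
      import threading
      import time

      def controller(to_arm: queue.Queue, from_arm: queue.Queue) -> None:
          """Decision-making component: sends motion commands and waits for acknowledgements."""
          for target in (0.2, 0.5, 0.9):
              to_arm.put(("move_to", target))    # synchronise via a message
              status, position = from_arm.get()  # block until the motion action completes
              print(f"controller: arm {status} at {position}")
          to_arm.put(("stop", None))

      def arm(to_arm: queue.Queue, from_arm: queue.Queue) -> None:
          """Physical component: executes motion actions that change the (simulated) world state."""
          position = 0.0
          while True:
              command, value = to_arm.get()
              if command == "stop":
                  break
              time.sleep(0.1)                    # stand-in for the actual motion
              position = value
              from_arm.put(("arrived", position))

      to_arm, from_arm = queue.Queue(), queue.Queue()
      threads = [threading.Thread(target=controller, args=(to_arm, from_arm)),
                 threading.Thread(target=arm, args=(to_arm, from_arm))]
      for t in threads:
          t.start()
      for t in threads:
          t.join()

    A component waiting for a message its peer never sends, such as a missing acknowledgement, is the kind of communication mismatch that model-based verification of such systems aims to rule out.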

  • Discrete-continuous hybrid planning for adaptive systems

    Project ID: STAI-CDT-2021-IC-15
    Themes: AI Planning, Reasoning, Verification
    Supervisor: Antonio Filieri

    Adaptive cyber-physical systems rely on the composition and coordinated interaction of different decision-making procedures, each typically realized with specific AI methods. Cyber components' capabilities and semantics are...

    Read more

  • Model Checking Agents that Learn

    Project ID: STAI-CDT-2021-IC-3
    Themes: AI Planning, Verification
    Supervisor: Francesco Belardinelli

    In Reinforcement Learning (RL), autonomous agents typically have to choose their actions in order to maximise some notion of cumulative reward [1]. Tools and techniques for RL have been applied successfully to domains as... (an illustrative sketch of the cumulative-reward objective follows this entry)

    Read more
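
    As a minimal, hypothetical illustration of the cumulative-reward objective mentioned above, the sketch below runs tabular Q-learning on a toy chain environment; the environment, rewards and hyperparameters are made up for this example and are not part of the project.

      import random

      # Toy chain environment: states 0..4, actions 0 (left) / 1 (right), reward 1 on reaching the goal.
      N_STATES, N_ACTIONS, GOAL = 5, 2, 4

      def step(state: int, action: int):
          next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
          reward = 1.0 if next_state == GOAL else 0.0
          return next_state, reward, next_state == GOAL

      # Tabular Q-learning: estimate Q(s, a) so the greedy policy maximises discounted cumulative reward.
      alpha, gamma, epsilon = 0.1, 0.95, 0.1
      Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

      for _ in range(500):                                  # training episodes
          state, done = 0, False
          while not done:
              action = (random.randrange(N_ACTIONS) if random.random() < epsilon
                        else max(range(N_ACTIONS), key=lambda a: Q[state][a]))
              next_state, reward, done = step(state, action)
              # One-step temporal-difference update towards reward + discounted future value.
              Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
              state = next_state

      # Greedy policy per state (here it should learn to always move right).
      print([max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)])

    Model checking such an agent would then ask whether the behaviour induced by the learned policy satisfies formally stated properties, rather than only inspecting the reward it accumulates.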

  • Towards Trusted Epidemic Simulation

    Project ID: STAI-CDT-2021-IC-4
    Themes: Verification
    Supervisor: Wayne Luk

    Agent-based models (ABMs) are powerful methods for describing the spread of epidemics. An ABM treats each susceptible individual as an agent in a simulated world. The simulation algorithm of the model tracks the health status... (an illustrative sketch follows this entry)

    Read more
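
    A minimal, hypothetical sketch of the agent-based idea described above: each individual is an agent with a health status (susceptible, infected or recovered), and the simulation loop updates those statuses day by day. The contact model and parameters are invented for illustration and do not come from any particular epidemic simulator.

      import random

      S, I, R = "susceptible", "infected", "recovered"

      def simulate(n_agents=1000, n_days=100, contacts_per_day=8,
                   p_transmit=0.05, p_recover=0.1, seed=1):
          rng = random.Random(seed)
          status = [S] * n_agents
          for agent in rng.sample(range(n_agents), 5):   # initially infected agents
              status[agent] = I
          history = []
          for _ in range(n_days):
              newly_infected, recovering = set(), []
              for agent, state in enumerate(status):
                  if state != I:
                      continue
                  # Each infected agent meets a few random others and may transmit.
                  for other in rng.choices(range(n_agents), k=contacts_per_day):
                      if status[other] == S and rng.random() < p_transmit:
                          newly_infected.add(other)
                  if rng.random() < p_recover:
                      recovering.append(agent)
              for agent in newly_infected:
                  status[agent] = I
              for agent in recovering:
                  status[agent] = R
              history.append((status.count(S), status.count(I), status.count(R)))
          return history

      print(simulate()[-1])   # (susceptible, infected, recovered) counts on the final day

    Trust in such a simulation then hinges on questions such as whether the code faithfully implements the intended model and whether its results are reproducible.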

  • Verifying Safety and Reliability of Robotic Swarms

    Project ID: STAI-CDT-2021-IC-6
    Themes: Logic, Verification
    Supervisor: Alessio Lomuscio

    The effective development and deployment of single-robot systems is known to be increasingly problematic in a variety of application domains including search and rescue, remote exploration, de-mining, etc. These and other...

    Read more

  • Verification of AI-based perception systems

    Project ID: STAI-CDT-2021-IC-7
    Themes: Logic, Verification
    Supervisor: Alessio Lomuscio

    State-of-the-art perception systems, including those based on Lidar or cameras, are increasingly being used in a range of critical applications, including security and autonomous vehicles. While the present deep...

    Read more

  • Safe Rational Interactions in Data-driven Control

    Project ID: STAI-CDT-2021-IC-8
    Themes: AI Planning, Logic, Verification
    Supervisor: Alessio Lomuscio

    In autonomous and multi-agent systems, players are normally assumed to be rational, cooperating or competing in groups to achieve their overall objectives. Useful methods for studying the resulting interactions come from game...

    Read more

  • Verification of neural-symbolic agent-based systems

    Project ID: STAI-CDT-2021-IC-9
    Themes: Logic, Verification
    Supervisor: Alessio Lomuscio

    Considerable work has been carried out in the past two decades on the verification of multi-agent systems. Various methods based on binary-decision diagrams, bounded model checking, abstraction, and symmetry reduction have been...

    Read more

  • Abstract Interpretation for Safe Machine Learning

    Project ID: STAI-CDT-2021-IC-10
    Themes: Logic, Verification
    Supervisor: Sergio Maffeis

    Machine learning (ML) techniques such as Support Vector Machines, Random Forests and Neural Networks are being applied with great success to a wide range of complex and sometimes safety-critical tasks. Recent research in... (an illustrative interval-propagation sketch follows this entry)

    Read more
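
    As a minimal, hypothetical illustration of the project title, the sketch below uses interval-based abstract interpretation to propagate input bounds through a tiny ReLU network, yielding guaranteed bounds on its output; the network weights and the input box are made up for this example.

      # Interval abstract interpretation through a tiny ReLU network:
      # propagate [lo, hi] bounds on every input to obtain sound bounds on the output.

      def interval_affine(lo, hi, weights, bias):
          """Sound bounds for y = W x + b when each x_i lies in [lo_i, hi_i]."""
          out_lo, out_hi = [], []
          for row, b in zip(weights, bias):
              lo_acc, hi_acc = b, b
              for w, l, h in zip(row, lo, hi):
                  lo_acc += w * l if w >= 0 else w * h
                  hi_acc += w * h if w >= 0 else w * l
              out_lo.append(lo_acc)
              out_hi.append(hi_acc)
          return out_lo, out_hi

      def interval_relu(lo, hi):
          return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

      # A made-up 2-2-1 network.
      W1, b1 = [[1.0, -2.0], [0.5, 1.5]], [0.0, -1.0]
      W2, b2 = [[1.0, 1.0]], [0.5]

      # Inputs known only up to an interval, e.g. a perturbed sensor reading.
      lo, hi = [0.0, 0.0], [1.0, 1.0]
      lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
      lo, hi = interval_affine(lo, hi, W2, b2)
      print(lo, hi)   # every concrete input in the box is guaranteed to map inside these bounds

    If the resulting output bounds lie entirely inside a safe region, the property holds for every concrete input in the box, which is the flavour of guarantee abstract interpretation offers over testing individual inputs.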

  • Digital twins for the verification of learning and adaptive software

    Project ID: STAI-CDT-2021-IC-13
    Themes: Verification
    Supervisor: Antonio Filieri

    Learning and decision-making AI components are gaining popularity as enablers of modern adaptive software. Common uses include, for example, the classification or regression of incoming data (e.g., face recognition), the...

    Read more

  • Correct-by-construction domain-specific AI planners

    Project ID: STAI-CDT-2021-KCL-4
    Themes: AI Planning, Verification
    Supervisor: Steffen Zschaler

    When using complex algorithms to make decisions within autonomous systems, the weak link is the abstract model used by the algorithms: any errors in the model may lead to unanticipated behaviour potentially risking...

    Read more