  • Neuro-Symbolic Policy Learning and Representation for Interpretable and Formally-Verifiable Reinforcement Learning

    Project ID: STAI-CDT-2021-IC-24
    Themes: AI Planning, Logic, Verification
    Supervisor: Francesco Belardinelli

    The growing societal impact of AI-based systems has brought with it a set of risks and concerns [1, 2]. Indeed, unintended and harmful behaviours may emerge from the application of machine learning (ML) algorithms, including...

  • Run-time Verification for Safe and Verifiable AI

    Project ID: STAI-CDT-2021-IC-23
    Themes: AI Planning, Logic, Verification
    Supervisor: Francesco Belardinelli

    The growing societal impact of AI-based systems has brought with it a set of risks and concerns [1, 2]. Indeed, unintended and harmful behaviours may emerge from the application of machine learning (ML) algorithms, including...

  • Reward Synthesis from Logical Specifications

    Project ID: STAI-CDT-2021-IC-22
    Themes: AI Planning, Logic, Verification
    Supervisor: Francesco Belardinelli

    The growing societal impact of AI-based systems has brought with it a set of risks and concerns [1, 2]. Indeed, unintended and harmful behaviours may emerge from the application of machine learning (ML) algorithms, including...

  • Specification, diagnosis and repair for deep learning systems

    Project ID: STAI-CDT-2021-IC-21
    Themes: Logic, Verification
    Supervisor: Dalal Alrajeh

    Recent times have witnessed a flurry of advancements in ML, enabling its widespread application in domains such as healthcare, security and autonomous vehicles. However, this deployment has also come at a cost, resulting in...

  • Synthesizing and revising plans for autonomous robot adaptation

    Project ID: STAI-CDT-2021-IC-20
    Themes: AI Planning, Logic, Verification
    Supervisor: Dalal Alrajeh

    AI Planning is concerned with producing plans that are guaranteed to achieve a robot’s goals, provided that pre-specified assumptions about the environment in which it operates hold. However, no matter how detailed these...

  • Explainable Safety, Security and Trust in Human-AI Systems

    Project ID: STAI-CDT-2021-KCL-13
    Themes: AI Planning, AI Provenance, Logic, Norms, Verification
    Supervisor: Luca Vigano

    Explanations can help all the stakeholders of AI systems and Cybersystems to make better choices. In particular, they can help human users to understand and trust the choices of autonomous systems or to interact in a safe...

  • A model-based verification for safe and trusted concurrent robotics systems

    Project ID: STAI-CDT-2021-IC-19
    Themes: Logic, Verification
    Supervisor: Nobuko Yoshida

    Robotics applications involve programming concurrent components synchronising through messages while simultaneously executing motion actions that control the state of the physical world. Today, these applications are...

  • Computational social choice and machine learning for ethical decision making

    Project ID: STAI-CDT-2021-KCL-11a
    Themes: AI Planning, Argumentation, Logic, Norms
    Supervisor: Maria Polukarov

    The problem of ethical decision making presents a grand challenge for modern AI research. Arguably, the main obstacle to automating ethical decisions is the lack of a formal specification of ground-truth ethical principles,...

  • Trustful Ontology Engineering and Reasoning through Provenance

    Project ID: STAI-CDT-2021-KCL-12
    Themes: AI Provenance, Logic
    Supervisor: Albert Meroño Peñuela

    Ontologies have become fundamental AI artifacts in providing knowledge to intelligent systems. The concepts and relationships formalised in these ontologies are frequently used to semantically annotate data, helping...

  • Hypothesis Knowledge Graphs

    Project ID: STAI-CDT-2021-KCL-7
    Themes: AI Provenance, Logic, Reasoning
    Supervisor: Albert Meroño Peñuela

    Generating hypotheses is a fundamental step in the scientific method, but also increasingly challenging due to the ever-growing observational data from which hypotheses are derived. Papers are published at an unmanageable...

  • Explainable AI by defeasible rules

    Project ID: STAI-CDT-2021-IC-17
    Themes: Argumentation, Logic
    Supervisor: Francesca Toni

    The field of explainable AI (XAI) is a particularly active area of research at the moment whose goal is to provide transparency to the decisions of traditionally more opaque machine learning techniques. Being able to assess...

  • Trustworthy AI for DNA Sequencing

    Project ID: STAI-CDT-2021-IC-16
    Themes: Logic, Reasoning
    Supervisor: Thomas Heinis

    DNA sequencing is becoming ever more important for medical applications, be it for predictive medicine or precision/personalised medicine. At the same time, DNA sequencing is starting to use AI to map signals (from the...
