  • Teaching Large Language Models To Perform Complex Reasoning

    Project ID: STAI-CDT-2023-IC-9
    Themes: AI Planning, Logic
    Supervisor: Dr Marek Rei

    Large language models have become the main backbone of most state-of-the-art NLP systems. By pre-training on very large datasets with unsupervised objectives, these models are able to learn good representations for language...

  • Extending Large Language Models Through Querying Symbolic Systems

    Project ID: STAI-CDT-2023-IC-8
    Themes: AI Planning
    Supervisor: Dr Marek Rei

    Large language models have become the main backbone of most state-of-the-art NLP systems. By pre-training on very large datasets with unsupervised objectives, these models are able to learn good representations for language...

  • From Verification to Mitigation: Managing Critical Phase Transitions in Multi-Agent Systems

    Project ID: STAI-CDT-2023-KCL-29
    Themes: AI Planning, Verification
    Supervisor: Dr Stefanos Leonardos, Dr William Knottenbelt (Imperial College London)

    Background: With recent technological advancements, multi-agent interactions have become increasingly complex, ranging from deep learning models and powerful neural networks to blockchain-based cryptoeconomies. However, as...

  • Common Sense Planning (for Robotics)

    Project ID: STAI-CDT-2023-KCL-28
    Themes: AI Planning
    Supervisor: Dr Gerard Canal, Dr Albert Meroño-Peñuela

    Task Planning (also known as Symbolic Planning or AI Planning) has proved to be a very useful technique to tackle the decision-making problem in robotics. Given a set of task goals, the planner can come up with a set of...

  • Generative modelling with neural probabilistic circuits

    Project ID: STAI-CDT-2023-KCL-20
    Themes: AI Planning, Logic, Verification
    Supervisor: David Watson

    The current state of the art in generative modelling is dominated by neural networks. Despite their impressive performance on many benchmark tasks, these algorithms do not provide tractable inference for common and...

  • Computational Social Choice and Machine Learning for Ethical Decision Making

    Project ID: STAI-CDT-2023-KCL-5
    Themes: AI Planning, Argumentation, Norms, Reasoning
    Supervisor: Maria Polukarov

    The problem of ethical decision making presents a grand challenge for modern AI research. Arguably, the main obstacle to automating ethical decisions is the lack of a formal specification of ground-truth ethical principles,...

  • Detecting Deception and Manipulation in Planning and Explanation Systems

    Project ID: STAI-CDT-2023-KCL-2
    Themes: AI Planning
    Supervisor: Martim Brandao

    Planning algorithms are used in a variety of contexts, from navigation apps and recommendation algorithms to robot vacuums and autonomous vehicles. Companies using such algorithms have financial incentives to manipulate (or...

  • A Critical and Inclusive Approach to Robotics

    Project ID: STAI-CDT-2023-KCL-3
    Themes: AI Planning
    Supervisor: Martim Brandao

    Robots are already being used in warehouses, factories, supermarkets, homes, hazardous sites and other settings. While many issues of stereotypes, disparate impact, and harmful impact of AI have been brought to the...

  • Formal Reasoning about Golog Programs

    Project ID: STAI-CDT-2022-KCL-10
    Themes: AI Planning, Logic, Verification
    Supervisor: Mohammad Abdulaziz

    Constructing a world-model is a fundamental part of model-based AI, e.g. planning. Usually, such a model is constructed by a human modeller and it should capture the modeller’s intuitive understanding of the world dynamics...

  • Neuro-Symbolic Policy Learning and Representation for Interpretable and Formally-Verifiable Reinforcement Learning

    Project ID: STAI-CDT-2021-IC-24
    Themes: AI Planning, Logic, Verification
    Supervisor: Francesco Belardinelli

    The growing societal impact of AI-based systems has brought with it a set of risks and concerns [1, 2]. Indeed, unintended and harmful behaviours may emerge from the application of machine learning (ML) algorithms, including...

  • Run-time Verification for Safe and Verifiable AI

    Project ID: STAI-CDT-2021-IC-23
    Themes: AI Planning, Logic, Verification
    Supervisor: Francesco Belardinelli

    The growing societal impact of AI-based systems has brought with it a set of risks and concerns [1, 2]. Indeed, unintended and harmful behaviours may emerge from the application of machine learning (ML) algorithms, including...

  • Reward Synthesis from Logical Specifications

    Project ID: STAI-CDT-2021-IC-22
    Themes: AI Planning, Logic, Verification
    Supervisor: Francesco Belardinelli

    The growing societal impact of AI-based systems has brought with it a set of risks and concerns [1, 2]. Indeed, unintended and harmful behaviours may emerge from the application of machine learning (ML) algorithms, including...
