  • Enhancing Trustworthiness of Neural Networks for Online Adaptive Radiotherapy

    Project ID: STAI-CDT-2023-IC-10
    Themes: Reasoning
    Supervisor: Prof Wayne Luk

    Magnetic Resonance (MR)-guided online adaptive radiotherapy has the potential to revolutionise cancer treatment. It exploits the soft-tissue contrast of MR images obtained right before a patient’s radiation treatment to...

  • Fast Reinforcement Learning using Memory-Augmented Neural Networks

    Project ID: STAI-CDT-2023-KCL-27
    Themes: Norms, Reasoning
    Supervisor: Yali Du, Albert Meroño Peñuela

    Reinforcement learning resembles human learning, with intelligence accumulated through experience. To attain expert human-level performance on tasks such as Atari video games or chess, deep RL systems have required many...

  • Extracting interpretable symbolic representations from neural networks using information theory and causal abstraction

    Project ID: STAI-CDT-2023-IC-4
    Themes: Logic, Norms, Reasoning
    Supervisor: Pedro Mediano

    Neurosymbolic systems seek to combine the strengths of two major classes of AI algorithms: neural networks, able to recognise patterns in unstructured data, and logic-based systems, capable of powerful reasoning. One of the...

  • Improving Robustness of Pre-Trained Language Models

    Project ID: STAI-CDT-2023-KCL-25
    Themes: Logic, Norms, Reasoning
    Supervisor: Yulan He

    Recent efforts in Natural Language Understanding (NLU) have largely been exemplified by tasks such as natural language inference, reading comprehension and question answering. We have witnessed a shift of paradigms in NLP...

  • Understanding Distribution Shift with Logic-based Reasoning and Verification

    Project ID: STAI-CDT-2023-KCL-17
    Themes: Logic, Reasoning
    Supervisor: Fabio Pierazzi

    Data-driven approaches have proven powerful in a variety of domains, from computer vision to NLP. However, in some domains, such as attack detection in security, the arms race between...

  • Dealing with Imperfect Rationality in AI Systems

    Project ID: STAI-CDT-2023-KCL-13
    Themes: Reasoning
    Supervisor: Carmine Ventre

    AI systems often collect their input from humans. For example, parents are asked to input their preferences over primary schools before a centralised algorithm allocates children to schools. Should the AI trust the input...

  • Computational Social Choice and Machine Learning for Ethical Decision Making

    Project ID: STAI-CDT-2023-KCL-5
    Themes: AI Planning, Argumentation, Norms, Reasoning
    Supervisor: Maria Polukarov

    The problem of ethical decision making presents a grand challenge for modern AI research. Arguably, the main obstacle to automating ethical decisions is the lack of a formal specification of ground-truth ethical principles,...

  • Trusted AI for Safe Stop and Search

    Project ID: STAI-CDT-2022-KCL-9
    Themes: Reasoning, Verification
    Supervisor: Mohammad Mousavi, Rita Borgo

    The main objective of this project is to develop AI techniques to analyse the behaviour recorded in past Stop and Search (S&S) operations. The AI system will be used to inform future operations, avoid unnecessary...

  • Detecting fake news

    Project ID: STAI-CDT-2022-KCL-8
    Themes: Reasoning
    Supervisor: Frederik Mallmann-Trenn

    The rise of fake news and misinformation is a threat to our societies. Even though we are not always able to quantify the effect of misinformation, it is clear that it is polarising society and often leads to violence...