Explainable Demand Prediction for Logistics

Organizations in the logistics domain regularly need to make tactical and operational decisions, e.g. fleet sizing, staff rostering, route planning, and shift scheduling, which critically affect their costs, customer satisfaction, and sustainability. In...
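
To make the forecasting-plus-explanation idea concrete, the sketch below fits a gradient-boosted regressor to synthetic daily demand and reports permutation feature importances; the feature names, data, and model choice are illustrative assumptions rather than part of the project.

```python
# Minimal sketch (illustrative only): forecast daily demand and explain the model
# via permutation feature importance. Data and feature names are synthetic assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(0, 7, n),        # day of week
    rng.integers(0, 2, n),        # public holiday flag
    rng.normal(100, 20, n),       # promotional spend (arbitrary units)
])
y = 500 + 40 * (X[:, 0] >= 5) - 80 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 25, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much held-out error grows when each feature is shuffled.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in zip(["day_of_week", "holiday", "promo_spend"], imp.importances_mean):
    print(f"{name}: {score:.3f}")
```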

Systemic responsibility in AI systems

For AI to be trustworthy, its behaviour should be driven towards responsibility by design. In some settings where AI bots interact (e.g., autonomous vehicles), attempts have been made to promote responsibility through collaboration (e.g., vehicles might share...
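
As a toy illustration of collaboration between interacting agents, the sketch below has vehicles pool locally observed hazards before acting; exactly what would be shared in the settings above is not specified here, so the shared content and decision rule are assumptions.

```python
# Toy illustration only: the agents here share locally detected hazards and act on
# the pooled set; both the shared content and the braking rule are assumptions.
from dataclasses import dataclass, field

@dataclass
class Vehicle:
    name: str
    local_hazards: set = field(default_factory=set)

    def decide(self, shared_hazards: set, position: str) -> str:
        # A vehicle brakes if any agent (itself included) has reported a hazard ahead.
        return "brake" if position in shared_hazards else "proceed"

fleet = [Vehicle("v1", {"junction_3"}), Vehicle("v2"), Vehicle("v3")]
shared = set().union(*(v.local_hazards for v in fleet))  # collaboration step: pool observations

for v in fleet:
    print(v.name, v.decide(shared, "junction_3"))
# Without sharing, v2 and v3 would proceed towards a hazard that only v1 can see.
```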

Dealing with imperfect rationality in AI systems

AI systems often collect their input from humans. For example, parents are asked to input their preferences over primary schools before a centralised algorithm allocates children to schools. Should the AI trust the input provided by parents who may try to game the...
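
The description does not name a particular allocation mechanism; to make the centralised allocation step concrete, the sketch below uses student-proposing deferred acceptance, one standard choice for this setting, with made-up names, preferences, and capacities.

```python
# Minimal sketch of a centralised allocation step, assuming student-proposing
# deferred acceptance; the mechanism, names, and capacities are illustrative assumptions.
def deferred_acceptance(student_prefs, school_prefs, capacities):
    # student_prefs: {student: [schools in preference order]}
    # school_prefs:  {school: [students in priority order]}
    rank = {s: {stu: i for i, stu in enumerate(prefs)} for s, prefs in school_prefs.items()}
    next_choice = {stu: 0 for stu in student_prefs}   # next school each student will try
    held = {s: [] for s in school_prefs}              # tentative assignments
    free = list(student_prefs)
    while free:
        stu = free.pop()
        if next_choice[stu] >= len(student_prefs[stu]):
            continue                                   # student has exhausted their list
        school = student_prefs[stu][next_choice[stu]]
        next_choice[stu] += 1
        held[school].append(stu)
        # The school keeps only its highest-priority students up to capacity.
        held[school].sort(key=lambda x: rank[school][x])
        while len(held[school]) > capacities[school]:
            free.append(held[school].pop())            # rejected student tries again
    return held

students = {"ann": ["oak", "elm"], "bob": ["oak", "elm"], "cat": ["elm", "oak"]}
schools = {"oak": ["bob", "ann", "cat"], "elm": ["ann", "cat", "bob"]}
print(deferred_acceptance(students, schools, {"oak": 1, "elm": 2}))
```

Under student-proposing deferred acceptance, truthful reporting is a dominant strategy for the proposing side, which is one classical answer to the gaming concern; how real participants actually report, and whether the mechanism should trust them, is left open by the description above.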

Robustness of argument mining models

The standard approach for evaluating machine learning models is to use held-out data and report various performance metrics such as accuracy and F1. Whilst these metrics summarise a model’s performance, they are ultimately aggregate statistics, limiting our...
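
The sketch below makes this limitation concrete: it computes overall held-out accuracy and F1 with scikit-learn and then breaks accuracy down by an illustrative topic slice, showing how the same aggregate can hide very different behaviour on subsets. The labels, predictions, and slices are synthetic assumptions.

```python
# Minimal sketch: held-out accuracy / F1 are aggregates, so a per-slice breakdown
# (here, grouping hypothetical examples by topic) can reveal weak spots they hide.
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical held-out labels/predictions for a binary argument-mining task
# (1 = argumentative, 0 = non-argumentative), with an illustrative "topic" slice.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 0, 1]
topics = ["health", "health", "health", "health", "health",
          "climate", "climate", "climate", "climate", "climate"]

print("overall accuracy:", accuracy_score(y_true, y_pred))
print("overall F1:", f1_score(y_true, y_pred))

# The same aggregate can mask very different behaviour on subsets of the data.
for topic in ("health", "climate"):
    idx = [i for i, t in enumerate(topics) if t == topic]
    yt = [y_true[i] for i in idx]
    yp = [y_pred[i] for i in idx]
    print(topic, "accuracy:", accuracy_score(yt, yp))
```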

CausalMED: Novel Causal Methods for Responsible AI in Healthcare

The project leverages the existing MedSat dataset [1], a comprehensive integration of health data with high-resolution satellite imagery covering 33,000 areas in England. In addition to what is described in the NeurIPS dataset publication, we will be able to offer...
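
As a minimal, self-contained illustration of the kind of causal estimate such a project might target, the sketch below recovers an average treatment effect via covariate adjustment on synthetic data; the variable names are assumptions and do not correspond to actual MedSat columns.

```python
# Illustrative only: average treatment effect via covariate adjustment on synthetic data.
# Variable names are assumptions and do not correspond to actual MedSat columns.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000
confounder = rng.normal(size=n)                                   # e.g., area-level deprivation (synthetic)
exposure = (confounder + rng.normal(size=n) > 0).astype(float)    # e.g., low green-space flag
outcome = 2.0 * exposure + 1.5 * confounder + rng.normal(size=n)  # e.g., prescription rate

# Backdoor adjustment: regress the outcome on exposure and the confounder, then
# contrast predicted outcomes with exposure set to 1 vs 0 for every area.
X = np.column_stack([exposure, confounder])
model = LinearRegression().fit(X, outcome)
ate = (model.predict(np.column_stack([np.ones(n), confounder]))
       - model.predict(np.column_stack([np.zeros(n), confounder]))).mean()
print(f"estimated ATE: {ate:.2f} (ground truth 2.0)")

# A naive difference in means is biased upwards by the confounder:
print(f"naive difference: {outcome[exposure == 1].mean() - outcome[exposure == 0].mean():.2f}")
```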

Verification of Autonomous Agents in Uncertain Environments

With the widespread deployment of autonomous agents, such as autonomous cars and robots, and the increasing focus on AI safety [24], this project aims to investigate the safety of neuro-symbolic agents. The field of neuro-symbolic systems is an exciting area of...
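
The description does not commit to a verification technique; as one standard building block, the sketch below computes the probability that a fixed policy ever reaches an unsafe state in a tiny Markov chain by iterating the reachability equations, and checks it against an assumed safety threshold.

```python
# Illustrative sketch: probabilistic reachability for a fixed policy, i.e. the probability
# of ever reaching an "unsafe" state in a small Markov chain. States, transitions, and the
# safety threshold are toy assumptions, not part of the project description.
import numpy as np

states = ["start", "near_obstacle", "goal", "crash"]
# Row-stochastic transition matrix induced by some fixed (e.g., learned) policy.
P = np.array([
    [0.0, 0.6, 0.4, 0.0],   # start
    [0.2, 0.0, 0.7, 0.1],   # near_obstacle
    [0.0, 0.0, 1.0, 0.0],   # goal  (absorbing, safe)
    [0.0, 0.0, 0.0, 1.0],   # crash (absorbing, unsafe)
])
unsafe = states.index("crash")
goal = states.index("goal")

# Fixed-point iteration: p[s] = 1 if s is unsafe, else sum over s' of P[s, s'] * p[s'].
p = np.zeros(len(states))
p[unsafe] = 1.0
for _ in range(1000):
    new_p = P @ p
    new_p[unsafe] = 1.0
    new_p[goal] = 0.0        # from the safe absorbing goal the crash state is unreachable
    if np.max(np.abs(new_p - p)) < 1e-12:
        break
    p = new_p

print(f"P(reach crash from start) = {p[states.index('start')]:.4f}")
print("satisfies P < 0.1 ?", p[states.index("start")] < 0.1)
```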