The field of responsible AI has been concerned with, among other things, governance and regulation within organisations. There are countless sets of principles and guidelines issued by companies, foundations and governments to support responsible development of AI...
In recent years, large language models (LLMs) have dominated the field of natural language processing (NLP), achieving state-of-the-art results. However, while these models are extremely proficient at many NLP tasks, their ability to perform complex reasoning, a...
Large Language Models (LLMs) have in recent years reached mainstream users and NLP researchers alike, frequently topping leaderboards across a wide variety of NLP tasks [1] and demonstrating to the public the possibilities of AI. However, their statistical,...
Organizations in the logistics domain regularly need to make tactical and operational decisions, e.g. fleet sizing, staff rostering, route planning, and shift scheduling, critically affecting their costs, customer satisfaction, and sustainability. In...
For AI to be trustworthy, its behaviour should be driven towards responsibility by design. In some settings where AI bots interact (e.g., autonomous vehicles), attempts have been made to promote responsibility through collaboration (e.g., vehicles might share...
Introduction: Large Language Models (LLMs) have significantly impacted natural language processing tasks, enabling unprecedented performance across various applications. However, understanding the inner workings of these models remains a challenge due to their complex...
AI systems often collect their input from humans. For example, parents are asked to input their preferences over primary schools before a centralised algorithm allocates children to schools. Should the AI trust the input provided by parents who may try to game the...
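To make the setting concrete, the sketch below implements a toy centralised allocation mechanism (a serial dictatorship over the rankings parents report); the mechanism, school names, and capacities are illustrative assumptions, not the specific algorithm studied in this project.

```python
# Toy illustration only: serial-dictatorship allocation over reported
# preferences. Mechanism, names, and capacities are hypothetical.

def serial_dictatorship(preferences, capacities):
    """Process children in a fixed order, giving each their highest-ranked
    school that still has a free place.

    preferences: child -> list of schools, best first (as reported by parents)
    capacities:  school -> number of available places
    """
    remaining = dict(capacities)
    allocation = {}
    for child, ranking in preferences.items():
        for school in ranking:
            if remaining.get(school, 0) > 0:
                allocation[child] = school
                remaining[school] -= 1
                break
        else:
            allocation[child] = None  # no listed school has a free place
    return allocation

if __name__ == "__main__":
    reported = {
        "child_a": ["school_1", "school_2"],
        "child_b": ["school_1", "school_2"],
        "child_c": ["school_2", "school_1"],
    }
    print(serial_dictatorship(reported, {"school_1": 1, "school_2": 2}))
```

Because the outcome depends entirely on what parents report, whether and when the algorithm should trust those reports is exactly the question at stake.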
The standard approach for evaluating machine learning models is to use held-out data and report various performance metrics such as accuracy and F1. Whilst these metrics summarise a model’s performance, they are ultimately aggregate statistics, limiting our...
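For reference, a minimal sketch of this held-out evaluation pattern follows; the synthetic data, model choice, and split ratio are placeholders rather than this project's evaluation setup.

```python
# Minimal held-out evaluation sketch: train/test split plus aggregate metrics.
# Dataset, model, and split ratio are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
preds = model.predict(X_test)

# Both scores are aggregates over the entire held-out set: they do not reveal
# which individual examples or subgroups the model gets wrong.
print("accuracy:", accuracy_score(y_test, preds))
print("F1:", f1_score(y_test, preds))
```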
The training and fine-tuning of Machine Learning (ML) models are costly to the environment in terms of computational power (energy) and, ultimately, carbon emissions (Bender et al., 2021; Schwartz et al., 2020; Strubell et al., 2019). For example, roughly 15% of Google’s...
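As a rough, purely illustrative calculation of how training compute maps to emissions (every figure below is an assumption made for the sketch, not a number from the cited papers):

```python
# Back-of-the-envelope conversion from accelerator-hours to CO2e.
# Every constant here is an illustrative assumption.
gpu_hours = 10_000        # assumed accelerator-hours for one training run
avg_power_kw = 0.3        # assumed average draw per accelerator, in kW
pue = 1.5                 # assumed data-centre power usage effectiveness
grid_intensity = 0.4      # assumed kg CO2e emitted per kWh of electricity

energy_kwh = gpu_hours * avg_power_kw * pue
emissions_kg = energy_kwh * grid_intensity
print(f"{energy_kwh:.0f} kWh, about {emissions_kg:.0f} kg CO2e")
```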
This PhD proposal aligns with the primary objectives of the STAI CDT by focusing on the development of responsible AI solutions and introducing new formal techniques for Constitutional AI. The proposal aims to develop novel methods for modeling and fine-tuning...
The project leverages the existing MedSat dataset [1], a comprehensive integration of health data with high-resolution satellite imagery covering 33,000 areas in England. In addition to what is described in the NeurIPS dataset publication, we will be able to offer...
With the widespread deployment of autonomous agents, such as self-driving cars and robots, and the increasing focus on AI safety [24], this project aims to investigate the safety of neuro-symbolic agents. The field of neuro-symbolic systems is an exciting area of...