STAI CDT Director Dr Elizabeth Black and STAI CDT supervisor Professor Elena Simperl are part of a major project exploring how we can reap the benefits of AI across a range of fields, whilst minimising its potential risks.
The project, Participatory Harm Auditing Workbenches and Methodologies (PHAWM), is a cross-partner consortium funded by Responsible AI UK as one of its keystone projects. Responsible AI UK is a UKRI-funded multi-partner programme driving research and innovation in responsible AI. The £3.5 million project is being led by the University of Glasgow.
Dr Elizabeth Black and Professor Elena Simperl, together with Professor Daniele Quercia and Professor Dan Hunter from King’s, will join 25 other researchers from seven UK universities and 23 partner organisations. The project is the first of its kind to give people without a technical background the tools to audit AI systems.
Regulators, end-users and people likely to be affected by decisions made by AI systems will play a key role in ensuring that those systems provide fair and reliable outputs, with a focus on the use of AI in health and media content. This includes analysing datasets used to predict hospital readmissions and to assess child attachment for potential bias, and examining fairness in search engines and in hate speech detection on social media.
Dr Elizabeth Black and Professor Elena Simperl will be working with the Wikimedia Foundation to understand how AI can be used responsibly to diversify knowledge and content in different languages, especially those that are underrepresented on Wikipedia. The team is also exploring how AI models can be made safer and more trusted by combining generative AI with other techniques. These include knowledge representation, a field of AI that organises information so that computer systems can answer complex questions, and argumentation, which provides a structured way to represent diverse viewpoints and conflicting information, allowing evaluation and reasoning that more closely emulates human discourse.
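To give a flavour of what argumentation techniques involve, the following is a minimal sketch in Python of a Dung-style abstract argumentation framework, computing which arguments survive when claims attack one another. The specific arguments and attack relation are hypothetical illustrations, not the project's actual methods or tools.

```python
# A minimal sketch of abstract argumentation (Dung-style), illustrating
# how conflicting claims can be evaluated in a structured way.
# The example arguments and attacks below are hypothetical.

def grounded_extension(arguments, attacks):
    """Compute the grounded extension: the least fixed point of the
    'defended' function, i.e. the most sceptical set of acceptable arguments."""
    def attackers(a):
        return {x for (x, y) in attacks if y == a}

    def defended(s):
        # An argument is defended by s if every one of its attackers
        # is itself attacked by some member of s.
        return {
            a for a in arguments
            if all(any((d, b) in attacks for d in s) for b in attackers(a))
        }

    extension = set()
    while True:
        nxt = defended(extension)
        if nxt == extension:
            return extension
        extension = nxt

# Hypothetical arguments about a claim in an article:
#   A: "The claim is supported by source X."
#   B: "Source X is unreliable."
#   C: "Source X was independently verified."
arguments = {"A", "B", "C"}
attacks = {("B", "A"), ("C", "B")}  # B attacks A; C attacks B

print(sorted(grounded_extension(arguments, attacks)))  # -> ['A', 'C']
```

Here C reinstates A by defeating its attacker B, so both are accepted; this kind of defence-and-reinstatement reasoning is what lets argumentation systems weigh conflicting information rather than simply discarding it.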
Dr Elizabeth Black said, “This project enables us to explore a range of techniques that can make AI safer and more reliable, with input from the people who will be using it or affected by it. The workbench of tools developed in the project will allow people without a technical background to audit AI systems, helping to create fairer outcomes for end-users.
“By collaborating in this major consortium of leading experts in responsible AI, we are building on King’s long-standing work in this vital area, through the UKRI Trustworthy Autonomous Systems Hub and our UKRI Centre for Doctoral Training in Safe & Trusted AI, equipping the next generation of responsible AI practitioners.”
You can find out more about the project at https://phawm.org/.