My PhD project develops methods to detect bias in the datasets used to train ML models, with the aim of preventing unfair outputs from the resulting AI systems. The idea is to use computational argumentation as a basis for explaining a system's outputs, making bias detectable, and potentially to feed user feedback back into the models to help mitigate that bias.
Throughout my undergraduate degree, my interests centred on cross-disciplinary approaches to problems in AI. I completed an independent research project on the risks of using predictive analytics algorithms in local government, which required weighing technical, legal and ethical perspectives. My interest in this area aligns closely with the aims of the CDT: ensuring AI methods and systems can be fully safe and trusted. For my final-year undergraduate project, I ran experiments on Amazon's Alexa to test whether it responds differently to users based on their personal characteristics, cementing my aim of assessing the impact of AI systems on users.
A major advantage of the CDT is working alongside a cohort who share the same interests. This, together with the training the CDT provides, including lectures and seminars on other research areas in the department, makes it a very interesting and worthwhile undertaking.
Undergraduate Qualification: BSc in Computer Science, King’s College London