My PhD project is on value alignment: that is, how we can ensure that AI agents are beneficial to society rather than harmful. To this end, I’m looking at how AI agents might reason about values as humans do. I’m especially interested in this in the context of multi-agent systems, where norms govern the behaviour of the agents.
I did my undergraduate project in machine ethics and immediately became interested in research in that area. I’d heard about a new CDT in London specialising in Safe and Trusted AI, which seemed like a perfect match for my interests, so I applied, and I’m very happy with that choice.
Undergraduate Qualification: BSc (Hons) Artificial Intelligence and Computer Science, University of Edinburgh