Machine learning is used in a wide range of applications, so concerns about its reliability and safety are well founded. One way to ensure the safety and trustworthiness of a machine learning system is to test it. Testing machine learning systems, however, is...
My PhD project focuses on the intersection between multi-agent systems and explainable AI. I became particularly interested in AI during my one-year master’s degree in computer science, and wanted to dive deeper into the subject after completing the introductory-level...
My PhD focuses on how cooperative behaviour between agents is shaped by their environment. By understanding how different incentives give rise to different system dynamics, I aim to design mechanisms that promote pro-social behaviour. My project utilises...
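As a toy illustration of the idea that incentives shape system dynamics (this is not the project's actual model, and the games and parameters below are invented), the following Python sketch runs replicator dynamics on two hypothetical two-action games: a prisoner's dilemma, where cooperation collapses, and a stag hunt, where cooperation can persist.

```python
# Toy illustration (not the project's model): replicator dynamics in a
# two-action symmetric game, showing how the payoff structure (the incentives)
# determines whether cooperation spreads or dies out in a population.

def replicator(payoff, x0, steps=200, dt=0.1):
    """Return the final fraction of cooperators under replicator dynamics.

    payoff[i][j] is the payoff to action i against action j
    (action 0 = cooperate, action 1 = defect); x0 is the initial
    fraction of cooperators.
    """
    x = x0
    for _ in range(steps):
        f_c = x * payoff[0][0] + (1 - x) * payoff[0][1]  # cooperator fitness
        f_d = x * payoff[1][0] + (1 - x) * payoff[1][1]  # defector fitness
        f_avg = x * f_c + (1 - x) * f_d
        x += dt * x * (f_c - f_avg)
    return x

# Prisoner's dilemma: defection dominates, so cooperation collapses.
prisoners_dilemma = [[3, 0], [5, 1]]
# Stag hunt: mutual cooperation is an equilibrium, so cooperation can persist.
stag_hunt = [[5, 0], [3, 3]]

print(replicator(prisoners_dilemma, x0=0.8))  # -> close to 0
print(replicator(stag_hunt, x0=0.8))          # -> close to 1
```

Changing only the payoff matrix, with everything else held fixed, flips the long-run outcome, which is the sense in which incentives drive the dynamics.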
I am researching ways for robots to independently learn to interact with human users safely and non-invasively. Trustworthiness is then built by making the interaction more comfortable, as the robot is able to adapt to its user...
My PhD project is on the explainability of human-machine dialogue through visualisation. Recently, research on dialogue systems has been shifting its focus from the traditional task-oriented, rule-based paradigm to the more scalable, but...
My research focuses on neural architectures, specifically on what makes a given architecture successful, and when. It turns out that geometry plays an important role: probing a neural network to see which inputs it finds indistinguishable characterises important qualities of...
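A minimal sketch of what such a probe could look like, assuming a toy untrained PyTorch MLP rather than the architectures actually studied: compare distances in input space with distances between hidden representations, and look for input pairs the network nearly collapses together.

```python
# Illustrative probe only (a toy untrained MLP, not the architectures under
# study): inputs that are far apart yet mapped to almost identical hidden
# representations reveal what the network treats as indistinguishable.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())

x = torch.randn(500, 10)            # random probe inputs
with torch.no_grad():
    h = net(x)                      # hidden representations

input_dist = torch.cdist(x, x)      # pairwise distances between inputs
hidden_dist = torch.cdist(h, h)     # pairwise distances between representations

# Ignore self-distances, then find the pair the network distinguishes least.
mask = torch.eye(len(x), dtype=torch.bool)
i, j = divmod(hidden_dist.masked_fill(mask, float("inf")).argmin().item(), len(x))
print(f"least distinguishable pair: hidden dist {hidden_dist[i, j].item():.3f}, "
      f"input dist {input_dist[i, j].item():.3f}")
```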
My PhD project focuses on techniques for safe reinforcement learning. More generally, I’m interested in the problem of AI alignment – how we can create artificial agents that act in accordance with our values, even if those values are somewhat...
My PhD project develops methods that check whether the dataset used to train an ML model is biased, in order to avoid unfair outputs from the resulting AI system. The idea is to use computational argumentation as a basis for explaining the outputs of...
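For context, a much simpler baseline check for dataset bias (not the argumentation-based method described above) is to compare label rates across values of a protected attribute; the data, attribute names, and threshold below are invented purely for illustration.

```python
# Baseline dataset-bias check (illustrative only): compare the rate of
# positive labels across values of a hypothetical protected attribute.
import pandas as pd

df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b", "b", "b"],
    "label": [1, 1, 0, 1, 0, 0, 0, 0],
})

rates = df.groupby("group")["label"].mean()
gap = rates.max() - rates.min()
print(rates)
print(f"demographic parity gap in the labels: {gap:.2f}")
if gap > 0.2:  # threshold is arbitrary, for illustration only
    print("warning: training labels look skewed across groups")
```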
As data-driven AI methods develop rapidly across academia, industry, and government, it is of the utmost importance to ensure that AI does not harm the individuals and groups it makes predictions about. While powerful, these methods carry with them a risk of...
My PhD research explores how to better align automated content moderation and recommender systems in online platforms with societal interests. In particular, I use ideas from computational social choice to aggregate people’s preferences about the...
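As a hedged illustration of what aggregating preferences with computational social choice can look like (not necessarily the rule used in this project), the sketch below applies a Borda count to invented rankings of hypothetical content-moderation policies.

```python
# Illustrative Borda count over hypothetical moderation policies; the policies
# and ballots are invented and do not come from the project described above.
from collections import defaultdict

policies = ["strict", "moderate", "hands-off"]

# Each ballot ranks the policies from most to least preferred.
ballots = [
    ["moderate", "strict", "hands-off"],
    ["moderate", "hands-off", "strict"],
    ["strict", "moderate", "hands-off"],
    ["hands-off", "moderate", "strict"],
]

scores = defaultdict(int)
for ballot in ballots:
    for rank, policy in enumerate(ballot):
        scores[policy] += len(policies) - 1 - rank  # Borda points

winner = max(scores, key=scores.get)
print(dict(scores), "->", winner)  # "moderate" wins under Borda here
```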
For my PhD project, I research explainable agents and how they can adapt their explanations to a specific user based on physiological data and social cues. This could, for instance, mean determining when to explain, or how to choose the right level of detail for an...
The current focus of my PhD is on scenarios involving multi-agent interactions among both humans and artificial agents. Technically, my interests lie at the intersection of reinforcement and reward learning, game theory, and symbolic approaches to AI. ...