Machine learning has a wide range of uses, so concerns about its reliability and safety are valid. One way to ensure the safety and trustworthiness of a machine learning system is to test it. Testing machine learning systems, however, is...
My PhD project focuses on the intersection between multi-agent systems and explainable AI. I became particularly interested in AI during my one-year master’s degree in computer science, and wanted to dive deeper into the subject after completing the...
I am researching ways for robots to independently learn to interact with human users in a safe and non-invasive way. Trustworthiness is then built through increased comfort in the interaction, as the robot is able to adapt to its user...
My PhD project focuses on techniques for safe reinforcement learning. More generally, I’m interested in the problem of AI alignment – how we can create artificial agents that act in accordance with our values, even if those values are somewhat...
My PhD project develops methods that check whether the dataset used to train ML models is biased, in order to avoid unfair outputs from the AI system. The idea is to use computational argumentation as a basis for explaining the outputs of...
My PhD research explores how to better align automated content moderation and recommender systems in online platforms with societal interests. In particular, I use ideas from computational social choice to aggregate people’s preferences about the...
For my PhD project, I research explainable agents and how they can adapt their explanations to a specific user based on physiological data and social cues. This could, for instance, mean finding out when to explain or how to find the right level of detail for an...
My PhD project focuses on the exploration and analysis of natural language texts, specifically of philosophical debates on society’s ethical and moral issues, through argument schemes and critical questions. The aim is to develop a new corpus which will enable...
My PhD project focuses on verification of AI systems, in particular neural networks. Nowadays, a broad variety of systems employ AI algorithms to perform certain tasks. However, these models do not lend themselves to traditional verification methods, and for some...
My PhD project is about developing a human-in-the-loop explainability framework for CNNs. My final-year project during my undergraduate studies motivated me to pursue a career in the field of interpretable or explainable AI. This CDT program brings out the aspect...