My project aims to analyse existing robotics research in terms of bias and inclusivity and create new methodologies for inclusive and participatory robotics development, with a focus on promoting safety and ensuring that robot systems benefit all social groups,...
Causal methods hold significant potential to improve the explainability and robustness of AI systems. These methods enable the discovery and estimation of cause and effect, which is critical for human-like cognition. While research on causal techniques for supervised...
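To illustrate the kind of cause-and-effect estimation mentioned above, here is a minimal sketch of backdoor adjustment on a toy discrete dataset (the records, variable names, and helper function are hypothetical, not from the project): we estimate the effect of a treatment T on an outcome Y while adjusting for a confounder Z.

```python
# Backdoor adjustment on toy discrete data (illustrative sketch):
# E[Y | do(T=t)] = sum_z P(Z=z) * E[Y | T=t, Z=z]
from collections import defaultdict

def adjusted_mean(records, t):
    """Estimate E[Y | do(T=t)] by adjusting for confounder Z."""
    by_z = defaultdict(list)
    for r in records:
        by_z[r["Z"]].append(r)
    n = len(records)
    total = 0.0
    for group in by_z.values():
        treated = [r["Y"] for r in group if r["T"] == t]
        if treated:
            # weight each stratum's conditional mean by P(Z=z)
            total += (len(group) / n) * (sum(treated) / len(treated))
    return total

# Toy data: Y is fully determined by Z, and Z also drives T,
# so T has no real effect despite a strong naive correlation.
records = (
    [{"Z": 0, "T": 0, "Y": 0}] * 4 + [{"Z": 0, "T": 1, "Y": 0}] * 1 +
    [{"Z": 1, "T": 0, "Y": 1}] * 1 + [{"Z": 1, "T": 1, "Y": 1}] * 4
)
naive = sum(r["Y"] for r in records if r["T"] == 1) / 5
print(round(naive, 2))                      # 0.8: naive comparison, biased by Z
print(round(adjusted_mean(records, 1), 2))  # 0.5: adjusted estimate
print(round(adjusted_mean(records, 0), 2))  # 0.5: same as treated, so no causal effect
```

The contrast between the naive and adjusted estimates is exactly what makes such methods useful for explainability: the adjustment exposes that the apparent effect of T is an artifact of the confounder.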
My research project is in the field of Explainable AI (XAI). As AI models become more powerful and widely adopted, providing explanations for the decisions they make becomes very important for engendering trust in AI. I’m interested in the nature of explanation...
My research involves applying neurosymbolic AI to argumentation, in order to make it more effective and safer. The use of structured knowledge representations, such as argumentation frameworks, in conjunction with neural models, such as large language models, combines...
My research lies at the intersection of model-based AI and Digital Twins theory, aiming to improve the safety and trustworthiness of socio-technical systems. I currently design learning and reasoning architectures for trading mechanisms operating in financial markets....
My main research interests are in studying and developing methods to make machine learning (ML) algorithms more robust to enable their application in safety critical domains. Specifically, I am interested in the verification of ML algorithms, which involves building...
My PhD project focuses on Algorithmic Game Theory, in particular Fair Division. The problem of Fair Division concerns dividing items among selfish, strategizing agents, with a focus both on definitions of fairness and on methods and algorithms....
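One standard fairness definition from this area is envy-freeness: no agent prefers another agent's bundle to their own. A minimal sketch of checking it for indivisible items (the valuations and allocation below are a made-up toy example, not from the project):

```python
# Envy-freeness check for an allocation of indivisible items.
def is_envy_free(valuations, allocation):
    """valuations[i][item] = agent i's value for that item;
    allocation[i] = set of items assigned to agent i."""
    def bundle_value(agent, bundle):
        return sum(valuations[agent][item] for item in bundle)
    n = len(allocation)
    for i in range(n):
        for j in range(n):
            # agent i envies agent j if i values j's bundle more than its own
            if bundle_value(i, allocation[j]) > bundle_value(i, allocation[i]):
                return False
    return True

# Two agents with different valuations over three items.
vals = [{"a": 3, "b": 1, "c": 2}, {"a": 1, "b": 4, "c": 1}]
alloc = [{"a", "c"}, {"b"}]
print(is_envy_free(vals, alloc))  # True: neither agent prefers the other's bundle
```

The check is quadratic in the number of agents; the hard algorithmic questions in Fair Division arise in computing such allocations, especially when agents may misreport their valuations.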
My PhD research focuses on creating a tool to adequately explain why deepfaked images are classified as deepfakes by DNNs, particularly images which are not easily identifiable by human sight as deepfakes. I plan to study fragments of previously classified images,...
My research is investigating how reinforcement learning, when used in human-robot interactions by robots, can perpetuate unfairness between different groups of people. I find this really important as society moves increasingly towards automating aspects of our day to...
My research project focuses on teaching AI to think logically. Current methods based on neural networks just produce an output based on an input. Knowledge emerges from patterns of neural activations, but there is no concrete representation of concepts. I am working...
My research involves the behavioural regulation of reinforcement learning agents so that they are better able to integrate with the societies in which they operate. My research also looks to develop interpretable methods so that policy makers and...
My PhD project focuses on the monitoring of AI systems, specifically neural networks, to detect out-of-distribution inputs and unexpected behaviour. The goal is to create lightweight and accurate monitoring algorithms that work alongside machine learning models at...
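One simple family of such runtime monitors records the range of feature values seen on in-distribution data and flags inputs that fall outside it. A minimal sketch under that assumption (the class name and two-dimensional feature vectors are hypothetical; in practice the monitor would observe a network's hidden-layer activations):

```python
# Box-style runtime monitor: learn per-feature min/max bounds from
# in-distribution feature vectors, then flag vectors outside the box.
class BoxMonitor:
    def __init__(self):
        self.lo, self.hi = None, None

    def fit(self, feature_vectors):
        dims = len(feature_vectors[0])
        self.lo = [min(v[d] for v in feature_vectors) for d in range(dims)]
        self.hi = [max(v[d] for v in feature_vectors) for d in range(dims)]

    def is_out_of_distribution(self, x):
        # flag if any feature falls outside its observed range
        return any(not (lo <= xi <= hi)
                   for xi, lo, hi in zip(x, self.lo, self.hi))

monitor = BoxMonitor()
monitor.fit([[0.1, 0.9], [0.3, 0.7], [0.2, 0.8]])
print(monitor.is_out_of_distribution([0.25, 0.75]))  # False: inside the box
print(monitor.is_out_of_distribution([1.5, 0.8]))    # True: first feature out of range
```

The appeal of this style of monitor is that it is lightweight: fitting and checking are linear in the number of features, so it can run alongside the model at inference time with negligible overhead.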