Muhammad Malik

My project aims to analyse existing robotics research in terms of bias and inclusivity, and to create new methodologies for inclusive and participatory robotics development, with a focus on promoting safety and ensuring that robot systems benefit all social groups...
Maksim Anisimov

Causal methods hold significant potential to improve the explainability and robustness of AI systems. These methods enable the discovery and estimation of cause-and-effect relationships, which is critical for human-like cognition. While research on causal techniques for supervised...
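To make the "estimation of cause and effect" concrete for readers outside the area, here is a minimal, self-contained sketch (not taken from this project) of backdoor adjustment on synthetic data: a confounder Z drives both a treatment T and an outcome Y, so a naive regression overstates the effect of T, while adjusting for Z recovers it.

```python
import numpy as np

# Synthetic data: confounder Z influences both treatment T and outcome Y.
rng = np.random.default_rng(0)
n = 50_000
z = rng.normal(size=n)
t = 0.8 * z + rng.normal(size=n)             # treatment depends on the confounder
y = 2.0 * t + 1.5 * z + rng.normal(size=n)   # true causal effect of T on Y is 2.0

# Naive estimate: regress Y on T alone (biased upwards by the confounder).
naive = np.linalg.lstsq(np.column_stack([t, np.ones(n)]), y, rcond=None)[0][0]

# Backdoor adjustment: also condition on Z to block the confounding path.
adjusted = np.linalg.lstsq(np.column_stack([t, z, np.ones(n)]), y, rcond=None)[0][0]

print(f"naive estimate:    {naive:.2f}")     # roughly 2.7, well above the true 2.0
print(f"adjusted estimate: {adjusted:.2f}")  # close to the true effect of 2.0
```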

Israel Shitta

My research project is in the field of Explainable AI (XAI). As AI models become more powerful and widely adopted, providing explanations for the decisions they make becomes increasingly important for engendering trust in AI. I’m interested in the nature of explanation...

Gabriel Freedman

My research involves applying neurosymbolic AI to argumentation, in order to make it safer and more effective. The use of structured knowledge representations, such as argumentation frameworks, in conjunction with neural models, such as large language models, combines...
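To illustrate the structured side of this pairing (a generic textbook example, not the framework developed in this project), the sketch below computes the grounded extension of a small Dung-style abstract argumentation framework: an argument is accepted once every one of its attackers has been defeated by an already-accepted argument.

```python
# Grounded extension of an abstract argumentation framework (illustrative only).
# Example: argument a attacks b, and b attacks c.
def grounded_extension(arguments, attacks):
    def attackers(x):
        return {u for (u, v) in attacks if v == x}

    accepted = set()
    while True:
        # Accept any argument all of whose attackers are already defeated,
        # i.e. attacked by some accepted argument.
        newly = {
            x for x in arguments - accepted
            if all(attackers(y) & accepted for y in attackers(x))
        }
        if not newly:
            return accepted
        accepted |= newly

print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))  # {'a', 'c'}
```

Here a is unattacked, so it is accepted immediately; a then defends c by defeating c's only attacker b.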
Andrei-Bogdan Balcau

My research lies at the intersection of model-based AI and Digital Twin theory, aiming to improve the safety and trustworthiness of socio-technical systems. I currently design learning and reasoning architectures for trading mechanisms operating in financial markets...
Atri Vivek Sharma

My main research interests are in studying and developing methods to make machine learning (ML) algorithms more robust, enabling their application in safety-critical domains. Specifically, I am interested in the verification of ML algorithms, which involves building...
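As a small, generic illustration of what verifying an ML model can involve (not this project's method), interval bound propagation pushes a bounded input perturbation through a toy ReLU network and returns guaranteed bounds on the output, from which robustness properties can be certified.

```python
import numpy as np

def interval_linear(lo, hi, W, b):
    # Interval image of an affine layer: split W into positive and negative
    # parts so that each output bound pairs with the correct input bound.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

# Toy two-layer ReLU network with fixed weights (illustrative only).
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.1, -0.2])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])

x, eps = np.array([0.5, 0.5]), 0.1                  # nominal input, perturbation radius
lo, hi = x - eps, x + eps

lo, hi = interval_linear(lo, hi, W1, b1)            # first affine layer
lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)   # ReLU is monotone, so bounds pass through
lo, hi = interval_linear(lo, hi, W2, b2)            # output layer

print(f"certified output range: [{lo[0]:.3f}, {hi[0]:.3f}]")  # [0.800, 1.600]
```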
Alexander Konev

My PhD project focuses on Algorithmic Game Theory, in particular Fair Division. The problem of Fair Division concerns dividing items among selfish, strategizing agents, with a focus both on definitions of fairness and on the methods and algorithms...
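For readers new to the area, the sketch below (a generic illustration, not an algorithm from this project) checks whether an allocation of indivisible items is envy-free under additive valuations: no agent values another agent's bundle strictly more than their own.

```python
# Envy-freeness check for an allocation of indivisible items (illustrative only).
# valuations[agent][item] is that agent's additive value for the item.
valuations = {
    "alice": {"book": 3, "pen": 1, "mug": 2},
    "bob":   {"book": 1, "pen": 4, "mug": 2},
}
allocation = {"alice": {"book", "mug"}, "bob": {"pen"}}

def bundle_value(agent, bundle):
    return sum(valuations[agent][item] for item in bundle)

def is_envy_free(allocation):
    # No agent may strictly prefer another agent's bundle to their own.
    return all(
        bundle_value(a, allocation[a]) >= bundle_value(a, allocation[b])
        for a in allocation for b in allocation if a != b
    )

print(is_envy_free(allocation))  # True: alice values her bundle at 5 vs bob's at 1; bob at 4 vs 3
```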
Aditi Ramaswamy

My PhD research focuses on creating a tool to adequately explain why DNNs classify deepfaked images as deepfakes, particularly images that the human eye cannot easily identify as deepfakes. I plan to study fragments of previously classified images...
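One generic way to attribute such a classification to image fragments (an illustrative sketch under assumed inputs, not the tool being developed here) is occlusion-based saliency: grey out each patch in turn and record how much the classifier's deepfake score drops.

```python
import numpy as np

def occlusion_saliency(image, predict_fn, patch=8):
    """Occlusion saliency: mask each patch and measure the drop in the
    classifier's deepfake score (illustrative sketch only)."""
    base = predict_fn(image)
    h, w = image.shape[:2]
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - h % patch, patch):
        for j in range(0, w - w % patch, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.5   # neutral grey patch
            heatmap[i // patch, j // patch] = base - predict_fn(occluded)
    return heatmap  # large values mark fragments that drove the 'deepfake' call

# Hypothetical stand-in classifier, purely for demonstration: the "deepfake
# score" is just the mean intensity of the image's central region.
def toy_deepfake_score(img):
    return float(img[12:20, 12:20].mean())

img = np.random.default_rng(0).random((32, 32))
print(occlusion_saliency(img, toy_deepfake_score).round(2))
```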