Aditi Ramaswamy

My PhD research focuses on creating a tool to explain why DNNs classify deepfaked images as deepfakes, particularly images that the human eye cannot easily identify as fakes. I plan to study fragments of previously classified images, using pattern recognition to detect signatures of generative AI.
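One common way to probe which image fragments drive a classifier's decision is occlusion-based attribution: mask a patch, re-run the classifier, and measure how much the "deepfake" score drops. The sketch below is purely illustrative of that general idea, not the tool described above; `toy_classifier` is a hypothetical stand-in that scores an image by the mean intensity of its top-left corner, standing in for a real DNN.

```python
import numpy as np

def occlusion_map(image, classify, patch=4):
    """Slide an occluding patch over the image and record how much the
    classifier's score drops; large drops mark influential regions."""
    base = classify(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # black out one patch
            heat[i // patch, j // patch] = base - classify(occluded)
    return heat

# Hypothetical stand-in for a trained deepfake classifier: it "detects"
# a generative signature planted in the top-left 4x4 corner.
def toy_classifier(img):
    return float(img[:4, :4].mean())

img = np.zeros((8, 8))
img[:4, :4] = 1.0  # plant the synthetic "signature"
heat = occlusion_map(img, toy_classifier)
print(heat)  # the top-left cell shows the largest score drop
```

Occluding the corner containing the planted signature erases the score entirely, so the heat map singles out exactly the fragment the classifier relied on, which is the kind of localized evidence an explanation tool can surface.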

After graduating from the University of Michigan in 2019, I worked as a software engineer for three years. During my time in industry, I developed an interest in AI, particularly how it can be used to solve societal issues and promote justice, something I have been passionate about for years. To better understand the social aspects of technology, I pursued a Master’s in Digital Humanities at King’s College London, where I learnt about the STAI CDT through one of my professors.

I was immediately drawn to the work of Dr Hana Chockler’s team on explainability, as my broader research interests involve using explainable computer vision to better society, from combating malicious Internet misinformation to potentially helping solve crimes.

Master’s Qualification: MA in Digital Humanities from King’s College London

Undergraduate Qualification: BS in Computer Science from the University of Michigan

Work Experience: Software Engineer at (2019-2022)