Huge congratulations to STAI CDT graduate, Dr Avinash Kori, on successfully defending his thesis, ‘Towards Human like Visual Reasoning: Theory and Applications of Object-centric Learning’.
We spoke to Avinash about his PhD thesis and recent successes, as well as his reflections on being part of the STAI CDT and his exciting next steps.
What is your thesis about?
My thesis is about enabling AI agents to perceive the world more like we do: as a collection of distinct, object-centric entities that interact. I explored multiple ways of learning to perceive these “objects” and their relationships, rather than just patterns, so that agents can build a deeper understanding of the world. This shift makes AI not only better at solving problems, but also more adaptable and trustworthy in how it reasons. I developed methods for structured representation learning that are identifiable and that help AI agents to generalize, reason, and adapt across different domains.
What has your experience been like as part of the STAI CDT?
Being part of the STAI CDT has been a great experience. It gave me a supportive community of peers, access to interdisciplinary training, and the chance to engage with cutting-edge research across Imperial and King’s. I especially valued the flexibility the CDT provided; it allowed me to pursue the research I am most passionate about, particularly in the fundamental areas of representation learning that are directly relevant to AI safety.
You’ve published multiple papers at top conferences this year, what were the most important takeaways or moments that stood out for you?
2025 was a really exciting year for sharing my research, and I took part in several major international conferences. At the 13th International Conference on Learning Representations (ICLR 2025), I presented several of my early-stage ideas in workshops: one on unifying object-centric learning with causal representation learning, and another addressing the partial visibility problem in object-centric learning. These sparked valuable feedback and cross-institute collaborations.
At the 24th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2025), I presented the FAX framework, a formal explainability framework for understanding deep convolutional image classifiers.
Finally, at the 42nd International Conference on Machine Learning (ICML 2025), we had the chance to present three different pieces of work, on causal discovery, object-centric learning under spatial ambiguities, and counterfactual reasoning with diffusion models. The biggest takeaway for me was seeing how often people connected their own research to the ideas I presented, which led to some really engaging and unexpected discussions and collaborations.

What are your future plans now that you’ve passed your PhD viva?
I’ve recently received an EPSRC PhD prize fellowship, which has allowed me to continue my research as a postdoctoral researcher at Imperial College London, deepening my work on object-centric and causal generative models for the next 12 months. After that, I’ll be applying for faculty and fellowship roles.
Looking back on your own journey, what tips would you share with those still working on their PhD or just about to begin?
Don’t compare your progress directly with others’; research often moves in bursts. Take the time to formalise your ideas before jumping into implementation. And try to write little and often; it helps you clarify your thinking along the way.
We can’t wait to hear more about what Avinash does over the next 12 months during his fellowship. You can find the papers mentioned above here:
Kori, A., Rago, A., & Toni, F. (2025). Free Argumentative Exchanges for Explaining Image Classifiers. In Proceedings of the 24th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2025).
Kori, A., Toni, F. & Glocker, B. (2025). Identifiable Object Representations under Spatial Ambiguities. In Proceedings of the 42nd International Conference on Machine Learning (ICML 2025).
Kori, A., Balsells-Rodas, C., Glocker, B., Li, Y. & Locatello, F. (2025). Causal Representation Learning and Inference via Mixture-Based Priors. In Proceedings of the Workshop on Deep Generative Models in Machine Learning: Theory, Principle and Efficacy (DeLTa Workshop) held at the 13th International Conference on Learning Representations (ICLR 2025).
Kori, A., Glocker, B., Schölkopf, B. & Locatello, F. (2025). Unifying Causal and Object-centric Representation Learning allows Causal Composition. In Proceedings of the Workshop on Deep Generative Models in Machine Learning: Theory, Principle and Efficacy (DeLTa Workshop) held at the 13th International Conference on Learning Representations (ICLR 2025).
