AI Existential Safety

Dr. David Krueger

18 July 2023

9:30 am - 12:15 pm

This event is part of the Safe and Trusted AI Summer School 2023. The Summer School is core for STAI CDT PhD students and open to a limited number of other students, by invitation.

I’ll give an overview of AI existential safety (“x-safety”) and the kind of work I view as useful in preventing out-of-control AI from leading to human extinction.

I will motivate AI x-safety concerns and discuss big-picture strategic considerations and AI governance.

I will introduce the field of AI Alignment, discuss the role it could play in AI x-safety, and cover recent work and problems in this area.

I will also emphasize the role other areas of technical research could play in AI x-safety, for instance, work supporting AI governance.

Please see a video of Dr. David Krueger’s talk below:

Please find Dr. David Krueger’s slides here.

About the speaker

Dr. David Krueger is an Assistant Professor in Machine Learning at the University of Cambridge and a member of Cambridge’s Computational and Biological Learning Lab (CBL) and Machine Learning Group (MLG). David’s research group focuses on Deep Learning, AI Alignment, and AI safety. He is broadly interested in work, including in areas outside of Machine Learning such as AI governance, that could reduce the risk of human extinction (“x-risk”) resulting from out-of-control AI systems. David completed his graduate studies at the University of Montreal and Mila, where he was supervised by Roland Memisevic, Yoshua Bengio, and Aaron Courville.