I’ll give an overview of AI existential safety (“x-safety”) and the kind of work I view as useful in preventing out-of-control AI from leading to human extinction.
I will motivate AI x-safety concerns, and discuss big-picture strategic considerations and AI governance.
I will introduce the field of AI Alignment, discuss the role it could play in AI x-safety, and cover recent work and problems in this area.
But I will also emphasize the role other areas of technical research could play in AI x-safety, for instance, work supporting AI governance.
Please see a video of Dr. David Krueger’s talk below:
Please find Dr. David Krueger’s slides here.