We often assume that each agent has a well-defined identity, well-defined preferences over outcomes, and well-defined beliefs about the world. When designing agents, however, we in fact need to specify where the boundaries between one agent and another in the system lie, what objective functions these agents aim to maximize, and, to some extent, even what belief-formation processes they use. What is the right way to do so? As more and more AI systems are deployed in the world, this question becomes increasingly important. In this tutorial, I will show how it can be approached from the perspectives of decision theory, game theory, and social choice theory, as well as the algorithmic and computational aspects of these fields.
(No prior background required.)
Please find Prof. Vincent Conitzer’s talk below: