Humans are able to parse arguments and use them to update their beliefs about the world. This is an important skill for any agent, and aspects of it have long been studied by the AI community. Much of argumentation theory and belief revision focuses on how an AI agent might deal with contradictory data. Our setting is slightly different: we assume there is a probability distribution over knowledge, and that an agent has access to an approximation of this distribution. The question we seek to answer is then: how might the agent improve its approximation? In this talk I will explain this question in more detail and show how the answer could be applied to AI safety via debate.
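As a minimal sketch of the setting (this is an illustrative assumption, not the talk's actual formalism): suppose the "true" distribution over knowledge ranges over a small finite set of worlds, and the agent starts from a uniform approximation. One simple way its approximation can improve is Bayesian-style updating on evidence drawn from the true distribution, with Kullback-Leibler divergence measuring approximation error. The three-world setup, the smoothing scheme, and the choice of KL are all hypothetical choices for the example.

```python
import numpy as np

# Illustrative sketch only: an agent holds an approximation q of a true
# distribution p over three hypothetical "worlds", and refines q by
# counting evidence sampled from p (a Laplace-smoothed posterior mean).

rng = np.random.default_rng(0)

p = np.array([0.7, 0.2, 0.1])   # true (unknown to the agent) distribution
uniform = np.ones(3) / 3        # agent's initial approximation

def kl(a, b):
    """KL divergence D(a || b): how badly b approximates a."""
    return float(np.sum(a * np.log(a / b)))

counts = np.zeros(3)
for _ in range(2000):
    world = rng.choice(3, p=p)              # evidence drawn from the true p
    counts[world] += 1
q = (counts + 1) / (counts.sum() + 3)       # smoothed estimate of p

# The agent's approximation error should have shrunk relative to uniform.
print(kl(p, q) < kl(p, uniform))
```

Under this toy model, "improving the approximation" just means driving the KL divergence down as evidence accumulates; the talk's question is how an agent might achieve something analogous when the evidence arrives as arguments rather than i.i.d. samples.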