Formal Reasoning about Golog Programs

Constructing a world model is a fundamental part of model-based AI, e.g. planning. Usually, such a model is constructed by a human modeller, and it should capture the modeller’s intuitive understanding of the world dynamics and of how an intelligent agent should behave in the world, e.g. what its goal is and, in some cases, how it should or should not achieve it. Since these models are constructed by humans, they can be error-prone. For example, a model might fail to capture the modeller’s intuition due to a modelling bug. The modeller’s intuition could also itself be flawed: the model might not capture the world dynamics “correctly”, or the way a goal is specified might have unintended consequences. All these types of errors limit the applicability of model-based AI in safety-critical applications.
Although verifying that a model correctly captures the world dynamics or the modeller’s intuition is in many cases impossible, employing formal methods to prove that the model satisfies expected properties can increase the model’s trustworthiness. Previous authors have used formal methods, chiefly automated methods like model-checking, to reason about models [5, 9]. However, these automated methods reach their limits with richer planning formalisms, where proving most relevant properties is undecidable. Examples are proving plan validity for planning with continuous change, which amounts to deciding the existence of solutions to differential equations [15], and proving the correctness of Golog programs, limited versions of which already amount to undecidable theorem-proving problems [1] [9, Chapter 5].
In this project you would investigate the use of interactive theorem provers (ITPs) to formally reason about AI models. To prove a fact in an ITP, the user provides the high-level steps of a proof, and the ITP fills in the details, down to the level of axioms, culminating in a formal proof. That ITPs can exploit human expertise is a source of their strength in many applications, e.g. when the properties to be proved are undecidable. For example, ITPs are the most successful approach to implementing verified, realistic software systems [11, 10, 7], where fully automated methods stand no chance. In the area of AI modelling, Golog [14] programs are such an application, where human-aided reasoning is necessary to prove interesting properties.
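To give a flavour of this interaction, here is a minimal sketch in Lean 4 (assuming the Mathlib library for the ring tactic; the definition and lemma are illustrative examples, not part of the project). The user supplies the high-level idea, induction followed by rewriting with the induction hypothesis, and the prover’s automation discharges the remaining algebra.

import Mathlib.Tactic.Ring  -- assumed: Mathlib is available, providing `ring`

-- 0 + 1 + ... + n, defined by recursion on n (illustrative definition)
def sumTo : Nat → Nat
  | 0     => 0
  | n + 1 => (n + 1) + sumTo n

-- Gauss' formula: the user contributes the proof structure, and the
-- automation fills in the low-level details.
theorem two_mul_sumTo (n : Nat) : 2 * sumTo n = n * (n + 1) := by
  induction n with
  | zero => rfl
  | succ n ih =>
    simp only [sumTo]      -- unfold one step of the recursion
    rw [Nat.mul_add, ih]   -- distribute, then apply the induction hypothesis
    ring                   -- the remaining goal is routine algebra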
Golog is a programming language for describing the behaviour of intelligent agents. ITPs have been used successfully to reason about program correctness in a multitude of programming languages [13, 8, 4], so it seems promising to use them to reason about Golog programs. In this work you would use an ITP to implement a Hoare logic for reasoning about Golog programs, perhaps building on previous work [12], and implement tooling in the ITP to ease reasoning about Golog programs (a rough sketch of what such a formalisation might look like is given after this paragraph). You could also extend that work to reason about Golog programs in (continuous) probabilistic environments or with epistemic state information, both of which are active research areas, building on existing program logics for reasoning about quantitative properties of programs [6, 2, 3]. Outcomes of this research would include practical tools for formally reasoning about Golog programs, making Golog more viable for safety-critical applications. Theoretical outcomes would include a comparison of the ITP’s logic, e.g. higher-order logic, with the situation calculus, in which Golog semantics are usually modelled, as frameworks for formalising the semantics of Golog. This project would also be interesting from the perspective of programming language theory, since Golog would be the first fifth-generation language to be formalised in an ITP.
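As a rough illustration of what such a formalisation might look like, here is a hypothetical Lean 4 sketch: a deep embedding of a heavily simplified, Golog-inspired language over an abstract state type, a big-step semantics for it, and partial-correctness Hoare triples defined on top, with the usual sequencing rule proved from the semantics. All names here are invented for illustration; actual Golog semantics are defined via the situation calculus and are considerably richer.

-- Hypothetical sketch: a deep embedding of a heavily simplified,
-- Golog-inspired language over an abstract state type S.
inductive Prog (S : Type) where
  | act    : (S → S) → Prog S          -- primitive action as a state update
  | test   : (S → Prop) → Prog S       -- test φ?
  | seq    : Prog S → Prog S → Prog S  -- sequential composition
  | choice : Prog S → Prog S → Prog S  -- nondeterministic branching
  | star   : Prog S → Prog S           -- nondeterministic iteration

-- Big-step semantics: Exec p s t holds iff p can take state s to state t.
inductive Exec {S : Type} : Prog S → S → S → Prop where
  | act  (f : S → S) (s : S) : Exec (Prog.act f) s (f s)
  | test (φ : S → Prop) (s : S) (h : φ s) : Exec (Prog.test φ) s s
  | seq  {p q : Prog S} {s u t : S} (hp : Exec p s u) (hq : Exec q u t) :
      Exec (Prog.seq p q) s t
  | choiceL {p q : Prog S} {s t : S} (h : Exec p s t) : Exec (Prog.choice p q) s t
  | choiceR {p q : Prog S} {s t : S} (h : Exec q s t) : Exec (Prog.choice p q) s t
  | starRefl (p : Prog S) (s : S) : Exec (Prog.star p) s s
  | starStep {p : Prog S} {s u t : S}
      (h₁ : Exec p s u) (h₂ : Exec (Prog.star p) u t) : Exec (Prog.star p) s t

-- Partial correctness: every terminating run from a P-state ends in a Q-state.
def Hoare {S : Type} (P : S → Prop) (p : Prog S) (Q : S → Prop) : Prop :=
  ∀ s t, P s → Exec p s t → Q t

-- The usual Hoare rule for sequencing, proved directly from the semantics.
theorem hoare_seq {S : Type} {P Q R : S → Prop} {p q : Prog S}
    (h₁ : Hoare P p Q) (h₂ : Hoare Q q R) : Hoare P (Prog.seq p q) R := by
  intro s t hP hrun
  cases hrun with
  | seq u hp hq => exact h₂ u t (h₁ s u hP hp) hq

Compared with encoding Golog through the situation calculus, such a direct embedding trades faithfulness for ease of mechanisation; examining that trade-off is part of the theoretical side of the project.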
References
[1] Giuseppe De Giacomo, Eugenia Ternovska, and Ray Reiter. Non-terminating processes in the situation calculus. Ann. Math. Artif. Intell., 2020.
[2] Kevin Batz, Benjamin Lucien Kaminski, Joost-Pieter Katoen, Christoph Matheja, and Thomas Noll. Quantitative separation logic: a logic for reasoning about probabilistic pointer programs. In POPL, 2019.
[3] Maximilian P. L. Haslbeck and Tobias Nipkow. Hoare logics for time bounds – A study in meta theory. In TACAS, 2018.
[4] Ralf Jung, Jacques-Henri Jourdan, Robbert Krebbers, and Derek Dreyer. RustBelt: securing the foundations of the Rust programming language. In POPL, 2018.
[5] Alessandro Cimatti, Andrea Micheli, and Marco Roveri. Validating domains and plans for temporal planning via encoding into infinite-state linear temporal logic. In AAAI, 2017.
[6] Benjamin Lucien Kaminski, Joost-Pieter Katoen, Christoph Matheja, and Federico Olmedo. Weakest precondition reasoning for expected run-times of probabilistic programs. In ESOP, 2016.
[7] Sudeep Kanav, Peter Lammich, and Andrei Popescu. A conference management system with verified document confidentiality. In CAV, 2014.
[8] Ramana Kumar, Magnus O. Myreen, Michael Norrish, and Scott Owens. CakeML: a verified implementation of ML. In POPL, 2014.
[9] Jens Claßen. Planning and verification in the agent language Golog. PhD thesis, RWTH Aachen University, 2013.
[10] Gerwin Klein, Kevin Elphinstone, Gernot Heiser, June Andronick, David Cock, Philip Derrin, Dhammika Elkaduwe, Kai Engelhardt, Rafal Kolanski, Michael Norrish, et al. seL4: Formal verification of an OS kernel. In SOSP, 2009.
[11] Xavier Leroy. Formal verification of a realistic compiler. Communications of the ACM, 2009.
[12] Yongmei Liu. A Hoare-style proof system for robot programs. In AAAI, 2002.
[13] Michael Norrish. C formalised in HOL. Technical report, University of Cambridge, Computer Laboratory, 1998.
[14] Hector J. Levesque, Raymond Reiter, Yves Lespérance, Fangzhen Lin, and Richard B. Scherl. GOLOG: A logic programming language for dynamic domains. J. Log. Program., 1997.
[15] Stanisław Jaśkowski. Example of a class of systems of ordinary differential equations having no decision method for existence problems. 1954.
Project ID
Supervisor
Mohammad Abdulaziz
mohammad.abdulaziz@kcl.ac.uk
https://www.kcl.ac.uk/people/mohammad-abdulaziz