Neuro-Symbolic Policy Learning and Representation for Interpretable and Formally-Verifiable Reinforcement Learning

The growing societal impact of AI-based systems has brought with it a set of risks and concerns [1, 2].
Indeed, unintended and harmful behaviours may emerge from the application of machine learning (ML) algorithms, including negative side effects, reward hacking, and unsafe exploration [3].
Hence, the safe and trusted deployment of AI systems calls for their verification and certification. In particular, verifiable artificial intelligence (VAI) has been described as “the goal of designing AI-based systems that have strong, ideally provable, assurances of correctness with respect to mathematically-specified requirements.” [4]
Verifiable AI can be achieved by applying formal methods, which have already proved successful in the specification, design, and verification of reactive, distributed, and multi-agent systems.
However, the application of formal methods to AI raises a number of challenges, from modelling continuous behaviours and formalising quantitative requirements, to handling high-dimensional input and parameter spaces, and supporting online adaptation and evolution.

Recently, there has been considerable interest in reinforcement learning (RL) in neuro-symbolic techniques for policy synthesis and representation [5]. These techniques combine planning-style control-flow instructions with basic atomic actions, which are learnt and represented through (deep) neural networks [6]. This combination couples a high-level, symbolic representation of the policies learnt by agents with the efficiency of deep RL techniques, thereby improving the interpretability and transparency of the agent's behaviour. However, the enhanced transparency of policies often comes at the price of a degradation in performance.
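
To make the idea concrete, here is a minimal sketch of such a policy, assuming PyTorch; the option names (approach_goal, avoid_obstacle) and the guard condition are hypothetical and purely illustrative, not the framework of [6]. The high-level branch is an explicit, human-readable rule amenable to inspection and verification, while the low-level behaviour inside each branch is a learnt neural network.

import torch
import torch.nn as nn

class NeuralOption(nn.Module):
    """A low-level neural controller mapping observations to continuous actions."""
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, act_dim), nn.Tanh(),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

class NeuroSymbolicPolicy:
    """Symbolic, planning-style control flow dispatching to learnt neural options."""
    def __init__(self, obs_dim: int, act_dim: int):
        self.options = {
            "approach_goal": NeuralOption(obs_dim, act_dim),
            "avoid_obstacle": NeuralOption(obs_dim, act_dim),
        }

    def act(self, obs: torch.Tensor, obstacle_near: bool) -> torch.Tensor:
        # The branching rule below is the symbolic, interpretable part of the policy;
        # only the control within each branch is a (black-box) neural network.
        name = "avoid_obstacle" if obstacle_near else "approach_goal"
        with torch.no_grad():
            return self.options[name](obs)

policy = NeuroSymbolicPolicy(obs_dim=4, act_dim=2)
action = policy.act(torch.zeros(4), obstacle_near=False)
print(action)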

You will contribute towards the development of neuro-symbolic methods for policy learning in reinforcement learning. In particular, the PhD project focuses on the automatic extraction of plans from policies generated through neural networks, as well as the evaluation of such plans. You will develop metrics to evaluate the transparency and interpretability of the generated neuro-symbolic policies, as opposed to purely neural policies, in order to assess the benefits of the proposed hybrid approach (an illustrative example of such an evaluation is sketched after this paragraph).
This neuro-symbolic framework might then be adapted to settings with multiple agents, acting either cooperatively or competitively, possibly with only partial observability of the current state of the environment.
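
As an illustration of the kind of evaluation mentioned above, the sketch below computes a simple fidelity score: the fraction of sampled states on which an extracted symbolic plan selects the same action as the original neural policy. The toy policies, the two-dimensional state space, and the metric itself are hypothetical stand-ins, not the metrics to be developed in the project.

import numpy as np

def fidelity(neural_policy, symbolic_plan, states):
    """Fraction of sampled states on which the extracted plan agrees with the neural policy."""
    matches = [neural_policy(s) == symbolic_plan(s) for s in states]
    return float(np.mean(matches))

# Stand-ins: a thresholded "neural" policy and an extracted decision rule over a toy state space.
neural_policy = lambda s: 0 if s[0] < 0.5 else 1
symbolic_plan = lambda s: 0 if s[0] < 0.4 else 1

states = [np.random.rand(2) for _ in range(1000)]
print(f"plan fidelity: {fidelity(neural_policy, symbolic_plan, states):.2f}")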

[1] S. Russell, T. Dietterich, E. Horvitz, B. Selman, F. Rossi, D. Hassabis, S. Legg, M. Suleyman, D. George, and S. Phoenix. Letter to the Editor: Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter. AI Magazine, 36:3–4, 2015.

[2] T. G. Dietterich and E. J. Horvitz. Rise of Concerns about AI: Reflections and Directions. Commun. ACM, 58:38–40, 2015.

[3] D. Amodei, C. Olah, J. Steinhardt, P. F. Christiano, J. Schulman, and D. Mané. Concrete Problems in AI Safety. CoRR, abs/1606.06565, 2016.

[4] S. A. Seshia and D. Sadigh. Towards Verified Artificial Intelligence. CoRR, abs/1606.08514, 2016.

[5] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998. ISBN 0-262-19398-1.

[6] G. Anderson, A. Verma, I. Dillig, and S. Chaudhuri. Neurosymbolic Reinforcement Learning with Formally Verified Exploration. In NeurIPS, 2020.

Project ID

STAI-CDT-2021-IC-24

Supervisor

Francesco Belardinelli (https://www.doc.ic.ac.uk/~fbelard/)

Category

AI Planning, Logic, Verification