Verification of Neuro-Symbolic Multi-Agent Systems in Uncertain Environments

Neuro-symbolic systems are an exciting area of research, combining the power of machine learning with the rigour of symbolic reasoning. Neural systems have shown great promise in a wide range of applications, from robotics and autonomous systems to natural language processing and decision-making. However, verifying the correctness of these systems remains a significant challenge: while neural networks excel at learning patterns from data, they are difficult to interpret and analyse; symbolic reasoning, on the other hand, is transparent and interpretable, but hard to scale to complex, non-linear, high-dimensional systems.

In this project we are interested in the analysis of multi-agent neuro-symbolic systems (NSSs), that is, systems composed of multiple agents interacting with each other and with the environment. The behaviour of such agents is determined by a combination of mechanistic dynamics (e.g., laws of motion) and machine learning components, used, for instance, for perception and control. Such systems are relevant in many applications, including multi-agent (deep) reinforcement learning [1], swarm robotics, and traffic management.
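
To make this picture concrete, each agent in such a system closes a loop between a mechanistic model and a learned component. The following minimal Python sketch is purely illustrative and assumes a toy one-dimensional kinematic model with a one-layer stand-in for a trained controller; the dynamics, weights, and names are our own assumptions, not part of the project description.

```python
# Minimal sketch of one agent's closed loop in a neuro-symbolic system:
# mechanistic dynamics (symbolic part) + neural controller (learned part).
import numpy as np

def dynamics(state, action, dt=0.1):
    """Mechanistic part: position/velocity updated by simple laws of motion."""
    pos, vel = state
    return np.array([pos + dt * vel, vel + dt * action])

def neural_controller(observation, W, b):
    """Learned part: a one-layer surrogate for a trained control network."""
    return np.tanh(W @ observation + b)[0]

state = np.array([0.0, 1.0])                      # initial position, velocity
W, b = np.array([[0.5, -1.0]]), np.array([0.0])   # placeholder trained weights
for _ in range(10):
    action = neural_controller(state, W, b)       # perception/control via NN
    state = dynamics(state, action)               # mechanistic state update
print("state after 10 steps:", state)
```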

We aim to develop verification algorithms for multi-agent NSSs that provide formal guarantees about the satisfaction of requirements of interest, such as reach-avoid properties, multi-stage tasks, and other temporal properties. Formal reasoning about these systems is, however, computationally challenging, owing to the presence of (complex) neural network models, multiple agents, uncertain environments, and sequential decision-making over multiple time steps.
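
For concreteness, one common way to formalise a probabilistic reach-avoid requirement is shown below; the notation is ours and not from the project description: x_0, x_1, ... is the stochastic state trajectory of the closed-loop system under joint policy \pi, T a time horizon, and \delta a tolerance.

\[
\Pr^{\pi}_{x_0}\big[\, \exists\, t \le T:\; x_t \in \mathit{Goal} \;\wedge\; \forall\, t' < t:\; x_{t'} \notin \mathit{Avoid} \,\big] \;\ge\; 1 - \delta
\]

Verification then amounts to proving that such a bound holds for all admissible environment behaviours, rather than merely estimating it empirically.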

Considerable progress has been made in the verification of one-step reachability for neural networks (i.e., input-output specifications), including probabilistic deep models, using techniques like bound propagation [2-4], constraint solving [5-7], and abstract interpretation [8]. These techniques have recently been extended to the verification of single-agent sequential decision-making [9-13]. The multi-agent case, however, remains largely unexplored, with a few notable exceptions [14-17].
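
To give a flavour of the first family of techniques, the sketch below implements interval bound propagation in the spirit of [2] on a toy two-layer ReLU network: an axis-aligned box over the inputs is pushed through each layer to obtain sound bounds on the outputs. The network weights and input box are illustrative placeholders, not from any benchmark.

```python
# Minimal sketch of interval bound propagation (IBP) through a ReLU network.
import numpy as np

def ibp_affine(lower, upper, W, b):
    """Propagate an axis-aligned box through an affine layer W @ x + b."""
    centre = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    new_centre = W @ centre + b
    new_radius = np.abs(W) @ radius   # worst-case spread of the box
    return new_centre - new_radius, new_centre + new_radius

def ibp_relu(lower, upper):
    """ReLU is monotone, so interval bounds pass through elementwise."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)   # placeholder weights
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

# Input specification: x in [-0.1, 0.1]^2
l, u = ibp_affine(np.full(2, -0.1), np.full(2, 0.1), W1, b1)
l, u = ibp_relu(l, u)
l, u = ibp_affine(l, u, W2, b2)
print(f"output guaranteed to lie in [{l[0]:.3f}, {u[0]:.3f}]")
```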

This project will focus on developing new methods to verify the behaviour of multi-agent NSSs in uncertain environments, where uncertainty can be treated probabilistically or non-deterministically. We envision that the solution methods will build on and improve existing verification techniques for single-agent systems, possibly investigating suitable abstractions [12,15,16] for dimensionality reduction, as well as combinations with data-driven methods [18-20] to obtain probabilistic guarantees in the most complex cases, where purely symbolic approaches fail.
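
As an illustration of the data-driven route, split conformal prediction (used, for example, in [19,20]) can turn i.i.d. simulations of the closed-loop system into finite-sample probabilistic guarantees. The sketch below assumes a hypothetical scalar "violation score" per rollout (non-positive meaning the rollout was safe); the random sampler is a stand-in for actually simulating the multi-agent NSS under environment uncertainty.

```python
# Hedged sketch: finite-sample probabilistic guarantee via split conformal
# prediction over sampled rollouts of the closed-loop system.
import numpy as np

def conformal_upper_bound(scores, alpha):
    """Upper bound holding for a fresh score with probability >= 1 - alpha,
    assuming the calibration scores and the fresh score are exchangeable."""
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))  # conformal quantile index
    assert k <= n, "need more calibration samples for this alpha"
    return np.sort(scores)[k - 1]

# Placeholder for "amount by which a sampled trajectory violates safety";
# in practice this would come from simulating the multi-agent NSS.
rng = np.random.default_rng(1)
calibration_scores = rng.normal(loc=-1.0, scale=0.5, size=1000)

bound = conformal_upper_bound(calibration_scores, alpha=0.05)
print(f"with prob. >= 0.95, a fresh rollout's violation score <= {bound:.3f}")
if bound <= 0:
    print("=> probabilistic safety certificate at level 0.95")
```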

The research project will contribute to the development of trustworthy and reliable multi-agent systems, with potential impact on many applications. The proposed techniques will be evaluated on standard multi-agent RL benchmarks [21,22] and on real-world scenarios from the REXASI-PRO EU project [23], which focuses on the safe navigation of autonomous wheelchairs in crowded environments for people with reduced mobility.

[1] Hernandez-Leal, P., Kartal, B., & Taylor, M. E. (2019). A survey and critique of multiagent deep reinforcement learning. Autonomous Agents and Multi-Agent Systems, 33(6), 750-797.
[2] Gowal, S., et al. (2018). On the effectiveness of interval bound propagation for training verifiably robust models. arXiv:1810.12715.
[3] Wang, S., Pei, K., Whitehouse, J., Yang, J., & Jana, S. (2018). Efficient formal safety analysis of neural networks. In NeurIPS.
[4] Wicker, M., Laurenti, L., Patane, A., & Kwiatkowska, M. (2020). Probabilistic safety for Bayesian neural networks. In UAI.
[5] Ehlers, R. (2017). Formal verification of piece-wise linear feed-forward neural networks. In ATVA.
[6] Katz, G., Huang, D. A., Ibeling, D., Julian, K., Lazarus, C., Lim, R., Shah, P., et al. (2019). The Marabou framework for verification and analysis of deep neural networks. In CAV.
[7] Botoeva, E., Kouvaros, P., Kronqvist, J., Lomuscio, A., & Misener, R. (2020). Efficient verification of ReLU-based neural networks via dependency analysis. In AAAI.
[8] Singh, G., Gehr, T., Püschel, M., & Vechev, M. (2019). An abstract domain for certifying neural networks. Proceedings of the ACM on Programming Languages (POPL).
[9] Tran, H. D., Yang, X., Manzanas Lopez, D., Musau, P., Nguyen, L. V., Xiang, W., Bak, S., & Johnson, T. T. (2020). NNV: The neural network verification tool for deep neural networks and learning-enabled cyber-physical systems. In CAV.
[10] Ivanov, R., Carpenter, T., Weimer, J., Alur, R., Pappas, G., & Lee, I. (2021). Verisig 2.0: Verification of neural network controllers using Taylor model preconditioning. In CAV.
[11] Wicker, M., Laurenti, L., Patane, A., Paoletti, N., Abate, A., & Kwiatkowska, M. (2021). Certification of iterative predictions in Bayesian neural networks. In UAI.
[12] Bacci, E., & Parker, D. (2020). Probabilistic guarantees for safe deep reinforcement learning. In FORMATS.
[13] Hosseini, M., & Lomuscio, A. (2023). Bounded and unbounded verification of RNN-based agents in non-deterministic environments. In AAMAS.
[14] Akintunde, M. E., Botoeva, E., Kouvaros, P., & Lomuscio, A. (2020). Verifying strategic abilities of neural-symbolic multi-agent systems. In KR.
[15] Mqirmi, P. E., Belardinelli, F., & León, B. G. (2021). An abstraction-based method to check multi-agent deep reinforcement-learning behaviors. In AAMAS.
[16] Riley, J., Calinescu, R., Paterson, C., Kudenko, D., & Banks, A. (2021). Reinforcement learning with quantitative verification for assured multi-agent policies. In ICAART.
[17] Yan, R., Santos, G., Norman, G., Parker, D., & Kwiatkowska, M. (2022). Strategy synthesis for zero-sum neuro-symbolic concurrent stochastic games. arXiv:2202.06255.
[18] Cubuktepe, M., Jansen, N., Junges, S., Katoen, J.-P., & Topcu, U. (2020). Scenario-based verification of uncertain MDPs. In TACAS.
[19] Bortolussi, L., Cairoli, F., & Paoletti, N. (2023). Conformal quantitative predictive monitoring of STL requirements for stochastic processes. In HSCC.
[20] Dixit, A., et al. (2022). Adaptive conformal prediction for motion planning among dynamic agents. arXiv:2212.00278.
[21] Mordatch, I., & Abbeel, P. (2018). Emergence of grounded compositional language in multi-agent populations. In AAAI.
[22] Lanctot, M., et al. (2019). OpenSpiel: A framework for reinforcement learning in games. arXiv:1908.09453.
[23] REliable & eXplAinable Swarm Intelligence for People with Reduced mObility (REXASI-PRO). https://cordis.europa.eu/project/id/101070028.

Project ID

STAI-CDT-2023-KCL-28

Supervisor

Nicola Paoletti (https://nicolapaoletti.com)

Category

Multi-agent systems, Verification