Neuro-symbolic systems are an exciting research area that combines the power of machine learning with the rigour of symbolic reasoning. Neural systems have shown great promise in a wide range of applications, from robotics and autonomous systems to natural language processing and decision-making. However, verifying the correctness of these systems remains a significant challenge: while neural networks excel at learning patterns from data, they are difficult to interpret and analyse. Symbolic reasoning, on the other hand, is transparent and interpretable, but hard to scale to complex, non-linear, and high-dimensional systems.
In this project we are interested in the analysis of multi-agent neuro-symbolic systems (NSS), that is, systems comprising multiple agents that interact with each other and with the environment. The behaviour of such agents is determined by a combination of mechanistic dynamics (e.g., laws of motion) and machine learning components, used, for instance, for perception and control; a minimal sketch of one such system is given below. Such systems are relevant in many applications, including multi-agent (deep) reinforcement learning [1], swarm robotics, and traffic management.
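As a purely illustrative sketch (the agents, 1-D double-integrator dynamics, and untrained random policies below are hypothetical choices, not part of the project), one step of a multi-agent NSS couples a learned controller with mechanistic dynamics as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

def policy(params, obs):
    """Tiny ReLU network standing in for a learned controller."""
    (W1, b1), (w2, b2) = params
    return w2 @ np.maximum(W1 @ obs + b1, 0.0) + b2   # scalar acceleration

def step(states, policies, dt=0.1):
    """One synchronous step of a multi-agent NSS on a 1-D lane: each agent
    observes the joint state, its neural policy chooses an acceleration, and
    double-integrator dynamics update its (position, velocity) pair."""
    obs = states.flatten()                       # joint observation, for simplicity
    nxt = np.empty_like(states)
    for i, (pos, vel) in enumerate(states):
        acc = policy(policies[i], obs)           # ML component (control)
        nxt[i] = (pos + dt * vel, vel + dt * acc)  # mechanistic dynamics
    return nxt

# Two agents with random (untrained) policies, purely for illustration.
n, hidden = 2, 8
policies = [((rng.normal(size=(hidden, 2 * n)), rng.normal(size=hidden)),
             (rng.normal(size=hidden), 0.0)) for _ in range(n)]
states = np.array([[0.0, 1.0], [5.0, -1.0]])     # rows: (position, velocity)
states = step(states, policies)
```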
We aim to develop verification algorithms for multi-agent NSSs that provide formal guarantees about the satisfaction of requirements of interest, such as reach-avoid specifications, multi-stage tasks, or other temporal properties. Formal reasoning about these systems is, however, computationally challenging, owing to the presence of complex neural network models, multiple agents, uncertain environments, and sequential decision-making over multiple time steps.
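For concreteness, one standard way to formalise a probabilistic reach-avoid requirement over a finite horizon $T$, with goal region $G$ and unsafe region $A$, is as a lower bound on the probability that the closed-loop trajectory $x_0, \dots, x_T$ reaches the goal without ever having entered the unsafe set:

```latex
\Pr\Big[\, \exists\, t \le T : x_t \in G \;\wedge\; \forall\, t' \le t : x_{t'} \notin A \,\Big] \;\ge\; 1 - \delta
```

where $\delta$ is an acceptable failure probability; in the non-deterministic reading, the property inside the brackets must instead hold for all admissible environment behaviours.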
Considerable progress has been made on the verification of one-step reachability for neural networks (i.e., input-output specifications), including probabilistic deep models, using techniques such as bound propagation [2-4], constraint solving [5-7], and abstract interpretation [8]. These techniques have recently been extended to the verification of single-agent sequential decision-making [9-13]. The multi-agent case, however, remains largely unexplored, with only a few exceptions [14-17].
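To illustrate the flavour of bound-propagation methods, the sketch below implements plain interval bound propagation (IBP) through a feed-forward ReLU network; this is a simplified textbook variant, not the (tighter) algorithms of the tools cited in [2-4]:

```python
import numpy as np

def interval_bounds(layers, lo, hi):
    """Propagate an input box [lo, hi] through affine + ReLU layers.
    Returns sound element-wise bounds on the network output (IBP)."""
    for i, (W, b) in enumerate(layers):
        centre, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
        centre = W @ centre + b            # affine layer maps the box centre...
        radius = np.abs(W) @ radius        # ...and |W| bounds its half-width
        lo, hi = centre - radius, centre + radius
        if i < len(layers) - 1:            # ReLU on all hidden layers
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Example: bound the outputs of a random 2-layer network over a small input box.
rng = np.random.default_rng(1)
layers = [(rng.normal(size=(16, 4)), rng.normal(size=16)),
          (rng.normal(size=(2, 16)), rng.normal(size=2))]
lo, hi = interval_bounds(layers, np.full(4, -0.1), np.full(4, 0.1))
print("output box:", lo, hi)  # the property holds if this box avoids the unsafe set
```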
This project will focus on developing new methods to verify the behaviour of multi-agent NSSs in uncertain environments, where uncertainty can be treated probabilistically or non-deterministically. We envision that the solution methods will build on and improve existing verification techniques for single-agent systems, possibly investigating suitable abstractions [12,15,16] for dimensionality reduction, as well as combinations with data-driven methods [18-20] that yield probabilistic guarantees in the most complex cases, where purely symbolic approaches fail.
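As a sketch of the data-driven fallback (one of several possible statistical schemes, not necessarily the one used in [18-20]), a probabilistic guarantee can be extracted from Monte Carlo rollouts via a one-sided Hoeffding bound; here `rollout` and `check_spec` are hypothetical placeholders for a system simulator and a specification checker:

```python
import numpy as np

def lower_bound_satisfaction(rollout, check_spec, n_samples=10_000, delta=1e-3):
    """Estimate the probability that a specification holds on random rollouts
    and return a lower bound valid with confidence 1 - delta (Hoeffding)."""
    successes = sum(check_spec(rollout()) for _ in range(n_samples))
    p_hat = successes / n_samples
    eps = np.sqrt(np.log(1.0 / delta) / (2.0 * n_samples))
    return max(p_hat - eps, 0.0)  # Pr[true probability >= bound] >= 1 - delta
```

With n_samples = 10,000 and delta = 0.001, eps is roughly 0.019, so a 99% empirical success rate certifies a satisfaction probability of at least about 0.97 with confidence 0.999.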
The research project will contribute to the development of trustworthy and reliable multi-agent systems, with potential impact on many applications. The proposed techniques will be evaluated on standard multi-agent RL benchmarks [21,22] and on real-world scenarios from the REXASI-PRO EU project [23], which focuses on the safe navigation of autonomous wheelchairs for people with reduced mobility in crowded environments.