Verification of neural-symbolic agent-based systems

Considerable work has been carried out over the past two decades on the verification of multi-agent systems. Methods based on binary decision diagrams, bounded model checking, abstraction, and symmetry reduction have been developed. Model checkers such as MCMAS and its extensions have been released to the community and applied to areas including autonomous vehicles, services, robotics, and beyond.

While this work has reached a level of maturity, it rests on the assumption that agents can be modelled entirely symbolically, e.g. via a BDI rule system. This was entirely appropriate at the time. A new model of agents, however, is rapidly emerging in applications. In this model, agents are composed of a perception layer (vision, sound processing, etc.) realised via neural networks, combined with a decision-making and planning layer realised via more traditional declarative methods. We call this emerging class of agents "neural-symbolic agents".
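To make the architecture concrete, the sketch below shows a toy neural-symbolic agent in Python: a small hand-coded ReLU network plays the role of the perception layer, and a rule table plays the role of the symbolic decision-making and planning layer. All weights, percept names, modes, and rules are illustrative assumptions, not taken from the papers below.

```python
# Toy neural-symbolic agent: neural perception + symbolic decision making.
# All numbers and labels are hypothetical, for illustration only.

import numpy as np

# --- Perception layer: a tiny ReLU network (weights would normally be learned) ---
W1 = np.array([[0.8, -0.3], [0.2, 0.9]])   # hypothetical weights
b1 = np.array([0.1, -0.2])
W2 = np.array([[1.0, -1.0], [-0.5, 1.2]])
b2 = np.array([0.0, 0.0])

def perceive(observation: np.ndarray) -> str:
    """Map a raw observation vector to a discrete percept label."""
    h = np.maximum(0.0, W1 @ observation + b1)   # ReLU hidden layer
    logits = W2 @ h + b2
    return ["obstacle", "clear"][int(np.argmax(logits))]

# --- Decision layer: declarative, rule-based policy over discrete percepts ---
RULES = {
    ("obstacle", "cruise"): ("brake", "avoid"),
    ("clear", "cruise"):    ("accelerate", "cruise"),
    ("obstacle", "avoid"):  ("turn", "avoid"),
    ("clear", "avoid"):     ("accelerate", "cruise"),
}

def step(observation: np.ndarray, mode: str) -> tuple[str, str]:
    """One agent step: neural perception followed by a symbolic decision."""
    percept = perceive(observation)
    action, next_mode = RULES[(percept, mode)]
    return action, next_mode

action, mode = step(np.array([0.9, 0.1]), "cruise")
print(action, mode)   # -> brake avoid
```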

This project concerns the development of methods and tools for the verification of neural-symbolic agents composed in this way. The starting point will be [1], where a neural-symbolic agent-based model is developed. Several possible objectives can be targeted, including:

* The development of efficient (i.e. scalable) methods for verifying neural-symbolic agent-based systems via novel compilations and data structures (one illustrative building block is sketched after this list);

* The development of a toolkit for the verification of neural-symbolic agents;

* The development of highly parallel approaches for the verification of neural-symbolic agents.
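As a rough illustration of the kind of question such methods answer, the sketch below applies interval bound propagation, one standard over-approximation technique from neural-network verification, to the toy perception layer above: it checks whether every observation within a small box around a given input is provably mapped to the same percept, so that the symbolic layer's behaviour is unaffected. The function names and the choice of technique are assumptions made for illustration; the cited papers develop their own, more precise encodings.

```python
# Interval bound propagation over a two-layer ReLU network: a simple, sound
# over-approximation used here only to illustrate a verification building block.

import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W x + b (sound output bounds)."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def percept_is_stable(center, eps, W1, b1, W2, b2, target_class):
    """Return True if every observation within eps of `center` is provably
    classified as `target_class`; False means 'unknown' (bounds too loose)."""
    lo, hi = center - eps, center + eps
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)   # ReLU is monotone
    lo, hi = interval_affine(lo, hi, W2, b2)
    # Stable if the target logit's lower bound beats every other logit's upper bound.
    others = [hi[i] for i in range(len(hi)) if i != target_class]
    return bool(lo[target_class] > max(others))

# Usage with the hypothetical weights from the earlier sketch.
W1 = np.array([[0.8, -0.3], [0.2, 0.9]]); b1 = np.array([0.1, -0.2])
W2 = np.array([[1.0, -1.0], [-0.5, 1.2]]); b2 = np.array([0.0, 0.0])
print(percept_is_stable(np.array([0.9, 0.1]), 0.05, W1, b1, W2, b2, target_class=0))
```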

============

Requirements

Expertise in logic and verification will be highly beneficial. Some knowledge of machine learning and optimisation is also advantageous.

M. Akintunde, E. Botoeva, P. Kouvaros, A. Lomuscio. Verifying Strategic Abilities of Neural Multi-agent Systems. Proceedings of the 17th International Conference on Principles of Knowledge Representation and Reasoning (KR20). Rhodes, Greece. IJCAI Press.

M. Akintunde, E. Botoeva, P. Kouvaros, A. Lomuscio. Formal Verification of Neural Agents in Non-deterministic Environments. Proceedings of the 19th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS20). Auckland, New Zealand. IFAAMAS Press.

M. Akintunde, A. Lomuscio, L. Maganti, E. Pirovano. Reachability Analysis for Neural Agent-Environment Systems. Proceedings of the 16th International Conference on Principles of Knowledge Representation and Reasoning (KR18). Tempe, Arizona. AAAI Press.

Preprints accessible from: https://www.doc.ic.ac.uk/~alessio/

 

Project ID

STAI-CDT-2021-IC-9

Supervisor

Alessio Lomuscio (https://www.doc.ic.ac.uk/~alessio/)

Category

Logic, Verification