Today, most computer applications are developed as ensembles of concurrent agents (or components) that communicate via message passing across a network. Modern programming languages and toolkits provide developer-friendly concurrency abstractions for multi-agent systems, such as actors (found in, e.g., Erlang, Akka, Scala 3, Go, Rust, and Python). The actor model of computation was introduced by Carl Hewitt at MIT's AI Lab and has been used for decades to model communicating agents. Still, ensuring that communicating systems or message-passing applications are safe, secure, and reliable is very difficult: mainstream programming languages and toolkits do not provide methods to verify that programs run and interact according to the desired multi-agent protocols, thus leaving an open door for bugs and security issues.
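To make the actor abstraction concrete, the following is a minimal illustrative sketch in Go (one of the languages the project targets): an actor with private state, a mailbox channel, and strictly sequential message processing. The `counter` actor, `msg` type, and `send` helper are invented for illustration and are not part of any existing framework.

```go
package main

import "fmt"

// msg is a message sent to the counter actor; reply carries the response.
type msg struct {
	delta int
	reply chan int
}

// counter is a minimal actor: it owns its state (total) exclusively and
// processes messages from its mailbox one at a time, so no locks are needed.
func counter(mailbox <-chan msg) {
	total := 0
	for m := range mailbox {
		total += m.delta
		m.reply <- total
	}
}

// send delivers one message to the actor and waits for its reply.
func send(mailbox chan<- msg, delta int) int {
	reply := make(chan int)
	mailbox <- msg{delta, reply}
	return <-reply
}

func main() {
	mailbox := make(chan msg)
	go counter(mailbox) // spawn the actor
	fmt.Println(send(mailbox, 5)) // prints 5
	fmt.Println(send(mailbox, 3)) // prints 8
	close(mailbox)
}
```

Note that nothing in this sketch *verifies* the interaction: if `counter` forgot to reply, `send` would block forever. Detecting such errors statically is exactly the gap this project addresses.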
This PhD project develops model-based verification techniques for ensuring the safety and trustworthiness of multi-agent systems supported by actor-based programming languages.
This PhD project will address the problem by integrating a model-checking tool with actor-based programming languages. The project will develop a toolkit for real-world actor programming in Scala 3 (formerly known as Dotty), Go, or Rust.
Unlike other concurrent and distributed programming frameworks, the toolkit will allow developers to write verified programs, proving, with the assistance of the mCRL2 model checker, that they interact according to a desired protocol and enjoy properties such as deadlock-freedom. The toolkit will also be integrated with security aspects such as authentication and authorisation, guided by the protocol.
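As a rough illustration of what "interacting according to a desired protocol" means, the sketch below encodes a hypothetical two-party request/response protocol as a finite-state machine in Go and checks whether a trace of message labels conforms to it. All names (`step`, `conforms`, the `Request`/`Response`/`Close` labels) are invented for illustration; the project's toolkit would establish conformance statically via mCRL2 rather than by checking traces at run time.

```go
package main

import "fmt"

// Protocol states: the client may issue Request/Response pairs
// any number of times and must end the session with Close.
type state int

const (
	idle state = iota
	awaitingResponse
	closed
)

// step returns the successor state, or an error on a protocol violation.
func step(s state, action string) (state, error) {
	switch {
	case s == idle && action == "Request":
		return awaitingResponse, nil
	case s == awaitingResponse && action == "Response":
		return idle, nil
	case s == idle && action == "Close":
		return closed, nil
	}
	return s, fmt.Errorf("protocol violation: %q in state %d", action, s)
}

// conforms reports whether a complete trace follows the protocol.
func conforms(trace []string) bool {
	s := idle
	for _, a := range trace {
		next, err := step(s, a)
		if err != nil {
			return false
		}
		s = next
	}
	return s == closed
}

func main() {
	fmt.Println(conforms([]string{"Request", "Response", "Close"})) // true
	fmt.Println(conforms([]string{"Request", "Close"}))             // false
}
```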
HOW TO ACHIEVE:
The student will develop a model-based verification theory extending [1,2,3,4] and implement it by integrating a popular modern programming language (Go, Rust, or Scala) with the model-checking tool mCRL2 to verify multi-agent systems.
YEAR 1: Learning basic concurrency models, programming-language theory, and the fundamentals of model checking. Designing and implementing simple use cases based on the specification and programming languages developed in [1,2,3,4,5,6].
YEAR 2: Developing a new model-based technique to compositionally specify and reason about the behaviours of complex multi-agent systems, and to identify their requirements. We will apply a counterexample-guided model-checking approach to ensure reliability. This model-checking feature has never before been integrated with process models or message-passing programming languages, which is one of the novel parts of this PhD project.
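To illustrate the kind of analysis that produces counterexamples, the sketch below performs a toy explicit-state exploration, in the spirit of what mCRL2 does at far greater scale, of a two-process lock model (P1 acquires lock A then B; P2 acquires B then A) and returns the trace leading into a deadlock. The model, state encoding, and function names are all invented for illustration.

```go
package main

import "fmt"

// st is one global state: each process's program counter
// (0 = wants first lock, 1 = wants second, 2 = done) and
// each lock's owner (0 = free, 1 = P1, 2 = P2). A process
// releases both locks when it acquires its second one.
type st struct {
	pc1, pc2 int
	a, b     int
}

// moves returns the enabled transitions from s as parallel
// slices of labels and successor states.
func moves(s st) (labels []string, succs []st) {
	if s.pc1 == 0 && s.a == 0 { // P1 takes A
		n := s
		n.a, n.pc1 = 1, 1
		labels, succs = append(labels, "P1 takes A"), append(succs, n)
	}
	if s.pc1 == 1 && s.b == 0 { // P1 takes B, then releases both
		n := s
		n.a, n.b, n.pc1 = 0, 0, 2
		labels, succs = append(labels, "P1 takes B"), append(succs, n)
	}
	if s.pc2 == 0 && s.b == 0 { // P2 takes B
		n := s
		n.b, n.pc2 = 2, 1
		labels, succs = append(labels, "P2 takes B"), append(succs, n)
	}
	if s.pc2 == 1 && s.a == 0 { // P2 takes A, then releases both
		n := s
		n.a, n.b, n.pc2 = 0, 0, 2
		labels, succs = append(labels, "P2 takes A"), append(succs, n)
	}
	return
}

// findDeadlock breadth-first searches the reachable states for one
// with no enabled moves where some process is unfinished, and returns
// the counterexample trace leading to it (nil if none exists).
func findDeadlock() []string {
	type node struct {
		s     st
		trace []string
	}
	seen := map[st]bool{}
	queue := []node{{st{}, nil}}
	for len(queue) > 0 {
		n := queue[0]
		queue = queue[1:]
		if seen[n.s] {
			continue
		}
		seen[n.s] = true
		labels, succs := moves(n.s)
		if len(labels) == 0 && (n.s.pc1 != 2 || n.s.pc2 != 2) {
			return n.trace // a path into a deadlocked state
		}
		for i, nx := range succs {
			t := append(append([]string{}, n.trace...), labels[i])
			queue = append(queue, node{nx, t})
		}
	}
	return nil
}

func main() {
	fmt.Println(findDeadlock()) // the classic circular-wait counterexample
}
```

The returned trace (P1 takes A, then P2 takes B) is precisely the kind of counterexample that would guide a developer, or the verification loop itself, to repair the program.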
The task requires integrating model-checking systems, advanced concurrency models, and programming-language theory to model multi-agent systems.
YEAR 3: Applying the modelling and verification techniques by extending the framework developed in Year 2 and integrating it into multi-agent systems. At this stage, the toolkit will also be integrated with security aspects.
YEAR 4: Refining the implementation and the theory, expanding the case studies, and writing up the thesis.