Swarm robotics bases its success on the collaboration of many agents, each contributing to the overall goal of the system. This collaboration can take the form of cooperation or even competition, both producing behaviours far more complex than any individual robot can achieve. The capacity for such complexity is increasingly important as robots face ever more challenging tasks, including search and rescue, clearing space debris, and robotic assembly. These tasks lie beyond the capabilities of individual robots, so we must leverage the ability of agents to achieve our goals collectively. Fortunately, a growing subset of control theory asks exactly this question: how does the dynamical behaviour of a population of agents change under the influence of control?
Another technique that has made remarkable strides in improving the ability of autonomous systems to achieve results collectively is Multi-Agent Reinforcement Learning (MARL). MARL builds on classical Reinforcement Learning, in which an agent optimises its strategy for a task through repeated exposure to that task.
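To make the "repeated exposure" idea of classical Reinforcement Learning concrete, the following is a minimal illustrative sketch, not drawn from this study: a single agent learning, via tabular Q-learning, to walk right along a toy five-state corridor to reach a reward. All names and parameter values (the corridor length, learning rate, and so on) are our own illustrative choices.

```python
import random

# Illustrative sketch: tabular Q-learning on a toy 5-state corridor.
# The agent starts at state 0; reaching state 4 yields reward 1 and
# ends the episode. Actions: 0 = step left, 1 = step right.
n_states, n_actions = 5, 2
alpha, gamma = 0.5, 0.9          # learning rate and discount factor
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(s, a):
    """Move along the corridor (walls at both ends); reward at the goal."""
    s_next = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward, s_next == n_states - 1

random.seed(0)
for _ in range(300):             # repeated exposure to the task
    s, done = 0, False
    while not done:
        a = random.randrange(n_actions)   # purely exploratory behaviour policy
        s_next, r, done = step(s, a)
        # Off-policy Q-learning update: move Q(s, a) towards the
        # bootstrapped target r + gamma * max_a' Q(s', a').
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

# The greedy policy recovered from the learned Q-table.
greedy = [max(range(n_actions), key=lambda a_: Q[s][a_]) for s in range(n_states)]
print(greedy)
```

After training, the greedy policy chooses "right" in every non-terminal state, the optimal behaviour for this corridor. MARL generalises this loop to many such learners sharing one environment, where each agent's reward depends on the others' actions.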
The goal of this study is to bridge the gap between control theory and multi-agent reinforcement learning, enabling the study of complex populations of agents capable of independent decision making, and producing control methodologies for the safe operation of intelligent swarms.