Background: With recent technological advancements, multi-agent interactions have become increasingly complex, ranging from systems of interacting deep learning models to blockchain-based cryptoeconomies. However, as these systems continue to grow and evolve, they become vulnerable to uncertain or adversarial conditions that can disrupt their normal functionality. Even small perturbations of their critical parameters can lead to dramatic changes in their equilibrium states that are difficult to predict and control.
Such critical phase transitions are commonly observed in various settings, including laboratory experiments and simulations of multi-agent systems, real-world financial markets, disease outbreaks, opinion shifts in social networks, and mass adoption of new technologies. Despite their prevalence, the underlying dynamics of these transitions remain poorly understood. This poses a significant bottleneck for scientists and policymakers seeking to enhance the safety, trustworthiness, fairness, and resilience of these systems against adversarial manipulation. Further research is therefore needed to investigate these phase transitions and to develop strategies for mitigating their effects.
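As a minimal illustration of the kind of critical transition described above, consider evolutionary game dynamics in a two-strategy coordination game. The sketch below (an assumption for illustration, not part of the proposal itself) iterates a discretised replicator dynamic: a small change in one payoff parameter flips which equilibrium the population settles into from the same starting state.

```python
def replicator_fixed_point(a, b, c, d, x0=0.5, steps=2000, dt=0.1):
    """Long-run share of strategy A under the two-strategy replicator
    dynamic, using a simple Euler discretisation.

    Payoffs: A vs A -> a, A vs B -> b, B vs A -> c, B vs B -> d.
    """
    x = x0
    for _ in range(steps):
        f_a = a * x + b * (1 - x)   # expected payoff to strategy A
        f_b = c * x + d * (1 - x)   # expected payoff to strategy B
        x += dt * x * (1 - x) * (f_a - f_b)
    return x

# Coordination game: both all-A and all-B are stable equilibria.
# Sweeping the payoff d across a critical value (here, past 2.0)
# tips the population from one equilibrium to the other.
for d in (1.8, 2.2):
    share = replicator_fixed_point(a=2.0, b=0.0, c=0.0, d=d, x0=0.5)
    print(f"d={d}: long-run share of A = {share:.2f}")
```

A perturbation of a single payoff parameter by a fraction of its value thus moves the unstable interior fixed point across the population's initial state, producing the abrupt, discontinuous equilibrium shift that this project aims to predict and control.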
Objectives: The aim of this PhD project is to develop a game-theoretic framework for enhancing the safety and trustworthiness of AI systems with complex decision-making processes, including those that involve human interaction, such as virtual economies or cryptoeconomies. The framework will integrate symbolic AI and mathematical methods with game dynamics and machine learning techniques to model and analyze these systems over time. Building on this framework, the project will design robust mechanisms and protocols for uncertain and adversarial environments. It will also focus on identifying critical transitions and developing strategies for predicting and mitigating such events. To this end, the project will analyze the impact of different game-theoretic strategies on system stability and robustness, and identify optimal approaches for achieving safe and trustworthy behavior.
Methods: This PhD project will utilize game-theoretic modeling and rigorous mathematical analysis to study multi-agent and economic systems, complemented by the design, simulation, and empirical evaluation of algorithms aimed at improving system performance. The project will validate its findings against real-world datasets.
Impact: The potential impact of this project is significant and manifold. By improving the safety and trustworthiness of AI and cryptoeconomic systems, this project can enable the wider adoption of these technologies and unlock their full potential across a wide range of applications. This could yield significant benefits for society, including improving the efficiency and reliability of critical infrastructure and enabling the creation of innovative and reliable services.