Systemic responsibility in AI systems

For AI to be trustworthy, its behaviour should be driven towards responsibility by design. In some settings where AI bots interact (e.g., autonomous vehicles), attempts have been made to promote responsibility through collaboration (e.g., vehicles might share information to avoid crashes). However, in many practical cases, AI bots compete (e.g., trading algorithms deployed in marketplaces) and are designed either to maximise their own payoff or to outperform each other. This can lead to systemic irresponsibility (e.g., pricing cartels or flash crashes).

In this project, we will study the emergent systemic behaviour of AI bots interacting in marketplaces. The aim is to establish a framework for studying the equilibrium states reached through the incentives of the different bots. This will make it possible to determine the extent to which individual goals can be aligned with the system's objectives.
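As a minimal illustration of the kind of equilibrium analysis involved (not the project's actual framework), consider two competing pricing bots in a differentiated Bertrand market, each repeatedly best-responding to the other's last posted price. All parameters (`a`, `b`, `c`) and the demand model are hypothetical choices for this sketch:

```python
# Illustrative sketch: two pricing bots iterating best responses until
# their prices settle into a Nash equilibrium. The demand model and all
# parameters here are hypothetical, chosen only to make the example run.

def best_response(p_other, a=10.0, b=2.0, c=1.0):
    # Assumed linear demand for bot i: q_i = a - b*p_i + c*p_other,
    # with zero marginal cost. Profit p_i * q_i is maximised at
    # p_i = (a + c*p_other) / (2*b).
    return (a + c * p_other) / (2 * b)

def find_equilibrium(p1=1.0, p2=1.0, tol=1e-9, max_iters=1000):
    # Iterate simultaneous best responses until prices stop moving;
    # the fixed point of this map is the Nash equilibrium.
    for _ in range(max_iters):
        new_p1, new_p2 = best_response(p2), best_response(p1)
        if abs(new_p1 - p1) < tol and abs(new_p2 - p2) < tol:
            break
        p1, p2 = new_p1, new_p2
    return p1, p2

p1, p2 = find_equilibrium()
# Both bots converge to the symmetric Nash price a / (2*b - c) = 10/3.
print(round(p1, 4), round(p2, 4))  # → 3.3333 3.3333
```

Even in this toy setting, the individually rational equilibrium price need not match a system-level objective (e.g., consumer welfare), which is exactly the gap between individual incentives and systemic goals the project aims to characterise.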
