Deceptive AI & Society

Our societies face a growing range of problems caused by deceptive AI. This project aims to explain the many facets of deceptive AI, that is, the different things it can mean. These facets are historical (the goals of deceptive AI research), behavioural (how machines communicate deceptively), cognitive (how machines “think” when they attempt deception), socio-ethical (how machines relate to those around them, and vice versa), and ecosystemic (how external evolutionary pressures influence all of the previous aspects).

Deceptive AI also has multiple ethical facets. By definition, deception falls into the category of dishonest, and potentially unethical, behaviour, which runs counter to the emerging trend of ethical design in AI. Yet deception can also benefit society, because its ethics depend on the aims of the AI agents involved and on the context in which they are allowed to operate: a machine that deceives users to defraud them is plainly harmful, for instance, whereas one that bluffs in a training game or softens a hard truth for a vulnerable patient may be acting pro-socially.

To reap the benefits of pro-social deceptive AI while avoiding the negative effects of ever more advanced deceptive technologies, the AI community must not only prescribe the ethical design of machines; it must also continuously reflect on what these technologies truly are and on the consequences that emerge from their deployment in real-world contexts.

In this project, you will delve into the fascinating topic of deception in human-human, human-machine, and machine-machine interactions and relations. You will investigate deception and deception-related factors in the age of AI-powered machines, whether by modelling socio-cognitive processes, conducting controlled user studies, building formal models and AI architectures of complex and adaptive systems, or combining these methods and approaches. Deceptive AI demands a truly interdisciplinary approach to pertinent questions such as: How can we use AI to explain deception? When is it, and when is it not, permissible for AI agents to deceive? When is it permissible for humans to deceive machines? How will human-machine societies evolve under the socio-cognitive pressures of deception? What does it really mean for an AI agent to deceive, compared to a human; for example, what kinds of cognitive processes must it engage in? And what is the effect of deceptive behaviour on human-machine relations?
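
To give a purely illustrative flavour of what a formal model in this space might look like, the minimal sketch below (in Python; all payoffs, parameters, and function names are hypothetical assumptions, not part of the project brief) simulates a repeated sender-receiver signalling game in which a sender sometimes misreports the state of the world and the receiver's trust adapts to the sender's observed reliability:

    import random

    # Minimal illustrative model: a repeated sender-receiver signalling game.
    # The sender observes the true state and may misreport it; the receiver
    # follows the signal in proportion to its current trust, then updates
    # that trust once the true state is revealed. All payoffs, probabilities,
    # and names here are hypothetical.

    STATES = ("safe", "danger")

    def sender_signal(state: str, deceive_prob: float) -> str:
        """Report the state, misreporting it with probability deceive_prob."""
        if random.random() < deceive_prob:
            return "danger" if state == "safe" else "safe"
        return state

    def receiver_action(signal: str, trust: float) -> str:
        """Follow the signal with probability equal to current trust."""
        if random.random() < trust:
            return "flee" if signal == "danger" else "stay"
        return "stay"  # a distrustful receiver falls back on its default

    def simulate(rounds: int = 1000, deceive_prob: float = 0.6,
                 learning_rate: float = 0.05) -> tuple[float, int]:
        trust, payoff = 1.0, 0
        for _ in range(rounds):
            state = random.choice(STATES)
            signal = sender_signal(state, deceive_prob)
            action = receiver_action(signal, trust)
            # The receiver gains by fleeing danger and staying when safe.
            payoff += 1 if (state == "danger") == (action == "flee") else -1
            # Trust rises after honest signals and falls after dishonest ones.
            trust += learning_rate * (1 if signal == state else -1)
            trust = max(0.0, min(1.0, trust))
        return trust, payoff

    if __name__ == "__main__":
        final_trust, total_payoff = simulate()
        print(f"final trust: {final_trust:.2f}, receiver payoff: {total_payoff}")

Even this toy model exhibits one of the dynamics the project asks about: when deception is frequent, the receiver's trust collapses and honest signals lose their value.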

You will become part of the HIDE (Hybrid Intelligence and DEception) Lab and will have the opportunity to work with an excellent international network of researchers spanning multiple disciplines. You will therefore have access to a broad range of skills and expertise and will be able to learn quickly about the complex nature of deceptive AI.

Moreover, as part of the UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence, you will be trained by world-leading experts from King’s College London and Imperial College London to join a new generation of researchers developing AI systems that are safe and trustworthy.

Suggested reading:

Sarkadi, Ş., 2023. Deceptive AI and Society. IEEE Technology and Society Magazine, 42(4), pp. 77–86.

Citation Link: https://kclpure.kcl.ac.uk/ws/portalfiles/portal/245185257/IEEE_Deceptive_AI_and_Society_2_.pdf

Project ID

STAI-CDT-2024-KCL-23