Trusted AI with WikiEthics and Agent Reasoning

Deep learning has recently brought various AI breakthroughs in areas such as computer vision, speech recognition, and language models. Thanks to these advances, digital photography is now at our fingertips, we can ask home voice assistants almost anything, and self-driving cars might be better drivers than humans. However, these advances come at a cost: models learned from data cannot explain their decisions and generally suffer from bias, which can result in unethical decisions.

Various approaches aim at more ethical, safe and trustworthy AI, such as DARPA’s eXplainable AI programme and Robust.AI’s Hybrid Intelligence, for example by integrating symbolic and subsymbolic methods, augmenting data from minority groups, and so on. Other projects look into more socio-technical solutions. For example, the Moral Machine experiment crowdsources judgements from thousands of volunteers about the choices they would make in self-driving car ethical dilemmas. In these dilemmas, loss of human life is often unavoidable, so the crowdsourced judgements give us a better understanding of human ethical decision-making. However, these results are hard to integrate into AI systems because: (a) they are generally not machine-processable; (b) they do not generalise well to other ethical dilemmas, and thus require repeated fact collection from scratch; and (c) they do not scale well with the size of the ethical knowledge from which machines could obtain useful input descriptions. In some cases, we simply do not know how much knowledge a reasoning agent will need to reach an ethical conclusion within a time limit.

This project proposes building WikiEthics, a crowdsourced knowledge graph (KG) of ethical dilemmas, and using it in AI systems, such as reasoning agents, for ethical decision making. KGs are symbolic knowledge bases that reasoning agents can access, process and reason about online; some of them, like Wikidata, are built collaboratively. Concretely, the project:

– Starts a community of volunteers to document ethical dilemmas in a structured, generic and reusable way; and replicates recent psychology experiments that compare how the same tasks are judged when performed by humans versus machines (e.g. [Hidalgo et al. 2021]), focusing on group discussions and decisions rather than on individuals, to populate the WikiEthics KG

– Uses modal logics to describe ethical dilemmas with optionality and obligation; and translates them into description logic ontologies

– Equips agents in simulated scenarios from [Hidalgo et al. 2021] with description logic reasoners, investigating the minimum knowledge they need from the WikiEthics KG to reach ethical conclusions in tasks such as: recognising a situation posing an ethical dilemma; taking a minimal-cost or fastest decision; and identifying the best-qualified agent (human or artificial) to decide (a minimal sketch of this reasoning step follows this list)

– Proposes evaluation metrics for the impact of the WikiEthics KG on agent reasoning.
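
As an illustration of the second and third items, below is a minimal sketch in Python with the owlready2 library of the description logic end of that pipeline. All names used here (Situation, EthicalDilemma, has_option, causes_harm, the trolley-like scenario) are hypothetical placeholders rather than the project's actual vocabulary: the sketch encodes just one possible DL reading of a dilemma ("a situation in which every available option causes some harm") and asks an off-the-shelf reasoner to recognise a concrete instance of it. The modal/deontic layer proposed above, and its translation into such axioms, is not modelled here.

```python
# A minimal sketch with an assumed, hypothetical vocabulary (not the project's):
# encode one DL reading of an ethical dilemma and let a standard reasoner
# classify a concrete situation.
# Requires: pip install owlready2, plus Java on the PATH for the bundled HermiT reasoner.
from owlready2 import *

onto = get_ontology("http://example.org/wikiethics-demo.owl")  # hypothetical IRI

with onto:
    class Situation(Thing): pass
    class Action(Thing): pass
    class Harm(Thing): pass
    class has_option(Situation >> Action): pass
    class causes_harm(Action >> Harm): pass

    # One candidate axiomatisation: a dilemma is a situation with at least one
    # option, all of whose options cause some harm.
    class EthicalDilemma(Situation):
        equivalent_to = [Situation
                         & has_option.some(Action)
                         & has_option.only(causes_harm.some(Harm))]

    # A toy, trolley-like scenario with two harmful options.
    swerve = Action("swerve")
    stay = Action("stay_course")
    swerve.causes_harm = [Harm("harm_to_passengers")]
    stay.causes_harm = [Harm("harm_to_pedestrians")]

    case = Situation("unavoidable_collision")
    case.has_option = [swerve, stay]

close_world(case)   # assume the listed options are the only ones (OWL is open-world)
sync_reasoner()     # run HermiT; it should re-type `case` as an EthicalDilemma
print(onto.EthicalDilemma in case.is_a)
```

Whether this particular axiomatisation is an adequate rendering of a dilemma is precisely the kind of question the modal-logic-to-ontology translation step is meant to settle; the sketch only shows the mechanics of handing such axioms to a reasoner.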


At the end of the project, WikiEthics will be the first collaborative knowledge base of ethical dilemmas, providing rich ethical descriptions that machines can use in a variety of situations. For example, machines will be able to determine whether they are confronting an ethical dilemma in the first place; which decisions humans typically consider more or less ethical in such a dilemma; what the foreseeable consequences of taking one decision or another are; or whether it is best to defer responsibility to a human.
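
To make that kind of machine use more concrete, here is a small hypothetical sketch in Python with rdflib of how a single crowdsourced dilemma might be represented and queried. The `we:` vocabulary, the URIs and the approval rates are all invented for illustration and are not the WikiEthics schema (which the project would design); the query simply asks which option contributors judged most acceptable.

```python
# Hypothetical sketch: one crowdsourced dilemma stored as RDF and queried with SPARQL.
# Requires: pip install rdflib. All terms and figures below are made up for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS, XSD

WE = Namespace("http://example.org/wikiethics/")   # hypothetical vocabulary

g = Graph()
g.bind("we", WE)

# One dilemma with two options and (fictional) aggregate judgements.
dilemma = WE["unavoidable_collision_01"]
g.add((dilemma, RDF.type, WE.EthicalDilemma))
g.add((dilemma, RDFS.label, Literal("Unavoidable collision: swerve or stay on course")))

for name, rate in [("swerve", 0.64), ("stay_course", 0.36)]:   # fictional figures
    option = WE["option_" + name]
    g.add((option, RDF.type, WE.Option))
    g.add((dilemma, WE.hasOption, option))
    g.add((option, WE.approvalRate, Literal(rate, datatype=XSD.decimal)))

# Which option do contributors judge most acceptable for this dilemma?
query = """
PREFIX we: <http://example.org/wikiethics/>
SELECT ?option ?rate WHERE {
    we:unavoidable_collision_01 we:hasOption ?option .
    ?option we:approvalRate ?rate .
}
ORDER BY DESC(?rate)
"""
for row in g.query(query):
    print(row.option, row.rate)
```

An agent facing a concrete situation could issue queries of this kind against the KG to check whether a matching dilemma has been documented and how contributors typically judged its options, before deciding or deferring to a human.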

Marcus, G. and Davis, E., 2019. Rebooting AI: Building Artificial Intelligence We Can Trust. Pantheon/Random House.
Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.F. and Rahwan, I., 2018. The Moral Machine experiment. Nature, 563(7729), pp.59-64.
Vrandečić, D. and Krötzsch, M., 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10), pp.78-85.

Project ID

STAI-CDT-2021-KCL-14

Supervisor

Albert Meroño Peñuela (https://www.albertmeronyo.org)

Category

AI Provenance, Norms