Hana Kopecka, of the 2019 cohort of the UKRI Centre for Doctoral Training in Safe and Trusted AI, has had her paper accepted for publication and presentation at the Workshop on Dialogue, Explanation and Argumentation for Human-Agent Interaction (DEXA HAI) at the 24th European Conference on Artificial Intelligence (ECAI 2020). Originally scheduled for August, the conference has been moved to September, when Hana is due to present the paper, co-authored with her supervisor, Dr Jose M. Such.
Explainable Artificial Intelligence (XAI) aims to provide users with explanations of automated decision-making processes in order to build trust between AI systems and their users, empowering users by helping them to understand an AI system's reasoning. To date, however, most research in XAI has tended to ignore the influence of a user's socio-cultural background on their explanation needs. In this position paper, the authors propose a novel approach that takes the socio-cultural background of users into consideration. They build on social scientific research suggesting that socio-cultural background is constitutive of a person's 'mental programming' and thus shapes how explanations are perceived. The paper concludes by outlining a research agenda and the challenges involved in developing a more socio-culturally aware XAI.
The DEXA HAI workshop at ECAI 2020 focuses on approaches, concepts and applications relevant to supporting dialogue and explainability in the context of human interaction with agent-based systems. ECAI is Europe's leading research conference for artificial intelligence. This year, the conference takes place in Santiago de Compostela, Spain, and is supported by organisations including Huawei, Microsoft and Accenture.