User-aware plan explanation generation for human-robot interaction

Robots are progressing from research laboratories into human environments, motivated by societal challenges such as ageing, loneliness, and education. All of these applications require robots to interact with and assist humans in their daily lives, so it is increasingly necessary to design robotic platforms that are more understandable to humans, thereby enhancing user trust and acceptance of robots. In recent years there has been a growing effort to explain the plans robots use to reach their goals, using contrastive explanations and plan verbalization. However, current explainable planning algorithms do not take the user's own state into account during explanation; they mostly rely on verbal or interaction logs, which results in a unidirectional information flow and a lack of adaptation to the user's needs.

The main objective of this project is to use the user's state to develop novel interactive explainable planning algorithms. This will be enabled by using predictions of mental states such as confusion and engagement, inferred from multimodal cues (captured via the robot's onboard sensors only), as inputs to explainable planning algorithms. Such methods will allow robots not only to use human behavioural cues to determine when to explain their behaviour, i.e., to be proactive, but also to estimate how satisfactory an explanation has been, learn from past interactions, and improve their explanations and actions (e.g., by rephrasing or including further details) where necessary. In particular, this project aims to combine plan verbalization, execution narrative techniques, and planning when to explain with deep learning techniques for multimodal face and gesture recognition.
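To make the intended interaction loop concrete, the minimal Python sketch below illustrates how estimated user states (confusion, engagement) could gate when the robot explains an action and how the level of detail might be adapted over time. All names and thresholds (UserStateEstimator, PlanExplainer, confusion_threshold) are illustrative assumptions rather than components of an existing system; in the project itself, the estimator would be a learned multimodal model and the explainer would build on planning-based verbalization and execution narratives.

```python
# Hypothetical sketch of a user-state-aware explanation loop.
# All class names, thresholds, and behaviours are illustrative assumptions,
# not an existing API or the project's actual method.

from dataclasses import dataclass
import random


@dataclass
class UserState:
    confusion: float   # estimated from multimodal cues, in [0, 1]
    engagement: float  # estimated from multimodal cues, in [0, 1]


class UserStateEstimator:
    """Stand-in for a learned model over onboard-sensor data (face, gesture)."""

    def estimate(self) -> UserState:
        # A real estimator would run a multimodal deep network here;
        # random values are used purely to make the sketch runnable.
        return UserState(confusion=random.random(), engagement=random.random())


class PlanExplainer:
    """Stand-in for plan verbalization / execution narrative generation."""

    def __init__(self) -> None:
        self.detail_level = 1  # adapted over past interactions

    def explain(self, action: str) -> str:
        if self.detail_level == 1:
            return f"I am going to {action}."
        return f"I am going to {action}, because it is the next step of my plan."

    def adapt(self, state: UserState) -> None:
        # If the user still appears confused after an explanation, add detail.
        if state.confusion > 0.6:
            self.detail_level = 2


def interaction_step(action: str,
                     estimator: UserStateEstimator,
                     explainer: PlanExplainer,
                     confusion_threshold: float = 0.5) -> None:
    """Proactively explain an action only when the user appears confused or disengaged."""
    state = estimator.estimate()
    if state.confusion > confusion_threshold or state.engagement < 0.3:
        print(explainer.explain(action))
        # Re-estimate the user's state to judge how satisfactory the explanation was.
        explainer.adapt(estimator.estimate())


if __name__ == "__main__":
    estimator, explainer = UserStateEstimator(), PlanExplainer()
    for action in ["pick up the shoe", "move to the table", "hand over the cup"]:
        interaction_step(action, estimator, explainer)
```

The key design point the sketch illustrates is the closed loop: the user's estimated state both triggers explanations (proactivity) and feeds back into how future explanations are generated.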

The developed methods will be evaluated in real-world human-robot interaction settings, involving assistive and service robotics tasks. 

In addition to the support available through the CDT, the candidate will have the opportunity to contribute to the REXAR (UK) and COHERENT (international) research projects, while collaborating with and being supported by a network of researchers in aligned areas. These projects focus on reasoning for autonomous robots in assistive scenarios, dealing with explanations at different levels of the robotics system, and reasoning about goals and plans.

This project will be jointly supervised by Dr Oya Celiktutan, Dr Andrew Coles and Dr Gerard Canal.

Project ID

STAI-CDT-2021-KCL-10

Supervisor

Oya Celiktutan (https://nms.kcl.ac.uk/oya.celiktutan/)

Category

AI Planning