Goal-based explanations for autonomous systems and robots

Autonomous systems such as robots may become another appliance found in our homes and workplaces. For such systems to help humans perform their tasks, they must be as autonomous as possible, lest they become a nuisance rather than an aid.

Autonomy will require the systems or robots to set their own agenda (in line with the tasks they are meant to do), defining the next goals to achieve and discarding those that cannot be completed. However, this may create misunderstandings with the users around the system, who may expect something different from the robot.

Therefore, it is important that these autonomous systems are able to explain why they achieved one task and not another, or why some new, unexpected task was carried out that was not scheduled. Other sources of misunderstanding may come from action failures and replanning, where the robot finds a new plan to complete an ongoing task. In this case, the new plan may differ from the original one, thus changing the behaviour that the robot was performing.

This project will explore how to generate goal-based explanations for robots in assistive/home-based scenarios, extracted from goal-reasoning techniques. It will also look at plan repair to enforce cohesion after replanning, ideally increasing users' trust in and understanding of the system. These explanations should also cover unforeseen circumstances, so the robot can explain itself through the "excuses" it gives to the user. Finally, we will investigate how to obtain and provide these explanations at execution time, i.e., explaining on the go. The methods developed shall be integrated into a robotic system, in an assistive/service robot scenario.
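As an illustration of the kind of goal-based explanation this project targets, the sketch below shows a toy goal-reasoning loop that records why each goal was adopted or dropped, so the robot could later answer "why did/didn't you do X?". All class and goal names are hypothetical, and feasibility would in practice come from a planner call rather than a flag; this is a minimal sketch, not the project's method.

```python
# Illustrative sketch only: a goal reasoner that logs an explanation
# (or an "excuse") for every goal-selection decision it makes.
from dataclasses import dataclass, field


@dataclass
class Goal:
    name: str
    feasible: bool      # in a real system, the result of a planner call
    priority: int


@dataclass
class GoalReasoner:
    log: list = field(default_factory=list)

    def deliberate(self, goals):
        """Select the highest-priority feasible goal, logging the reasons."""
        for g in sorted(goals, key=lambda g: -g.priority):
            if not g.feasible:
                # An "excuse": why this goal could not be pursued.
                self.log.append(f"Dropped '{g.name}': no plan could be found.")
                continue
            self.log.append(f"Adopted '{g.name}': highest-priority achievable goal.")
            return g
        return None

    def explain(self):
        return list(self.log)


reasoner = GoalReasoner()
goals = [Goal("fetch medication", feasible=False, priority=2),
         Goal("clean the table", feasible=True, priority=1)]
chosen = reasoner.deliberate(goals)
print(chosen.name)               # clean the table
for line in reasoner.explain():  # the decision trace to show the user
    print(line)
```

The key design point is that explanations are captured at decision time, during execution, rather than reconstructed afterwards, which is what enables explaining on the go.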

In addition to the support available from the CDT, the candidates will have the opportunity to contribute to the REXAR (UK) and COHERENT (international) research projects, while collaborating with and being supported by a network of researchers in aligned areas. These projects focus on reasoning for autonomous robots in assistive scenarios, dealing with explanations at different levels of the robotic system, and reasoning about goals and plans.

This project will be jointly supervised by Dr Andrew Coles and Dr Gerard Canal.

[1] Canal, G., Borgo, R., Coles, A., Drake, A., Huynh, T. D., Keller, P., Krivic, S., Luff, P., Mahesar, Q-A., Moreau, L., Parsons, S., Patel, M., & Sklar, E. (2020). Building Trust in Human-Machine Partnerships. Computer Law & Security Review, 39.

[2] Hawes, N., Burbridge, C., Jovan, F., Kunze, L., Lacerda, B., Mudrova, L., … & Hanheide, M. (2017). The strands project: Long-term autonomy in everyday environments. IEEE Robotics & Automation Magazine, 24(3), 146-156.

[3] Aha, D. W. (2018). Goal reasoning: Foundations, emerging applications, and prospects. AI Magazine, 39(2), 3-24.

[4] Bercher, P., Biundo, S., Geier, T., Hoernle, T., Nothdurft, F., Richter, F., & Schattenberg, B. (2014). Plan, repair, execute, explain — how planning helps to assemble your home theater. In Proceedings of the International Conference on Automated Planning and Scheduling (Vol. 24, No. 1).

[5] Chakraborti, T., Sreedharan, S., & Kambhampati, S. (2020). The emerging landscape of explainable AI planning and decision making. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI).

[6] Göbelbecker, M., Keller, T., Eyerich, P., Brenner, M., & Nebel, B. (2010, April). Coming up with good excuses: What to do when no plan can be found. In Proceedings of the International Conference on Automated Planning and Scheduling (Vol. 20, No. 1).

Project ID

STAI-CDT-2021-KCL-11

Supervisor

Andrew Coles

Category

AI Planning