Argumentation-based Interactive Explainable Scheduling

AI continues to make progress in many settings, fuelled by data availability and computational power, but it is widely acknowledged that it cannot fully benefit society without addressing its widespread inability to explain its outputs, an inability that breeds human mistrust. Extensive research effort is currently devoted to explainable AI, in particular to explaining black-box machine learning methods and tools. However, the need for explainability is also keenly felt with white-box algorithmic methods, e.g. in optimisation, especially when these methods are to be used under uncertainty.

This project aims to define explainable AI methods, based on argumentation, for explaining discrete optimisation problems, especially those relevant to scheduling. Given a mathematical optimisation model with well-defined numerical variables, objective function(s), and constraints, a solver generates an efficient and ideally optimal solution. If the model and solver are correct, then implementing the optimal solution can bring major benefits. But how can the optimal solution be explained to a user? Currently, solvers express necessary and sufficient optimality conditions in formal mathematics, so users often regard the optimisation solver as an unexplainable black box.
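For concreteness, such a model can be written in the generic form below; the symbols are placeholders of our own choosing, not notation taken from the project or the cited paper:

\[
\min_{x \in X} f(x) \quad \text{subject to} \quad g_j(x) \le 0, \quad j = 1, \dots, k,
\]

where the domain \(X\) is discrete (e.g. binary or integer assignments), \(f\) is the objective function, and the constraints \(g_j\) encode the requirements of the application. A solver returns a feasible \(x^*\), ideally one minimising \(f\); the explanatory challenge is to justify to a user why \(x^*\) satisfies the constraints and why no better feasible alternative exists.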

This project builds upon our prior work (AAAI’19, https://arxiv.org/pdf/1811.05437.pdf), which defines a novel paradigm using argumentation to empower interaction between optimisation solvers and users, supported by tractable explanations that certify or refute solutions. Explainable scheduling is a critical application and our test bed for explainable optimisation. Consider the fundamental makespan scheduling problem, a discrete optimisation problem for effective resource allocation. This problem arises, for example, in nurse rostering, where staff of different skill qualification categories, e.g. Registered Nurse, Nurse’s Aide, need to be assigned to shifts. Staff are scheduled for a planning period, e.g. the next 4 weeks. But nursing personnel, hospital managers, or patients may question the fairness or optimality of the schedule and ask about possible changes. Further, when unexpected events occur, e.g. staff illness or an unusually high influx of patients, a feasible schedule must be recovered.
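As a concrete instance, a textbook formulation of makespan scheduling on identical machines is sketched below (our notation, not that of the AAAI’19 paper): jobs \(j = 1, \dots, n\) with processing times \(p_j\) are assigned to machines \(i = 1, \dots, m\) via binary variables \(x_{ij}\):

\[
\begin{aligned}
\min\;& C_{\max} \\
\text{s.t.}\;& \textstyle\sum_{i=1}^{m} x_{ij} = 1, && j = 1, \dots, n, \\
& \textstyle\sum_{j=1}^{n} p_j\, x_{ij} \le C_{\max}, && i = 1, \dots, m, \\
& x_{ij} \in \{0,1\},
\end{aligned}
\]

i.e. every job is assigned to exactly one machine and the makespan \(C_{\max}\) bounds every machine's total load. In one natural nurse-rostering reading, machines play the role of staff members within a qualification category and jobs the role of shifts or workloads, so minimising \(C_{\max}\) balances the heaviest individual workload.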

We have taken the first steps towards enabling users to interact with, and obtain explanations from, optimal scheduling in general and makespan scheduling in particular. The proposed PhD topic will go further, introducing notions of: (i) explanations beyond singleton and pairwise swaps, (ii) decision-making under uncertainty, (iii) social welfare and preferences, (iv) fairness to agents in the system, e.g. patients, and (v) verification of all solutions for safety; an illustrative sketch of the kind of swap-based check in (i) follows.
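To illustrate what a singleton or pairwise-swap explanation might check, here is a minimal, self-contained Python sketch; the function names, greedy data layout, and toy instance are illustrative assumptions of ours, not the method of the AAAI’19 paper. The idea is that the absence of an improving move can be offered as an argument supporting the schedule, while any improving move found is a concrete counter-argument the user can inspect.

```python
# Illustrative sketch (not the AAAI'19 method): check whether a schedule can be
# improved by relocating one job (singleton) or swapping two jobs between machines.

from itertools import combinations

def makespan(schedule):
    """Makespan = heaviest total load over all machines (machine -> list of job times)."""
    return max(sum(jobs) for jobs in schedule.values())

def improving_moves(schedule):
    """Yield descriptions of singleton moves or pairwise swaps that strictly reduce the makespan."""
    base = makespan(schedule)
    machines = list(schedule)
    # Singleton moves: relocate one job to another machine.
    for src in machines:
        for k, job in enumerate(schedule[src]):
            for dst in machines:
                if dst == src:
                    continue
                trial = {m: list(js) for m, js in schedule.items()}
                trial[src].pop(k)
                trial[dst].append(job)
                if makespan(trial) < base:
                    yield f"move job {job} from {src} to {dst} (makespan {base} -> {makespan(trial)})"
    # Pairwise swaps: exchange one job on each of two machines.
    for a, b in combinations(machines, 2):
        for i, ja in enumerate(schedule[a]):
            for j, jb in enumerate(schedule[b]):
                trial = {m: list(js) for m, js in schedule.items()}
                trial[a][i], trial[b][j] = jb, ja
                if makespan(trial) < base:
                    yield f"swap job {ja} on {a} with job {jb} on {b} (makespan {base} -> {makespan(trial)})"

if __name__ == "__main__":
    # Toy instance: hypothetical shift durations assigned to two nurses.
    schedule = {"nurse_A": [8, 8, 4], "nurse_B": [8, 4]}
    moves = list(improving_moves(schedule))
    print("no improving singleton move or pairwise swap" if not moves else moves[0])
```

On the toy instance the check finds an improving move (shifting the 4-hour job to nurse_B reduces the makespan from 20 to 16), which is exactly the kind of actionable counterexample a user-facing explanation could surface; extensions (i)-(v) above would go beyond such local moves.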

Project ID

STAI-CDT-2020-IC-40

Supervisor