AI continues to make progress in many settings, fuelled by data availability and computational power, but it is widely acknowledged that it cannot fully benefit society until it can explain its outputs: this widespread inability breeds human mistrust. Extensive research effort is currently devoted to explainable AI, particularly to explaining black-box machine learning methods and tools. However, the need for explainability is also keenly felt with white-box algorithmic methods, e.g. in optimisation, especially when the outputs of these algorithms are used by humans working in uncertain environments. Currently, solvers express necessary and sufficient optimality conditions in formal mathematics, so users often treat the optimisation solver itself as an unexplainable black box.
This project develops explainable optimisation methods based on argumentation, applied to supply chain optimisation. Supply chain optimisation involves shipping physical goods around the world, and many human operators must interact with this complicated shipping network. The issue is that these operators often mistrust solutions provided by a computational optimisation tool.
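To give a flavour of the idea, the following is a minimal, purely illustrative sketch (not the project's actual method) of turning an optimiser's choice into argument-style explanations: a toy route-selection problem where each rejected alternative yields one argument for why the chosen route beats it. The route names and costs are hypothetical.

```python
# Hypothetical per-unit shipping costs for three candidate routes.
routes = {
    "ship_via_rotterdam": 12.0,
    "ship_via_hamburg": 15.5,
    "air_freight_direct": 42.0,
}

def choose_route(costs):
    """Pick the minimum-cost route (a stand-in for a real optimisation solver)."""
    return min(costs, key=costs.get)

def explain_choice(costs, chosen):
    """Build one argument per rejected alternative: why 'chosen' defeats it."""
    return [
        f"'{chosen}' is preferred to '{alt}' because it costs "
        f"{costs[chosen]:.1f} < {costs[alt]:.1f} per unit"
        for alt in costs if alt != chosen
    ]

best = choose_route(routes)
for argument in explain_choice(routes, best):
    print(argument)
```

The point of the sketch is that each pairwise comparison becomes a human-readable argument an operator can inspect and challenge, rather than a bare optimality certificate.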
BASF is exploring the possibility of funding this project. If the project is funded by BASF, then we will consider supply chain networks relevant to BASF. If the project is not funded by BASF, we will collaborate with colleagues in the Department of Chemical Engineering (for example Dr Maria Papathanasiou) to consider supply chains relevant to vaccine distribution.
This work builds on our previous work (published at AAAI and awarded best demo at AAMAS) on developing explainable optimisation using argumentation. We have not previously studied supply chain optimisation, but it has promising features that may make it amenable to argumentation-based explanation.