Explainable Demand Prediction for Logistics

Organizations in the logistics domain must regularly make tactical and operational decisions, e.g. fleet sizing, staff rostering, route planning, and shift scheduling, that critically affect their costs, customer satisfaction, and sustainability. In this context, demand prediction software supports these decisions by anticipating future outcomes, guiding resource allocation, and enabling contingency plans that mitigate fluctuations and disturbances.

However, demand prediction models, e.g. gradient-boosted trees and deep neural networks, are subject to prediction inaccuracies, especially at high spatial and temporal granularity. Additionally, these models are often considered black boxes. In particular, the complexity of model-building algorithms hinders the ability of lay users (e.g. drivers or customers) to understand predictive outputs. Moreover, companies developing demand prediction software often withhold full visibility into the inner workings of their products to maintain a competitive edge. Consequently, enhancing the transparency of demand prediction software is essential for the reliability and regulatory compliance of logistics companies.

As a motivating application, consider courier delivery services, which give society access to products (e.g. food, clothing, and equipment) that cover basic human needs and foster public welfare. Estimating the time of arrival for deliveries is essential to their operation, yet certain deliveries are unavoidably subject to delays. Why was a given delivery delayed? Is it due to unforeseeable uncertainty, resource limitations, or demand prediction errors?

This project aims to answer the above questions through the lens of Explainable AI. In particular, it will develop approaches for effectively generating counterfactual and adversarial explanations, employing discrete optimisation methodologies. Current approaches rely on mixed-integer programming, and generating explanations this way can be computationally intensive; the first part of the project will therefore develop more scalable approaches, e.g. based on local search. The second part of the project will be devoted to a user study, conducted in partnership with a company in the transport domain, assessing the benefits of the computed explanations to practitioners.
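To illustrate the idea of counterfactual explanations via local search, the sketch below uses a hypothetical, hand-coded delay classifier (the feature set, thresholds, and step sizes are all invented for illustration, not taken from the project or any cited paper). The search grows single-feature perturbations outward from the original input until the prediction flips, returning a nearby "what would have to change" scenario:

```python
def predict_delay(x):
    """Toy stand-in for a black-box delay classifier (hypothetical rules):
    a delivery is predicted late when demand is high and courier capacity
    is low, or when the route is long and capacity is very low."""
    demand, couriers, distance_km = x
    if demand > 100 and couriers < 8:
        return 1  # delayed
    if distance_km > 30 and couriers < 5:
        return 1  # delayed
    return 0  # on time

def counterfactual_search(x0, steps, max_steps=50):
    """Find a nearby input whose prediction flips, by trying one-feature
    perturbations of increasing size around x0. A simple local-search
    sketch; the project itself targets MIP and richer neighbourhoods."""
    target = 1 - predict_delay(x0)
    for k in range(1, max_steps + 1):        # increasing perturbation size
        for i, step in enumerate(steps):     # one feature at a time
            for sign in (+1, -1):
                cand = list(x0)
                cand[i] += sign * k * step
                if predict_delay(tuple(cand)) == target:
                    return tuple(cand)       # closest flip found on this grid
    return None

# Example: a delivery predicted late (demand 120, 6 couriers, 10 km).
# The search reports that adding two couriers would flip the prediction.
cf = counterfactual_search((120, 6, 10), steps=(5, 1, 2))
```

Here the explanation is actionable ("two more couriers would have avoided the predicted delay"), which is exactly the kind of insight a practitioner-facing user study can evaluate.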

– M. A. Carreira-Perpiñán, S. S. Hada. Counterfactual explanations for oblique decision trees: Exact, efficient algorithms. AAAI 2021.

– K. Kanamori, T. Takagi, K. Kobayashi, Y. Ike. Counterfactual explanation trees: Transparent and consistent actionable recourse with decision trees. AISTATS 2022.

– V. V. Mišić. Optimization of tree ensembles. Operations Research, 2020.

– M. Mistry, D. Letsios, G. Krennrich, R. M. Lee, R. Misener. Mixed-integer convex nonlinear optimization with gradient-boosted trees embedded. INFORMS Journal on Computing, 2021.
