Reliable Learning for Safe Autonomy with Conformal Prediction

Owing to their high expressive power and accuracy, machine learning (ML) models are now deployed in countless application domains, including autonomous and cyber-physical systems in high-risk and safety-critical settings such as healthcare and automotive. These systems increasingly integrate multiple ML components for, e.g., sensing, end-to-end control, predictive monitoring, and anomaly detection. Hence, data-driven analysis has become necessary in a context where rigorous model-driven techniques like model checking have long been the go-to solution.

In this project you will develop data-driven analysis techniques for autonomous systems based on conformal prediction (CP) [1,2], an increasingly popular approach to providing guarantees on the generalization error of ML models. CP can be applied on top of any supervised learning model and yields so-called prediction regions (instead of single-point predictions) that are guaranteed to contain the (unknown) ground truth with a given probability. Crucially, these coverage guarantees are finite-sample (as opposed to asymptotic) and do not rely on any parametric or distributional assumptions.

Our group has a track record of developing CP-based methods for predictive monitoring of autonomous and cyber-physical systems [3-6]. With this project, you will contribute to this endeavor by working on challenge problems including off-policy prediction [7,8], data-driven optimization, causal inference [9,10], robust inference under distribution shifts [11,12], and uncertain distributions [13,14].
The proposed techniques will be evaluated on standard benchmarks and on real-world scenarios from the REXASI-PRO EU project [15], which focuses on the safe navigation of autonomous wheelchairs for people with reduced mobility in crowded environments.

The overall goal of this project is to provide correctness guarantees for ML models used in high-stakes and safety-critical autonomous and cyber-physical systems, by expanding conformal prediction techniques to tackle novel and challenging problem domains.

[1] Vovk, Vladimir, Alexander Gammerman, and Glenn Shafer. Algorithmic learning in a random world. Vol. 29. New York: Springer, 2005.

[2] Angelopoulos, Anastasios N., and Stephen Bates. “A gentle introduction to conformal prediction and distribution-free uncertainty quantification.” arXiv preprint arXiv:2107.07511 (2021).

[3] Cairoli, Francesca, Nicola Paoletti, and Luca Bortolussi. “Conformal quantitative predictive monitoring of STL requirements for stochastic processes.” Proceedings of the 26th ACM International Conference on Hybrid Systems: Computation and Control. 2023.

[4] Cairoli, Francesca, Luca Bortolussi, and Nicola Paoletti. “Learning-Based Approaches to Predictive Monitoring with Conformal Statistical Guarantees.” International Conference on Runtime Verification. Cham: Springer Nature Switzerland, 2023.

[5] Bortolussi, Luca, et al. “Neural predictive monitoring and a comparison of frequentist and Bayesian approaches.” International Journal on Software Tools for Technology Transfer 23.4 (2021): 615-640.

[6] Cairoli, Francesca, Luca Bortolussi, and Nicola Paoletti. “Neural predictive monitoring under partial observability.” Runtime Verification: 21st International Conference, RV 2021, Virtual Event, October 11–14, 2021, Proceedings 21. Springer International Publishing, 2021.

[7] Russo, Alessio, Daniele Foffano, and Alexandre Proutiere. “Conformal Off-Policy Evaluation in Markov Decision Processes.” 62nd IEEE Conference on Decision and Control, Dec. 13-15, 2023, Singapore. IEEE, 2023.

[8] Taufiq, Muhammad Faaiz, et al. “Conformal off-policy prediction in contextual bandits.” Advances in Neural Information Processing Systems 35 (2022): 31512-31524.

[9] Lei, Lihua, and Emmanuel J. Candès. “Conformal inference of counterfactuals and individual treatment effects.” Journal of the Royal Statistical Society Series B: Statistical Methodology 83.5 (2021): 911-938.

[10] Chernozhukov, Victor, Kaspar Wüthrich, and Yinchu Zhu. “An exact and robust conformal inference method for counterfactual and synthetic controls.” Journal of the American Statistical Association 116.536 (2021): 1849-1864.

[11] Barber, Rina Foygel, et al. “Conformal prediction beyond exchangeability.” The Annals of Statistics 51.2 (2023): 816-845.

[12] Gibbs, Isaac, and Emmanuel Candes. “Adaptive conformal inference under distribution shift.” Advances in Neural Information Processing Systems 34 (2021): 1660-1672.

[13] Cauchois, Maxime, et al. “Robust validation: Confident predictions even when distributions shift.” arXiv preprint arXiv:2008.04267 (2020).

[14] Gendler, Asaf, et al. “Adversarially robust conformal prediction.” International Conference on Learning Representations. 2021.

[15] REliable & eXplAinable Swarm Intelligence for People with Reduced mObility (REXASI-PRO), https://rexasi-pro.spindoxlabs.com/.

Project ID

STAI-CDT-2024-KCL-19