Causal Temporal Logic

Temporal logic (TL) is arguably the primary language for formal specification and reasoning about system correctness and safety. It enables the specification and verification of properties such as “will the agent eventually reach the target while avoiding unsafe states?” or “will the system return to some desired service level after a fault?” TL has been successfully applied to the analysis of a wide range of systems, including hardware and software systems, cyber-physical systems (aka control systems) [1], and stochastic models [2] like Markov decision processes (MDPs), one of the main models in reinforcement learning and planning.

A limitation of existing TLs is that TL specifications are evaluated on a fixed configuration of the system, e.g., a fixed choice of policy or process/environment dynamics. This project aims to enable formal reasoning across different system configurations (or conditions), to express properties like “would the system remain safe if we switched to a high-performance controller?” or “what is the probability that a failure would have occurred had we applied a different configuration in the past?” These queries fall under causal inference (CI) [3]. Even though both CI and TL are very well established on their own, their combination has not been sufficiently explored in the literature.
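To make the interventional reading concrete, the first query above can be phrased on a toy Markov chain: compare the probability of “eventually reach the target while avoiding unsafe states” under a nominal transition kernel and under an intervened kernel modelling the controller switch. A minimal sketch — the chain, its numbers, and all names are illustrative, not taken from the project:

```python
def reach_avoid_prob(P, start, target, unsafe, iters=200):
    """Fixed-point iteration for x[s] = P(reach target before unsafe | s),
    i.e. the probability of 'eventually target while avoiding unsafe'."""
    n = len(P)
    x = [0.0] * n
    for _ in range(iters):
        new = []
        for s in range(n):
            if s == target:
                new.append(1.0)      # target reached
            elif s == unsafe:
                new.append(0.0)      # unsafe state entered
            else:
                new.append(sum(P[s][t] * x[t] for t in range(n)))
        x = new
    return x[start]

# States: 0 = start, 1 = target (absorbing), 2 = unsafe (absorbing).
P_nominal = [
    [0.5, 0.3, 0.2],  # nominal controller
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]
# do(controller := high-performance): intervene on the row of state 0.
P_intervened = [
    [0.2, 0.7, 0.1],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]

p_nom = reach_avoid_prob(P_nominal, 0, 1, 2)
p_int = reach_avoid_prob(P_intervened, 0, 1, 2)
```

With these made-up kernels the reach-avoid probability rises from 0.3/(0.3+0.2) = 0.6 under the nominal controller to 0.7/(0.7+0.1) = 0.875 under the intervention.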

With this project, we aim to develop a new formal verification framework combining temporal logic and causality. The new logic will include the usual temporal operators alongside causal ones, allowing one to express interventional (“what will happen if…?”) and counterfactual (“what would have happened if…?”) properties. To make the framework comprehensive and generally applicable, we aim to support different models, such as MDPs, control systems, and data-driven models like neural networks. This approach can be applied to reason about any kind of sequential decision-making process, but we expect to focus primarily on models of cyber-physical systems and autonomous and multi-agent systems. In particular, by enabling formal reasoning about the effect of interventions, our framework is relevant to safety and security analysis in healthcare and epidemiology [4-11]. Safety will be an overarching theme, as the approach is designed for safety verification of AI systems under nominal and counterfactual scenarios.

This project will result in the first logic-based framework to combine temporal and causal reasoning, topics that are central to AI safety and interpretability. See [12] for a preliminary version of this framework. The project will require extending existing (probabilistic) temporal logics with causal operators, defining the formal semantics of the introduced operators, and implementing effective decision procedures (based on symbolic/numerical or Monte Carlo techniques). A significant challenge stems from the fact that counterfactuals can be precisely identified only for a restricted class of models, a problem exacerbated by partial observability. We envision that our framework will allow us to derive bounds on the identification error for such problematic models, thereby bounding the satisfaction probability of the property. Another relevant direction will be investigating methods for the automated synthesis of policies that are optimal with respect to such logic specifications.
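One such restricted class where counterfactuals are identifiable is given by the Gumbel-max structural causal models of [10], which apply to categorical transitions such as those of a finite MDP. The sketch below (function names and the top-down truncated-Gumbel construction are illustrative, not lifted from [12]) infers the posterior exogenous noise behind an observed transition and replays it under an alternative distribution:

```python
import math
import random

def gumbel(mu):
    """Sample G ~ Gumbel(mu) by inverse transform."""
    return mu - math.log(-math.log(random.random()))

def posterior_noise(logp, k):
    """Sample exogenous Gumbel noise u consistent with having observed
    argmax_i (logp[i] + u[i]) = k (top-down truncated-Gumbel construction)."""
    z = gumbel(math.log(sum(math.exp(lp) for lp in logp)))  # value of the max
    u = []
    for i, lp in enumerate(logp):
        if i == k:
            g = z
        else:
            # Gumbel(lp) truncated to lie below the observed max z.
            g = -math.log(math.exp(-z) + math.exp(-gumbel(lp)))
        u.append(g - lp)
    return u

def counterfactual_step(p_obs, s_next, p_cf):
    """Observed: next state s_next drawn from distribution p_obs.
    Query: which next state would have occurred under p_cf,
    holding the (posterior) exogenous noise fixed?"""
    u = posterior_noise([math.log(p) for p in p_obs], s_next)
    scores = [math.log(q) + ui for q, ui in zip(p_cf, u)]
    return scores.index(max(scores))
```

By construction, replaying the posterior noise under the observed distribution itself always reproduces the observed outcome (the counterfactual consistency property that makes this class identifiable).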

[1] Bartocci, Ezio, et al. “Specification-based monitoring of cyber-physical systems: a survey on theory, tools and applications.” Lectures on Runtime Verification. Springer, 2018. 135-175.

[2] Kwiatkowska, Marta, Gethin Norman, and David Parker. “Stochastic model checking.” International School on Formal Methods for the Design of Computer, Communication and Software Systems. Springer, 2007. 220-270.

[3] Glymour, Madelyn, Judea Pearl, and Nicholas P. Jewell. Causal Inference in Statistics: A Primer. John Wiley & Sons, 2016.

[4] Tsirtsis, Stratis, Abir De, and Manuel Rodriguez. “Counterfactual explanations in sequential decision making under uncertainty.” Advances in Neural Information Processing Systems 34 (2021): 30127-30139.

[5] Abate, Alessandro, et al. “ARCH-COMP21 Category Report: Stochastic Models.” 8th International Workshop on Applied Verification of Continuous and Hybrid Systems, ARCH 2021. EasyChair, 2021.

[6] Althoff, Matthias, et al. “ARCH-COMP21 Category Report: Continuous and Hybrid Systems with Linear Continuous Dynamics.” ARCH@ADHS. 2021.

[7] OpenAI Gym documentation.

[8] Chen, Hongkai, et al. “Committed moving horizon estimation for meal detection and estimation in type 1 diabetes.” 2019 American Control Conference (ACC). IEEE, 2019.

[9] Komorowski, Matthieu, et al. “The artificial intelligence clinician learns optimal treatment strategies for sepsis in intensive care.” Nature medicine 24.11 (2018): 1716-1720.

[10] Oberst, Michael, and David Sontag. “Counterfactual off-policy evaluation with Gumbel-max structural causal models.” International Conference on Machine Learning. PMLR, 2019.

[11] Krish, Veena, et al. “Synthesizing Pareto-Optimal Stealthy and Effective Signal-Injection Attacks on ICDs.” IEEE Access (2022).

[12] Kazemi, Milad, and Nicola Paoletti. “Towards Causal Temporal Reasoning for Markov Decision Processes.” arXiv preprint arXiv:2212.08712 (2022).

Nicola Paoletti