Digital twins for the verification of learning and adaptive software

Learning and decision-making AI components are gaining popularity as enablers of modern adaptive software. Common uses include the classification or regression of incoming data (e.g., face recognition), the selection of the most likely explanation for an observed behavioral pattern (e.g., identifying which set of features an ongoing user session is likely to exercise), and automated decisions about adaptation actions (e.g., model-free controllers deciding online how computational resources should be allocated to different containers).

Most established modeling and verification procedures focus either on specific classes of AI methods or on classic software verification, and assume pre-defined property specifications. However, when AI and software components are integrated within a larger, complex system, their interactions and decision coupling may lead to emergent behaviors with potentially detrimental effects and intractably large execution spaces.

Digital twins are abstraction and simulation models of a component, or of a group of interacting components, that are continuously updated to mirror their real counterparts running in production.
This project aims to study formalization, design, and operation principles for digital twins that enable flexible verification, condition monitoring, and automated adaptation of the running system. In particular, digital twins make it possible to maintain, at runtime, a safe environment for the analysis and co-simulation of critical components and their interactions, supporting context-tailored, high-fidelity “in vitro” experiments that expose causal dependencies among the decisions and observable properties of interacting AI and non-AI components, and how these are affected by specific system states and input data.

Forks and replicas of up-to-date digital twins can be instantiated as abstract representations of (portions of) the current system state to analyze the effects of specific inputs and actions on the satisfaction of the overall system requirements. For example, the inputs generated by a specific user can be amplified on a digital twin to verify that they are not introducing an unjustifiable semantic drift in the learning modules, allowing the timely neutralization of an attack.
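
To make the fork-and-amplify idea concrete, here is a minimal Python sketch of how such a check might look; everything in it (the DigitalTwin class, its fork method, drift_after_amplification, and the DRIFT_THRESHOLD value) is hypothetical, and it assumes a scikit-learn-style online learner exposing partial_fit and predict.

```python
import copy

import numpy as np

# Hypothetical tolerance on how far the fork's predictions may diverge.
DRIFT_THRESHOLD = 0.15


class DigitalTwin:
    """Minimal stand-in for a twin mirroring a learning module in production."""

    def __init__(self, model, state):
        self.model = model  # e.g., an online classifier kept in sync with production
        self.state = state  # mirrored (portion of the) system state

    def fork(self):
        """Create an isolated replica for safe 'in vitro' experiments."""
        return DigitalTwin(copy.deepcopy(self.model), copy.deepcopy(self.state))


def drift_after_amplification(twin, user_inputs, reference_inputs, factor=100):
    """Replay one user's inputs `factor` times on a fork and measure how far
    the module's predictions on held-out reference data move from baseline."""
    fork = twin.fork()
    baseline = np.asarray(fork.model.predict(reference_inputs))
    for _ in range(factor):  # amplify the suspicious inputs
        for x, y in user_inputs:
            fork.model.partial_fit([x], [y])  # sklearn-style online update
    drifted = np.asarray(fork.model.predict(reference_inputs))
    return np.mean(baseline != drifted)  # fraction of flipped predictions


# If the amplified inputs push the fork past the tolerance, the user's
# traffic can be quarantined before it degrades the production module:
#
#     if drift_after_amplification(twin, session_inputs, holdout) > DRIFT_THRESHOLD:
#         quarantine(user)  # hypothetical mitigation hook
```

Because the experiment runs entirely on a fork, the production module is never exposed to the amplified inputs; only the measured divergence feeds back into the decision to intervene.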

Please get in touch with Dr Antonio Filieri or Dr Giuliano Casale if you are interested in discussing this project in further detail.

Project ID

STAI-CDT-2020-IC-41

Supervisor