Specification, diagnosis and repair for deep learning systems

Recent years have witnessed a flurry of advances in machine learning (ML), enabling its widespread application in domains such as healthcare, security and autonomous vehicles. However, its deployment has also come at a cost, resulting in racial discrimination and, in some instances, loss of life.

Formal methods (such as verification) have a long history of developing techniques for ensuring the safety and correctness of software by construction. Their use for ML-enabled software is still in its infancy, owing to the challenges that such software and the environments for which it is designed impose, including: (i) ML software typically comprises large, complex black-box models; and (ii) the data from which the ML software is constructed and the environment in which it is deployed typically evolve continually, rendering the models less useful over time and potentially unsafe.

Current work has focused on tackling these problems independently. For example, [Elboher et al 2020] propose a method to tackle the former challenge by abstracting complex deep neural networks (DNNs) into smaller, more manageable networks and checking these against a given set of system-level properties that prescribe input-output relations of the network. However, it is assumed that the concrete models and the properties against which they are checked are fixed. [Gopinath et al 2019], on the other hand, allow for the inference of low-level properties of ML models (specifically feed-forward neural networks), but do not provide means for reasoning about the models at the system level, nor for adapting the inferred properties in response to changes in the data or the environment.
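
To make the notion of a system-level input-output property concrete, below is a minimal, self-contained sketch. The two-layer network, its weights, the input names and the property are illustrative assumptions chosen for this example, not taken from the cited papers; the check shown is naive random sampling, which can falsify such a property but never prove it.

```python
# A minimal sketch of a system-level input-output specification, under
# illustrative assumptions: the two-layer network, its weights and the property
# are invented for this example and are not taken from the cited papers.
import numpy as np

rng = np.random.default_rng(0)

# Toy feed-forward ReLU network: 2 inputs (say, intruder distance and closing
# speed), 2 outputs (scores for "clear-of-conflict" and "turn").
W1 = np.array([[1.0, -0.5], [0.3, 0.8]]); b1 = np.array([0.1, -0.2])
W2 = np.array([[0.7, -1.2], [-0.4, 0.9]]); b2 = np.array([0.0, 0.05])

def net(x):
    return W2 @ np.maximum(0.0, W1 @ x + b1) + b2

# Property P: for every input with distance in [5, 10] and speed in [0, 1],
# the "clear-of-conflict" score (output 0) exceeds the "turn" score (output 1).
lo, hi = np.array([5.0, 0.0]), np.array([10.0, 1.0])

def violates(x):
    y = net(x)
    return y[0] <= y[1]

# Naive falsification by random sampling: it may find counterexamples, but
# finding none proves nothing -- hence the need for verification techniques.
samples = rng.uniform(lo, hi, size=(10_000, 2))
counterexample = next((x for x in samples if violates(x)), None)
print("counterexample:", counterexample)   # None here means "none found", not "proven"
```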

This PhD will focus on developing new theories and tools for the verification and the learning/revision of specifications of ML models and their behaviour, in response to: (i) detected violations of such specifications, and (ii) shifts in the data. More specifically, the research will focus on the theory and design of counterexample-guided approaches to tackle two challenges: the complexity of the models and their behaviour, and the evolution of the environment in which they operate, within the context of safety- and mission-critical applications (such as the ACAS XU system, an airborne collision-avoidance system for drones).
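
As an illustration of the counterexample-guided style of reasoning envisaged here, the following is a minimal sketch under assumptions of its own: the toy ReLU network, its weights, the property and the interval abstraction are invented for the example and are not the ACAS XU networks or the methods of the cited papers. The loop over-approximates the network with interval bounds; when the bounds cannot decide the property, it tests a concrete candidate counterexample and, if that candidate turns out to be spurious, refines the input region by splitting it.

```python
# Counterexample-guided verification sketch (illustrative assumptions only):
# prove that output 0 dominates output 1 on an input box, using interval
# bounds for over-approximation and input splitting for refinement.
import numpy as np

W1 = np.array([[1.0, -0.5], [0.3, 0.8]]); b1 = np.array([0.1, -0.2])
W2 = np.array([[0.7, -1.2], [-0.4, 0.9]]); b2 = np.array([0.0, 0.05])

def net(x):
    return W2 @ np.maximum(0.0, W1 @ x + b1) + b2

def interval_bounds(lo, hi):
    """Propagate the input box [lo, hi] through the network with interval arithmetic."""
    def affine(W, b, l, u):
        c, r = (l + u) / 2, (u - l) / 2
        centre, radius = W @ c + b, np.abs(W) @ r
        return centre - radius, centre + radius
    l1, u1 = affine(W1, b1, lo, hi)
    l1, u1 = np.maximum(0.0, l1), np.maximum(0.0, u1)   # ReLU on intervals
    return affine(W2, b2, l1, u1)

def verify(lo, hi, depth=0, max_depth=20):
    """Return None if the property provably holds on the box, a concrete
    counterexample if one is found, or 'unknown' if the budget runs out."""
    out_lo, out_hi = interval_bounds(lo, hi)
    if out_lo[0] > out_hi[1]:
        return None                          # property proven on this sub-box
    candidate = (lo + hi) / 2                # candidate counterexample
    y = net(candidate)
    if y[0] <= y[1]:
        return candidate                     # genuine violation of the property
    if depth >= max_depth:
        return "unknown"                     # candidate was spurious; give up
    d = int(np.argmax(hi - lo))              # refine: split the widest dimension
    mid = (lo[d] + hi[d]) / 2
    left_hi, right_lo = hi.copy(), lo.copy()
    left_hi[d], right_lo[d] = mid, mid
    result = verify(lo, left_hi, depth + 1, max_depth)
    return result if result is not None else verify(right_lo, hi, depth + 1, max_depth)

print(verify(np.array([5.0, 0.0]), np.array([10.0, 1.0])))   # None => property holds
```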

D. Gopinath, H. Converse, C. Pasareanu and A. Taly, “Property Inference for Deep Neural Networks,” 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE), San Diego, CA, USA, 2019, pp. 797-809, doi: 10.1109/ASE.2019.00079.

Y. Y. Elboher, J. Gottschlich and G. Katz, “An Abstraction-Based Framework for Neural Network Verification,” Computer Aided Verification (CAV), 2020, pp. 43-65.

Project ID

STAI-CDT-2021-IC-21

Supervisor

Dalal Alrajeh (www.doc.ic.ac.uk/~da04)

Category

Logic, Verification