State-of-the-art perception systems, including those based on Lidar or cameras, are increasingly being used in a range of critical applications including security and autonomous vehicles. While deep learning systems based on CNNs achieve high accuracy in some domains, they remain brittle and susceptible to attacks, particularly on complex input domains.
Considerable research has gone into identifying adversarial examples and defence mechanisms against them. Comparably less research has sought to formally verify whether a perception system is robust with respect to possible transformations of key features in the input space [KL19].
In this project the student will:
– Develop novel methods to formally verify the correctness of perception systems against adversarial examples and against geometric and contrast/luminosity transformations.
– Develop a method for using any counterexample found to perform model repair.
– Evaluate the applicability of the approach in cases of practical relevance, such as Lidar-based detection and medical image classification.
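To make the verification task concrete, the sketch below illustrates one standard approach the project could build on: interval bound propagation through a small ReLU network to certify robustness to a bounded luminosity shift. This is a generic sound-but-incomplete technique, not the specific method of [KL19]; the network shape and function names are illustrative assumptions.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an interval [lo, hi] through the affine layer x -> Wx + b."""
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # worst-case growth of the box
    return new_center - new_radius, new_center + new_radius

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def certify_luminosity(x, layers, label, eps):
    """Return True only if the prediction `label` is provably unchanged
    for every luminosity offset of the input within [-eps, +eps].
    Treating pixels independently over-approximates a uniform shift,
    so the check is sound but may reject robust inputs (incomplete)."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:  # no activation after the output layer
            lo, hi = interval_relu(lo, hi)
    # Certified if the true class's lower bound beats every rival's upper bound.
    return all(lo[label] > hi[j] for j in range(len(lo)) if j != label)
```

For example, with a toy two-layer identity network and input `x = [1.0, 0.0]`, the check certifies class 0 for a small `eps` such as 0.1, but fails to certify for a large `eps` such as 0.6, where the output intervals overlap. A counterexample search in the failing case is exactly the starting point for the model-repair step above.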
[KL19] Panagiotis Kouvaros, Alessio Lomuscio. Formal Verification of CNN-based Perception Systems. https://arxiv.org/abs/1811.11373