A normative multi-agent framework to ensure the resilience of autonomous vehicles' AI algorithms against adversarial machine learning attacks.

An increasing number of depth sensors and surround-view cameras are being installed in the new generation of cars. For example, Tesla Motors uses a forward radar, a front-facing camera, and multiple ultrasonic sensors to enable its Autopilot feature. Similarly, Google's self-driving car uses lidar and cameras to collect the data needed for autonomous driving. Recently, several academic works have demonstrated that an adversary can launch adversarial machine learning evasion attacks against 2D (RGB) road-sign classifiers and 3D (depth) object detectors. Such attacks can have severe safety repercussions: the former can cause the vehicle to misclassify a stop sign as a speed-limit sign, while the latter can force the vehicle to an abrupt stop by introducing ghost objects into the surroundings of the ego-vehicle.
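To make the threat concrete, the following is a minimal sketch of a fast-gradient-sign-style evasion attack on a toy linear "road-sign classifier". The model, features, and class labels are illustrative stand-ins for the CNN classifiers attacked in the cited works, not any specific vehicle's pipeline.

```python
import numpy as np

# Toy linear classifier: class 0 = "stop", class 1 = "speed limit".
# Weights are random stand-ins for a trained model.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 8))  # per-class weight vectors over 8 features
b = np.zeros(2)

def predict(x):
    return int(np.argmax(W @ x + b))

def fgsm_evasion(x, true_label, eps):
    """Step against the sign of the true-class margin gradient.

    For a linear model, the gradient of (true score - other score)
    with respect to x is simply W[true] - W[other].
    """
    other = 1 - true_label
    grad = W[true_label] - W[other]
    return x - eps * np.sign(grad)

# A clean input firmly classified as "stop" (class 0), plus small noise.
x = 2.0 * np.sign(W[0] - W[1]) + 0.1 * rng.normal(size=8)
x_adv = fgsm_evasion(x, true_label=0, eps=5.0)
# With this seed, the small bounded perturbation flips the prediction
# from "stop" (0) to "speed limit" (1) -- the evasion described above.
```

The same sign-of-gradient principle underlies the physical-world attacks on road-sign classifiers, where the perturbation is additionally constrained to be printable and robust to viewpoint changes.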

The goal of the project is to develop a framework for enforcing the UK Highway Code on interconnected autonomous vehicles. The project will model autonomous vehicles in an interconnected road infrastructure as a normative multi-agent system. It will explore the development of an ontology and of relevant traffic norms that vehicles should adopt, and encode appropriate responses (sanctions) for violations. The project will apply the above to ensure safety and accountability in the face of machine learning attacks on 2D (RGB) and 3D (depth) object classifiers and on motion-planning algorithms.
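The norm-monitoring idea can be sketched as follows. The norm ("stop at a stop sign"), the `VehicleState` fields, and the sanction format are hypothetical illustrations chosen for this example, not the project's actual ontology or its encoding of the Highway Code.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    """Observable state of one agent (vehicle) in the road infrastructure."""
    vehicle_id: str
    speed_mph: float
    at_stop_sign: bool

@dataclass
class Norm:
    """A traffic norm that a monitor can check against a vehicle's state."""
    name: str
    def violated_by(self, state: "VehicleState") -> bool:
        raise NotImplementedError

class StopAtStopSign(Norm):
    def violated_by(self, state):
        # Violation: the vehicle is moving while at a stop sign,
        # e.g. because an evasion attack made it misread the sign.
        return state.at_stop_sign and state.speed_mph > 0.0

def monitor(norms, state):
    """Institutional monitor: detect norm violations and issue sanctions."""
    return [f"sanction({state.vehicle_id}, {n.name})"
            for n in norms if n.violated_by(state)]

norms = [StopAtStopSign("stop_at_stop_sign")]
violating = VehicleState("AV-1", speed_mph=12.0, at_stop_sign=True)
compliant = VehicleState("AV-1", speed_mph=0.0, at_stop_sign=True)
```

A monitor of this shape cleanly separates the norm specification (what the Highway Code requires) from detection and sanctioning, which is what allows violations caused by adversarial attacks to be caught at the system level even when an individual vehicle's perception is compromised.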