Automated verification and robustification of tree-based models for safe and robust decision making

Advances in machine learning have enabled the development of numerous
applications requiring the automation of tasks, such as computer vision, that were previously thought impossible to tackle. Although this success has been driven mainly by neural networks, applications that require the explainability of their decisions are typically tackled with more interpretable models. Decision trees, which are more “symbolic” than “connectionist” structures, are widely used in practice.

Despite being more interpretable, decision trees are still susceptible to the same kinds of vulnerabilities that hinder the adoption of neural networks in safety-critical applications. For example, small perturbations to an input (typically called adversarial perturbations) can dramatically change the output, thereby raising concerns about model robustness and safety. To address this, recent work has put forward formal methods to automatically assess the robustness of decision tree models against adversarial perturbations [1].
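To see why decision trees are fragile in this sense, consider a hand-coded toy example (purely illustrative, not taken from the cited work): an input that sits just above a split threshold can have its classification flipped by a perturbation far smaller than any meaningful change in the underlying feature.

```python
# Illustrative sketch: a hypothetical single-feature decision stump.
# All names and thresholds here are invented for demonstration.

def tiny_tree(income):
    """A toy loan-approval stump: approve iff income >= 50.0."""
    return "approve" if income >= 50.0 else "reject"

x = 50.01                  # original input, classified "approve"
eps = 0.02                 # tiny adversarial perturbation budget
x_adv = x - eps            # perturbed input, still within the budget

print(tiny_tree(x))        # approve
print(tiny_tree(x_adv))    # reject: a 0.02 shift flips the decision
```

Because every internal node of a tree is a hard threshold test, any input lying close to a threshold admits such a flip; verification methods make this reachability analysis systematic.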

This work is still preliminary, however, and more effort is required to ensure that
the verification methods scale to the very large decision trees and forests used
in application areas ranging from finance and insurance to decision making in general.
Furthermore, the state of the art does not address any remedial action to be taken
once a vulnerability is discovered.

This project is concerned with the development of methods for the scalable
verification and automated robustification of symbolic decision-making models, so that unsafe decision trees can be refined until they satisfy their safety specifications.

In particular, the project is likely to include:

(1) The development of highly efficient verification methods for decision trees and forests.

(2) The development of methods that use the outputs of step (1) to modify the model under analysis
towards improved robustness.

(3) Applications of the methods in practice, including finance, insurance, and beyond.
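The kind of verification referred to in step (1) can be sketched in a few lines. The following is a schematic, assumed formulation rather than the project's actual method: given an input x and a perturbation radius eps, exhaustively collect every leaf label reachable when each feature may lie anywhere in [x[i] - eps, x[i] + eps]; the prediction is provably robust iff exactly one label is reachable. The tree encoding and feature values below are invented for illustration.

```python
# Schematic exhaustive robustness check for a single decision tree.
# Node encodings (hypothetical): internal = ("split", feature, threshold, left, right),
# leaf = ("leaf", label). Routing convention: go left iff feature < threshold.

TREE = ("split", 0, 5.0,
        ("leaf", "safe"),
        ("split", 1, 2.0, ("leaf", "safe"), ("leaf", "unsafe")))

def reachable_labels(node, lo, hi):
    """Labels of all leaves reachable when feature j ranges over [lo[j], hi[j]]."""
    if node[0] == "leaf":
        return {node[1]}
    _, j, t, left, right = node
    labels = set()
    if lo[j] < t:           # some value in the interval satisfies feature < t
        labels |= reachable_labels(left, lo, hi)
    if hi[j] >= t:          # some value in the interval satisfies feature >= t
        labels |= reachable_labels(right, lo, hi)
    return labels

def is_robust(tree, x, eps):
    """True iff every input within the L-infinity ball of radius eps gets the same label."""
    lo = [v - eps for v in x]
    hi = [v + eps for v in x]
    return len(reachable_labels(tree, lo, hi)) == 1

print(is_robust(TREE, [3.0, 1.0], 0.5))   # True: only "safe" leaves are reachable
print(is_robust(TREE, [4.8, 1.8], 0.5))   # False: an "unsafe" leaf is within reach
```

This naive traversal is exponential in the worst case for forests, which is precisely why step (1) targets highly efficient verification, and its output (the set of reachable unsafe leaves) is the kind of diagnostic that step (2) would consume to guide model repair.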

The student will be fully immersed in the VAS research team at Imperial-X (https://vas.doc.ic.ac.uk/). Collaboration with companies and start-ups is possible, including placements.

https://www.doc.ic.ac.uk/~alessio/papers.html

https://vas.doc.ic.ac.uk/

Project ID

STAI-CDT-2023-IC-7

Supervisor

Prof Alessio Lomuscio http://www.doc.ic.ac.uk/~alessio

Category

Verification