Intrusion Detection Systems (IDSs) are commonly deployed in networks and hosts to identify malicious activities that represent misuse of computer systems. The number and variety of attacks have been constantly increasing, and detection based on manually defined signatures is no longer viable. Hence, AI-powered IDS solutions have been explored to keep up with this arms race and scale to new threats, but they are not yet deployed at scale in companies, mostly because such AI-powered systems are hard to trust and interpret, and they suffer from high false-positive rates that prevent their applicability in real-world scenarios. In particular, a major limitation is that most existing AI-powered IDSs are data-driven: the relationships learned from the data are often artifacts or domain-agnostic, and thus hard to trust and interpret even for network administrators.
This project aims to explore the design of a novel model-driven AI paradigm for intrusion detection, where expert knowledge is embedded in a model that characterizes user behaviors (e.g., through formal logic), with the purpose of identifying malicious activities while ensuring trust, interpretability, and verifiability of the IDS decisions, particularly when deployed in real-world contexts. In other words, this project aims to advance the state of the art in AI-powered IDSs by integrating expert knowledge into the models to achieve trustworthy, interpretable, and verifiable decisions. This will increase the overall safety of protected users by making IDSs more effective and reliable, and will be a step towards industry-wide deployment of AI-based solutions for intrusion detection.
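To make the model-driven paradigm concrete, the following is a minimal illustrative sketch (not the project's actual design) of how an expert-defined rule could drive interpretable detection: each alert is traceable to a named, human-readable rule rather than an opaque learned score. The `Event` and `Rule` types, the brute-force rule, and its thresholds are hypothetical examples chosen for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical, simplified login-audit record.
@dataclass
class Event:
    user: str
    action: str       # e.g. "login_failed", "login_success"
    timestamp: float  # seconds since some epoch

# An expert rule: a named, human-readable predicate over an event trace.
@dataclass
class Rule:
    name: str
    description: str
    predicate: Callable[[List[Event]], bool]

def brute_force_predicate(events: List[Event],
                          threshold: int = 5,
                          window: float = 60.0) -> bool:
    """Fires if at least `threshold` failed logins occur within `window`
    seconds before a successful login (a classic brute-force pattern)."""
    failures = [e.timestamp for e in events if e.action == "login_failed"]
    successes = [e.timestamp for e in events if e.action == "login_success"]
    for t in successes:
        recent = [f for f in failures if t - window <= f < t]
        if len(recent) >= threshold:
            return True
    return False

RULES = [
    Rule(
        name="brute_force_login",
        description=">=5 failed logins within 60s followed by a success",
        predicate=brute_force_predicate,
    ),
]

def detect(events: List[Event]) -> List[str]:
    """Return the names of all expert rules that fire on the trace.
    Every alert maps back to a rule an administrator can read and verify,
    which is the interpretability property the project targets."""
    return [r.name for r in RULES if r.predicate(events)]
```

In a full system, such rules would be expressed in a formal logic with well-defined semantics (enabling verification of the detector's decisions) and combined with learned components, rather than hand-coded as Python predicates.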
R. Sommer and V. Paxson, "Outside the closed world: On using machine learning for network intrusion detection," IEEE Symposium on Security and Privacy, 2010.
S. Jajodia, N. Park, F. Pierazzi, A. Pugliese, E. Serra, G. I. Simari, and V. S. Subrahmanian, "A probabilistic logic of cyber deception," IEEE Transactions on Information Forensics and Security, 2017.