Neural-symbolic learning: a solution for generalisable and explainable AI

This research proposal focuses on developing a novel hybrid neural-symbolic learning approach that combines the strength of deep learning methods in extracting features from unstructured data with the ability of symbolic learning to learn interpretable, and therefore explainable, models for a given downstream learning task. Various hybrid approaches have been proposed in the literature in recent years, but they tend to focus mainly on neural-symbolic reasoning for tasks such as visual question answering (VQA) and complex event detection. None of the existing approaches tackles the problem of hybrid neural-symbolic learning. In a few attempts, the task of learning "symbolic" rules from unstructured data has been tackled in a purely differentiable manner, leading to suboptimal solutions and to learned models whose interpretation remains subject to a post-processing discretisation of the learned outcomes. In this project we intend instead to preserve both the neural and the symbolic learning paradigms and to develop seamless ways of integrating them, ideally in an end-to-end fashion. Successful contributions to this project have the potential for significant impact in many application domains, as well as for providing new insights into the ways in which symbolic learning and other forms of machine learning, such as deep reinforcement learning, could be integrated.
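To make the end-to-end integration concrete, the following is a minimal, hypothetical sketch (not the project's actual method): a neural encoder maps raw inputs to concept probabilities, and a fixed symbolic rule over those concepts is relaxed with a product t-norm so gradients can flow through it during training. The ConceptEncoder class, the rule_and/rule_or helpers, and the specific example rule are all illustrative assumptions.

```python
# Hypothetical neural-symbolic pipeline sketch (assumes PyTorch is installed).
import torch
import torch.nn as nn

class ConceptEncoder(nn.Module):
    """Neural component: extracts per-concept truth degrees from raw input."""
    def __init__(self, in_dim: int, n_concepts: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, n_concepts),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))  # concept probabilities in [0, 1]

def rule_and(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    # Product t-norm: a differentiable relaxation of logical conjunction.
    return p * q

def rule_or(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    # Probabilistic sum: a differentiable relaxation of logical disjunction.
    return p + q - p * q

def symbolic_label(concepts: torch.Tensor) -> torch.Tensor:
    # Illustrative fixed rule: label holds iff (c0 AND c1) OR c2.
    c0, c1, c2 = concepts[:, 0], concepts[:, 1], concepts[:, 2]
    return rule_or(rule_and(c0, c1), c2)

encoder = ConceptEncoder(in_dim=10, n_concepts=3)
optimiser = torch.optim.Adam(encoder.parameters(), lr=1e-2)
loss_fn = nn.BCELoss()

x = torch.randn(32, 10)                  # toy "unstructured" inputs
y = torch.randint(0, 2, (32,)).float()   # toy binary labels

for _ in range(100):                     # end-to-end training loop
    optimiser.zero_grad()
    pred = symbolic_label(encoder(x))    # neural features -> symbolic rule
    loss = loss_fn(pred, y)
    loss.backward()                      # gradients pass through the relaxed rule
    optimiser.step()
```

Note that in this sketch the rule is hand-written and only the neural encoder is trained; the project's aim, by contrast, is to also learn the symbolic rules themselves while keeping both paradigms intact.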

Project ID

STAI-CDT-2020-IC-43

Supervisor