In recent years, AI has achieved tremendous success in many complex decision-making tasks. However, safety concerns often severely restrict the adoption of these systems in the real world. One concrete example is automated ventricular fibrillation classification (AVFC), where a learned model needs to decide (i.e., classify) whether, given an ECG trace, a defibrillatory shock is to be preferred to cardiac massage.
In this context, as in many similar ones, the safety of a learned model is gauged by its out-of-sample guarantees, i.e., formal probabilistic guarantees on how “well” the model behaves on unseen data points drawn from the same distribution. While for some applications such formal guarantees have limited appeal, for others they are of vital importance, as for AVFC, where the European Community imposes strict requirements. Providing such out-of-sample guarantees is particularly challenging in these settings due to the limited available data, which are often difficult or very costly to obtain.
The impact of data scarcity on the quality of trained models is particularly severe within the classical (and still sharpest) training-and-testing approach, which operates by dividing the dataset into a training set, used solely for training, and a test set, used solely for deriving generalization guarantees (e.g., through Chernoff or Binomial test-set bounds). At its core, the central deficiency of this approach lies in the fact that only a subset of the data can be used for training if one wants to provide formal generalization guarantees, thus significantly limiting the performance of the resulting model. In this context, a natural question arises: Can we develop new approaches where all data is used for training while still providing formal guarantees? While a number of very recent approaches have pursued this direction, even the sharpest of them (PAC-Bayes learning) lags behind the training-and-testing approach.
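To make the two test-set bounds mentioned above concrete, the sketch below computes a Hoeffding (Chernoff-style) bound and an exact Binomial (Clopper-Pearson-style) bound on the true error from the empirical error on a held-out test set. The function names and the numerical example are ours, for illustration only:

```python
import math

def hoeffding_bound(k, n, delta):
    """Hoeffding/Chernoff-style test-set bound: with probability
    at least 1 - delta, true error <= k/n + sqrt(ln(1/delta) / (2n))."""
    return k / n + math.sqrt(math.log(1.0 / delta) / (2 * n))

def binom_cdf(k, n, p):
    """P[Bin(n, p) <= k], summed directly from the pmf."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k + 1))

def binomial_bound(k, n, delta, tol=1e-10):
    """Exact Binomial test-set bound (Clopper-Pearson style): the largest
    error rate p with P[Bin(n, p) <= k] >= delta, found by bisection,
    is a (1 - delta)-confidence upper bound on the true error."""
    lo, hi = k / n, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binom_cdf(k, n, mid) >= delta:
            lo = mid
        else:
            hi = mid
    return lo

# Hypothetical example: 5 errors on 1000 held-out points, 99% confidence.
print(hoeffding_bound(5, 1000, 0.01))  # ≈ 0.053
print(binomial_bound(5, 1000, 0.01))   # ≈ 0.013, markedly sharper
```

The gap between the two numbers illustrates why the exact Binomial bound is the sharper reference point for the training-and-testing approach, especially at low empirical error rates.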
The goal of this project is to break this fundamental barrier. To do so, we will develop i) novel algorithms and ii) novel mathematical tools (i.e., generalization theorems) that allow all available data to be utilized for training while providing formal generalization guarantees superior to those existing in the literature. The project will capitalize on very recent breakthroughs in the field of the Scenario Approach, an area where the PI actively collaborates with the inventors of the approach. Preliminary results (available upon request) show that breaking the “training-and-testing” barrier is indeed possible, and have thus opened the door to a new field of research, which this project will explore. In particular, we will focus on the following directions:
— Provide positive and negative results quantifying the extent to which the training-and-testing barrier can be broken, through the development of algorithms and mathematical tools;
— Integrate the approach with model-based learning paradigms, such as neuro-symbolic learning, to enhance data efficiency and interpretability;
— Apply the developed techniques in data-scarce, real-world settings, with particular focus on the AVFC problem discussed earlier.
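For context on the Scenario Approach mentioned above, a well-known generalization theorem for convex scenario programs states that, with confidence at least 1 − β, the solution obtained from N sampled scenarios with d decision variables violates at most a fraction ε of unseen constraints, where ε is the smallest value satisfying Σ_{i=0}^{d−1} C(N, i) ε^i (1 − ε)^{N−i} ≤ β. The sketch below (function name and parameter values are ours, for illustration) computes such an ε numerically:

```python
import math

def scenario_epsilon(N, d, beta, tol=1e-10):
    """Smallest violation level eps such that
    sum_{i=0}^{d-1} C(N, i) * eps^i * (1-eps)^(N-i) <= beta,
    i.e., the (1 - beta)-confidence out-of-sample guarantee for a
    convex scenario program with N scenarios and d decision variables."""
    def tail(eps):
        return sum(math.comb(N, i) * eps**i * (1 - eps)**(N - i)
                   for i in range(d))
    # tail(eps) decreases from 1 (at eps = 0) to 0 (at eps = 1): bisect.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if tail(mid) > beta:
            lo = mid
        else:
            hi = mid
    return hi

# Hypothetical example: 1000 scenarios, 10 decision variables,
# confidence 1 - 1e-6 on the violation probability of the solution.
print(scenario_epsilon(1000, 10, 1e-6))
```

Note that the guarantee holds even though all N scenarios are used to compute the solution, with no held-out test set; this is the property the project builds on to break the training-and-testing barrier.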