Implementing Differential Privacy in Neural Networks to Enhance Data Security and Anonymization

Abstract: This PhD project addresses the growing need for robust privacy-preserving mechanisms in machine learning, focusing on the application of differential privacy within neural networks. As deep learning is increasingly used to process sensitive information, techniques are needed that prevent individual training examples from being recovered or identified, for instance through membership-inference or model-inversion attacks. This research will explore methods to integrate differential privacy into neural network architectures, ensuring the confidentiality of training datasets while maintaining the utility of the models.

Introduction: As neural networks become more deeply embedded in systems that handle sensitive data, the potential for privacy breaches grows. Differential privacy provides a framework to quantify and control the privacy loss incurred when releasing information about a dataset. This project will investigate how differential privacy can be applied to neural network training, balancing the trade-off between privacy protection and the predictive performance of the models.
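For concreteness, the project will build on the standard (ε, δ)-definition from Dwork and Roth (linked in the references below): a randomized mechanism M satisfies (ε, δ)-differential privacy if, for all pairs of datasets D and D' differing in a single record and all measurable sets of outputs S,

\Pr[\mathcal{M}(D) \in S] \le e^{\varepsilon} \, \Pr[\mathcal{M}(D') \in S] + \delta,

where ε bounds the multiplicative privacy loss and δ is the probability of exceeding that bound.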

Objectives:

  • To conduct a comprehensive literature review on current approaches and challenges of applying differential privacy in neural networks.
  • To develop a theoretical framework for differential privacy that is specifically tailored to neural network applications.
  • To design, implement, and evaluate new algorithms that integrate differential privacy into neural network training processes without significantly degrading model accuracy (a minimal sketch of the standard starting point, DP-SGD, follows this list).
  • To create a benchmark dataset and evaluation metrics for assessing the performance of privacy-preserving neural networks.
  • To investigate the impact of differential privacy on various neural network architectures and learning tasks, such as classification, regression, and generative models.
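The natural baseline for the algorithmic objective above is differentially private stochastic gradient descent (DP-SGD) of Abadi et al. (second reference below): clip each example's gradient, then add calibrated Gaussian noise before the parameter update. The sketch below is illustrative only, applying the technique to a plain logistic-regression model in NumPy; the function name and hyperparameter defaults are our own choices, not part of the cited work.

import numpy as np

def dp_sgd_logistic(X, y, epochs=10, lr=0.1, batch_size=32,
                    clip_norm=1.0, noise_multiplier=1.1, seed=0):
    """DP-SGD (Abadi et al., 2016) on logistic regression: per-example
    gradients are clipped to clip_norm, and Gaussian noise with standard
    deviation noise_multiplier * clip_norm is added to the summed batch
    gradient before each parameter update."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch_size):
            batch = order[start:start + batch_size]
            # Per-example gradients of the logistic loss: (sigmoid(Xw) - y) * x.
            preds = 1.0 / (1.0 + np.exp(-X[batch] @ w))
            grads = (preds - y[batch])[:, None] * X[batch]   # shape (B, d)
            # Clip each example's gradient to L2 norm at most clip_norm.
            norms = np.linalg.norm(grads, axis=1, keepdims=True)
            grads = grads / np.maximum(1.0, norms / clip_norm)
            # Sum, add calibrated Gaussian noise, average, and step.
            noisy_sum = grads.sum(axis=0) + rng.normal(
                0.0, noise_multiplier * clip_norm, size=d)
            w -= lr * noisy_sum / len(batch)
    return w

In practice the cumulative (ε, δ) spent over all iterations would be tracked with a moments or Rényi accountant, as implemented in libraries such as Opacus and TensorFlow Privacy.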

Methodology: The project will utilize a combination of theoretical, experimental, and empirical methods. Initial efforts will focus on the theoretical underpinnings of differential privacy and its mathematical integration into neural network algorithms. Following this, experimental simulations using synthetic and real-world datasets will be conducted to assess the viability and performance of the proposed models. Empirical validation will be performed by comparing the new models with state-of-the-art privacy-preserving techniques.
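As one concrete building block for these experiments, the noise scale of a single query under the classical Gaussian mechanism can be calibrated from a target (ε, δ) using the bound given by Dwork and Roth; the helper below is a minimal sketch, and the function name is our own.

import math

def gaussian_sigma(epsilon, delta, l2_sensitivity=1.0):
    """Classical Gaussian-mechanism calibration (Dwork & Roth):
    sigma >= sqrt(2 ln(1.25/delta)) * sensitivity / epsilon,
    valid for 0 < epsilon < 1."""
    if not 0.0 < epsilon < 1.0:
        raise ValueError("the classical bound requires 0 < epsilon < 1")
    return math.sqrt(2.0 * math.log(1.25 / delta)) * l2_sensitivity / epsilon

# Example: a query with unit L2 sensitivity at (epsilon=0.5, delta=1e-5)
# requires noise with sigma ≈ 9.7.
print(gaussian_sigma(0.5, 1e-5))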

References

Dwork, C. and Roth, A., The Algorithmic Foundations of Differential Privacy: https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf

Abadi, M. et al., Deep Learning with Differential Privacy: https://arxiv.org/pdf/1607.00133.pdf

Project ID

STAI-CDT-2024-KCL-09

Supervisor

Dr Frederik Mallmann-Trenn

randomlab.uk