Data Bias Evaluation and Mitigation via Rule-based Classification

Motivation: Training data can be severely biased. Existing data bias metrics are based on data balance conditioned on protected attributes. This is coarse-grained: it ignores the relationships among different attributes and among different data instances. As a result, data bias can go undetected, which ultimately yields more ML model bias and hinders the bias debugging process.

Proposal: This project proposes to use rule-based classifiers, such as decision trees, to represent training data and then evaluate data bias. The importance of protected attributes (e.g., the depth of a protected attribute in nested if-else decision rules) indicates the severity of data bias. To mitigate the data bias, we apply rule editing to reduce the importance of protected attributes while retaining the decision structure of non-protected attributes. The edited decision rules are then used to generate new data that augments the original training data to mitigate data bias.
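To make the proposal concrete, the sketch below (illustrative synthetic data and names, not the project's implementation) fits a scikit-learn decision tree to deliberately biased data and reports the shallowest depth at which the protected attribute is used for a split; the shallower the split, the more decisive the attribute, and hence the more severe the data bias.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000
protected = rng.integers(0, 2, n)                 # a binary protected attribute
other = rng.normal(size=n)                        # a non-protected feature
# Deliberately biased labels: the protected attribute largely decides the outcome.
y = ((protected == 1) & (other + rng.normal(scale=0.5, size=n) > -1)).astype(int)
X = np.column_stack([protected, other])
PROTECTED_COL = 0                                 # illustrative column index

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

def min_split_depth(clf, feature_index):
    """Shallowest depth at which `feature_index` is used for a split (None if unused)."""
    t = clf.tree_
    best = None
    stack = [(0, 0)]                              # (node_id, depth)
    while stack:
        node, depth = stack.pop()
        if t.children_left[node] == -1:           # leaf node
            continue
        if t.feature[node] == feature_index:
            best = depth if best is None else min(best, depth)
        stack.append((t.children_left[node], depth + 1))
        stack.append((t.children_right[node], depth + 1))
    return best

depth = min_split_depth(clf, PROTECTED_COL)
print("protected attribute first splits at depth:", depth)
```

On this strongly biased sample the protected attribute is decisive enough to be chosen at the root, i.e., depth 0.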

WP1: Data bias evaluation via decision tree representation. This package aims to propose a new data bias metric based on the feature importance of protected attributes in a rule-based classifier (starting with decision trees). We will investigate the relationship between the new metric and existing data bias metrics and compare their effectiveness in terms of their association with model bias.
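One way WP1's metric could be prototyped is sketched below; `tree_bias_score`, the baseline `rate_gap` metric, and the synthetic datasets are illustrative assumptions, not the project's final definitions. The impurity-based importance of the protected attribute in a tree fitted to the data serves as the proposed score, compared side by side with a classical balance metric (the positive-label rate gap between groups).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tree_bias_score(X, y, protected_col, max_depth=5):
    """Proposed metric: importance of the protected attribute in a tree fit on the data."""
    clf = DecisionTreeClassifier(max_depth=max_depth, random_state=0).fit(X, y)
    return clf.feature_importances_[protected_col]

def rate_gap(X, y, protected_col):
    """Baseline balance metric: |P(y=1 | a=1) - P(y=1 | a=0)|."""
    a = X[:, protected_col]
    return abs(y[a == 1].mean() - y[a == 0].mean())

rng = np.random.default_rng(1)
n = 2000
a = rng.integers(0, 2, n)                          # protected attribute (column 0)
x1 = rng.normal(size=n)                            # non-protected feature
y_biased = ((a == 1) | (x1 > 1.5)).astype(int)     # outcome driven by the protected attribute
y_fair = (x1 > 0).astype(int)                      # outcome independent of it
X = np.column_stack([a, x1])

print("biased data:", tree_bias_score(X, y_biased, 0), rate_gap(X, y_biased, 0))
print("fair data  :", tree_bias_score(X, y_fair, 0), rate_gap(X, y_fair, 0))
```

On the fair dataset the tree never needs to split on the protected attribute, so the proposed score is exactly zero, while both metrics are clearly elevated on the biased dataset.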

WP2: Empirical investigation of different rule-based classifiers and feature importance calculations. This package aims to conduct empirical studies to understand which rule-based classifier and which feature importance measurement measure data bias most effectively.
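A small example of the kind of comparison WP2 would run, under illustrative assumptions: two standard importance measures, impurity-based importance and permutation importance (both available in scikit-learn), computed for the same fitted tree. The empirical study would extend such comparisons across classifiers and measures.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
n = 1500
a = rng.integers(0, 2, n)                  # protected attribute (column 0)
x1 = rng.normal(size=n)                    # non-protected feature
y = ((a == 1) & (x1 > -0.5)).astype(int)   # label depends on both features
X = np.column_stack([a, x1])

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

impurity_imp = clf.feature_importances_    # Gini-decrease based importance
perm = permutation_importance(clf, X, y, n_repeats=10, random_state=0)

print("impurity importance   :", impurity_imp)
print("permutation importance:", perm.importances_mean)
```

The two measures need not agree numerically: impurity-based importance reflects where the tree chose to split during training, while permutation importance reflects how much predictions degrade when a feature is shuffled, which is exactly the kind of discrepancy WP2 would quantify.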

WP3: Decision rule editing for data augmentation and bias mitigation. This package aims to edit the decision rules to reduce the importance of protected attributes in decision-making. Candidate editing and optimization techniques include tree pruning, guided either by heuristics or by search algorithms. The new decision rules will be adopted to generate new training data to augment or replace the original training data, reduce data bias, and ultimately reduce ML model bias.
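One simple form of such rule editing can be sketched with a known (unofficial and version-dependent) scikit-learn trick: a fitted tree's internal node arrays are writable views, so every node that splits on the protected attribute can be collapsed into a leaf. The edited rules then never consult the attribute, and their leaves could seed the synthetic data generation described above. All data and names here are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

TREE_LEAF = -1  # sentinel scikit-learn uses for "no child", i.e. a leaf

rng = np.random.default_rng(3)
n = 1500
a = rng.integers(0, 2, n)                  # protected attribute (column 0)
x1 = rng.normal(size=n)                    # non-protected feature
y = ((a == 1) | (x1 > 1.0)).astype(int)    # biased labels
X = np.column_stack([a, x1])

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
imp_before = clf.feature_importances_[0]
print("protected-attribute importance before editing:", imp_before)

def collapse_protected_splits(clf, protected_col):
    """Rule editing: turn every node that splits on `protected_col` into a leaf.

    Relies on scikit-learn's internal node arrays being writable in place;
    this is a known hack, not a public API.
    """
    t = clf.tree_
    for node in range(t.node_count):
        if t.children_left[node] != TREE_LEAF and t.feature[node] == protected_col:
            t.children_left[node] = TREE_LEAF
            t.children_right[node] = TREE_LEAF

collapse_protected_splits(clf, 0)

# The edited rules never branch on the protected attribute: flipping it
# leaves every prediction unchanged.
X_flipped = X.copy()
X_flipped[:, 0] = 1 - X_flipped[:, 0]
invariant = np.array_equal(clf.predict(X), clf.predict(X_flipped))
print("predictions invariant to protected attribute:", invariant)
```

Collapsing to a leaf is the crudest possible edit; the heuristics and search algorithms mentioned above would instead try to preserve as much of the non-protected decision structure as possible while driving the protected attribute's importance down.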

References:

Hooker, Sara. “Moving beyond ‘algorithmic bias is a data problem’.” Patterns 2, no. 4 (2021): 100241.

Zhang, Wenbin, and Eirini Ntoutsi. “FAHT: An adaptive fairness-aware decision tree classifier.” In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pp. 1480-1486. 2019.

Kim, Byungju, Hyunwoo Kim, Kyungsu Kim, Sungjin Kim, and Junmo Kim. “Learning not to learn: Training deep neural networks with biased data.” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9012-9020. 2019.

Hort, Max, Zhenpeng Chen, Jie M. Zhang, Federica Sarro, and Mark Harman. “Bias Mitigation for Machine Learning Classifiers: A Comprehensive Survey.” arXiv preprint arXiv:2207.07068 (2022).
