Mackenzie Jorgensen (2020 STAI CDT PhD cohort) presented the paper ‘Not So Fair: The Impact of Presumably Fair Machine Learning Models’ at the 6th AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES 23) in Montreal, Canada.
Mackenzie explains the focus of the paper: “Classification systems that make consequential decisions about people, like human decision-makers, are imperfect. They can learn from data that reflects societal inequities and injustices and, as a result, discriminate against people from underprivileged or underrepresented groups. Bias mitigation methods aim to reduce this discrimination. In our paper, we investigated whether bias mitigation methods applied when training these systems actually benefit individuals from a disadvantaged group in a loan repayment use case. Some of the bias-mitigated models actually resulted in more harm than benefit for that group. Our results emphasize that implementing fairness in a system is not a box-checking exercise. Rather, the context, impact, and implications of different model outcomes must be taken into account when deciding which bias mitigation method to use.”
AIES 23 was Mackenzie’s third in-person conference, and she loved her time in Montreal. She said, “The conference was smaller than previous ones I had attended, which meant that all attendees could listen to everything and discuss the papers, panels, and talks more easily. I loved the interdisciplinary nature of the conference and the focus on the impacts of technology on our society.”
The paper is co-authored with Hannah Richert, Elizabeth Black, Natalia Criado, and Jose Such. The full paper can be read here.