Mackenzie Jorgensen, of the 2020 cohort of the UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence, presented her paper, ‘Supposedly Fair Classification Systems and Their Impacts’, at the 2nd Workshop on Adverse Impacts and Collateral Effects of Artificial Intelligence Technologies (AIofAI 2022), held at the International Joint Conference on Artificial Intelligence (IJCAI 2022).
The paper focuses on the impact of discrimination mitigation methods on Machine Learning (ML) model predictions. In particular, it examines what impact supposedly “fair” models actually have on disadvantaged groups and individuals.
As Mackenzie explains, “The paper covered experiments I conducted to investigate whether using fairness interventions actually benefits the disadvantaged group in a financial domain scenario. We showed that simply because an outcome is fair does not mean that a practitioner can assume that it has a positive impact.”
Mackenzie and her co-authors emphasize that the interplay between impact, fairness metrics, fairness interventions, and ML models must be considered to avoid undesired impacts, particularly on disadvantaged groups.
Since starting her PhD in 2020, Mackenzie has enjoyed the return to in-person opportunities to meet and learn from her colleagues and peers in the field. The conference in Vienna was also an opportunity for her to present her PhD research for the first time. As she says, “It was my first time attending a conference in person since beginning my PhD. The AIofAI ’22 workshop organizers were so kind and the speakers were fantastic. Ronald C. Arkin’s talk about robots and deception was my favourite! I found that talking about my research really brought it to life and reinvigorated me to keep making progress. I loved learning from others too, and meeting other passionate Responsible AI researchers, from those at early career stages like myself to leaders in the field. To top it off, Vienna was a beautiful location for IJCAI in July 2022.”
The paper was co-authored with Elizabeth Black, Natalia Criado and Jose Such.