Mackenzie Jorgensen

While algorithmic decision-making systems are powerful, they carry a risk of discrimination and unfairness, which AI practitioners can struggle to identify and rectify. Although bias mitigation methods have been developed, they do not necessarily prevent AI systems from making potentially harmful predictions, and in some cases they do not comply with the UK Equality Act. The main aim of this project is to reduce the negative impact of these AI systems, while also uplifting underprivileged groups and helping to ensure that these systems abide by the UK Equality Act and are anti-discriminatory.

My research project, AID: Attesting AI Discrimination, and the opportunity to work alongside my supervisors were my primary draws to the STAI CDT. The project aligns with my interest in AI and my passion for AI and data ethics. I was also eager to join the CDT because of its cohort structure: I would be part of a community of researchers at the cutting edge of Safe and Trusted AI. The training programme was another draw, since it helps me become not only a better researcher but also a more informed and well-rounded person.

— 

Undergraduate Qualification: BSc in Computer Science, Villanova University, Villanova, PA, USA 

Website: Mackenzie Jorgensen