258 - Reducing Bias in Forensic Facial Recognition Using Psychology and Machine Learning
Improving the performance of AI-based systems used to identify faces in surveillance camera images is crucial to reducing racially biased false positives and to enhancing the overall effectiveness of the justice system. The results of the project should benefit individuals, their families, and policing, and reduce societal consequences such as false arrests.
While facial identification in forensics has long taken a back seat to fingerprints and DNA, the surge in captured videos and photos of criminal incidents has brought face comparison to the forefront of investigative and judicial proceedings (Jacquet and Champod, 2020). Whether conducted manually or through automatic biometric systems, forensic face comparison has become a pervasive and indispensable tool for guiding investigations. Facial recognition technology (FRT) is now employed both for secure smartphone access and for identifying criminal suspects from surveillance images within the justice system. Concerns regarding the uncritical use of FRT algorithms, raised by citizens' rights groups, social justice advocates, and the research community, point to undesirable societal consequences such as false arrests and excessive government surveillance. These repercussions disproportionately affect non-white people, as algorithms have historically demonstrated lower accuracy when applied to non-white individuals (Perkowitz, 2021).
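To make the notion of racially biased false positives concrete, one common diagnostic is to disaggregate the false match rate (the rate at which different people are wrongly declared the same person) by demographic group at a single shared decision threshold. The minimal sketch below assumes hypothetical verification scores, ground-truth labels, and group labels; the function name `false_match_rate` and all data values are illustrative, not drawn from this project.

```python
import numpy as np

def false_match_rate(scores, same_identity, threshold):
    """Fraction of impostor pairs (different identities) whose
    similarity score meets or exceeds the decision threshold."""
    impostor = ~same_identity
    return np.mean(scores[impostor] >= threshold)

# Hypothetical verification results: a similarity score per image pair,
# whether the pair truly shows the same person, and a demographic group.
scores = np.array([0.91, 0.40, 0.75, 0.55, 0.82, 0.30])
same_identity = np.array([True, False, False, False, True, False])
group = np.array(["A", "A", "A", "B", "B", "B"])

threshold = 0.7  # one fixed operating point shared across groups
for g in np.unique(group):
    mask = group == g
    fmr = false_match_rate(scores[mask], same_identity[mask], threshold)
    print(f"group {g}: false match rate = {fmr:.2f}")
```

At a given operating point, a large gap in false match rate between groups is one quantitative signature of the bias described above.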
Convolutional Neural Network (CNN) models of face identification map widely variable images of a face onto a representation that supports identification accuracy comparable to that of humans. Well-established algorithms have exceeded human performance on frontal images with moderate changes in illumination and appearance (Kumar et al., 2009; Phillips and O'Toole, 2014). However, matching identity across in-the-wild images (which are not always front-facing) remains less accurate.
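As a rough illustration of how such CNN-based systems decide identity, the following PyTorch sketch maps each face image to a unit-length embedding and thresholds the cosine similarity between two embeddings. The architecture, the name `FaceEmbeddingNet`, and the threshold value are assumptions chosen for brevity, not any specific published model.

```python
import torch
import torch.nn as nn

class FaceEmbeddingNet(nn.Module):
    """Toy CNN that maps a face image to a fixed-length embedding."""
    def __init__(self, embedding_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool away spatial variability
        )
        self.fc = nn.Linear(64, embedding_dim)

    def forward(self, x):
        h = self.features(x).flatten(1)
        z = self.fc(h)
        return nn.functional.normalize(z, dim=1)  # unit-length embedding

def same_identity(net, img_a, img_b, threshold=0.6):
    """Declare a match when embedding cosine similarity clears the threshold."""
    with torch.no_grad():
        za, zb = net(img_a), net(img_b)
    similarity = (za * zb).sum(dim=1)  # cosine similarity of unit vectors
    return similarity >= threshold

net = FaceEmbeddingNet().eval()
img_a = torch.rand(1, 3, 112, 112)  # stand-ins for aligned face crops
img_b = torch.rand(1, 3, 112, 112)
print(same_identity(net, img_a, img_b))
```

Because the whole decision reduces to a distance in embedding space, the accuracy gaps noted above, between frontal and in-the-wild images and across demographic groups, amount to some kinds of faces being embedded less distinctly than others.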