Automated face recognition assists with low-prevalence face identity mismatches but can bias users

Br J Psychol. 2024 Nov 15. doi: 10.1111/bjop.12745. Online ahead of print.

Abstract

We present three experiments studying the effects of showing participants the decision of an automated face recognition (AFR) system while they judge whether two face images show the same person. We make three contributions designed to make our results applicable to real-world use: participants are given the true response of a highly accurate AFR system; the face set reflects the mixed ethnicity of the city of London, from which participants were drawn; and only 10% of the pairs are mismatches. Participants were equally accurate whether given the AFR system's similarity score or only its binary decision, but with binary information alone they shifted their bias towards "match" and were over-confident on difficult pairs. No participants achieved the 100% accuracy of the AFR system, and they had only weak insight into their own performance.

Keywords: attitudes towards AI; automated face recognition; decision making; deep neural networks; face matching; face recognition.
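The two information conditions described in the abstract can be sketched as follows. An AFR system produces a continuous similarity score per image pair, which a fixed threshold collapses into the binary match/mismatch decision shown to participants. All values here (the 0.5 threshold and the example scores) are illustrative assumptions, not parameters from the study.

```python
def binary_decision(similarity: float, threshold: float = 0.5) -> str:
    """Collapse a continuous similarity score into a match/mismatch label.

    The threshold is a hypothetical operating point; a real AFR system
    would calibrate it to its target false-match rate.
    """
    return "match" if similarity >= threshold else "mismatch"


# Hypothetical face pairs: participants in the score condition would see
# the raw number; those in the binary condition only the label.
clear_pair = 0.92       # easy same-person pair
difficult_pair = 0.48   # hard pair near the decision boundary

print(binary_decision(clear_pair))       # -> match
print(binary_decision(difficult_pair))   # -> mismatch
```

The abstract's key result maps onto this sketch: the binary label discards how close a difficult pair sits to the threshold, which is one plausible reason participants given only the label were over-confident on such pairs.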