BixeLab has performed an independent evaluation of demographic bias in ID R&D’s biometric facial liveness detection product, finding that IDLive Face delivers fair performance regardless of users’ gender, age, and race.
As companies race to leverage edge AI to improve the speed and accuracy of facial recognition models, it is also important that they address fundamental bias in their algorithms. ID R&D notes that this is the first independent test of its kind in the industry, according to the announcement.
Liveness detection is required to prevent spoofing or presentation attacks against biometric systems used for identity verification and authentication. But, like matching algorithms, liveness detection can deliver higher rates of false positive or false negative results for individuals in particular demographic groups.
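In concrete terms, checking a liveness detector for demographic bias means comparing its error rates group by group: how often spoofs are wrongly accepted as live, and how often genuine users are wrongly rejected as spoofs, within each demographic slice. A minimal, hypothetical sketch of that comparison in Python (illustrative only; the group names and data are invented and this is not ID R&D’s or BixeLab’s actual methodology):

```python
from collections import defaultdict

def per_group_error_rates(results):
    """Compute per-group error rates for a liveness detector.

    `results` is a list of (group, is_live, predicted_live) tuples.
    Returns {group: (false_accept_rate, false_reject_rate)}:
      - false accept: a spoof sample wrongly classified as live
      - false reject: a genuine (live) sample wrongly classified as a spoof
    """
    counts = defaultdict(lambda: {"spoof": 0, "fa": 0, "live": 0, "fr": 0})
    for group, is_live, predicted_live in results:
        c = counts[group]
        if is_live:
            c["live"] += 1
            if not predicted_live:
                c["fr"] += 1  # live user rejected
        else:
            c["spoof"] += 1
            if predicted_live:
                c["fa"] += 1  # spoof accepted
    return {
        g: (c["fa"] / c["spoof"] if c["spoof"] else 0.0,
            c["fr"] / c["live"] if c["live"] else 0.0)
        for g, c in counts.items()
    }

# Toy data: (demographic group, ground-truth live?, predicted live?)
samples = [
    ("group_a", True, True), ("group_a", True, False),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, True), ("group_b", True, True),
    ("group_b", False, True), ("group_b", False, False),
]
rates = per_group_error_rates(samples)
```

On this toy data the detector rejects half of group_a’s genuine users while accepting half of group_b’s spoofs, the kind of demographic differential an evaluation like BixeLab’s is designed to surface.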
The bias evaluation performed by BixeLab serves as recognition of the potential negative impacts of demographic performance differentials of not just matching algorithms, but also liveness detection systems, the companies say. It also establishes test methodologies that can be used for future test standards and evaluations.
The test methodology is derived, where possible, from the ISO/IEC 19795-2 and ISO/IEC 30107-3 standards for accuracy and presentation attack detection effectiveness, respectively.
ID R&D says the evaluation demonstrates the fairness of IDLive Face’s passive liveness detection for target demographics and will inform the company’s ongoing development of machine learning-based products as Responsible AI.
“Spoof detection can introduce bias just as biometric matching algorithms can, but was being neglected as a bias contributor until now,” comments Alexey Khitrov, CEO and co-founder of ID R&D. “We are delighted to see our products confirmed by experts to be fair, and we hope this groundbreaking work helps others in our industry achieve the same.”
Khitrov adds, “The ability of IDLive Face to operate with fairness across demographic groups is essential to its global success, and so we designed its development process specifically to prevent bias. It’s now being used in over 70 countries.”
BixeLab produced a comprehensive report that gives details about the testing methodology, and a confirmation letter for public attestation of the results. The letter refers to the evaluation as “a Level B Evaluation to assess liveness accuracy and bias in target demographics including gender, age group, and race.”
ID R&D has also produced a 13-page white paper that sets out the problem, as well as ID R&D’s primary tenets and principles for Responsible AI. The paper delves into issues around the constitution of datasets, the metrics used for calculating bias, and examples of successful approaches to bias reduction.
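The white paper is not public here, but one common family of bias metrics it alludes to summarizes per-group error rates as a single ratio: the worst group’s error rate divided by the best group’s, where a value near 1.0 indicates similar performance across groups. A hedged sketch, with entirely hypothetical numbers:

```python
def max_min_ratio(group_error_rates):
    """Summarize demographic bias as the ratio of the highest per-group
    error rate to the lowest. 1.0 means identical performance; larger
    values mean a wider gap between the best- and worst-served groups."""
    rates = list(group_error_rates.values())
    lowest = min(rates)
    if lowest == 0:
        # Avoid division by zero: infinite ratio if any group has errors.
        return float("inf") if max(rates) > 0 else 1.0
    return max(rates) / lowest

# Hypothetical per-age-group false rejection rates
frr = {"18-30": 0.010, "31-50": 0.012, "51+": 0.018}
ratio = max_min_ratio(frr)
```

Other metrics in the literature (e.g. maximum absolute differentials) capture the same idea; this ratio form is just one illustrative choice, not necessarily the one ID R&D uses.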
Biometric Update will host and moderate a webinar on bias in liveness detection and how to mitigate it, featuring insights from ID R&D, on Thursday, June 30 at 9:30 am EDT.
Originally published on Biometric Update