Facial-recognition systems, long touted as a quick and dependable way to identify everyone from employees to hotel guests, are in the crosshairs of fraudsters. For years, researchers have warned about the technology’s vulnerabilities.
Criminals are taking advantage of facial recognition in ways that range from spoofing people’s faces to access the digital wallets on their phones, to getting through high-security entrances at hotels, business centers or hospitals, according to Alex Polyakov, the CEO of Adversa.ai.
Mr. Polyakov regularly tests the security of facial-recognition systems for his clients and says there are two ways to protect such systems from being fooled. One is to redesign the underlying algorithms so the AI models can withstand novel attacks. The other is to train the models on as many examples as possible of the altered faces that could spoof them, known as adversarial examples.
Unfortunately, protecting a facial-recognition model from spoofing can require 10 times as many training images as building the model in the first place, a costly and time-consuming process. “For each human person you need to add the person with adversarial glasses, with an adversarial hat, so that this system can know all combinations,” Mr. Polyakov says.
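To make the idea of an adversarial example concrete, here is a toy sketch, not any production face-recognition system: the fast gradient sign method (FGSM) nudges an input in the direction that most increases a classifier’s loss, turning a correctly classified sample into one the model misjudges. The model here is a hypothetical logistic-regression classifier on random data; the names (`fgsm_perturb`, the toy weights) are illustrative assumptions, not from the article.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps=0.5):
    """Shift x by eps in the direction that increases the model's loss (FGSM)."""
    p = sigmoid(w @ x + b)          # model's predicted probability
    grad = (p - y_true) * w         # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad)  # one FGSM step

rng = np.random.default_rng(0)
w = rng.normal(size=8)              # toy classifier weights (hypothetical "face embedding")
b = 0.0
x = rng.normal(size=8)              # a clean input sample
y = 1.0 if sigmoid(w @ x + b) > 0.5 else 0.0  # label the model currently agrees with

x_adv = fgsm_perturb(x, w, b, y)
print("clean prediction:", sigmoid(w @ x + b))
print("adversarial prediction:", sigmoid(w @ x_adv + b))
```

Defending against such inputs by data augmentation means generating perturbed copies of every enrolled face (glasses, hats, and so on) and retraining, which is where the tenfold data cost Mr. Polyakov describes comes from.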