August Man SG|Issue 158
IT WASN’T THAT LONG AGO when experts saw facial recognition technology as the “it” thing in the biometric security landscape. The premise was simple and sensible enough: with the right algorithm, the unique composition of any person’s face can be reduced to a digital key that’s used as a security token. It’s an incredible modern convenience, but how do we make sure that these measures work as intended while remaining in the possession of their rightful owners?
The growing trend of utilising automated facial recognition (AFR) in a variety of sectors has raised numerous concerns. What started as a novel feature for unlocking smartphones soon saw applications in more sensitive areas such as online banking, surveillance and law enforcement. Naturally, this has led to questions about the accuracy and legitimacy of AFR.
Facial Nuances Are Tricky
For biometric security to be viable, it must be accurate. Unfortunately, facial recognition software is not immune to false positives. In fact, it can perform rather poorly: in February, the British Metropolitan Police deployed facial recognition tech on 8,600 pedestrians in London (without consent, by the way, in a clear infringement of privacy). The system, which interfaced with the Met Police database, generated eight alerts. Here’s the kicker: only one of them was an accurate identification that led to an arrest. In other words, seven of the eight alerts were wrong — an error rate of a whopping 87.5 per cent.