Biometrics are scary, coming fast and by no means foolproof

You can change a stolen password, but you can’t change your face

The Australian government recently terminated its multimillion-dollar biometric identity contract with NEC, citing delays, yet the race to wide-scale biometric verification continues.

Despite the hype surrounding biometric screening, technologies such as facial recognition are no silver bullet. Whether used for identity verification, law enforcement or as one factor in multi-factor authentication, flaws are routinely being discovered in this burgeoning technology.

Almost as soon as these solutions are released, cyber attackers and security researchers discover ways to thwart them. In 2014, German hacker Jan “Starbug” Krissler demonstrated how fingerprints could be faked with only a few high-resolution photos of someone’s hands.

Underscoring the potential severity of the technique, the target in his presentation was German Defence Minister Ursula von der Leyen.

Just a year earlier, Starbug made headlines when, within 24 hours of the iPhone 5S’s release, he was able to spoof Apple’s Touch ID sensor. He did so by lifting a fingerprint from a smudge on an iPhone screen.

Scammer tricks

And it’s not just fingerprints. Voice recognition – the fax machine of biometrics – was also thought to be unbeatable, until scammers started recording victims’ voices and using them to bypass controls.

It won’t be long until we see someone’s face being “reverse-engineered” with an algorithm that needs only a few 2-D images. Then, with the help of a 3-D printer, scammers could be walking around in hyper-realistic masks of their victims, taking thousands of dollars out of ATMs.

Not all of biometrics’ flaws, however, are being discovered by external sources. As the technology is increasingly adopted, its shortcomings become clear.

Despite improving levels of accuracy, false positives continue to plague implementations – and that is on top of the exceedingly difficult and expensive process of developing, configuring and deploying the technology for any organisation that isn’t Google or Facebook rolling out an update from somewhere in California.

Australian facial recognition problems

In the most recent example, just this month the Australian government cancelled a $52 million contract to build the nation’s “Biometric Identification Service”. According to reports, it was ripped up due to cost blow-outs, delays and a high number of false positives that cast doubt over its accuracy.

One of biometric technology’s greatest downfalls is the widely divergent “percentage accuracy” metric. While some, such as Facebook, claim their facial recognition algorithm is 98 per cent accurate, at the other end of the spectrum, research from privacy campaigners found the technology used by London’s Metropolitan Police was incorrect “98 per cent of the time”.

Forgetting for a second how absurd it is that a social media platform has a higher accuracy rate than one of the world’s finest law enforcement agencies, the risk of any false positives in a policing context should give us pause for thought.
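
Part of the gulf between those two figures is simple arithmetic: a system scanning huge crowds for a small watchlist will raise mostly false alarms even if the underlying algorithm is highly accurate, because genuine targets are so rare. The short Python sketch below illustrates the effect; the numbers are illustrative assumptions, not figures from Facebook or the Metropolitan Police.

# Illustrative base-rate arithmetic (assumed numbers, not real deployment figures).
# A matcher that is right 98% of the time can still produce mostly false alerts
# when the people it is looking for are rare in the crowd it scans.

faces_scanned = 100_000            # passers-by checked against a watchlist
watchlist_members_present = 50     # how many of them are actually wanted

true_positive_rate = 0.98          # chance a wanted person is correctly flagged
false_positive_rate = 0.02         # chance an innocent passer-by is wrongly flagged

true_alerts = watchlist_members_present * true_positive_rate
false_alerts = (faces_scanned - watchlist_members_present) * false_positive_rate

precision = true_alerts / (true_alerts + false_alerts)
print(f"True alerts:  {true_alerts:.0f}")
print(f"False alerts: {false_alerts:.0f}")
print(f"Share of alerts that are correct: {precision:.1%}")

With those assumed numbers, only around one alert in every forty is genuine – a reminder that a headline accuracy figure says very little about how often a policing system’s alerts are actually right.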

Perhaps the technology’s biggest flaw, however, is that biometric details are static. Unlike passwords, which you can change if they’re stolen, you can’t replace your fingerprints, iris pattern or facial features if a database containing that information is breached.

Despite this, facial recognition is being rolled out or trialled in airports, schools and even shopping centres.

Banks too are getting in on the act. In Macau, some ATMs also require users to stare into a camera so their identities can be verified by facial recognition software. Back on the mainland, the Chinese government uses facial recognition software to catch everyone from jaywalkers to career criminals.

Sensible implementation

Ultimately, although biometric screening is an exciting new technology, it must be implemented in a sober, calculated and strategic way to protect against its pitfalls.

When used to verify credentials, biometrics must be only one of at least two factors of authentication.
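
In practice, that means treating a biometric match as necessary but never sufficient on its own. The minimal Python sketch below shows the shape of it; both helper functions are hypothetical placeholders standing in for a real fingerprint-matching SDK and a one-time-code check, not any particular vendor’s API.

# Biometrics as one factor among at least two.
# Both helpers are hypothetical placeholders, not a real sensor SDK or OTP service.

def verify_fingerprint(user: str, scan: bytes) -> bool:
    """Placeholder for a call into a fingerprint-matching SDK."""
    return False  # a real implementation would compare the scan against the enrolled template

def verify_one_time_code(user: str, code: str) -> bool:
    """Placeholder for a time-based one-time-password (TOTP) or SMS-code check."""
    return False  # a real implementation would validate the code server-side

def authenticate(user: str, scan: bytes, code: str) -> bool:
    # A spoofed fingerprint alone gets an attacker nowhere: both factors must pass.
    return verify_fingerprint(user, scan) and verify_one_time_code(user, code)

The design choice matters because, as the Starbug demonstrations show, any single biometric check can be spoofed; pairing it with a second, revocable factor limits the damage.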

When an algorithm is helping police identify a suspect, a trained human analyst must then confirm the results. And, perhaps most importantly, all biometric data must be stored as if it were the most precious asset an organisation owned.

The issue of secure storage should make anyone considering biometric screening think long and hard about whether it is actually necessary.

Sure, it makes sense from a border security perspective, but would it be worth the risk to have my face or fingerprint linked to my frequent flyer account or loyalty card?

All of this is before we even raise the very real concerns over privacy, and whether the ongoing adoption of biometric screening is effectively eliminating any semblance of privacy we once enjoyed.

Biometrics are an exciting technology, but we must be careful not to drink too deeply from the Kool-Aid. We must consider the consequences of inaccurate algorithms.

After all, not being falsely accused of a crime is vastly more important to me than a website accurately identifying my aunt in a photograph.
