High-Profile Biometric Spoofing Examples & Case Studies
If you’re reading about biometric spoofing for the first time, you may wonder why a fraudster would go out of their way to create a prosthetic fingerprint or bother with printed images of a person’s face.
The uncomfortable answer is that these scams work. Biometric spoofing happens constantly in the wild, and it can often be carried out using alarmingly simple tools.
In this article, we will look at several high-profile examples showing that even seemingly secure biometric systems can be fooled.
Biometric Spoofing
Your face is harder to copy than your password: that’s the basic idea behind biometric authentication. Biometrics are powerful, but they can still be spoofed. Today, we're discussing how biometric spoofing works, why it’s a problem, and ways to guard against the danger.
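To see why spoofing is even possible, it helps to know that most biometric verification boils down to a similarity score compared against a fixed threshold. The sketch below is illustrative only (the function names, the cosine-similarity metric, and the 0.8 threshold are assumptions, not any vendor's actual implementation): if a spoofed sample scores above the threshold, it is accepted, because nothing in basic matching checks whether the sample came from a live person.

```python
# Hypothetical sketch of threshold-based biometric matching.
# Real systems use far richer feature vectors, but the accept/reject
# logic is conceptually the same.

def cosine_similarity(a, b):
    """Similarity between two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def verify(enrolled_template, presented_sample, threshold=0.8):
    """Accept whenever the similarity score meets the threshold."""
    return cosine_similarity(enrolled_template, presented_sample) >= threshold

# A spoof only needs to be "close enough" to the enrolled template --
# e.g. a gelatin copy of a fingerprint or a printed photo of a face.
genuine = [0.9, 0.1, 0.4]
spoof = [0.88, 0.12, 0.41]  # a close physical copy
print(verify(genuine, spoof))  # True: the spoof is accepted
```

This is why liveness detection (checking for pulse, skin texture, eye movement, and so on) matters: without it, the system only asks "does this look like the template?", never "is this a real person?".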
In 2002, Japanese researcher Tsutomu Matsumoto was curious to see how easily he could spoof a biometric identity. So, he found perhaps the most innocuous medium possible with which to commit fraud: a gummi bear.
That’s right. Matsumoto lifted a latent fingerprint from a glass surface and recreated it using nothing more than gelatin from a gummi bear and a plastic mold. And, by most accounts, his experiment was a success: the fake finger was “real” enough to fool fingerprint sensors in 80% of cases. When I try to use my fingerprint to unlock my phone, it probably doesn’t even work 80% of the time!
Matsumoto’s experiment wasn’t just for fun. It was meant to demonstrate the ease with which biometric security can be fooled using some very rudimentary tools.
In one instance, the fingerprints, facial recognition data, face photos, and unencrypted usernames and passwords of over 1 million people were discovered in a publicly accessible database, alongside exposed admin panels and dashboards.
Israeli security researchers who accessed the database for forensic purposes found that it contained 27.8 million records totaling 23 gigabytes of biometric data. The researchers revealed that they were also able to “change data and add new users,” meaning they could have gained access to secured areas by replacing an existing user’s fingerprint data with their own.
The researchers published their findings in a report and later shared supporting data with The Guardian.
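The unencrypted passwords in this breach illustrate a basic failure that is easy to avoid for ordinary credentials. The sketch below (stdlib only; the function names are mine, not from any system in the case) shows the standard alternative: store a salted, iterated hash so that a leaked record does not reveal the password itself.

```python
# Illustrative sketch: salted, iterated password hashing with PBKDF2
# from Python's standard library, instead of plaintext storage.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) for storage; the password itself is never stored."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(check_password("hunter2", salt, digest))      # True
print(check_password("wrong-guess", salt, digest))  # False
```

Note that fingerprint templates cannot simply be hashed this way, because biometric matching is fuzzy rather than exact. That makes exposed biometric data even more serious than exposed passwords: you can reset a password, but not a fingerprint.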
In March 2019, scammers used AI voice cloning technology to mimic the voice of a British energy firm’s German CEO. Using this cloned voice, they convinced a managing director at the company to wire more than $240,000 to a Hungarian bank account.
Scammers used urgency and other social engineering tactics to pressure the manager into initiating the wire, claiming that failure to do so would result in late-payment penalties.
According to a spokesperson for Euler Hermes, the energy firm’s insurer, the voice cloning “software was able to imitate the voice, and not only the voice: the tonality, the punctuation, [and] the German accent” of the company’s CEO.
Because they were insured, the firm expected to be fully reimbursed for the amount lost in the scam. But this example should raise alarm bells for any firm whose insurance does not specifically cover this type of incident.
In 2024, cybersecurity firm Group-IB profiled an attack, conducted by a Chinese hacking group, in which a victim was lured into installing a malicious app and tricked into performing face scans.
This biometric data was then downloaded by the fraudsters and used to create AI-generated deepfakes that could be used to bypass biometric authentication systems. The fraudsters used these deepfakes to access the victim’s bank account and ultimately drained it of over $40,000.
To appear legitimate, the hacking group also “developed a tool that facilitates direct communication between victims and cybercriminals posing as legitimate bank call centers.”