Biometrics are top pick to counter rising deepfake risk

The risk of deepfakes is rising: almost half (47%) of organisations have encountered a deepfake, and 70% believe such attacks, which are created using generative AI tools, will have a high impact on their companies, according to a report from iProov.

Yet perceptions of AI remain hopeful: while 68% of organisations believe it is impactful at creating cybersecurity threats, even more (84%) find it instrumental in protecting against them.

This is based on a new global survey of technology decision-makers, which also found that three-quarters (75%) of the solutions being implemented to address the deepfake threat are biometric.


The survey, commissioned by iProov and conducted earlier this year, covered 500 technology decision-makers from the United Kingdom, United States, Brazil, Australia, New Zealand and Singapore.

While organisations recognise the increased efficiencies that AI can bring, these benefits are also enjoyed by threat-technology developers and bad actors.

Almost three-quarters (73%) of organisations are implementing solutions to address the deepfake threat, but confidence is low: the study identified an overriding concern that organisations are not doing enough to combat them.

Almost two-thirds (62%) worry their organisation isn't taking the threat of deepfakes seriously enough.

The survey shows that organisations recognise deepfakes as a real and present threat. They can be used against people in numerous harmful ways, including defamation and reputational damage, but perhaps the most quantifiable risk is financial fraud.

“We’ve been observing deepfakes for years but what’s changed in the past six to 12 months is the quality and ease with which they can be created and cause large-scale destruction to organisations and individuals alike,” said Andrew Bud, founder and CEO, iProov. 

“Perhaps the most overlooked use of deepfakes is the creation of synthetic identities which because they’re not real and have no owner to report their theft go largely undetected while wreaking havoc and defrauding organisations and governments of millions of dollars,” said Bud.

He added that, despite what some might believe, it is now impossible to detect quality deepfakes with the naked eye.

“With the rapid pace at which the threat landscape is innovating, organisations can’t afford to ignore the resulting attack methodologies and how facial biometrics have distinguished themselves as the most resilient solution for remote identity verification,” said Bud.

As the threat landscape continues to shift, the tactics employed to breach organisations often mirror those used in identity fraud.

Deepfakes are now tied for third place among survey respondents' most prevalent concerns, which rank as follows: password breaches (64%), ransomware (63%), and phishing/social engineering attacks and deepfakes (both 61%).

Biometrics have emerged as organisations' solution of choice for addressing the threat of deepfakes. Organisations said they are most likely to use facial and fingerprint biometrics; however, the type of biometric can vary by task.

For example, the study found organisations consider facial biometrics the most appropriate additional mode of authentication to protect against deepfakes for account access/log-in, changes to personal account details, and typical transactions.

Further, organisations view biometrics as a specialist area of expertise, with nearly all (94%) agreeing that a biometric security partner should be more than just a software product.

Organisations surveyed said they are looking for a solution provider that evolves and keeps pace with the threat landscape. Continuous monitoring (80%), multi-modal biometrics (79%), and liveness detection (77%) all feature highly among their requirements for protecting biometric solutions against deepfakes.