Cyber deepfakes and double takes

Most people understand deepfake technology through the lens of apps such as Reface or Avatarify, which let users swap their faces with those of friends or celebrities in videos and GIFs. But how exactly does this technology work? Deepfakes are audio or video that has been wholly generated or altered by artificial intelligence (AI) or machine learning (ML) to convincingly misrepresent someone as doing or saying something they never actually did or said.
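
Under the hood, many face-swap deepfakes are built on a pair of autoencoders that share a single encoder: the encoder learns a face-agnostic representation of pose and expression, while each decoder learns to render one specific person. The sketch below illustrates the idea in PyTorch; every layer size and name here is an illustrative assumption, not the architecture of any particular app.

```python
# Minimal sketch of the shared-encoder/dual-decoder autoencoder idea behind
# many face-swap deepfakes. All layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # shared "pose/expression" code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a = Decoder()  # trained to reconstruct faces of person A
decoder_b = Decoder()  # trained to reconstruct faces of person B

# Training (not shown) reconstructs A-faces through decoder_a and B-faces
# through decoder_b, forcing the shared encoder to capture pose/expression.
# The swap: encode a frame of person A, decode it with B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)          # stand-in for a real video frame
swapped = decoder_b(encoder(frame_of_a))       # B's face with A's expression
```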

The more innocent use cases may be good for a laugh, but deepfake technology is also being weaponized by threat actors to create dangerous material for cybercrime. For example, you may have heard about the case in which a CEO's voice was imitated convincingly enough to authorize a fraudulent wire transfer of US$243,000. So, while the general public employs deepfake technology to mimic friends and famous faces, the implications of bad actors using it for everything from social engineering attacks to misinformation campaigns have the security community on edge.

Abuse of deepfake technology is deeply concerning 

Deepfakes' growing sophistication and use in criminal activity are drawing international concern. Recently, the Singapore Government introduced a S$700,000 global competition as part of its national AI program to develop tools to detect and verify deepfakes. The Federal Bureau of Investigation (FBI) in the United States also released a report in March this year, warning that malicious actors almost certainly will leverage 'synthetic content' for cyber and foreign influence operations in the next 12 to 18 months.

Over the last few months, I've spoken with several CISOs of prominent global companies about the rise of deepfake technology in security incidents. Here are some of their top concerns.

  1. Facilitating phishing

While the basic premise of social engineering attacks remains unchanged, security teams are now noticing deepfakes being used for additional subterfuge in business communication compromise (BCC) or as a component of phishing attempts. These attacks take advantage of remote work environments to trick employees with a well-timed fake voicemail or voice message crafted to sound like it comes from a familiar person.

In VMware’s Global Incident Response Threat Report, 32% of respondents observed attackers using business communication platforms such as Microsoft Teams or Slack to facilitate lateral movement as part of their attacks. In fact, phishing campaigns via email or business communication platforms are particularly ripe for this kind of manipulation, as they leverage the implicit trust of employees and users when they are in their familiar virtual work environment.

  2. Biometrics obfuscation

The proliferation of deepfake technology opens a Pandora's box of identity authentication issues. Multifactor authentication is a cornerstone of cyber vigilance, and biometrics have become a key factor due to their inherent uniqueness. As a result, multiple governments have adopted biometrics to verify national digital identities. India's national ID project, Aadhaar, uses iris scans and fingerprints as part of the enrollment process and is now the world's largest identity program, having generated 1.3 billion Aadhaar numbers as of 2021.

Verifying identities as users move across systems is pivotal to an organization's internal security. However, deepfakes that can fool biometric authentication factors greatly increase the risk of compromise. A report from Experian identified synthetic identity fraud as the fastest-growing type of financial crime. With cybercriminals using deepfake faces to dupe biometric verification, businesses reliant on facial recognition software will face significant new roadblocks in their identity and access management strategies.
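
One common countermeasure is to layer a randomized liveness challenge on top of the face match, so that a replayed or synthesized face cannot simply pass a static comparison. The sketch below outlines the idea; `match_face` and `observe_challenge` are hypothetical helpers standing in for a real face-matching model and a presentation-attack-detection (PAD) component.

```python
# Hedged sketch of challenge-response liveness checking layered on a face
# match. `match_face` and `observe_challenge` are hypothetical placeholders;
# production systems use dedicated presentation-attack-detection models.
import random

CHALLENGES = ["blink twice", "turn head left", "smile", "read digits aloud"]

def verify_identity(video_stream, enrolled_template,
                    match_face, observe_challenge,
                    match_threshold=0.90):
    """Accept only if the face matches AND a random live challenge is met."""
    score = match_face(video_stream, enrolled_template)  # similarity in [0, 1]
    if score < match_threshold:
        return False, "face mismatch"

    # A pre-recorded or pre-rendered deepfake cannot know the challenge in
    # advance, so a randomized prompt raises the bar against replay attacks.
    challenge = random.choice(CHALLENGES)
    if not observe_challenge(video_stream, challenge, timeout_s=10):
        return False, f"liveness challenge failed: {challenge}"

    return True, "verified"
```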

  3. Dark web how-tos

Just as ransomware evolved into ransomware-as-a-service (RaaS) models, we’re also seeing deepfakes do the same. Threat intelligence company Recorded Future noted that threat actors on the dark web now offer custom services and tutorials on bypassing security measures that incorporate visual and audio deepfake technologies.

This intelligence shows attackers going a step beyond the deepfake-fueled influence operations the FBI warned about in March: they are now leveraging synthetic audio and video to evade security controls.

Furthermore, threat actors are using the dark web, as well as other sources such as internet forums and messaging apps, to share tools and techniques involving deepfake technologies for the purpose of compromising organizations.

Cybercriminals are early adopters of deepfakes

Today's distributed workers rely on video conferencing tools to hold meetings and simply stay connected. This creates a wealth of audio and video data that can be fed to machine learning software to produce convincing deepfake material. In today's digital world, nothing is off-limits to modern threat actors. In fact, cybercriminals are often early adopters of advanced technologies, hoping to exploit defenders' unfamiliarity to penetrate security perimeters.

The prevalence of deepfake technology on the dark web and the potential for its use in future attacks should serve as a warning to all CISOs and security professionals that we’re entering a new era of distrust and distortion at the hands of attackers. Never trust, always verify – especially in the age of manipulated reality.
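
One concrete way to practice "always verify" is cryptographic provenance: publishers sign their media, and consumers verify the signature before trusting the content. Below is a minimal sketch using the Python `cryptography` package; it assumes the publisher's Ed25519 public key has been obtained out of band, and it leaves the metadata format to emerging provenance standards such as C2PA.

```python
# Minimal sketch of verifying a detached Ed25519 signature over a media file.
# Assumes the publisher's public key was distributed out of band; provenance
# standards such as C2PA formalize this idea with richer metadata.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def media_is_authentic(media_bytes: bytes, signature: bytes,
                       publisher_key_bytes: bytes) -> bool:
    """Return True only if the media was signed by the publisher's key."""
    public_key = Ed25519PublicKey.from_public_bytes(publisher_key_bytes)
    try:
        public_key.verify(signature, media_bytes)  # raises on any mismatch
        return True
    except InvalidSignature:
        return False
```

If even a single byte of the file is altered, say by a face swap, the signature check fails and the media should be treated as untrusted.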