The dangers of deepfakes are real for enterprises

Deepfakes are no longer limited to the realm of internet memes.

Anyone who spends even a modicum of time online has likely heard of or seen fake videos of big names like Barack Obama, Kim Jong-un, or Vladimir Putin, with varying levels of believability.

Some look obviously fraudulent, while others – like those Tom Cruise deepfake videos on YouTube – seem authentic at first blush.

For the uninitiated, deepfakes are defined as digitally manipulated video or audio recordings that replace someone’s face or voice with that of someone else, in a way that appears real.

According to VMware’s Global Incident Response Threat Report for 2022, deepfake attacks increased by 13% year-on-year, with 66% of respondents saying they witnessed deepfakes during the past 12 months.

Rick McElroy, Principal Cybersecurity Strategist at VMware, said the term “deepfake” comes from the underlying technology called deep learning, which is a more advanced type of machine learning.

“Deep learning algorithms help to solve problems when given large sets of data, and are used to swap faces in video and digital content to make realistic-looking fake media,” McElroy said.

Albert Roux, VP of Product Management for Fraud, Onfido. Image courtesy of Onfido.

Albert Roux, VP of Product Management for Fraud at Onfido, noted that deepfakes rely mostly on machine learning algorithms, especially neural networks such as generative adversarial networks (GANs), to create synthetic media.

“These algorithms are particularly suitable for generating deepfakes because the ML algorithm learns similarly to a human brain. It first observes and learns from a training dataset, such as real photos, videos, or audio – then generates the media to duplicate the elements of a human face including movements,” Roux explained.
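
To make that adversarial dynamic concrete, here is a minimal, illustrative sketch of a GAN training loop in PyTorch. The tiny fully connected networks and random stand-in data are toy placeholders invented for this example, nothing like a production face-generation model:

```python
# Minimal, illustrative GAN training loop. The networks and the random
# stand-in "real" data are toy placeholders; real deepfake generators are
# far larger and train on images or audio.
import torch
import torch.nn as nn

LATENT, FEATURES = 16, 64   # noise size and size of each generated "sample"

# Generator: turns random noise into a synthetic sample.
G = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, FEATURES))
# Discriminator: scores how "real" a sample looks.
D = nn.Sequential(nn.Linear(FEATURES, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(32, FEATURES) + 3.0   # stand-in for real training media
    fake = G(torch.randn(32, LATENT))        # generated media

    # 1. The discriminator learns to tell real samples from generated ones.
    d_loss = (loss_fn(D(real), torch.ones(32, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. The generator learns to produce samples the discriminator accepts.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The two networks improve in tandem: as the discriminator gets better at spotting fakes, the generator is pushed to produce ever more convincing ones, which is precisely why well-trained GAN output is so hard to spot.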

Early deepfakes looked innocent enough. Nicolas Cage, for instance, was face-swapped into dozens of random movies. But concerns arise when realistic-looking or realistic-sounding fake media maliciously portrays a person saying or doing things that never happened.

John Shier, Senior Security Advisor at Sophos, remarked that while deepfakes have a playful aspect – like Snapchat filters – they also have a dark element.

“You can take somebody’s face and put it on another person’s body and make them dance around. A lot of people’s identities have been abused through generated pornography as well,” he said.

How businesses are vulnerable

For enterprises, deepfakes also pose a considerable threat.

John Shier, Senior Security Advisor, Sophos. Image courtesy of Sophos.

Imagine, said Shier, an employee receiving a phone call from the company’s CFO asking them to transfer money to an unknown account – except the voice on the other end of the line is a fake.

“There have been instances where that has allegedly occurred. In 2019, a company was defrauded of €220,000 because somebody got a phone call from an executive asking to transfer money. Upon further investigation, journalists found no real evidence that that was the case, but it doesn’t mean it can’t happen,” he said.

From a business perspective, deepfakes are ideal for scams, said Gary Davis, Chief Cybersecurity Advocate at BlackBerry.

“The goal is to obtain money or access to sensitive company data through deception, using manipulated audio recordings,” he continued.

Davis also noted that deepfakes can be applied to written media, with technologies used to imitate the writing style and wording of company executives.

“This can result in phishing emails with fraudulent links that prompt employees to disclose passwords or share sensitive information. In the context of corporate fraud, deepfakes represent a more advanced form of social engineering,” said Davis.

Indeed, cybercriminals today have evolved beyond using deepfakes for influence or disinformation.

Rick McElroy, Principal Cybersecurity Strategist, VMware. Image courtesy of VMware.

“Their new goal is to use deepfake technology to compromise organisations and gain access to their environment,” said VMware’s McElroy. “Attackers are now looking to infiltrate business emails, for instance, to perform an unauthorised transfer of funds that can also leverage deepfake technology.”

Onfido’s Roux believes that any organisation that uses identity verification to conduct its business and protect itself from cybercriminals can be susceptible to deepfake attacks.

“These could lead to a number of outcomes, including the creation of fake or fraudulent accounts for money laundering,” Roux stressed.

In his 2022 (ISC)² Security Congress presentation, Dr Thomas Scanlon, CISSP, Technical Manager – CERT Data Science, Carnegie Mellon University, revealed that deepfakes can also be used for bogus contracts (e.g., a fake supplier), corporate espionage, theft of intellectual property, and fake virtual employees.

For instance, a threat actor can create a deepfake of a programmer on LinkedIn, someone with plenty of social media photos. The threat actor then poses as this programmer in video interviews, gets hired as a virtual employee, and is given access to the employer’s online platforms.

Scanlon revealed that this has actually happened.

In addition to individual enterprise fraud, Sophos’ Shier sees larger-scale possibilities, such as a deepfake of Joe Biden announcing that the United States would enter the war in Ukraine as a combatant.

“That would have a massive impact on all sorts of things – because the markets would react right away. Even if it is demonstrably false later, the damage can already be done,” he remarked.

Ways to minimise fraud risk

So, what can organisations do to safeguard themselves from threat actors looking to deceive through deepfakes?

To BlackBerry’s Davis, there is no one-size-fits-all solution. However, he suggests several measures to reduce risks.

Gary Davis, Chief Cybersecurity Advocate, BlackBerry. Image courtesy of BlackBerry.

“To avoid becoming a victim of fraudulent practices, multi-level authentication procedures for transfers or data release can be implemented,” he suggested. “Such procedures should be outlined in internal company guidelines, communicated to every employee, and used to educate staff about the dangers of deepfakes.”
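
To illustrate, here is a hypothetical sketch of what such a multi-level gate for transfers could look like in code. The two-approval quorum, the threshold, and the channel names are all invented for the example, not drawn from BlackBerry’s guidance:

```python
# Sketch of a multi-level approval gate for outbound transfers. Thresholds,
# roles, and channel names are hypothetical placeholders.
from dataclasses import dataclass, field

REQUIRED_APPROVALS = 2  # assumption: two independent sign-offs for large sums

@dataclass
class TransferRequest:
    amount: float
    destination: str
    requested_via: str                   # e.g. "phone", "email"
    approvals: set = field(default_factory=set)

    def approve(self, approver: str, channel: str) -> None:
        # Each approval must arrive over a different, independently verified
        # channel from the original request (call-back, in-person, ticket).
        if channel != self.requested_via:
            self.approvals.add(approver)

    def may_execute(self) -> bool:
        # Small transfers pass with one approval; large ones need the quorum.
        needed = 1 if self.amount < 10_000 else REQUIRED_APPROVALS
        return len(self.approvals) >= needed

req = TransferRequest(amount=100_000, destination="ACME-NEW-ACCT",
                      requested_via="phone")
req.approve("finance.lead", channel="call-back")
req.approve("cfo", channel="signed-ticket")
print(req.may_execute())  # True only once the quorum is met
```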

Davis added that training and workshops on how to deal with such risks will also contribute to closing the gap for the organisation’s weakest link – humans.

VMware’s McElroy agrees.

“Most companies are in their early detection planning and mitigation stage, hence (they) require adequate training and employee awareness to create an additional line of defence. Training should include how to identify deepfake-based attempts and an overview of how the technology is leveraged maliciously. Early detection of false media can also help to minimise the impact on organisations,” he recommended.

Sophos’ Shier echoed the need for verification measures.

“Just make sure that you have robust processes around how you handle changes in financial transactions,” advised Shier. “If the CFO calls you and says, ‘I need you to transfer US$100,000 to this account for one of our biggest suppliers,’ you need to independently verify that with the supplier to make sure they changed their account number. You can also call back the individual that called you, to verify that it was the CFO you’re talking to.”

He added that digital watermarking can also be applied to certain types of media, paired with a system that checks whether the watermark is present.
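
As a toy sketch of that idea, assume a simple least-significant-bit watermark (real media watermarking schemes are far more robust and survive compression; this only shows the embed-then-verify principle):

```python
# Toy watermark check: a known bit pattern is embedded in the least-
# significant bits of an image, then verified later. The pattern and image
# are hypothetical stand-ins.
import numpy as np

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical mark

def embed(image: np.ndarray) -> np.ndarray:
    flat = image.flatten().copy()
    flat[: MARK.size] = (flat[: MARK.size] & 0xFE) | MARK  # overwrite LSBs
    return flat.reshape(image.shape)

def verify(image: np.ndarray) -> bool:
    return bool(np.array_equal(image.flatten()[: MARK.size] & 1, MARK))

original = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
marked = embed(original)
print(verify(marked))    # True: watermark present
print(verify(original))  # almost certainly False: no watermark
```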

While deepfakes are improving at an alarming rate, Onfido’s Roux thinks the technology which helps to detect them is getting better too.

“AI-powered biometric technology very accurately determines whether the video that is presented is real or a forgery. Techniques like motion tracking, lip sync, and texture analysis can verify whether the user is physically present,” he said.
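
One widely used liveness signal of the kind Roux mentions is blink detection via the eye aspect ratio (EAR). The sketch below assumes eye landmarks have already been extracted by a face-landmark model such as MediaPipe or dlib; it is a generic illustration, not Onfido’s method:

```python
# Blink detection via the eye aspect ratio (EAR): a present, live user blinks,
# so the EAR dips periodically; a replayed static photo never dips.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmarks around one eye, in the standard EAR order."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical eyelid distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def blink_detected(ear_series, threshold=0.21, min_frames=2) -> bool:
    # Threshold and frame count are typical illustrative values, not tuned.
    run = 0
    for ear in ear_series:
        run = run + 1 if ear < threshold else 0
        if run >= min_frames:
            return True
    return False
```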

BlackBerry’s Davis pointed out that on top of detection, AI and ML can be used to sift through hundreds or thousands of alerts, images, videos, and other assorted data.

Dev Dhiman, Managing Director, GBG, Asia Pacific. Image courtesy of GBG.

“These technologies help us examine the ‘normal’ behaviour of the organisation and its users, and then either detect anomalies that do not match the behaviour of any user within the organisation, and/or make predictions as to whether a particular networking behaviour has lower or higher probability of being associated with a particular user,” Davis explained.
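
In practice, this baseline-and-outlier approach can be prototyped with an off-the-shelf anomaly detector. The sketch below uses scikit-learn’s IsolationForest on made-up behavioural features; it illustrates the idea Davis describes rather than any vendor’s implementation:

```python
# Model "normal" user behaviour, then flag activity that does not fit it.
# The feature columns and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns might be: login hour, session length (min), MB transferred.
normal_behaviour = rng.normal(loc=[10, 45, 20], scale=[2, 10, 5],
                              size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_behaviour)

# A 3 a.m. login with a huge outbound transfer stands out from the baseline.
suspicious = np.array([[3, 200, 800]])
print(model.predict(suspicious))  # -1 = anomaly, 1 = normal
```

In a real deployment the features would come from authentication logs and network telemetry, and a flagged score would feed an alert queue rather than block a user outright.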

Dev Dhiman, Managing Director at GBG, Asia Pacific, believes that the root of the deepfake problem lies in identity verification.

“We do not have to accept this reality where deepfakes are elevating today’s fraud attempts. Businesses need to consider that digital identity has a major part to play in future economies. To better leverage digital IDs, organisations need to get better at identifying, onboarding, and protecting their users as they continue to fine-tune their digital strategies,” said Dhiman.

He added that organisations can layer traditional data sources with alternative identity data, such as mobile and social media data, as well as utility payments.

“By combining data sets, businesses can construct a richer, more contextualised understanding of individual users to perform more accurate risk assessments. Doing this also has the benefit of improving customer experiences,” Dhiman noted.

Roux recommends a multi-layered approach to detecting deepfakes, specifically when processing digital identity verification checks. This includes measures such as asking for an e-ID, checking for passive fraud signals (e.g., device location, network integrity), and corroborating ID information with known databases.
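
A hedged sketch of how such layers might be composed, with every check function a hypothetical placeholder rather than a real Onfido API:

```python
# Layered identity verification: each layer is a cheap check, and a failure
# at any layer escalates to manual review rather than silently approving.
from typing import Callable

def check_eid(applicant: dict) -> bool:
    return applicant.get("eid_valid", False)        # e-ID document check

def check_passive_signals(applicant: dict) -> bool:
    # e.g. device location consistent with the claimed address, network not
    # routed through a known anonymising proxy.
    return applicant.get("geo_ok", False) and applicant.get("network_ok", False)

def check_databases(applicant: dict) -> bool:
    return applicant.get("registry_match", False)   # corroborate known records

LAYERS: list[Callable[[dict], bool]] = [
    check_eid, check_passive_signals, check_databases,
]

def verify_identity(applicant: dict) -> str:
    for layer in LAYERS:
        if not layer(applicant):
            return "escalate_to_manual_review"      # never auto-approve
    return "approved"
```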

Telltale signs of deepfakes

Dr Thomas Scanlon, CISSP, Technical Manager – CERT Data Science, Carnegie Mellon University. Image courtesy of (ISC)².

AI-powered software may eventually get better at spotting deepfakes, but right now, according to Carnegie Mellon’s Dr Scanlon, there is no retail product capable of reliably doing so.

As a result, it’s still better to be familiar with how to recognise a deepfake.

“You’re not going to just get a deepfake detector and run it; there’s no such product for you,” said Dr Scanlon. “Microsoft and Facebook do have pretty good deepfake detectors, (but) they do not share them with the general public – it’s not something you can purchase. They let government and news media organisations use them, but there isn’t an off-the-shelf product or anything like that.”

Fortunately, there are a few simple cues to watch out for, which Dr Scanlon shared during his (ISC)² talk. These include the following:

  • Overly smooth faces (like they’re wearing extra makeup).
  • Unnatural flickering in the video’s lighting (see the brightness-check sketch after this list).
  • Unnatural movements and expressions.
  • Unnatural hair and skin colour.
  • Awkward head positions.
  • Double eyebrows, or eyebrows raised at the wrong time.
  • Too much glare or a lack of glare on eyeglasses.
  • Mismatched or missing earrings.
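
As a modest example of how one cue on this list could be partially automated, the sketch below tracks mean frame brightness through a clip and flags abrupt jumps that may indicate unnatural flickering. It assumes OpenCV is available, the file name is hypothetical, and the threshold is an untuned guess that would need calibrating on real footage:

```python
# Crude flicker heuristic: flag frames where average brightness jumps
# sharply relative to the previous frame.
import cv2

def flicker_frames(path: str, jump_threshold: float = 8.0) -> list[int]:
    cap = cv2.VideoCapture(path)
    brightness, flagged = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        brightness.append(float(gray.mean()))
    cap.release()
    for i in range(1, len(brightness)):
        if abs(brightness[i] - brightness[i - 1]) > jump_threshold:
            flagged.append(i)
    return flagged

# print(flicker_frames("interview_clip.mp4"))  # hypothetical file name
```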

Given all this information, enterprises must be practical in their approach to deepfakes. While the technology to detect them is getting better, more rigorous verification processes are necessary to avoid getting scammed.

It certainly doesn’t hurt to know how to identify a deepfake either, because falling for such trickery is serious business in today’s constantly evolving threat landscape.