Deepfakes: Unmasking the new face of cyber fraud

Image created by DALL·E 3.

Cybercriminals and enterprises are constantly outsmarting each other in a cat-and-mouse game. When hackers targeted vulnerable APIs, businesses countered with XDR and zero trust. When threat actors launched phishing campaigns, their targets adopted passwordless authentication.

But what if the attack appeared to come from someone you personally trust? While the age of AI has undoubtedly brought ChatGPT and automation, it has also given rise to a new adversary — deepfakes.

As with any piece of technology, deepfakes have both good and bad uses. A recent cause for concern was an incident in Hong Kong where a multinational lost around US$25.6 million to a deepfake scam. This underscores the problem: as the world becomes fascinated with AI, cyber thieves are also using it to do their dirty work.

Digital heist 

In late January, a finance employee at a multinational firm in Hong Kong received an email purportedly from the company's UK-based Chief Financial Officer. According to Hong Kong police, the employee was initially doubtful because the message called for a secret transaction to be carried out.

A video conference with the CFO and other company staff followed. Unaware that the individuals in the meeting were not the actual company executives, the employee proceeded to make 15 transfers totaling HK$200 million (US$25.6 million) to five local bank accounts. Only when the employee later checked with the head office was the scam uncovered.

According to police officials, the criminals used deepfake technology to deceive the victim into sending them money.

“We want to alert the public to these new deception tactics. In the past, we would assume these scams would only involve two people in one-on-one situations, but we can see from this case that fraudsters are able to use AI technology in online meetings, so people must be vigilant even in meetings with lots of participants,” Acting Senior Superintendent Baron Chan said.

Indeed, AI-powered cyberattacks are on the rise, with 95% of IT leaders worldwide stating that cyberattacks are more sophisticated now than they’ve ever been.

High stakes

While the Hong Kong police did not name the victim or the multinational company, huge repercussions are expected on top of the stolen US$25.6 million, one expert noted.

Heng Mok, Chief Information Security Officer, Asia-Pacific and Japan, Zscaler. Image courtesy of Zscaler.

“The financial loss is just a start. We also have to consider the reputational harm, legal ramifications, operational disruptions, and costs of recovery efforts,” Heng Mok, Chief Information Security Officer, Asia-Pacific and Japan, Zscaler, said.

Meanwhile, privacy and security are also at stake, another expert shared, given the complex nature of AI.

“For enterprises, the security implications include the potential compromise of sensitive information, deception of employees, and undermining the integrity of corporate communications. Addressing these privacy and security concerns is crucial for maintaining trust and safeguarding against the broader implications of deepfake technology,” David Chan, Managing Director, AdNovum Singapore, said.

How did deepfakes become so successful in deceiving people in the first place? One expert pinpointed the abundance of personal data and content on social media as cybercriminals’ fuel for mischief.

“The modification of deepfake models can lead to the creation or appearance of identities of people who never existed, which can then be used in various fraud schemes. Moreover, the widespread adoption of AI means that such attacks are becoming easier to execute, which has already led to a surge in deepfake attacks across the Asia-Pacific region,” David Ng, Country Manager, Singapore, Trend Micro, remarked.

Industry effects

Although cybersecurity is always evolving to counter persistent and emerging threats, the introduction of AI into the equation has made the battleground considerably more complex.

David Ng, Country Manager, Singapore, Trend Micro. Image courtesy of Trend Micro.

The sophisticated nature of deepfakes, in particular, poses a significant challenge to trust and authentication, threatening the credibility of visual and auditory information.

“As deepfake techniques advance, the risk of more convincing and targeted cyberattacks, such as social engineering or business email compromise, increases significantly. This technology has the potential to erode public trust in media and compromise the integrity of information, making it critical for cybersecurity measures to adapt,” Shahnawaz Backer, Senior Solutions Architect, F5, shared.

According to him, the scam in Hong Kong was just the tip of the iceberg.

“Deepfakes possess the capacity to influence public opinion significantly, particularly in the political landscape. In fact, there were concerns about the use of deepfakes to manipulate speeches or statements, potentially impacting elections or public opinion,” he added.

Trend Micro’s David Ng agreed with these observations, stressing that cybercriminals’ continued use of the technology is a major immediate concern.

James Cook, VP APAC, Digital Security Solutions, Entrust. Image courtesy of Entrust.

“Of all the AI-powered tools that have become progressively more sophisticated, we foresee voice cloning being significantly abused in near-future scams. This is because voice cloning tools are among the AI tools ripe for hyper-realistic audio and video misrepresentation in real time. Such threats are also likely to remain more targeted as it requires adversaries to collect numerous audio sources from specific individuals to ensure successful AI-driven voice impersonation,” he said.

What’s more alarming is that malicious actors can also trick generative AI systems into circumventing their own censorship rules.

“A resourceful imposter who already holds stolen credentials and uses a virtual private network connection would be able to maintain anonymity when executing scams in these systems. Ultimately, the accessibility of AI and deepfake technologies will clear the path for more convincing and pervasive scams to targeted victims in the future,” he continued.

These realities pose a major hurdle for the entire cybersecurity landscape today, with the Deloitte Center for Financial Services expecting synthetic identity fraud to generate at least US$23 billion in losses by 2030. This is where government regulation and legislation can make a huge difference, noted James Cook, VP APAC, Digital Security Solutions, Entrust.

“Considering the pace of technology and AI advancements, there is a need for legislation to keep up. Regulatory bodies should continue to establish clear guidelines and regulations for the responsible use of AI, addressing issues from data privacy to the spread of misinformation. Businesses should consider each new government initiative a call to action to improve not only their own cybersecurity strategies, but also to consider the impact of new technologies, like AI, on their organisation and their customers,” he said.

David Chan, Managing Director, AdNovum Singapore. Image courtesy of AdNovum Singapore.

In Singapore, the existence of AI regulations is already a step in the right direction. Cook highlighted Project MindForge by the Monetary Authority of Singapore, the Infocomm Media Development Authority’s “AI Verify” testing framework, and the recently announced National AI Strategy 2.0 as notable examples.

Meanwhile, for AdNovum’s David Chan, regulation must take a multi-pronged approach to be effective.

“Regulations should focus on responsible use, technology development, and public education to effectively manage the risks associated with deepfake technology. A holistic approach that combines legal frameworks with technological advancements and public awareness is essential for addressing the multifaceted challenges posed by deepfakes,” he said.

Battle-ready

With deepfakes here to stay, what can enterprises do to stay one step ahead of cybercriminals? 

Shahnawaz Backer, Senior Solutions Architect, F5. Image courtesy of F5.

For F5’s Shahnawaz Backer, a “meticulous understanding and documentation of sensitive transactions, along with the implementation of stringent verification processes to act as a crucial line of defence” will help prevent incidents like the one in Hong Kong.
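
What might such a verification process look like in practice? Below is a minimal sketch, assuming a hypothetical payments workflow: transfers that are high-value, go to an unknown beneficiary, or arrive only via spoofable channels such as email or video calls are held for out-of-band confirmation. All names and thresholds are illustrative, not drawn from any vendor's product.

```python
from dataclasses import dataclass

# Hypothetical transfer request; field names and thresholds are
# illustrative, not taken from any real payments system.
@dataclass
class TransferRequest:
    amount: float
    beneficiary: str
    requested_via: str  # e.g. "email", "video_call", "ticketing_system"

HIGH_VALUE_THRESHOLD = 50_000           # assumed policy threshold
KNOWN_BENEFICIARIES = {"acme-payroll"}  # assumed allow-list
SPOOFABLE_CHANNELS = {"email", "video_call"}

def requires_out_of_band_check(req: TransferRequest) -> bool:
    """Hold the transfer for confirmation over a separately established
    channel (e.g. a call-back to a number on file) when it is high-value,
    goes to an unknown beneficiary, or was requested via a channel that
    can be spoofed or deepfaked."""
    return (req.amount >= HIGH_VALUE_THRESHOLD
            or req.beneficiary not in KNOWN_BENEFICIARIES
            or req.requested_via in SPOOFABLE_CHANNELS)

# A request like the Hong Kong one would fail all three tests.
print(requires_out_of_band_check(
    TransferRequest(amount=3_000_000, beneficiary="new-vendor",
                    requested_via="video_call")))  # True
```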

One such verification process, according to Entrust’s James Cook, is biometrics.

“One of the most secure ways to detect and prevent fraudulent attempts, biometric verification sees three times fewer fraudulent attempts than documents. Taking it a step further, AI-driven biometric systems can analyse minuscule details in facial expressions, skin texture, and even micro-movements that are typically inconsistent in deepfakes,” he said.
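
As a rough illustration of how such a system might combine face matching with liveness analysis, here is a minimal sketch. It assumes an upstream face-embedding model and a liveness model (scoring micro-movements and texture) already exist; the cosine-similarity check and both thresholds are illustrative.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_identity(enrolled_embedding: np.ndarray,
                    live_embedding: np.ndarray,
                    liveness_score: float,
                    match_threshold: float = 0.8,
                    liveness_threshold: float = 0.9) -> bool:
    """Accept only if the live capture matches the enrolled template AND
    passes liveness. Both embeddings and the liveness score (derived from
    micro-movement/texture analysis) come from assumed upstream models."""
    return (cosine_similarity(enrolled_embedding, live_embedding) >= match_threshold
            and liveness_score >= liveness_threshold)
```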

Meanwhile, implementing zero trust will strengthen an organisation’s defences, leaving little to no room for human error.

“For cybersecurity teams, a zero-trust architecture enforces access policies based on context—including the user’s role and location, their device, and the data they are requesting—to block inappropriate access and lateral movement throughout an environment,” Zscaler’s Heng Mok said.

“Cybersecurity experts can look at multiple factors, calculate a score, and assign risks to users trying to gain access. But the zero-trust mindset should extend beyond just IT architectures. By educating their employees to adopt a zero-trust mindset, they can help detect anomalies indicative of deepfake manipulation, such as inconsistencies in communication patterns or unusual access requests,” he added.
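
A minimal sketch of the kind of context-based risk scoring Mok describes follows; the signals, weights, and decision thresholds are all illustrative assumptions, not a real policy engine.

```python
# Illustrative risk signals and weights; a real zero-trust policy engine
# would draw these from device posture, identity, and telemetry feeds.
RISK_WEIGHTS = {
    "unmanaged_device": 30,
    "unusual_location": 25,
    "sensitive_resource": 25,
    "off_hours_access": 20,
}

def risk_score(context: dict) -> int:
    """Sum the weights of every risk signal present in the request context."""
    return sum(w for signal, w in RISK_WEIGHTS.items() if context.get(signal))

def access_decision(context: dict) -> str:
    score = risk_score(context)
    if score >= 50:
        return "deny"
    if score >= 25:
        return "step_up_auth"  # e.g. require a fresh MFA challenge
    return "allow"

# Example: an unmanaged device requesting a sensitive resource scores 55,
# so the request is denied rather than trusted by default.
print(access_decision({"unmanaged_device": True, "sensitive_resource": True}))
```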

In addition, workforce education and training are pivotal to stopping fraudulent attempts that use deepfake technology.

“Employees are the first line of defence. Organisations should empower them with the necessary education and tools to safeguard against such threats,” Trend Micro’s David Ng said.

He further enumerated the possible signs of a deepfake attack:

  • Watch out for anomalies in the visuals and audio: atypical facial movements or blinking patterns, audio that does not match lip movements, glitches, or noticeable edits around the face.
  • Check and verify the source of the audio, video, or content. For example, if it’s a video from a news channel, they will likely have consistent and standard banners, branding, and a running script of breaking news at the bottom. These are all signs of authenticity.
  • For live voice or video calls among colleagues in an organisation, it is important to authenticate those involved in the call with three basic factors: something the person has, something the person knows, and something the person is. Ensure that the “something” items are chosen wisely (see the sketch after this list).
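
For the first two factors, a simple challenge-response over a pre-shared secret is one way to verify a caller out of band; the “something the person is” factor would come from a biometric check like the one sketched earlier. This is a minimal illustration, with all names and the example secret assumed.

```python
import hashlib
import hmac
import secrets

# Minimal challenge-response sketch covering "something the person
# has/knows": a pre-shared secret held only by legitimate colleagues.
def issue_challenge() -> str:
    return secrets.token_hex(16)  # fresh nonce per call, never reused

def compute_response(shared_secret: bytes, challenge: str) -> str:
    return hmac.new(shared_secret, challenge.encode(), hashlib.sha256).hexdigest()

def verify_caller(shared_secret: bytes, challenge: str, response: str) -> bool:
    # compare_digest avoids leaking information through timing differences.
    expected = compute_response(shared_secret, challenge)
    return hmac.compare_digest(expected, response)

# Example: the verifier issues a challenge; only a caller holding the
# secret can produce the matching response, however convincing the video.
secret = b"rotate-me-regularly"
ch = issue_challenge()
print(verify_caller(secret, ch, compute_response(secret, ch)))  # True
```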

Ultimately, enterprises can also fight cybercriminals using the same tools being leveraged against them, AdNovum’s David Chan said.

“Utilising AI/ML-powered tools automates the flagging of potential deepfakes based on visual and audio anomalies, offering an additional layer of defence. Implementing explainable AI techniques also increases the transparency and trust in AI-driven deepfake detection tools, allowing employees to understand the rationale behind the system’s decision,” he concluded.
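
To make that idea concrete, here is one minimal example of a visual-anomaly signal such tools can compute: flagging clips whose blink rate is implausibly low, a known weakness of many face-swap models. Landmark extraction is assumed to happen upstream (e.g. with a face-landmark detector); the eye-aspect-ratio formula is the standard one, and all thresholds are illustrative.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmarks in the standard EAR ordering.
    The ratio drops sharply when the eye closes."""
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def flag_unnatural_blinking(ear_series: list[float],
                            closed_threshold: float = 0.2,
                            fps: float = 30.0,
                            min_blinks_per_minute: float = 5.0) -> bool:
    """Flag a clip whose blink rate is implausibly low. The per-frame EAR
    values are assumed to come from an upstream landmark detector; the
    thresholds here are illustrative, not tuned values."""
    closed = [ear < closed_threshold for ear in ear_series]
    # Count eye-closure onsets (open -> closed transitions) as blinks.
    blinks = sum(1 for prev, cur in zip(closed, closed[1:]) if cur and not prev)
    minutes = len(ear_series) / fps / 60.0
    return minutes > 0 and (blinks / minutes) < min_blinks_per_minute
```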