Navigating Asia-Pacific’s deepfake dilemma

Many of us in Asia-Pacific have only begun to take deepfakes seriously in the past year.

This might stem from the fact that many of us have witnessed just how pervasive and persuasive they can be — especially with AI in the mix. The technology’s reach is fuelling major societal challenges, enabling increasingly sophisticated scams that exploit trust and emotional vulnerability. One example is deepfake romance scams: in a single Hong Kong case, victims lost over US$46 million to ‘love.’

Worryingly, it has also become increasingly commonplace to see ministers and celebrities ‘endorsing’ financial schemes. In Singapore, Senior Minister Lee Hsien Loong highlighted a case where fraudsters manipulated a video showing him advocating an investment product with reportedly guaranteed returns. In the Philippines, scammers are using deepfakes of local tycoons for investment fraud, while in Malaysia, fraudsters have exploited the voice and likeness of popular singer Siti Nurhaliza for similar scams.


Deepfakes have permeated many aspects of life, with people being exposed to them on social media and their reach extending into financial services and public sectors. This trend is particularly concerning with elections approaching in Singapore and the Philippines next year — a concern that has prompted Singapore to ban deepfakes of candidates during the election period.

The potential repercussions are serious: deepfakes can disrupt power dynamics and gradually undermine trust in our economies and governments.

We see evidence of this in a recent study by Jumio, which found that 72% of consumers worry daily about being fooled by deepfakes into sharing sensitive information or money. This figure rises to an alarming 88% in Singapore. Furthermore, 67% of global consumers doubt their banks’ ability to combat deepfake-based fraud, and 75% are ready to switch providers over inadequate fraud protection.

But, while most people are already worried, are organisations taking the threat seriously enough?

Compounding the issue, deepfakes are rapidly evolving. The technology behind them is becoming more adept at deceiving our eyes and ears. How will this impact our current security standards? Crucially, how can organisations not only protect themselves but also restore consumer confidence in an era where seeing is no longer believing?

The rise of sophisticated deepfake tactics

Before tackling the threat, we first need to understand the lay of the land. AI is now more accessible than ever, and scammers are fully aware of its opportunities. They are already utilising advanced AI tools to enhance both the sophistication and scale of fraud — all at unprecedented speeds and low costs. The aforementioned fake videos of Singapore’s senior minister, with disturbingly realistic voice cloning and lip-syncing techniques, serve as stark examples.

In this context, concerning statistics about deepfaked politicians have emerged. While 83% of people in Singapore are worried that AI and deepfakes could influence upcoming elections, a surprising 60% believe they could easily identify a deepfake of a politician. This misplaced confidence is particularly troubling, given that 66% of respondents would still trust political news they encounter online, despite the risk of deepfakes. As we look ahead, the increasing power and availability of AI tools capable of producing highly realistic deepfakes significantly raise the risk of misleading the public about their leaders.

The implications extend beyond governments. Newer deepfake technologies could also undermine our financial systems. This includes the mass production of fake identities to create synthetic personas, such as combining real credentials with fabricated images. A growing trend also involves camera injection techniques, which trick a device’s camera into perceiving people who aren’t physically present.

Innovative solutions in the face of growing threats

Fearful of deepfake scams, consumers are demanding change. A majority of global consumers (60%) are calling for more AI regulation to address issues surrounding deepfakes and generative AI, while 69% want stronger cybersecurity measures as confidence in banking protection wanes. However, with regulatory trust varying globally, the private sector must also step up.

For financial institutions, a vital step in strengthening security measures is ensuring the right person is in front of the screen during transactions. One of the most effective tools for this is liveness detection, which confirms that a live user is physically present behind the camera. Advanced liveness detection techniques undergo rigorous testing to combat even the most sophisticated spoofing attempts, including deepfakes.

Assessing additional fraud risk signals to detect anomalies and suspicious transactions throughout the customer journey adds an extra layer of security. These signals include checking if a user’s email and phone number have been used to open multiple accounts in a short timeframe, and verifying the user’s location via IP address. Increasing the frequency of monitoring activities, such as identifying accounts recently accessed from new locations or devices, is also important.
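To make the velocity signal concrete, here is a minimal sketch of one such check — flagging an email or phone number used to open several accounts within a short window. The threshold, window size, and data shape are illustrative assumptions, not any vendor’s actual implementation:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Assumed thresholds for illustration only.
WINDOW = timedelta(hours=24)
MAX_ACCOUNTS = 3

def velocity_flags(signups):
    """signups: list of (identifier, timestamp) pairs, where the
    identifier is e.g. an email address or phone number and the
    timestamp is the account-creation time.

    Returns the set of identifiers that exceeded MAX_ACCOUNTS
    new accounts within any sliding WINDOW."""
    recent = defaultdict(list)
    flagged = set()
    for ident, ts in sorted(signups, key=lambda s: s[1]):
        # Drop signups that have fallen outside the sliding window.
        recent[ident] = [t for t in recent[ident] if ts - t <= WINDOW]
        recent[ident].append(ts)
        if len(recent[ident]) > MAX_ACCOUNTS:
            flagged.add(ident)
    return flagged
```

In production this logic would run against a stream of onboarding events and feed a risk score rather than a hard block, but the core idea — counting reuse of an identifier per time window — is the same.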

Another effective tool is predictive analytics. According to Jumio analysis, 25% of fraud is interconnected, carried out either by fraud rings or individuals exploiting shared information or credentials to open new accounts on banking sites, e-commerce platforms, and sharing economy sites. Predictive analytics can tackle this issue by identifying suspicious individuals or fraud patterns, strengthening security, enhancing trust, and fostering a secure environment for users and regulators. Bank Negara Malaysia is currently exploring predictive analytics to detect fraudulent transactions via its National Fraud Portal.
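The interconnected-fraud idea above can be sketched as a clustering problem: accounts that share any credential (email, phone, device ID) are linked, and unusually large clusters become candidate fraud rings. This is a hedged illustration under assumed data shapes and an assumed size threshold, not a description of any bank’s actual system:

```python
from collections import defaultdict

def fraud_rings(accounts, min_size=3):
    """accounts: dict mapping account_id -> set of credential strings.
    Returns clusters of account_ids connected through any shared
    credential, keeping only clusters of at least min_size."""
    # Union-find over account IDs.
    parent = {a: a for a in accounts}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Link every pair of accounts that share a credential.
    by_cred = defaultdict(list)
    for acct, creds in accounts.items():
        for c in creds:
            by_cred[c].append(acct)
    for holders in by_cred.values():
        for other in holders[1:]:
            union(holders[0], other)

    # Collect connected components and filter by size.
    clusters = defaultdict(set)
    for a in accounts:
        clusters[find(a)].add(a)
    return [c for c in clusters.values() if len(c) >= min_size]
```

A real predictive-analytics pipeline would score many more signals and use statistical models, but shared-credential clustering like this is a common first step for surfacing coordinated account-opening activity.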

Constant vigilance: Staying ahead in the deepfake era

Businesses have made commendable progress in enhancing their defences against deepfakes and fraud. This includes implementing stricter authentication measures, moving beyond one-time passwords, and improving fraud surveillance. However, more must be done to address the growing sophistication of deepfake attacks.

For individuals, staying alert and well-informed is key. This is especially true given that 60% of global consumers — and 77% in Singapore — overestimate their ability to detect deepfakes. We’re all susceptible to misleading content, so it’s important to remain cautious.

For instance, deepfakes are often used to produce provocative or controversial material, so exercising caution toward such content is paramount. If you encounter something that seems excessively shocking or out of character, take the time to research it further through official sources to verify its authenticity before reacting or sharing.

For businesses, countering the rise in deepfakes and cyber deception requires more effective technological solutions. Incorporating multimodal, biometric-based verification systems is imperative. These technologies are key to ensuring that businesses can protect their platforms and their customers from emerging online threats, and are significantly stronger than passwords and other traditional, outdated methods of identification and authentication.

As deepfakes continue to evolve and infiltrate various aspects of our lives, both individuals and organisations must adopt a proactive approach. By fostering vigilance and leveraging advanced technologies, we can enhance our resilience against misinformation risks. These efforts are integral to preserving trust in our digital interactions and safeguarding the integrity of our societies.