Deepfake fraud: AI’s impact on financial institutions

The financial industry is at a technological crossroads. The rapid evolution of AI has brought both opportunities and significant challenges, particularly in identity verification and fraud prevention. Among these challenges, the rise of deepfakes is one of the most pressing. According to a Sumsub study, the Asia-Pacific region witnessed a 1,530% surge in deepfake cases between 2022 and 2023, marking the second-highest increase globally.

Deepfakes utilise a type of AI known as deep learning to create convincing fabricated images, videos, or audio recordings. As the technology advances, traditional biometric security systems become increasingly vulnerable. The ability to generate hyper-realistic images, videos, and live face swaps has fundamentally altered the fraud landscape. Financial institutions must act swiftly to address the rising wave of deepfake-enabled fraud, which threatens their operational integrity and reputation. To mitigate these risks, institutions need both the knowledge and the tools to strengthen defences and safeguard their customers.

The social and financial impact of deepfake fraud

In August 2024, a prominent financial institution in Indonesia suffered a major wave of deepfake fraud. Attackers acquired victims’ IDs through illicit channels such as malware, social media, and the dark web, manipulated the photos by altering features such as clothing and hairstyle, and used the falsified images to bypass the institution’s biometric verification systems. Despite the institution’s multi-layered security measures, more than 1,100 instances of deepfake fraud were detected within its mobile app.

Unfortunately, such incidents are becoming increasingly common. In Indonesia alone, Group-IB estimates potential financial losses linked to deepfake fraud at US$138.5 million over a period of just three months.

The financial damage is only part of the issue. On a social level, individuals are increasingly targeted by fraudsters who use deepfake techniques to conduct social engineering attacks. These attacks exploit victims by manipulating them into sharing sensitive information, transferring funds, or downloading malware.

The emerging challenges of AI-driven fraud detection 

Traditional fraud detection systems are struggling to keep pace with the sophistication of deepfake technologies. The challenges extend beyond merely detecting deepfakes to addressing the broader ecosystem of tools enabling such fraud. Key challenges financial institutions face in preventing deepfake fraud include:

Lack of reliable detection tools – Tools to detect deepfakes exist, but the underlying generation technology evolves faster than they do. Fraudsters can now use open-source AI models to create highly realistic deepfakes, leaving detection tools struggling to keep up. This gap between rapidly evolving fraud tactics and comparatively static detection capabilities is a critical vulnerability.

Real-time detection difficulties – Real-time fraud detection remains a formidable challenge. Traditional systems rely on device identifiers to track and differentiate between legitimate and fraudulent actions. However, deepfake fraud often involves cloned devices that fragment detection efforts, making it harder to spot fraudulent behaviour as it unfolds.

Limited access to training data – For AI-driven detection systems to be effective, they need access to a broad, diverse data set of both real and synthetic media. Yet, the ethical and privacy concerns surrounding the collection of such data, coupled with the challenges of generating high-quality synthetic data, mean that detection models are often poorly equipped to recognise the latest generation of deepfakes.

How financial institutions can protect themselves 

As deepfake fraud continues to evolve, financial institutions can no longer afford to rely solely on traditional, reactive security measures. To stay ahead of increasingly sophisticated threats, institutions must embrace a forward-thinking, proactive approach to fraud prevention.

Rethinking account verification processes – Financial institutions must recognise deepfake fraud as a genuine threat to digital onboarding and account registration processes. To defend against these risks, they should:

  • Implement multi-layered verification – Enhance digital onboarding with a combination of verification methods beyond biometric recognition. For example, integrating behavioural biometrics, such as typing patterns and user navigation styles, can add a layer of authentication that is far more difficult for fraudsters to replicate (see the keystroke-dynamics sketch after this list).
  • Require physical verification for high-risk activities – For significant transactions or new accounts, requiring physical presence or in-branch verification can help stop fraudulent applications from slipping through digital-only processes.
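
To make the behavioural-biometrics idea concrete, here is a minimal keystroke-dynamics sketch in Python: it compares dwell times (how long each key is held) and flight times (gaps between key presses) in a fresh session against an enrolled profile. The feature set, timings, and threshold are illustrative assumptions, not a production scheme.

```python
# Minimal keystroke-dynamics sketch. All names, timings, and the threshold
# are illustrative assumptions, not a production scheme.
from statistics import mean
from typing import List, Tuple

# Each event: (key, press_time_ms, release_time_ms)
KeyEvent = Tuple[str, float, float]

def extract_features(events: List[KeyEvent]) -> List[float]:
    """Dwell times (key held down) plus flight times (gap between keys)."""
    dwells = [release - press for _, press, release in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwells + flights

def typing_distance(profile: List[float], sample: List[float]) -> float:
    """Mean absolute deviation between an enrolled profile and a fresh sample."""
    n = min(len(profile), len(sample))
    return mean(abs(profile[i] - sample[i]) for i in range(n))

# Hypothetical usage: score a login attempt against the enrolled profile.
enrolled = extract_features([("p", 0, 90), ("a", 150, 230), ("s", 300, 385)])
attempt = extract_features([("p", 0, 95), ("a", 160, 245), ("s", 310, 400)])
THRESHOLD_MS = 25.0  # illustrative; real systems tune this per user
if typing_distance(enrolled, attempt) > THRESHOLD_MS:
    print("Escalate: typing pattern deviates from enrolled behaviour")
else:
    print("Typing pattern consistent with enrolled user")
```

In practice, a signal like this would feed a broader risk engine alongside device and transaction scores rather than gate logins on its own.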

Deploying advanced anti-fraud systems – The evolution of AI-driven fraud demands equally adaptive anti-fraud systems. Financial institutions should integrate multi-dimensional fraud detection mechanisms that include:

  • Device fingerprinting – By creating a unique digital signature for every device based on its hardware and software characteristics, financial institutions can detect cloned devices even when they are used across multiple accounts (a hashing sketch follows this list).
  • AI-powered anomaly detection – Implement AI algorithms that continuously analyse user behaviour for deviations from established patterns, such as unusual activity times or atypical transaction behaviour (see the anomaly-detection sketch after this list).
  • Cross-platform monitoring – Institutions should monitor user activity across various channels — web, mobile, and in-person — to detect discrepancies and track malicious activity across devices.
  • Collaboration and data sharing – Fraud doesn’t stop at the doors of any one institution. Financial organisations must collaborate to defend against global threats. By sharing insights into fraudulent accounts, devices, IP addresses, and geolocations, institutions can build a shared database of threats that helps identify and prevent fraud across platforms and borders (a shared-indicator lookup sketch follows this list).
  • AI and behavioural analytics – AI-driven fraud detection tools can analyse user interactions and behavioural patterns in real time. By employing machine learning, these systems can surface anomalous activity early and reduce the risk of fraud.
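
As a rough illustration of the device-fingerprinting bullet above, the following sketch hashes a canonical set of device attributes into a stable identifier and flags any fingerprint that appears behind several accounts. The attribute list and the one-device-many-accounts rule are assumptions for demonstration, not any vendor’s actual method.

```python
# Illustrative device-fingerprinting sketch; attributes and the review rule
# are assumptions for demonstration only.
import hashlib
import json
from collections import defaultdict

def device_fingerprint(attrs: dict) -> str:
    """Hash a canonical JSON form of device attributes into a stable ID."""
    canonical = json.dumps(attrs, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical signups: two "different" accounts sharing one device profile.
signups = [
    ("acct-001", {"os": "Android 14", "model": "SM-S918B", "screen": "1440x3088"}),
    ("acct-002", {"os": "Android 14", "model": "SM-S918B", "screen": "1440x3088"}),
    ("acct-003", {"os": "iOS 17.5", "model": "iPhone15,3", "screen": "1290x2796"}),
]

accounts_per_device = defaultdict(set)
for account, attrs in signups:
    accounts_per_device[device_fingerprint(attrs)].add(account)

for fp, accounts in accounts_per_device.items():
    if len(accounts) > 1:  # one physical device behind several onboardings
        print(f"Review device {fp}: linked accounts {sorted(accounts)}")
```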
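
The anomaly-detection mechanism can be sketched with scikit-learn’s IsolationForest over simple per-session features. The chosen features (hour of day, amount, transactions per session), the contamination rate, and the toy data are all assumptions; a production system would use far richer behavioural signals.

```python
# Anomaly-detection sketch using IsolationForest; features and data are toy
# assumptions for illustration.
from sklearn.ensemble import IsolationForest

# Hypothetical historical sessions: [hour_of_day, amount, txns_per_session]
history = [
    [9, 120.0, 2], [10, 80.0, 1], [14, 200.0, 3], [11, 95.0, 2],
    [15, 150.0, 2], [9, 60.0, 1], [13, 175.0, 2], [10, 110.0, 1],
]

model = IsolationForest(contamination=0.1, random_state=42).fit(history)

# Score a fresh session: 3 a.m., unusually large transfer, burst of activity.
fresh = [[3, 5000.0, 9]]
if model.predict(fresh)[0] == -1:  # -1 marks an outlier
    print("Flag session for step-up verification")
```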
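
Finally, a hedged sketch of the shared-indicator lookup described under collaboration and data sharing: participating institutions contribute hashed indicators (IP addresses, device identifiers) to a common blocklist that peers consult during onboarding. The hashing scheme and indicator set are illustrative only.

```python
# Shared threat-indicator lookup sketch; indicators and scheme are assumed
# for illustration, not a real consortium's design.
import hashlib

def indicator_hash(value: str) -> str:
    """Hash raw indicators so they can be shared without exposing raw data."""
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

# Hypothetical consortium blocklist built from peers' confirmed fraud cases.
shared_blocklist = {
    indicator_hash("203.0.113.42"),   # IP seen in a prior deepfake onboarding
    indicator_hash("device-7f3a9c"),  # cloned-device fingerprint
}

def screen_signup(ip: str, device_id: str) -> str:
    hits = {h for h in (indicator_hash(ip), indicator_hash(device_id))
            if h in shared_blocklist}
    return "escalate to manual review" if hits else "proceed"

print(screen_signup("203.0.113.42", "device-001122"))  # escalate to manual review
print(screen_signup("198.51.100.7", "device-445566"))  # proceed
```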

A call for proactive security measures

The financial sector faces an existential threat in the form of deepfake fraud: the same AI capabilities that underpin identity verification and secure transactions are increasingly being turned against them. To navigate this environment, financial institutions must adopt a proactive, multi-layered security strategy that incorporates the latest advances in AI, behavioural analytics, and device monitoring. By rethinking their approach to fraud detection and verification, embracing collaboration, and investing in cutting-edge technologies, institutions can stay one step ahead of fraudsters and preserve customer trust in an era of rapidly advancing threats.