For Ping Identity CEO Andre Durand, the deepfake problem is a failure of distribution rather than detection. He observes that the algorithms to spot synthetic content are already strong, but they aren’t yet built into the everyday systems where attacks occur. As agentic AI expands the number of digital actors in the enterprise, the challenge now is ensuring every interaction, human or otherwise, is verified.
In a conversation with Frontier Enterprise, Durand discussed how enterprises can counter deepfake-driven attacks, the role of decentralised identity, and why verification must extend beyond humans to the AI agents acting on their behalf.
Why are deepfake threats harder to detect in enterprises than on consumer platforms?
Attacks on enterprises are far more researched and targeted, which makes them more believable. Extensive intelligence gathering often takes place before an enterprise is attacked. The financial stakes are higher, and a large amount of information about targeted individuals and major companies is publicly available to aid attackers. For instance, I have many videos online, and it no longer takes much footage or audio to convincingly mimic my likeness or voice. Some of the communication channels exploited in these attacks are also not fully verified: not necessarily unauthenticated, but lacking complete verification.
That will change over time. There is still an inherent trust in many internal communication channels, which may not be entirely justified. Eventually, repeated abuse will shift this default mode from trust to distrust. Until communication channels are comprehensively verified and authenticated from a reliable source, enterprise attacks will remain a more serious threat. By contrast, the general public is becoming increasingly sensitised to suspicious behaviour, and because consumer-targeted attacks are not as specific, they can be recognised more easily.
Where do current detection models struggle with high-quality synthetic media?
This is where the technology industry finds itself in a constant cat-and-mouse game. Deepfakes are becoming increasingly realistic, while the algorithms designed to detect them are also improving. The industry continues to test and benchmark deepfake and liveness detection across various algorithms. For instance, the National Institute of Standards and Technology conducts assessments of different vendor solutions and their ability to identify deepfakes. As deepfakes advance, so too will detection capabilities. Since AI is used both to create and detect deepfakes, this dynamic is unlikely to change.
What matters now is ensuring that detection technology is implemented in all the right places. The issue is less about the strength of detection models and more about deploying them across every channel vulnerable to deepfake propagation. For example, detection needs to be embedded in videoconferencing tools such as Zoom and Teams, as well as in email security services designed to block phishing attempts before they reach users. Liveness detection in the leading algorithms has become very good, and biometric recognition in modern cameras has grown increasingly sophisticated. With high-resolution imagery now common, algorithms are taking advantage of that additional detail.
Which APAC sectors face the greatest risk from deepfake-related fraud?
About a third of our business comes from the financial sector, which remains the most frequent target for fraud because of its role in protecting digital assets — essentially, our money. Banks continue to attract much of this activity, although I haven’t seen recent data comparing fraud targeted at financial institutions versus other enterprises. My sense is that both are highly targeted.
Larger financial institutions are generally well funded, with strong human and technological resources. Smaller ones, however, lack the same level of support and may be more exposed. As society's vulnerability to these frauds enters its next phase, that segment will need particular attention, and technology providers will likely have to help fill the gap by ensuring smaller or newer institutions have access to modern identity security controls. That remains both an opportunity for providers and a broader societal vulnerability for those without the means to keep pace. Enterprises in general also remain susceptible, given how targeted the attacks are.
What safeguards prevent decentralised identity models from creating new trust gaps?
Interoperability at the scale of the internet largely depends on agreement around open standards. Since day one, we’ve focused on internet-scale rather than enterprise-scale identity solutions, which means they have always been built on open standards. By adhering to these, much of the interoperability challenge across proprietary systems can be addressed. A number of global standards for decentralised identity are now emerging. While they’re not as universally defined as TCP/IP is for the internet, there’s already sufficient commonality in the underlying protocols to maintain a reasonable level of interoperability and ensure security within decentralised models.
The decentralised identity model is, by design, more secure than earlier frameworks. When fully implemented, decentralised identity, verifiable credentials, and the cryptography that underpins them provide strong security. Decades of experience with traditional security models have informed this new approach. Layered cryptographic binding now ties the user, their credentials, their device, and the issuer together, making spoofing extremely difficult.
Penetrating a decentralised system is very expensive for attackers because they must compromise one identity at a time. There’s no concept of breaching a single centralised point and accessing a lot of records; instead, each identity must be individually penetrated, fundamentally changing the cost equation for adversaries.
In this model, biometrics are bound to the credential, the credential is bound to the phone, and the phone is also bound to the issuer through cryptography. So as long as there is trust — and good controls to ensure that each person enrolling in the model has been properly vetted and that credentials are issued to the correct individual — all subsequent interactions can be carried out with a high level of trust and assurance.
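To make that chain of bindings concrete, here is a minimal sketch in Python of how a verifier might check it, using the open-source cryptography library. The data structure, field names, and functions are illustrative assumptions, not Ping Identity's implementation or any specific verifiable-credential standard.

```python
# Illustrative sketch only: the issuer signs a credential that embeds the
# holder's biometric hash and device public key; the device proves possession
# by signing a fresh challenge; the verifier checks both signatures.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

issuer_key = Ed25519PrivateKey.generate()  # held by the issuing authority
device_key = Ed25519PrivateKey.generate()  # generated on, and never leaves, the phone

def issue_credential(subject: str, biometric_hash: str) -> dict:
    """Issuer binds the person (biometric hash) and the phone (device key)."""
    claims = {
        "subject": subject,
        "biometric_hash": biometric_hash,
        "device_public_key": device_key.public_key().public_bytes_raw().hex(),
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    return {"claims": claims, "issuer_sig": issuer_key.sign(payload).hex()}

def present(credential: dict, challenge: bytes) -> dict:
    """Holder proves possession of the bound device key on each use."""
    return {"credential": credential, "proof": device_key.sign(challenge).hex()}

def verify(presentation: dict, challenge: bytes,
           issuer_pub: Ed25519PublicKey) -> bool:
    """Check both links in the chain: issuer -> credential, credential -> device."""
    cred = presentation["credential"]
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    try:
        issuer_pub.verify(bytes.fromhex(cred["issuer_sig"]), payload)
        device_pub = Ed25519PublicKey.from_public_bytes(
            bytes.fromhex(cred["claims"]["device_public_key"]))
        device_pub.verify(bytes.fromhex(presentation["proof"]), challenge)
        return True
    except InvalidSignature:
        return False

cred = issue_credential("alice@example.com", "<biometric-template-hash>")
challenge = b"fresh-nonce-from-verifier"
print(verify(present(cred, challenge), challenge, issuer_key.public_key()))  # True
```

In a real deployment the verifier would obtain the issuer's public key from a trust registry rather than from the issuer directly, and each challenge must be a fresh nonce so a recorded presentation cannot be replayed.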
How will enterprise adoption change as generative AI becomes more accessible in APAC?
Well, there’s an AI for good and an AI for bad. We focus on ensuring that AI for good keeps pace with AI for bad. On the positive side, we’re seeing growing interest from customers. Initially, they wanted to know what deepfake detection capabilities we had, so they could prepare for all the channels vulnerable to deepfakes. That interest has been building over the past several months, and it has now shifted, at least among many of our customers, to using agentic AI to automate workforce or frontline tasks.
Think of the chatbot. How smart is the chatbot on the website or in the app? Can it perform tasks that humans used to handle, and do so 24/7 to improve service quality? Customers want to make sure that the agents they develop and manage are secure, that they cannot collude with one another, and that their access to personal information is appropriately scoped and authorised.
A lot of focus now is on securing AI agents. That’s what customers are asking about: the architecture and the use cases for securing agents, especially as they build them across multiple ecosystems. They might be developing them on AWS, Salesforce, or elsewhere, and they want proper governance. How many agents do we have? What rights and permissions do they hold? How is their access to data authorised? How do we prevent an agent from disclosing information from one user’s account to another’s? That is where much of the focus lies today.
The industry has agreed that open protocols such as OAuth are the standard for authorising and securing agentic access, which aligns with what Ping does: we apply to agents the same principles we apply to human identities. Agents are non-human identities, but they still need lifecycle management, authentication, authorisation, and governance, sometimes even more so than humans, because in certain cases these agents are privileged and require additional layers of security.
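As a rough illustration of the OAuth pattern Durand refers to, the sketch below registers an agent as its own OAuth client and requests a token limited to explicit scopes via the standard client-credentials grant. The token endpoint, client ID, and scope names are hypothetical placeholders, not Ping Identity values.

```python
# Hypothetical example: an AI agent authenticates as its own OAuth 2.0 client
# and receives a short-lived access token restricted to the scopes it needs.
import requests

TOKEN_URL = "https://auth.example.com/oauth/token"  # placeholder authorisation server

def get_agent_token(client_id: str, client_secret: str, scopes: list[str]) -> str:
    """Standard OAuth 2.0 client-credentials grant (RFC 6749, section 4.4)."""
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials", "scope": " ".join(scopes)},
        auth=(client_id, client_secret),  # the agent's own credentials
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

# A support chatbot gets read-only access to ticket data and nothing else.
token = get_agent_token(
    client_id="support-chatbot-01",
    client_secret="loaded-from-secrets-manager",
    scopes=["tickets:read"],
)
api_headers = {"Authorization": f"Bearer {token}"}
```

Because every agent is a distinct client with an explicit scope list, governance questions such as "how many agents do we have?" and "what rights do they hold?" reduce to auditing the clients and scopes registered with the authorisation server.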
How will identity verification evolve amid synthetic media and new provenance standards?
The root of societal trust today largely depends on how effectively governments issue passports, driver’s licences, and Real IDs — with the private sector building upon that foundation. The private sector verifies identity by asking, “Who are you? Show me your driver’s licence or your passport.” In this sense, the root of trust exists at the societal level, grounded in the government’s ability to identify its citizens. Historically, that identification has taken the form of physical credentials. Over the next five to ten years, these will become digital. So, alongside a physical credential, people will also have a digital one.
When we ask someone to verify their identity, we rely on their government-issued physical or digital credentials as the foundation for establishing who they are. In a more sophisticated environment, however, this is not the only signal. For instance, banks don’t rely solely on a physical credential. They assess a range of other data points to ensure the credentials aren’t fraudulent or fraudulently obtained. These include email account reputation, whether the email address was created yesterday or 20 years ago, and phone-related information such as whether the number is new, whether it’s linked to a disposable device, or how long the user has stayed with the same carrier.
Identity verification by governments will continue to strengthen, with biometrics and other less transferable data points collected earlier in life becoming identifiers that are far harder to steal. For now, though, there is no single global source of identity truth. Much like multi-factor authentication, identity verification combines multiple signals, including government-issued IDs, to confirm a person's identity.
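To illustrate how such signals might be blended, here is a minimal sketch of a weighted confidence score. The signals, weights, and saturation points are invented for illustration; real systems tune them against observed fraud outcomes.

```python
# Illustrative sketch: combine verification signals into one confidence score,
# in the spirit of the multi-signal approach described above.
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    government_id_verified: bool  # passport / driver's licence check passed
    email_age_days: int           # created yesterday vs. 20 years ago
    phone_is_disposable: bool     # prepaid / burner-device indicator
    carrier_tenure_days: int      # time on the same carrier

def confidence_score(s: VerificationSignals) -> float:
    """Weighted blend of signals; no single factor decides alone."""
    score = 0.0
    score += 0.5 if s.government_id_verified else 0.0
    score += 0.2 * min(s.email_age_days / 3650, 1.0)        # saturates at ~10 years
    score += 0.0 if s.phone_is_disposable else 0.15
    score += 0.15 * min(s.carrier_tenure_days / 1825, 1.0)  # saturates at ~5 years
    return score

applicant = VerificationSignals(True, 7300, False, 2400)
print(f"confidence: {confidence_score(applicant):.2f}")  # 1.00 with these weights
```

A low score would not mean rejection; it would typically trigger step-up checks, much as multi-factor authentication escalates when a login looks unusual.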