While APAC organisations race to deploy AI agents to enhance customer experiences, a significant disconnect remains between what businesses want AI to do and what many customers are willing to accept. The recent Auth0 “Customer Identity Report 2025” found that 70% of users still strongly prefer interacting with humans over AI agents.
This preference is largely driven by a fundamental lack of trust. A majority of survey respondents (60%) are concerned about the impact of AI on the privacy and security of their digital identities.
Identity enables customers to sign up for and sign in to digital properties. No one knows this better than threat actors, who see the login box as a path to sensitive information, privileges, and consumer benefits. Attackers relentlessly pursue this data. With nearly 50% of registration attempts exhibiting the hallmarks of an attack, the trust gap isn’t unfounded.
Flagging trust a hindrance to AI adoption
Cyberattacks are surging across the region at an unprecedented pace. According to IBM’s “X-Force Threat Intelligence Index 2024” report, APAC is now the target of one-third of all global cyberattacks. As technology evolves, these threats are set to increase in both scale and severity.
At the same time, organisations are deploying AI applications without robust identity and access management controls, which creates new vulnerabilities. Without the right safeguards in place, AI agents could access APIs without permission, expose sensitive data, or take actions outside their intended scope.
Trust is foundational when discussing identity. When customers decide whether to create an account with a brand, their primary concerns are not always about product quality or value. The Auth0 report found that 74% prioritise a company’s reputation and trustworthiness, and 72% care about security.
However, the report also found that 44% of respondents did not trust AI agents with their personal data. Organisations need to address this shortcoming by ensuring AI agents are properly authenticated. If an agent is handling sensitive information such as a user’s bank card details, appropriate data protection and access controls must be in place to confirm the agent has the proper authorisation to use and share that information.
Unauthenticated AI agents pose a significant risk, especially when they handle personal data: without verification, there is no way to confirm an agent is entitled to the information it requests, and users who share sensitive data with an unverified agent risk exposing it to unauthorised parties. Because AI agents also interact with one another, inadequate controls and guardrails can lead to improper data sharing, potentially triggering a security incident.
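To make the agent-authentication point concrete, here is a minimal sketch in Python of gating sensitive data behind an agent’s credential: data is released only if the agent presents a live token carrying the right scope. The `AgentToken` type, the `payments:read` scope name, and the `release_payment_data` helper are all illustrative assumptions, not a reference to any particular identity product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentToken:
    """Hypothetical credential issued to an AI agent after it authenticates."""
    agent_id: str
    scopes: set[str] = field(default_factory=set)
    # Short-lived by default: agents should re-authenticate frequently.
    expires_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc) + timedelta(minutes=15)
    )

def release_payment_data(token: AgentToken,
                         required_scope: str = "payments:read") -> bool:
    """Hand sensitive data to an agent only if its token is live and scoped."""
    if datetime.now(timezone.utc) >= token.expires_at:
        return False  # token expired: the agent must re-authenticate
    if required_scope not in token.scopes:
        return False  # authenticated, but not authorised for this data
    return True
```

The key design point is that authentication (who the agent is) and authorisation (what it may touch) are checked separately, so an agent verified for one task cannot silently reach data belonging to another.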
While we are seeing momentum for AI agents in APAC, the issue of trust will limit adoption and the long-term value of the technology. Without access to crucial data, agents cannot act on behalf of organisations, preventing the technology from reaching its full potential.
Building trust in AI
Securing AI agents from the start — with authentication, secure API calls, asynchronous user confirmation, and granular authorisation — is essential to realising their potential and preventing them from becoming a vector for abuse.
Customers are not beyond winning over, and the report identifies clear pathways to building trust. Among the respondents, 38% said that having human oversight to review or approve AI agent decisions would increase their trust.
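The human-oversight pattern described above can be sketched as an approval gate: an agent proposes an action, the action sits pending, and nothing executes until a human reviewer approves it. This is a minimal illustration of the idea only; the class and function names are hypothetical, and a production system would persist the queue and notify reviewers asynchronously.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    DENIED = "denied"

@dataclass
class AgentAction:
    """An action an AI agent wants to take that needs a human decision first."""
    description: str
    status: Status = Status.PENDING

class ApprovalQueue:
    """Holds agent-proposed actions until a human reviews them."""
    def __init__(self) -> None:
        self._queue: list[AgentAction] = []

    def propose(self, description: str) -> AgentAction:
        action = AgentAction(description)
        self._queue.append(action)
        return action  # the agent waits; execution is deferred

    def decide(self, action: AgentAction, approve: bool) -> None:
        action.status = Status.APPROVED if approve else Status.DENIED

def execute(action: AgentAction) -> str:
    """Refuse to act on anything a human has not explicitly approved."""
    if action.status is not Status.APPROVED:
        raise PermissionError("action has not been approved by a human reviewer")
    return f"executed: {action.description}"
```

Decoupling the proposal from the execution is what makes the confirmation asynchronous: the agent can keep working on other tasks while the sensitive action waits for a reviewer.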
Transparency, ethical behaviour, and accountability measures are also essential for growing user trust. For organisations, this means prioritising security and ethical guidelines from the beginning when deploying AI agents and clearly communicating these efforts to users.
Organisations need to close the gap between AI innovation and user trust to thrive in the age of AI. By ensuring proper identity verification and implementing strong data protection measures, they can build greater trust in AI agents. Tackling these challenges is essential not only to safeguard security and privacy, but also to create the assurance needed for broader adoption of the technology.
As the future moves toward AI agents managing a wide range of tasks, organisations need to prioritise trust for AI agents to operate effectively. Every organisation must embrace authentication and authorisation as a business challenge, so that it can build secure digital relationships and harness the full promise of AI.