Nearly four in every five IT security leaders in the Asia-Pacific region recognise their security practices need transformation as AI adoption accelerates and cyber threats increase.
A new report from Salesforce shows that APAC security leaders are unanimously optimistic about AI agents: every respondent identified at least one security concern that agents could help address.
This is based on a double-anonymous survey of IT decision-makers conducted from December 24, 2024 through February 3, 2025.
There were 588 APAC respondents representing Australia, India, Indonesia, Japan, New Zealand, Singapore, South Korea and Thailand.
Across the region, half of respondents worry their data foundation isn’t set up to get the most out of agentic AI. Also, 57% aren’t fully confident they have the appropriate guardrails to deploy AI agents.
Bad actors are increasingly using AI to exploit security vulnerabilities, creating a rapidly evolving threat landscape. At the same time, security professionals are also using AI to protect their companies’ data and systems.
Autonomous AI agents, which help security teams cut down on manual work, can free up humans’ time for more complex problem-solving. However, agentic AI deployments require robust data infrastructure and governance to be successful.
“Organisations can only trust AI agents as much as they trust their data. When 62% of security leaders in Asia Pacific report that customers remain hesitant about AI adoption due to security and privacy concerns, it’s clear that robust data governance isn’t optional, but essential,” said Gavin Barfield, Salesforce VP & CTO of solutions in ASEAN.
“IT teams that establish strong data governance frameworks will find themselves uniquely positioned to harness AI agents for their security operations while ensuring data protection and compliance standards are met,” said Barfield.
In addition to a familiar slate of risks like cloud security threats, malware, and phishing attacks, IT leaders now cite data poisoning — in which malicious actors compromise AI training data sets — among their top concerns.
Security budgets are rising in response, with 76% of organisations expecting to increase theirs over the coming year.
While 82% of IT security leaders believe AI agents offer compliance opportunities, such as improving adherence to global privacy laws, they acknowledge that AI agents also present compliance challenges.
This stems partly from an increasingly complex and evolving regulatory environment across geographies and industries, compounded by compliance processes that remain largely unautomated and prone to error.
Just 52% are fully confident they can deploy AI agents in compliance with regulations and standards.
Also, 85% of organisations say they haven’t fully automated their compliance processes.
According to the State of IT research, close to half (45%) of IT security teams already use agents in their day-to-day operations, a figure anticipated to rise by more than half over the next two years.
IT security leaders expect a range of benefits as adoption ramps up, from threat detection to sophisticated auditing of AI model performance. Almost three quarters (74%) expect to use AI agents within two years, up from 45% today.