More than four-fifths (82%) of organisations already use AI agents, but only 44% report having policies in place to secure them, according to a new report from SailPoint.
The report also highlights a paradox: 96% of technology professionals consider AI agents a growing risk, even as 98% of organisations plan to expand their use of agents within the next year.
The underlying survey, conducted by Dimensional Research, polled 353 IT professionals responsible for AI, security, identity management, compliance, and operations at enterprise companies.
Respondents were based in Singapore, Australia, New Zealand, India, Hong Kong, and Japan.
The terms “AI agent” and “agentic AI” broadly describe autonomous systems that perceive their environment, make decisions, and take action to achieve specific goals.
These agents often require several different machine identities to access the data, applications, and services they need, and they introduce additional complexities such as self-modification and the ability to spawn sub-agents.
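The governance challenge is easiest to see in miniature. Below is a minimal Python sketch of one agent holding several machine identities and spawning a sub-agent that inherits only a subset of them. All names and structures here are hypothetical illustrations, not drawn from SailPoint's report or any real product.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an agent holding several scoped machine
# identities and spawning sub-agents. Illustrative only.

@dataclass(frozen=True)
class MachineIdentity:
    service: str       # e.g. "crm", "data-warehouse"
    scopes: frozenset  # permissions granted on that service

@dataclass
class Agent:
    name: str
    identities: list = field(default_factory=list)

    def spawn_subagent(self, name: str, services: set) -> "Agent":
        # The sub-agent inherits only identities for the named services.
        # Without a narrowing step like this, a sub-agent silently
        # inherits every credential its parent holds -- the
        # high-privilege, low-visibility combination the report warns about.
        inherited = [i for i in self.identities if i.service in services]
        return Agent(name=name, identities=inherited)

# Even a single "sales assistant" agent needs three identities:
parent = Agent("sales-assistant", [
    MachineIdentity("crm", frozenset({"read:contacts"})),
    MachineIdentity("email", frozenset({"send"})),
    MachineIdentity("data-warehouse", frozenset({"read:revenue"})),
])

# A spawned summariser should only ever see the warehouse.
child = parent.spawn_subagent("report-summariser", {"data-warehouse"})
print([i.service for i in child.identities])  # ['data-warehouse']
```

Each identity in a sketch like this is one more credential to inventory and govern, which is why agent sprawl multiplies the identity-management burden.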
Notably, 72% state that AI agents pose a greater risk than machine identities. Factors behind that perception include AI agents’ ability to access privileged data (60%), their potential to perform unintended actions (58%), to share privileged data (57%), to make decisions based on inaccurate or unverified data (55%), and to access and share inappropriate information (54%).
Chandra Gnanasambandam, EVP of product and CTO at SailPoint, said agentic AI is both a powerful force for innovation and a potential risk.
“These autonomous agents are transforming how work gets done, but they also introduce a new attack surface. They often operate with broad access to sensitive systems and data, yet have limited oversight,” he said. “That combination of high privilege and low visibility creates a prime target for attackers.”
Today, AI agents have access to customer information, financial data, intellectual property, legal documents, supply chain transactions, and other highly sensitive data.
Yet respondents reported deep concerns about their ability to control the data AI agents can access and share, with an overwhelming 92% stating that governing AI agents is critical to enterprise security.
Alarmingly, 23% reported their AI agents have been tricked into revealing access credentials.
Additionally, 80% of companies say their AI agents have taken unintended actions, including accessing unauthorised systems or resources (39%), accessing sensitive or inappropriate data (33%), sharing such data (31%), and downloading sensitive content (32%).