Most enterprises treat threat protection and data security as separate concerns: a divide that’s increasingly untenable in the age of AI. True unification, according to Proofpoint CEO Sumit Dhawan, means viewing risk through the lens of human behaviour, not just technology. In a conversation with Frontier Enterprise, he outlines why human-centric integration, not just tool consolidation, is the key to modern security strategy.
Breaking barriers
For Dhawan, the real barrier isn’t a lack of technology or tools, but the failure to align security efforts around how people actually interact with data.
“Many enterprises still treat data, behaviour, and access as separate risk factors, rather than combining them into a single, unified view. They’re failing to bring AI into supervised, trustworthy workflows, and they continue to treat threats and data protection as separate concerns, instead of connecting them through people — their most common point of failure,” he observed.
As organisations contend with exponential data growth across cloud platforms, collaboration tools, endpoints, and AI-enabled services, the surface area for potential data loss or compromise has expanded significantly.
“This growth is outpacing the ability of traditional security teams and solutions to monitor and protect every channel. Addressing this disconnect requires an AI-enhanced, behaviour-driven approach that unifies data defence, not just across systems, but in the ways people interact with data,” he noted.
Forrester predicted that 90% of data breaches in 2024 would involve the human element, up from 74% in 2023. Dhawan said this underscores the need for deep, contextual visibility into how individuals handle data, including their behaviour, access patterns, and intent.
“Without this human-centric intelligence, even the most technologically integrated security stack risks overlooking threats caused by human error,” he warned.
Meanwhile, as organisations consolidate their security solutions, Dhawan said the first tools to be dropped are often niche point solutions, particularly those with narrow threat detection or single-purpose functions.
“Operating a suite of fragmented defence tools increases the burden on security teams. It can create visibility gaps, and in today’s threat landscape, attackers need only one small vulnerability to compromise an entire organisation,” he explained.
AI readiness
While AI is now central to modern security strategies, Dhawan believes many organisations are not yet “AI-ready.” And readiness, he said, goes well beyond tools or models.
“It starts with a mindset shift at every level of the organisation. Technically, companies may have the infrastructure or data lakes, but what we want to look for is cultural readiness: a willingness to challenge legacy thinking, trust data-driven decision-making, and, most importantly, embed responsible AI into their values from day one,” he said.
Rather than treating AI as a shiny add-on, AI-ready organisations view the technology as an enabler of human potential. According to Dhawan, AI should augment, not replace, human judgment, particularly in areas like cybersecurity where context and intent are essential.
“You can invest heavily in AI tools, but without the right mindset and a clear vision for how AI integrates with business strategy, such investments risk becoming shelfware,” he added.
One emerging trend is agentic AI, which aims to streamline operations, reduce alert fatigue, and drive efficiencies. But even as AI agents are seen as force multipliers for security teams, they remain vulnerable to manipulation through prompt injection.
To mitigate such risks, Dhawan outlined three components of effective agentic AI deployment:
- The first is the need for comprehensive guidelines and policies. This includes detailing the model’s inputs, components, design, intended use, and accountability structures. The framework must also provide explainability and auditability to build trust and manage risk.
- Second, organisations must implement robust security protocols for the data the AI consumes and generates. This minimises the risk of bias or malicious content and ensures model performance remains reliable.
- Finally, human oversight is essential. Dhawan said humans must retain ultimate control, correct the AI’s course when necessary, and ensure its behaviour aligns with organisational goals and ethical standards. Mechanisms to monitor, intervene, and override AI actions must be built into the deployment.
Security gaps
As automation becomes more prevalent across the enterprise, security tools are struggling to keep up. A lack of visibility, limited contextual awareness, and insufficient focus on people continue to hinder effective detection and response.
“Every day, AI, cybersecurity, and human behaviour intersect to create new variables that security strategies struggle to keep pace with. In the name of efficiency and consolidation, many approaches overlook new attack surfaces and people-related risks that emerge when automating existing processes,” Dhawan said.
In line with this concern, Dhawan noted that Gartner has identified human-centric security as one of just three strategic imperatives for CISOs in 2024 and 2025.
To illustrate the risks, he pointed to common pitfalls: “For example, automation tools are frequently over-permissioned, where they may be granted broader access than necessary for their functions. This creates over-privileged accounts that are attractive targets for bad actors. There can also be a tendency to adopt a ‘set and forget’ mentality, where security hygiene checklist items are overlooked.”
Meanwhile, generative AI has made it harder to detect phishing and other social engineering attacks.
“Generative AI has eliminated traditional red flags such as poor grammar or awkward syntax, allowing attackers to craft sophisticated, localised threats at scale,” Dhawan said.
Proofpoint’s 2024 State of the Phish report shows a 35% year‑on‑year increase in business email compromise attacks in Japan and a 31% rise in South Korea, underscoring the growing challenge of AI‑powered, localisation‑aware phishing.
“The common thread running through all of this is human error and a lack of contextual awareness. Automation can’t predict every edge case or every way an employee might interact with a system. Data loss remains, at its core, a people problem. And in this environment, organisations must shift from legacy perimeter thinking to a human-centric approach that dynamically considers behaviour, intent, and context,” he concluded.