Exabeam has expanded its leadership remit to sharpen its focus on AI, naming Steve Wilson as Chief AI and Product Officer. For Wilson, the rise of agentic AI marks a decisive shift: Where copilots waited for prompts, agents now act on behalf of analysts, taking on investigation work that once consumed their time.
In this conversation with Frontier Enterprise, Wilson explains how his role has changed, why investigation has become the true bottleneck in SOC operations, and how his early experiences at Sun Microsystems shaped his perspective on software security.
What has changed in your remit going from Chief Product Officer to Chief AI and Product Officer?
Exabeam has always been steeped in the use of AI in its products, developing AI and machine learning capabilities for more than 10 years. That focus is why I joined the company a couple of years ago, because I wanted to work with an organisation intent on moving in that direction.
What has changed in recent years is the broader applicability of AI. Exabeam had been using it for very specific functions within its products. Today, AI extends beyond product features to how a company modernises itself: how it builds, services, sells, and markets its offerings.
I’ve worked with AI technologies throughout my career, so I was asked to take on this role with two priorities: making sure we remain at the forefront of what goes into our products, while pursuing the next stage of transformation to change how the business operates and become more efficient.
In Exabeam’s APJ study, 71% of managers saw AI boosting productivity, but only 5% of analysts agreed. Why?
This became the headline finding of the study because the contrast was so stark. Several factors seem to be driving it. I’ve spoken with people working in SOCs, both managers and analysts, about how they view AI. From management’s perspective, there’s optimism. They see the challenges in their organisations, they’re inundated with information about the need to adopt AI, and they’re shown amazing demonstrations by vendors. Since ChatGPT came into existence, it has never been easier to create a great AI demo.
But the pressure SOC analysts are under is intense. They’re facing continuous attacks, and there are reasons for that: security operations teams are being forced to collect more data, deploy more cybersecurity products, and support more users and traffic.
If you look at their quality of work and life over the past few years, the pressure hasn’t eased. They still feel as overwhelmed as before. When you go deeper into the survey results, everyone — regardless of role — agrees that AI has value in the SOC. But when asked whether it improves productivity, the response reflects a qualitative perception. Right now, analysts feel inundated. Early generative AI tools and basic machine learning algorithms improve detection, but they don’t address the core areas where security teams spend most of their time or need the most help.
How has agentic AI transformed the role of security analysts?
I’ll break this into three phases. The first phase is predictive AI algorithms, which are the special-purpose algorithms that became the backbone of tools like user behaviour analytics, widely recognised as important. Then last year, we started to see the rollout of copilots. People found them useful and there was some benefit, but these chatbot-style copilots relied on prompt and response. Analysts still had to process everything themselves, then ask the bot for help: “Help me understand this, help me decide what to do with this.” It was very much the interaction pattern we’ve become familiar with since ChatGPT.
What has arrived this year is the agentic shift. Exabeam introduced its Nova platform in April, with the idea that these agents should be more proactive and action-oriented. If engineers in the operations centre are overwhelmed, simply giving them tools that sit idle until prompted is the wrong approach. The tools should do the work and bring results to the humans. That is the shift we’re starting to see.
Take what we introduced in April with our investigation agent. Analysts may acknowledge they are getting better detections from UEBA (user and entity behaviour analytics) and security analytics, but they still have to investigate each one and make decisions, such as whether to disable an account or shut down a laptop. Investigations are where the bulk of their time is spent, not the detections themselves.
The investigation agent activates as soon as a detection occurs, finds related detections and information, and packages them into a case. By the time a human sees it, they are not looking at raw detections, rules, or log files. Instead, they see a case file that is organised, outlines what likely happened, explains why, highlights the risks, and suggests next steps. The investigator remains in the loop, reviewing the case and judging whether it sounds reasonable.
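The workflow described here — detection in, organised case file out, human review at the end — can be sketched in outline. This is an illustrative sketch, not Exabeam’s implementation; the data model, risk threshold, and helper names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    entity: str   # user or host the detection concerns
    rule: str     # rule or model that fired
    risk: int     # risk contribution, 0-100

@dataclass
class Case:
    entity: str
    detections: list
    summary: str
    next_steps: list

def build_case(trigger: Detection, all_detections: list) -> Case:
    """Package a triggering detection and its related detections into a
    reviewable case, roughly as an agent might before a human sees it."""
    # Gather everything touching the same entity as the trigger.
    related = [d for d in all_detections if d.entity == trigger.entity]
    total_risk = sum(d.risk for d in related)
    summary = (f"{len(related)} detections for {trigger.entity}, "
               f"combined risk {total_risk}")
    # Hypothetical policy: high combined risk suggests containment steps.
    steps = (["disable account", "isolate host"]
             if total_risk >= 80 else ["monitor"])
    return Case(trigger.entity, related, summary, steps)
```

The analyst stays in the loop by reading `Case.summary` and approving or rejecting `next_steps`, rather than starting from raw detections and log files.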
As a result, analysts are spending far less time on each case, with feedback indicating investigations are three to five times faster. That’s just the first phase. The next is about how we can help teams plan their defences and improve their posture, which is where things become even more interesting.
So it’s not merely reacting to an incident, but more of a proactive approach?
Yes, I think that’s the key shift with agentics, as odd as that big word might sound. If you break it down to the roots, “agent” and “agency,” it’s about granting power to act on your behalf. Just as you might give your lawyer the agency to sign a contract, here you’re giving a piece of software the ability to carry out tasks you would normally do yourself. Having these bots working at your disposal, picking up cases, processing them, and producing detailed outputs changes the equation.
What role does agentic AI play in threat detection, investigation, and response?
Back in 2024, we rolled out a production cybersecurity copilot and learned a tremendous amount from that experience. Allowing search queries in natural language — whether English, Japanese, Korean, or Thai — rather than SQL-like languages was a huge boon for entry-level SOC staff because it dramatically reduced training time. We saw another benefit as well: fewer cases escalated from tier one to tier three. That was important because in the past, if a tier one analyst was stuck, the only option was to push the case back in the queue and say, “I don’t know what to do.”
Now, if they don’t understand something, they can ask the bot, and it may be able to explain. This has shortened the learning curve and reduced escalations.
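In copilot-style search of this kind, the usual pattern is that a language model translates the analyst’s plain-language question into a structured query the SIEM then executes. The toy below is a stand-in for that translation step, with a hard-coded keyword table where the model would sit; the field names are invented for illustration.

```python
# Toy stand-in for natural-language search translation: a real copilot
# would use an LLM here; a keyword table keeps the sketch deterministic.
FIELD_HINTS = {
    "failed login": {"event_type": "auth_failure"},
    "admin": {"privilege": "admin"},
    "last 24 hours": {"window_hours": 24},
}

def translate(question: str) -> dict:
    """Turn an analyst's plain-language question into a structured query
    dict, sparing entry-level staff a SQL-like query language."""
    query = {}
    for phrase, fields in FIELD_HINTS.items():
        if phrase in question.lower():
            query.update(fields)
    return query
```

The value for a tier one analyst is that the same question works in any phrasing (or language) the model understands, so the query syntax no longer has to be learned before the job can be done.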
But the deeper insight came from discussions with CISOs about the metrics that really matter. For a decade, the focus was on detections, improving detection quality. Yet when faced with a backlog of detections, even high-quality ones, the problem shifts. The TDIR flow (threat detection, investigation, and response) has gotten really good on detection, but the investigation step has become the bottleneck.
That led us to give some of these bots, trained with the basics of cybersecurity and the ability to read Exabeam detections, more agency to act on their own. When the focus moved from mean time to detect to mean time to remediate, the investigation stage clearly emerged as the constraint. That’s why the first agents we introduced were aimed squarely at that step. The goal is to dramatically shorten the TDIR flow.
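The metric shift from mean time to detect to mean time to remediate is easy to make concrete. With hypothetical incident timestamps (attack start, detection, remediation):

```python
from datetime import datetime

# Hypothetical timeline per incident: (start, detected, remediated).
incidents = [
    ("2025-03-01 09:00", "2025-03-01 09:05", "2025-03-01 13:05"),
    ("2025-03-02 14:00", "2025-03-02 14:10", "2025-03-02 20:10"),
]

def _t(s):
    return datetime.strptime(s, "%Y-%m-%d %H:%M")

def mean_minutes(pairs):
    """Mean gap in minutes between each (earlier, later) timestamp pair."""
    deltas = [(_t(b) - _t(a)).total_seconds() / 60 for a, b in pairs]
    return sum(deltas) / len(deltas)

mttd = mean_minutes([(s, d) for s, d, _ in incidents])  # start -> detect
mttr = mean_minutes([(s, r) for s, _, r in incidents])  # start -> remediate
```

On these made-up numbers, detection accounts for minutes while remediation takes hours; the gap between the two is investigation and response time, which is the constraint the agents target.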
How has the security conversation in software development evolved over the years?
I spent much of my mid-20s into my mid-30s at Sun Microsystems, working on developer tools, and I was an early member of the Java team. For the first 10 years of my career, I didn’t think much about security. It wasn’t something we worried about. We built applications, delivered them on floppy disks, and maybe ran a virus scan on the floppy disk before sending it to the factory. That was software security.
When the web arrived, things changed. At first, websites were static, basically just HTML pages that didn’t do anything. Security was still at a very early stage. Java was the first time web pages became truly interactive and supported online commerce. From the beginning, Java recognised the distinction between trusted and untrusted content and built security management into the concept of an applet. That was groundbreaking at the time. But as the web quickly grew more complex, Java became one of the security challenges. For years, people debated how to move Java and Flash out of the way. Having these virtual machines running on every laptop created too much surface area for risk. As workloads moved to the back end, modern application security practices emerged.
That shift gave rise to OWASP (Open Worldwide Application Security Project) in the early 2000s. The first OWASP Top 10 list outlined the most critical risks and highlighted the need to address issues such as SQL injection and remote code execution. Fast forward 20 years, and I was inspired by that work. While collaborating with Jeff Williams, who wrote the original Top 10 list, I began thinking about building a similar framework for large language models. This represented another transformation: No one had been developing software in that way before, so it required a new mental model.
Everything we know about application security remains important, from writing secure code to securing supply chains. But as AI becomes part of software systems, there’s a new set of things to learn, things that are specific to embedding AI in your software.