
Splunk is introducing agentic AI capabilities into its security and observability solutions, but are enterprises ready to adopt them?
At Splunk .conf25 in Boston, Frontier Enterprise spoke with Hao Yang, Splunk’s Head of AI, and Mimi Shalash, Observability Advisor, who unpacked the strategy behind the company’s agentic AI push and shared how it plans to address challenges such as model explainability and user adoption.
Trust issues
With hallucinations remaining a recurring issue for large language models, Yang pointed out that the goal is not to eliminate them entirely, something he considers impossible, but to minimise how often they occur.
“Even humans make mistakes. For example, a SOC analyst reviewing incident logs can interpret them incorrectly. What do we do? We learn from those mistakes, which is the same principle with AI,” he said.
To address this, Splunk has implemented guardrails that include visible traces that can be cross-referenced against AI decisions, as well as adherence to the human-in-the-loop principle.

“When AI does something, the human can come in to check the results. From that perspective, we always have this design principle of keeping traces of different references. For example, when the AI says that someone has logged onto the system five times, we log the SPL queries generated to find that answer,” Yang explained.
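To illustrate the principle Yang describes, the sketch below shows one way such a trace could be kept: every AI-generated answer is stored alongside the SPL query that produced it, so an analyst can re-run the query and verify the claim. This is a minimal, hypothetical Python example; the names (AnswerTrace, record_answer) and the logging approach are illustrative assumptions, not Splunk's actual implementation or API.

    # Hypothetical sketch: pair each AI answer with the SPL queries used to derive it,
    # so a human-in-the-loop reviewer can cross-reference the decision later.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import List

    @dataclass
    class AnswerTrace:
        """One AI answer plus the evidence needed to double-check it."""
        question: str
        answer: str
        spl_queries: List[str]           # the SPL the agent generated to reach the answer
        created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
        reviewed_by_human: bool = False  # flipped once an analyst has verified the result

    audit_log: List[AnswerTrace] = []

    def record_answer(question: str, answer: str, spl_queries: List[str]) -> AnswerTrace:
        """Log the answer and its supporting SPL so it can be cross-referenced later."""
        trace = AnswerTrace(question=question, answer=answer, spl_queries=spl_queries)
        audit_log.append(trace)
        return trace

    # Example from Yang's scenario: the agent claims a user logged in five times,
    # and the SPL it ran is kept with that claim for review.
    record_answer(
        question="How many times did user 'jdoe' log in today?",
        answer="5 logins",
        spl_queries=["search index=auth user=jdoe action=login earliest=@d | stats count"],
    )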
On agentic AI readiness, Yang believes that enterprises aren’t necessarily unprepared for the technology but need a clearer understanding of how it works and how it could benefit them.
“Saying ‘I’m not ready to adopt it’ is different from ‘I don’t know what it is.’ In many of these conversations, it starts with education. It starts with us actually showing customers what the agents can do, so that they see the value,” he said.
Running blind
Turning to observability, Shalash argued that many enterprises still underestimate its importance across their IT stack, resulting in persistent complexity.
“I don’t think the world understands the opportunity to take advantage of the data at hand and create new revenue streams,” she said.
Previously, IT procurement was highly decentralised, Shalash recalled. An executive could quickly purchase the tool they needed to solve a specific problem and move on.
Today, however, enterprises often run more than 50 tools, yet the underlying problem remains unresolved.
“Now it’s financially irresponsible, and even more challenging than the financial component, it creates a lot of blind spots. You end up having this massive team in a war room trying to figure out — is it a network issue? Is it an infrastructure issue? Is it an application issue? Is it a performance issue? You’re looking at five different screens trying to identify what happened and, more importantly, why it happened,” she said.
Another weakness Shalash observed is that organisations often have strong visibility into either their back-end or front-end systems, but rarely both.
“Their back-end systems, such as databases or infrastructure, whether on-prem or in the cloud, and their front-end systems, like digital experience and e-commerce platforms, don’t have visibility into each other. That lack of connection prevents components from communicating, and I think there’s an opportunity there,” she said.
Bringing agentic AI into observability, she added, could be a turning point. She drew a distinction between AI for observability, which enables autonomy and intelligence across dashboards (as in Cisco’s AI Canvas tool), and observability for AI, which applies when monitoring AI models themselves, such as in the case of a financial institution Splunk works with.
“This financial institution needed to run automated reports on the health of several new lines of business. One of them was so long from a query perspective that the bill was enormous, in the seven figures, because we’re talking about large enterprise-grade organisations. If they had observability for that AI model, they would have caught it and saved millions of dollars. AI also has to have an ROI. You can’t just have AI running everywhere. You have to make sure you can map it back to the business,” she said.
Cat-and-mouse game
Asked why Splunk is pursuing agentic AI for security, Yang said the move was inevitable, given current enterprise demands.
“AI is there, whether we like it or not. As an industry, we’re at a pivotal moment, and as a company, we have the desire — and even the obligation — to step up and provide the best solution for our customers,” he said.
Yang outlined the disparity between defenders and attackers in the age of agentic AI.
“In cyber defence, unfortunately, the bad guys always have the first-move advantage. That has been the case since day one, because that’s just the nature of the game. When it comes to AI, the bad actors also have an advantage over defenders because they are not bound by any ethics. We have clear guidelines on how we’re going to use AI ethically, but I don’t think the bad actors pay much attention to those things, so they can do whatever they want,” he remarked.
Still, defenders have their own advantage, Yang noted.
“The advantage that we have as defenders is access to much more data than the bad actors. The bad actors may have the models, but they’re not going to be able to see the entire network traffic of organisations. Our advantage, really, is that we can deeply understand machine data, recognise the patterns, and use those patterns to defend against the bad actors,” he said.
Patience bears fruit
With hype around AI — especially agentic AI — running high, Shalash recommended focusing on low-hanging fruit that can yield immediate benefits rather than launching large-scale projects without sufficient groundwork.

“Find one particular use case, one application, or one line of business where teams have a lot of mundane, repetitive tasks. Take that as a case study. Analyse the current state: How much time does it take my team to do these? Identify five action items or steps to create a workflow and then look for a way to optimise that: leverage AI, leverage observability. Once you do that, you can build a centre of excellence, create playbooks for scalability, and map back to why it’s beneficial to the executive suite,” she said.
Yang, meanwhile, is optimistic that agentic AI adoption will accelerate within the next 12 months.
“In the world of AI, we talk about things in units of weeks and months. It will take some time for people to get used to it and be able to leverage the agents fully. It will also take some time for Splunk to learn from customer feedback, to make our agents more useful and robust. But I’m definitely seeing agentic AI pilots and deployments in the next 12 months,” he said.