
AI is on everybody’s shopping list. As new use cases emerge by the day, enterprises are eager to adopt the latest innovations. But deeper issues remain unresolved, especially how humans interact with, adapt to, and place trust in the technology.
Dr Dorit Dor, Chief Technology Officer of Check Point Software, spoke to Frontier Enterprise during the CPX 2025 APAC conference in Bangkok to unpack the AI challenges that most organisations tend to overlook, including those with direct implications for cybersecurity.
What are some common misconceptions about human-AI partnership in cybersecurity?
I wouldn’t say people necessarily get this wrong, but there are two trends: one where AI assists humans, and the other where humans assist AI.
In one, AI becomes more autonomous; in the other, AI serves as an assistant to people. It’s easier for us to imagine AI helping people, and harder for us to imagine people helping AI. I think we don’t always realise when we change our behaviour. For example, if you told someone from a few generations ago that people today publicly share their location at any given moment, they’d find it very strange — maybe even privacy-invasive. Yet today, there are apps where you declare that you’re travelling, and it doesn’t seem odd. You talk to Gen Z, and it’s second nature that everyone knows where they are. These behavioural shifts happen gradually, and we tend to accept them over time — but it takes time.
This applies to organisations as well. We may be happy with how we’ve implemented AI, but often we’ve done so incrementally. Then, another organisation comes in and does something radically different. Instead of hiring three more people, they teach AI to do the work of those three — even if it takes more time to train the AI. An AI-first organisation changes the rules of the game. They’re no longer thinking incrementally. They ask: “What if we couldn’t hire a human, and had to rely on AI instead?” That mindset leads to very different economic outcomes. I think it will take us time to understand that these AI-first organisations will outsmart us by doing things faster and much more effectively.
Recently, there has been talk of data/AI poisoning. What are your current observations about this?
I think it’s still more of an emerging method than a widely adopted technique. But let’s talk about attacks in general. Attackers use whatever tool is easiest next. They have many options, but they’ll go with whatever gets them a quick win. I think data poisoning is one of those tools, but it’s not always the easiest path. In fact, it’s not used very much — it’s not that easy to pull off. You have to poison the data in a way that goes unnoticed. If it’s too blatant, it looks like bad data and gets flagged. So it has to be subtle: gradual enough not to be detected, but still able to produce the desired effect. And you can’t do this globally, because you don’t want the poisoning to be obvious to everyone at once. You want it to be more granular.
So no, the attack isn’t as easy as it sounds. Maybe someday it’ll become a more common method, but for now, I think it’s being used more at the margins. That doesn’t mean it’s irrelevant or unimportant; just that, for now, attackers often have easier options. It’s something I sometimes see in researchers’ experiments — more like someone showing off that they managed to poison a model.
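To make that trade-off concrete, here is a minimal, hypothetical sketch (not a Check Point tool; the dataset, the attacker's goal, and the 0.85 accuracy threshold are all illustrative assumptions). It contrasts a blatant poison, which a simple check against a small trusted holdout flags, with a subtle one that touches too few points to be noticed.

```python
# Hypothetical sketch: why blatant data poisoning is easy to flag while subtle
# poisoning can slip through. The attacker wants the model to predict class 0
# whenever feature 0 is large; we flag a training set if accuracy on a small
# trusted holdout drops sharply. Dataset and threshold are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)      # true decision rule
    return X, y

def poison(X, y, threshold):
    """Force label 0 wherever feature 0 exceeds `threshold`.
    A low threshold touches many points (blatant); a high one, few (subtle)."""
    y = y.copy()
    mask = X[:, 0] > threshold
    y[mask] = 0
    return y, mask.mean()

X_train, y_train = make_data(5000)
X_holdout, y_holdout = make_data(1000)           # small trusted, clean holdout

for name, threshold in [("clean", np.inf), ("subtle", 2.0), ("blatant", 0.0)]:
    y_poisoned, touched = poison(X_train, y_train, threshold)
    model = LogisticRegression().fit(X_train, y_poisoned)
    acc = model.score(X_holdout, y_holdout)
    print(f"{name:>7}: {touched:5.1%} of labels touched, "
          f"holdout accuracy {acc:.2f}, flagged={acc < 0.85}")
```

The blatant version shifts the learned boundary enough to fail the quality check, while the subtle version barely moves the needle, which is exactly why it is harder to pull off but also harder to detect.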
In relation to data poisoning, one of the issues enterprises have with AI is data explainability. What are the risks and opportunities there?
I think explainability is still a big opportunity, because most AI systems today don’t provide enough of it. You have to make a deliberate effort to explain, to provide the sources and reasoning behind a given answer, and that’s still not common enough. Why is that an opportunity? Because if explanations are available, we’re better able to verify and trust the results.
There’s academic work happening on better explainability, and there are practical tools available, but they’re not widely used. And the issue isn’t being taken seriously enough. People move fast — they want to achieve the next thing rather than explain what they did yesterday. So I think we need to dedicate more focus to explainability.
One of the problems with explainability is this: say I offer you water, and you decline and ask for sparkling water. I then ask, “Why do you prefer sparkling water?” and you give me a reason. But I’d say the reason you gave isn’t necessarily the real reason. It may be that you weren’t trying to mislead me; you gave what you thought was your reason, but the gut-level reason why you like sparkling over still water might be something you’re not even aware of. Maybe you were raised to prefer sparkling water, and over time, you constructed a justification for it. So the fact that you gave me an explanation doesn’t mean that it’s the actual cause — and the same applies to AI.
On one hand, yes, we need to focus more on explanation. But people move fast, and over time, they deprioritise explaining in favour of chasing the next goal. I think explainability is essential to building trust. But I would also say this to the human side: even if you’re given an explanation, and even if it’s technically true, it may not be the full story. There may be underlying reasons you don’t see. This is true for people too.
The reason something was “explained” might not reflect the actual internal logic. In AI, the explanation is often layered on top; it’s an after-the-fact account of how the output was generated. There might be a disconnect. So yes, explainability is important. It allows humans to verify and test. But people need to understand: the explanation you get is not always the real explanation, even if the AI (or the human) had no intention to mislead.
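As a rough illustration of that disconnect, the sketch below (a hypothetical LIME-style local surrogate, not any particular product's explainability feature) asks a linear "explanation" to account for a model whose real rule is an interaction between two features. The explanation is locally plausible, yet it never surfaces the actual internal logic.

```python
# Toy sketch: a post-hoc explanation is layered on top of the model and can be
# locally faithful while still hiding the model's real rule. The black box
# below decides on a feature interaction; a local linear surrogate reports
# per-feature weights that look reasonable but never mention the interaction.
import numpy as np

rng = np.random.default_rng(1)

def black_box(X):
    """The model's real logic: class 1 when the two features share a sign."""
    return (X[:, 0] * X[:, 1] > 0).astype(float)

def local_linear_explanation(x, radius=0.5, n_samples=2000):
    """LIME-style sketch: sample near x, fit a linear surrogate to the
    black box's outputs, and return its weights as the 'explanation'."""
    Z = x + rng.normal(scale=radius, size=(n_samples, 2))
    y = black_box(Z)
    A = np.column_stack([Z, np.ones(len(Z))])    # [feature 0, feature 1, bias]
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w[:2]

x = np.array([1.0, 1.0])                         # a point the model calls class 1
print("prediction:", black_box(x[None])[0])
print("post-hoc weights for [feature 0, feature 1]:", local_linear_explanation(x))
# The weights say "both features push the score up here", which is true
# locally, yet the rule the model actually applies is "do the signs match?".
```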
What are the challenges to explainability? Is it technology, willingness, or several other factors?
It’s not considered mandatory. For example, you might be developing an application, and you’ve just delivered five new features. Then you move on to the next five without explaining the first set, because doing so feels like a waste of time; it’s not required. The first reason, then, is perception: people don’t see it as essential.
The second reason is that it takes more compute power and more time. To provide an explanation, you need to invest extra resources, both in justifying the output and in crafting the response. Maybe you’re not even sure whether the user will value that effort or give you credit for it. These are some of the main reasons explainability still isn’t treated as essential. The value isn’t obvious, and it adds cost and time.
I think explainability will become more important in certain applications, especially as we see more agentic AI systems that ask for permission. Imagine an agentic AI that shows you a plan for doing something and asks for your approval before executing it. That’s a step toward autonomy, a kind of pre-approved autonomous action. The AI presents the plan, gets the green light, and then carries it out. It’s a way for the system to work toward “getting the keys to the kingdom” — to eventually act on its own — but for now, it still needs approval. In that context, I think we’ll start building in more reasoning and explanation. After that, we might eventually bring it back and apply it to other areas too.
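A minimal sketch of that "plan, explain, approve, execute" loop might look like the following. The Plan structure, the hard-coded steps, and the console prompt are illustrative assumptions, not a description of any specific agent framework.

```python
# Hypothetical sketch of pre-approved autonomous action: the agent proposes a
# plan with its reasoning, waits for a human to approve it, and only then
# executes. Everything here (class names, steps) is illustrative.
from dataclasses import dataclass

@dataclass
class Plan:
    goal: str
    steps: list[str]
    reasoning: str                      # the explanation shown to the human

def propose_plan(goal: str) -> Plan:
    # In a real system this would come from a model; here it is hard-coded.
    return Plan(
        goal=goal,
        steps=["inventory exposed assets", "draft firewall rule", "apply rule"],
        reasoning="The asset inventory scopes the change; the draft rule is "
                  "reviewed before anything is applied.",
    )

def execute(plan: Plan) -> None:
    for step in plan.steps:
        print(f"executing: {step}")

def run_with_approval(goal: str) -> None:
    plan = propose_plan(goal)
    print(f"Goal: {plan.goal}")
    print("Proposed steps:", *plan.steps, sep="\n  - ")
    print("Why:", plan.reasoning)
    if input("Approve this plan? [y/N] ").strip().lower() == "y":
        execute(plan)                   # pre-approved autonomous action
    else:
        print("Plan rejected; nothing was executed.")

if __name__ == "__main__":
    run_with_approval("tighten access to the staging environment")
```

The design choice is that the explanation travels with the plan: the human approves the reasoning as well as the actions, which is where the extra investment in explainability starts to pay off.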