Check Point CTO on countering new threats with generative AI

Generative AI can strengthen cybersecurity and counter targeted and evolving threats. Image created by DALL·E 3.

In cybersecurity, organisations often have to fight fire with fire. When hackers began leveraging generative AI to enhance their attacks, security vendors and threat hunters responded in kind, harnessing the same technology to counter evolving threats.

Dorit Dor, Chief Technology Officer of Check Point Software Technologies, recognises that the threat landscape is only becoming broader and more complex. In a conversation with Frontier Enterprise, she shared insights into the company’s generative AI strategy and her perspective on the future of regional security.

Everyone’s talking about AI-enabled this and that. What exactly does AI-enabled mean, from Check Point’s perspective?


When people ask me about AI at Check Point, there are five main areas we focus on: the attacker landscape, the defender’s tools, the operationalisation of AI — covering security operations and tools like Copilot — defending against AI-based attacks, and data management and protection.

When we look at Check Point’s portfolio, our priority is defence. Our philosophy has always been centred on prevention — stopping attacks before they occur. Many others discuss AI in terms of detection and response, which is important, but there’s often talk of responses within minutes, which doesn’t fully address the demands of a real-time attack. We believe it’s essential to stop threats upfront, wherever possible. To that end, AI has been part of our ThreatCloud platform for over 15 years.

ThreatCloud combines everything we know about threats and prevention into an AI-powered system that collects real-time data from sensors and incoming requests. For instance, when a product attempts to access a domain or detects something suspicious like malware, this information is sent to ThreatCloud. Every day, ThreatCloud processes billions of queries on emails, URLs, files, mobile applications, and more. This data is used to train AI models to achieve a high level of accuracy. With that accuracy embedded into our products, malicious emails are blocked, suspicious users are stopped at the gateway, and harmful processes on endpoints are prevented from running. Our products — across endpoints, cloud, mobile, and email — work together, leveraging collective threat intelligence to determine security measures.
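To make the prevention flow described above concrete, here is a minimal, purely illustrative Python sketch of the query–verdict–block pattern: a product reports an indicator, a reputation service returns a verdict, and the product enforces it inline. The names (Telemetry, query_threat_intel, enforce) and the static blocklist are hypothetical stand-ins, not ThreatCloud’s actual API, which relies on AI models rather than fixed lists.

```python
# Illustrative sketch only: names and data below are hypothetical stand-ins,
# not Check Point's actual ThreatCloud API.
from dataclasses import dataclass


@dataclass
class Telemetry:
    """A single observation reported by a product (gateway, endpoint, email)."""
    source: str      # e.g. "email-gateway", "endpoint"
    indicator: str   # URL, file hash, domain, or mobile app ID
    kind: str        # "url", "file", "domain", ...


def query_threat_intel(event: Telemetry) -> str:
    """Stand-in for a cloud reputation lookup.

    In the real pipeline the indicator would be scored by models trained on
    billions of daily queries; here a static blocklist fakes the verdict.
    """
    known_bad = {"malicious.example.com", "44d88612fea8a8f36de82e1278abb02f"}
    return "block" if event.indicator in known_bad else "allow"


def enforce(event: Telemetry) -> None:
    """Apply the verdict inline, before the request is allowed through."""
    verdict = query_threat_intel(event)
    if verdict == "block":
        print(f"[{event.source}] prevented access to {event.indicator}")
    else:
        print(f"[{event.source}] allowed {event.indicator}")


if __name__ == "__main__":
    enforce(Telemetry("email-gateway", "malicious.example.com", "domain"))
    enforce(Telemetry("endpoint", "benign.example.org", "domain"))
```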

But this isn’t new for us. This approach has been part of our operations for over 15 years. What’s evolving now are some additional ways to strengthen our defences with AI. Today, AI helps us be more productive and effective when developing protections, enriching data, and triaging security events. For instance, our extended detection and response (XDR) and managed detection and response (MDR) are now AI-powered. This AI processes the vast amounts of data we receive from customers, triages it, and delivers more informed security decisions.

How does generative AI come into this picture?

Generative AI primarily enhances precision: it enriches the data within our engines, helping us make more informed decisions. In this case, our generative AI acts as a conversational engine. For example, within our XDR, it enables event hunting, correlates global information with those events, triages security issues, and even takes actions. Our AI understands APIs and can take direct action as needed.
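The enrich–triage–act loop described here can be pictured with a short, hypothetical sketch. The function names, severity labels, and action mapping are illustrative only, and a trivial rule stands in for the generative model; none of it reflects Check Point’s actual XDR API.

```python
# Hypothetical sketch of an AI-assisted triage loop; names and actions are
# illustrative, not Check Point's XDR API.
from typing import Callable, Dict


def correlate(event: Dict, global_intel: Dict[str, str]) -> Dict:
    """Enrich a raw event with whatever global intelligence exists for its indicator."""
    enriched = dict(event)
    enriched["reputation"] = global_intel.get(event["indicator"], "unknown")
    return enriched


def triage(event: Dict, classify: Callable[[Dict], str]) -> str:
    """Ask a model (here, any callable) to rate the enriched event."""
    return classify(event)  # expected to return "benign", "suspicious", or "malicious"


def act(event: Dict, severity: str) -> str:
    """Map a triage verdict to a concrete response action."""
    actions = {
        "malicious": f"isolate host {event['host']} and open an incident",
        "suspicious": f"flag {event['indicator']} for analyst review",
        "benign": "close event",
    }
    return actions.get(severity, "close event")


if __name__ == "__main__":
    intel = {"203.0.113.9": "known C2 infrastructure"}
    raw = {"host": "laptop-42", "indicator": "203.0.113.9"}
    enriched = correlate(raw, intel)
    # A trivial rule stands in for the generative model in this sketch.
    verdict = triage(enriched, lambda e: "malicious" if e["reputation"] != "unknown" else "benign")
    print(act(enriched, verdict))
```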

The next level of our generative AI use is in the form of a copilot within the Infinity platform. This copilot has learned from what we know — our documentation, APIs, log formats, policies — so a customer can ask questions such as, “Am I protected against a specific CVE (Common Vulnerabilities and Exposures)?”

To answer this, the AI must assess several factors. First, it needs to know what protections are in place for each CVE, whether it’s relevant for the endpoint, cloud, or compliance. Next, it evaluates which products you have installed, as not all may be in use. Then, it checks if your configuration of these products is set to block the CVE, since some configurations may leave certain elements open. The AI can also examine your logs to see if this event has occurred within your network or review our logs to determine if we’ve seen this threat in real-world traffic. 
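The reasoning steps just described can be laid out as a simple decision procedure. The sketch below is a minimal Python rendering of those steps under assumed data structures; the catalogue, field names, and sample data are hypothetical, not the Infinity copilot’s actual model or API.

```python
# Minimal sketch of the "Am I protected against this CVE?" reasoning steps.
# All data structures and names here are hypothetical.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Environment:
    installed_products: List[str]
    blocking_config: Dict[str, bool]   # product -> is the relevant protection set to prevent?
    local_log_hits: Dict[str, int]     # CVE ID -> occurrences seen in this network

# Step 1: which products carry a protection for each CVE.
PROTECTION_CATALOGUE = {"CVE-2024-0001": ["gateway", "endpoint"]}
# Step 5: whether the vendor has seen the CVE exploited in real-world traffic.
SEEN_IN_THE_WILD = {"CVE-2024-0001": True}


def am_i_protected(cve: str, env: Environment) -> str:
    relevant = PROTECTION_CATALOGUE.get(cve, [])
    if not relevant:
        return f"No protection is catalogued for {cve}."
    # Steps 2 and 3: installed products whose configuration actually blocks it.
    covering = [p for p in relevant
                if p in env.installed_products and env.blocking_config.get(p, False)]
    # Step 4: has this event already appeared in the customer's own logs?
    hits = env.local_log_hits.get(cve, 0)
    wild = "seen in real-world traffic" if SEEN_IN_THE_WILD.get(cve) else "not yet seen in the wild"
    if covering:
        return f"{cve} is blocked by {', '.join(covering)}; {hits} local sightings; {wild}."
    return f"{cve} is NOT blocked: enable the protection on {', '.join(relevant)}; {hits} local sightings; {wild}."


if __name__ == "__main__":
    env = Environment(installed_products=["gateway"],
                      blocking_config={"gateway": True},
                      local_log_hits={"CVE-2024-0001": 2})
    print(am_i_protected("CVE-2024-0001", env))
```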

All of these capabilities are built on the extensive training we’ve provided — on documentation, policy, API functions, threat data, and log analysis. This engine, now part of our platform, can respond to these complex inquiries, and the next step after answering this is proactive support.

We try to understand what you are protecting against and may suggest adding further layers of security if there are gaps in your configuration. The AI can also review policies, offering automated adjustments to better secure against specific threats. We’ve progressed from precise threat prevention and data-driven decision-making to an interactive, conversational mode with the customer, with parts of this approach becoming increasingly autonomous and automated to tailor security for the end user.

Is there a common security principle you follow when it comes to future and emerging technologies, such as IoT and quantum computing?

First, there’s no “security powder,” as I like to call it — no magical solution you can sprinkle on everything to secure it instantly. The principles of security remain the same: You need to map out risks and build in controls. With emerging technologies, however, cybersecurity is often considered only at the end, after everything else is done.

Dorit Dor, Chief Technology Officer, Check Point Software Technologies. Image courtesy of Check Point Software Technologies.

I was recently in Dubai at the World Economic Forum for the Global Future Council, which consists of different councils focused on future threats and developments, each with its own topic. I’m part of the Cybersecurity Council, where we’re working to raise awareness of the need to consider cybersecurity from the start when creating emerging technologies. The security solutions might look different between, say, 5G and IoT, but a security assessment is essential early in the development. There need to be built-in security features, plugins, or other ways to integrate security into the design itself. Additionally, there has to be collaboration across the ecosystem — between companies, vendors, and customers — to embed security into the framework.

The Global Future Council is not like my regular day job of selling products to customers. It’s not about implementing something in an enterprise tomorrow; it’s more conceptual, like how to measure the security of the world, a region, or a country. There are no simple answers, but we’re striving to improve the indicators and measurements we use, such as defining what good security metrics and effective APIs might look like. So, the work in the Global Future Council is largely forward-looking.

What advice do you give to governments, given that cybersecurity awareness and cyber resilience vary from one government to another?

We should move away from high-level concerns and focus on immediate, practical issues, because there’s so much to do in terms of cyber resilience alone. Countries that are less developed in cybersecurity could let more advanced nations handle emerging issues and focus instead on the existing threat landscape — on stopping current threats. These threats might include ransomware or various risks to the supply chain. The priority should be setting appropriate security standards for organisations and implementing security reporting. For example, there are established security reporting requirements, such as those from the SEC and other global regulations, that governments can adopt rather than reinventing the wheel.

If I were advising a less developed country, I’d suggest leveraging security frameworks already used globally and packaging them in a way that’s accessible and manageable. It’s also crucial to maintain an improvement mindset — identifying the cybersecurity vector that poses the biggest risk and setting measurable goals to address it. Governments could also implement requirements, such as only purchasing from organisations that meet specific security standards. This approach would help elevate the security level across the industry.

How do you see the future of security panning out, especially now that generative AI is becoming part of your core culture, and deepfakes and other AI threats are becoming more prominent?

I’d classify deepfakes under AI-based attacks, because attackers are obviously using AI tools to create more targeted attacks that increase their success rate. The first step is to acknowledge this reality, though it’s not entirely new — it’s simply a more advanced class of attacks, which means we need to focus even more on prevention.

In an upcoming report, we suggest that one of the most widely used malware strains was likely developed with AI. However, this doesn’t make it fundamentally different; it’s still a mix of existing capabilities, just enhanced by AI.

As for deepfakes, the risk lies in how effectively AI learns social behaviours, making these fakes hard to detect. One possible solution could be a unique digital signature for videos, allowing creators to verify their authenticity. For instance, on a platform like Facebook, a green checkmark could appear on videos that I’ve signed, confirming they’re genuine. If someone is unsure, there could even be a dialog option allowing them to ask directly if it’s truly me in the video. Education is essential here, too; people need to understand that they shouldn’t easily trust everything they see online.
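One way to picture the signature idea is a plain public-key signature over a hash of the video file, which a platform could verify before showing a “verified” badge. Below is a minimal Python sketch using the widely available cryptography library (Ed25519). It is a generic illustration of the concept, not a description of how Facebook or any other platform would actually implement such a scheme.

```python
# Generic illustration of signing a video file and verifying it later;
# not any specific platform's scheme.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def digest(video_bytes: bytes) -> bytes:
    """Hash the video so the signature covers its exact contents."""
    return hashlib.sha256(video_bytes).digest()


def sign_video(private_key: Ed25519PrivateKey, video_bytes: bytes) -> bytes:
    """Creator signs the hash of the original video."""
    return private_key.sign(digest(video_bytes))


def verify_video(public_key: Ed25519PublicKey, video_bytes: bytes, signature: bytes) -> bool:
    """Return True only if the signature matches this exact file."""
    try:
        public_key.verify(signature, digest(video_bytes))
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    creator_key = Ed25519PrivateKey.generate()
    original = b"...raw video bytes..."
    sig = sign_video(creator_key, original)

    public_key = creator_key.public_key()
    print(verify_video(public_key, original, sig))          # True -> show verified badge
    print(verify_video(public_key, original + b"x", sig))   # False -> tampered or deepfake
```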