CrowdStrike’s strategy for AI and evolving threats

Adversaries are increasingly targeting cloud environments, making cloud security a critical focus for organisations. Image courtesy of Growtika.

Cybersecurity has become impossibly complex, and organisations worldwide are grappling with the increasing frequency of attacks. The rapid advancement of technology has only heightened the sophistication of these threats, creating an urgent need for stronger security measures.

CrowdStrike, one of the major players in the space, is responding by addressing both current and emerging security challenges, with a focus on AI-driven solutions and evolving cyberthreats.

Frontier Enterprise caught up with Elia Zaitsev, CrowdStrike’s Chief Technology Officer, to discuss Southeast Asia’s unique security challenges, the role of AI in cybersecurity, and strategies for navigating these risks in a rapidly changing landscape.


Keeping up with cyberthreats

Zaitsev joined CrowdStrike in 2013 as its first sales engineer, a mere two years after the company was founded. That early start gave him, as he puts it, a “front-row seat” to CrowdStrike’s evolution. Since then, he’s seen major changes within the company and across the cybersecurity sector.

“The threat landscape has transformed significantly over the past decade,” Zaitsev explained. “Adversaries are leveraging technological innovations to break into organisations at record speeds, and they are increasingly shifting their focus to cloud and identity-based attacks.”

Legacy security solutions are becoming less effective in addressing these modern threats, and Zaitsev pointed to the need for new approaches to keep pace with the evolving challenges posed by adversaries.

Looming dangers in SEA

One of the factors that influence the threat landscape is geopolitics. Southeast Asia’s proximity to China, for instance, often puts it in the crosshairs of prolific threat actors.

“According to CrowdStrike’s 2024 Global Threat Report, China-nexus adversaries increasingly exploited third-party relationships via supply chain compromises in 2023 and have focused on elections in their regional sphere of influence for significant information operations campaigns,” Zaitsev shared.

Additionally, small and midsize enterprises (SMEs), which play a major role in Southeast Asia’s economy, are being targeted more frequently because their security resources remain limited even as they adopt cloud and AI tools.

These businesses, Zaitsev noted, are struggling with resource constraints, particularly in terms of technology and cybersecurity talent. As the region grapples with a growing cybersecurity skills gap, SMEs are finding it increasingly difficult and expensive to attract and retain top cybersecurity professionals.

“Security teams may struggle with the nuances of cloud-based attacks and attacks on AI services deployed in the cloud. These blind spots and new technologies open the door to increased risk and potential compromise,” Zaitsev pointed out. “Unfortunately, for many smaller enterprises, a successful breach could have magnified consequences for the business.”

In the broader context, the threat landscape has shifted significantly, with adversaries increasingly exploiting valid credentials to breach cloud environments. Zaitsev observed that these attackers are moving laterally across endpoints, often with hard-to-detect, cross-domain attacks.

New threats

“The cloud will be a key battleground for adversaries,” Zaitsev predicted. “Our 2024 Global Threat Report found a 75% increase in successful cloud attacks in 2023. Adversaries are increasingly exploiting valid credentials to breach cloud environments, with identity-based and social engineering attacks seeing a sharp rise in success rates.”

Zaitsev also pointed to the rapid growth of cloud computing, the increasing pace of DevOps, and the rise of no-code and low-code development platforms as factors accelerating these trends. Organisations expanding their digital footprint must ensure they are protected across their entire network, not only for their own sake but also for the supply chains they belong to.

The growth in AI use cases and adoption is also introducing new attack surfaces.

“CrowdStrike expects threat actors to shift their attention to AI models and systems as the newest threat vector to target organisations,” Zaitsev said. This shift underscores the importance of securing AI systems in the cloud to mitigate these emerging risks.

Navigating regulatory compliance

Another consequence of more frequent and sophisticated security threats is that governments step in with more laws governing data protection, privacy, and cybersecurity practices.

Elia Zaitsev, Chief Technology Officer, CrowdStrike. Image courtesy of CrowdStrike.

The challenge is that as regulatory oversight increases, companies must find better ways to comply with these laws while maintaining robust security. Zaitsev stressed the importance of having a complete grasp of the threat landscape.

“Organisations must also have a comprehensive view of their IT and security estate to monitor for breaches or vulnerable, out-of-date systems, and technologies that need patching,” said Zaitsev.

To drive accountability, Zaitsev recommends establishing a governance structure within cybersecurity programs. This includes appointing a dedicated person or team to manage all aspects of compliance and security.

“Testing the strength of cybersecurity posture, particularly incident response plans and programs, through tabletop exercises and drills, is crucial,” he continued.

AI security misconceptions

A commonly held belief about AI security is that security teams should block the use of AI tools entirely due to perceived data or privacy risks. Zaitsev thinks this approach could create unintended consequences.

“Organisations must recognise that employees will seek out these AI tools. By entirely restricting corporate access, they risk creating a ‘shadow IT’ scenario, where employees might use consumer versions outside of company oversight,” he cautioned.

Instead, Zaitsev suggests using approved AI options that give employers both visibility and control.

“Organisations should prioritise tools that contain guardrails, controls, and validation around the entire system that the end user interacts with, as well as between each AI agent as they interact with each other,” he explained.

This design approach, he added, also insulates users from the risks and weaknesses of interacting directly with exposed, or ‘naked,’ large language models (LLMs), such as prompt injection attacks or hallucinations.
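To make the “guardrails around the system” idea concrete, here is a minimal, generic sketch of input and output validation wrapped around an LLM call. This is an illustration of the general pattern, not CrowdStrike’s implementation; the pattern list, function names, and topic check are all hypothetical and deliberately simplistic.

```python
import re

# Hypothetical injection patterns an input guardrail might screen for.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard your system prompt",
]

def validate_input(prompt: str) -> str:
    """Reject prompts that look like injection attempts before they reach the model."""
    lowered = prompt.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError("Prompt rejected by input guardrail")
    return prompt

def validate_output(answer: str, allowed_topics: set[str]) -> str:
    """Cheap post-hoc check: pass through on-topic answers, withhold everything else."""
    if not any(topic in answer.lower() for topic in allowed_topics):
        return "[Answer withheld: failed output validation]"
    return answer

def guarded_query(prompt: str, model, allowed_topics: set[str]) -> str:
    """Users never interact with the 'naked' model; every call passes both guardrails."""
    safe_prompt = validate_input(prompt)
    raw_answer = model(safe_prompt)
    return validate_output(raw_answer, allowed_topics)
```

Real guardrail layers are far more sophisticated (classifier models, allow-lists, human review), but the shape is the same: validation sits on both sides of the model, insulating the end user from direct LLM exposure.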

Emerging solutions

Since 2023, CrowdStrike has also thrown its hat into the generative AI ring. Its entry is Charlotte AI, a generative AI assistant built to bolster cybersecurity by reducing the time needed for various tasks.

“Charlotte AI uses a multi-model architecture that processes trillions of daily events collected by the Falcon platform (CrowdStrike’s endpoint detection and response system), and integrates threat intelligence,” Zaitsev revealed.

Like other generative AI tools, Charlotte AI lets users ask questions in plain language and promptly receive practical answers. Zaitsev asserts that the AI can help security analysts understand their environment, investigate attacks, write technical queries, or receive AI-driven recommendations for reducing risk.

Early adopters reported that Charlotte AI helps them answer questions about their security posture 75% faster, write queries 57% faster, and hunt down attackers 52% more efficiently.

Another key development is Falcon for IT, which Zaitsev says automates complex use cases across security and IT using native generative AI workflows and the single-agent architecture of the Falcon platform.

“Threat actors often exploit misconfigurations or vulnerabilities that leave systems open to attack. Traditionally, IT teams fix these issues, often working in silos separate from security with different tools and visibility,” Zaitsev noted.

He added that Falcon for IT addresses this problem by ensuring cross-team visibility and collaboration for better security outcomes.

Technology challenges and goals

Innovating in the ongoing battle against cyberthreats presents unique challenges, especially with the rise of generative AI.

Zaitsev remarked that one of the biggest hurdles in designing Charlotte AI was navigating the rapidly growing landscape of AI models, which vary in their strengths and applications. Each model differs in speed, accuracy, training data, computational demands, and the risks they pose to end users.

“Selecting just one model, or one family of models, for Charlotte AI could force users to accept trade-offs across these variables, which was unacceptable to our team,” Zaitsev explained.

To address this, CrowdStrike designed Charlotte AI’s architecture to use multiple AI agents, with the aim of performing tasks more accurately and efficiently. When a user asks a question, Charlotte AI breaks it down into subtasks and directs them to the most appropriate AI agents for each part. This design reportedly protects users from risks like prompt injection attacks and hallucinations by adding safeguards between the AI models and the end user.
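The routing pattern described above can be sketched in a few lines. This is a deliberately naive illustration of the general multi-agent decomposition idea, not Charlotte AI’s actual architecture; the agent names and keyword-based routing are invented for the example.

```python
from typing import Callable

# Each "agent" is modelled as a plain function; real agents would be
# separate model calls with their own prompts and safeguards.
Agent = Callable[[str], str]

AGENTS: dict[str, Agent] = {
    "query_writer": lambda task: f"generated query for: {task}",
    "threat_intel": lambda task: f"intel summary for: {task}",
}

def decompose(question: str) -> list[tuple[str, str]]:
    """Naive decomposition: map recognised keywords to (agent, subtask) pairs."""
    lowered = question.lower()
    subtasks = []
    if "query" in lowered:
        subtasks.append(("query_writer", question))
    if "threat" in lowered or "adversary" in lowered:
        subtasks.append(("threat_intel", question))
    return subtasks or [("threat_intel", question)]  # fall back to a default agent

def answer(question: str) -> list[str]:
    """Route each subtask to the agent best suited for it and collect the results."""
    return [AGENTS[name](task) for name, task in decompose(question)]
```

In a production system, the decomposition step would itself be model-driven, and validation layers would sit between agents and between the final answer and the user, which is where the protection against prompt injection and hallucinations comes in.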

Looking ahead, Zaitsev said that CrowdStrike plans to continue refining Charlotte AI to support security analysts and improve workflows as cyberthreats continue to evolve.