Rethinking security for ransomware as a service

Chaim Mazal, Chief Security Officer, Gigamon. Image courtesy of Gigamon.

Attackers are constantly updating their methods and strategies to breach defences, which means enterprises cannot rest easy. With the rise of AI-powered threats and ransomware as a service, the likelihood of an attack has increased exponentially.

As a result, security teams must shift their mindset from “Will there be an attack?” to “Where will the attack be?” noted Chaim Mazal, Chief Security Officer of Gigamon.

“Traditional cybersecurity hygiene such as two-factor authentication and regularly updating passwords is simply insufficient against AI-powered malware,” he said.

Weak points

The most significant vulnerability lies in encrypted network traffic, where 93% of malware currently hides, Mazal pointed out.

“With 70% of IT and security leaders admitting to allowing encrypted traffic to flow unanalysed, attackers exploit these blind spots through sophisticated techniques like ‘living off the land,’ blending seamlessly into normal network activities and potentially remaining undetected for extended periods,” he explained.

According to Gigamon’s 2024 Hybrid Cloud Security Survey, 83% of Singapore respondents consider gaining visibility into encrypted traffic a priority.

To close these gaps, Mazal believes that organisations must observe their data in motion, in real time.

“Importantly, this includes the ability to inspect encrypted traffic without compromising privacy or compliance. Deep observability allows security teams to proactively monitor not just what enters and leaves the organisation, but also what occurs within — including lateral (East-West) traffic. This visibility is crucial for detecting anomalies and stopping attackers before they can exfiltrate sensitive data, shifting security from reactive to predictive,” he noted.
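The lateral (East-West) monitoring Mazal describes can be illustrated with a minimal sketch: baseline which internal host pairs normally communicate, then flag flows between hosts that have never talked before. The hosts, ports, and baselining approach here are illustrative assumptions, not a description of any vendor's actual product.

```python
from collections import defaultdict

def build_baseline(flows):
    """Record which internal host-to-host pairs are normally seen."""
    seen = defaultdict(set)
    for src, dst, _port in flows:
        seen[src].add(dst)
    return seen

def flag_lateral_movement(baseline, new_flows):
    """Flag East-West flows between hosts with no prior history."""
    alerts = []
    for src, dst, port in new_flows:
        if dst not in baseline.get(src, set()):
            alerts.append((src, dst, port))
    return alerts

# Illustrative flow records: (source, destination, destination port)
history = [
    ("10.0.1.5", "10.0.2.8", 443),
    ("10.0.1.5", "10.0.2.9", 443),
]
today = [
    ("10.0.1.5", "10.0.2.8", 443),   # known pair: ignored
    ("10.0.1.5", "10.0.3.40", 445),  # first-ever SMB connection: flagged
]

baseline = build_baseline(history)
alerts = flag_lateral_movement(baseline, today)
print(alerts)  # [('10.0.1.5', '10.0.3.40', 445)]
```

Real deployments would baseline over far richer telemetry than host pairs, but the principle is the same: anomalies inside the network, not just at its edge, are what reveal lateral movement.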

The AI dilemma

Just because attackers use AI to launch threats doesn’t mean enterprises should rely on AI to do everything. According to Mazal, assuming it can is a common misconception that often leads to serious consequences.

“Defenders put themselves at risk when they see AI as a replacement for human expertise, rather than a tool to enhance it. ‘Set-and-forget’ AI solutions without human oversight and contextual understanding create dangerous security gaps, particularly when novel attack patterns emerge that weren’t part of the AI’s training data. This is when generative AI becomes a double-edged sword that favours attackers,” he said.

In Singapore, 46% of security and IT leaders are seeing a rise in AI-enabled scams. For Mazal, this is unsurprising.

“If defenders aren’t vigilant or fail to recognise AI’s limits, attackers will continue to create sophisticated malware like WormGPT and BlackMamba, which can infiltrate infrastructure with unprecedented effectiveness,” he emphasised.

Additionally, open-source LLMs present a related challenge, as organisations try to balance innovation and risk in AI adoption.

Mazal believes open-source models raise significant data security concerns, particularly because individuals and organisations often use such tools for convenience, without considering the privacy risks to both personal and corporate data.

“The open-source nature of these models means data is frequently transmitted, potentially exposing sensitive information or encrypted traffic. Ultimately, AI must be trained in a controlled, private environment to maintain data integrity. Ironically, in this case, the ‘human element’ risk is greater than the AI risk,” he said.

Mazal warned that without full visibility into all network traffic, organisations remain vulnerable. He added that the lack of deep observability doesn’t just pose a cybersecurity risk; it also threatens business operations, as compromised data can directly impact decision-making.

Not enough

Security teams that focus solely on endpoints are making a critical mistake, because some of the most dangerous threats are hidden in network traffic, Mazal said.

Encrypted network traffic remains a major blind spot for many security teams — and a favoured hiding place for attackers. Image created by DALL·E 3.

Hence, traditional endpoint-focused strategies — and sole reliance on metrics, events, logs and traces (MELT) — leave high-risk blind spots once an endpoint is compromised.

“One of the most effective ways to improve network-layer detection is by implementing a security data lake that consolidates previously siloed data sources, eliminating blind spots. Data lakes have long been used in other industries, but are only now being adopted to address growing security concerns, as they allow teams to analyse previously siloed data more effectively,” he noted.

He added that visibility into encrypted traffic is especially critical, as cybercriminals often move laterally through the network without detection. By bringing together available data into a unified system, organisations can improve real-time visibility across the network and surface abnormal activity or hidden threats that might otherwise go unnoticed.

Additionally, the shift to hybrid cloud environments makes visibility even more difficult, as all cloud services encrypt customer data, he said. While encryption protects data from cybercriminals, it also limits security teams’ ability to monitor threats in real time.

“Security data lakes can play a key role in addressing this challenge, helping organisations analyse encrypted traffic more effectively and improve their response strategies to cyberattacks,” Mazal said.
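The core idea behind a security data lake, as Mazal describes it, is normalising records from previously siloed sources into one shared schema so they can be queried together. A minimal sketch, with source names and field mappings that are purely illustrative assumptions:

```python
def normalise(source, event):
    """Map a source-specific record onto a shared schema (illustrative)."""
    if source == "firewall":
        return {"ts": event["time"], "host": event["src_ip"], "action": event["verdict"]}
    if source == "endpoint":
        return {"ts": event["timestamp"], "host": event["ip"], "action": event["event"]}
    raise ValueError(f"unknown source: {source}")

# Two events about the same host, from tools that never shared a format
lake = [
    normalise("firewall", {"time": 100, "src_ip": "10.0.1.5", "verdict": "blocked"}),
    normalise("endpoint", {"timestamp": 101, "ip": "10.0.1.5", "event": "process_start"}),
]

# With one schema, a single query now spans the formerly siloed sources.
activity = [e for e in lake if e["host"] == "10.0.1.5"]
print(len(activity))  # 2
```

Production data lakes do this at scale with columnar storage and streaming ingestion, but the normalisation step is what eliminates the blind spots between tools.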

Further challenges

Mazal also identified outdated workflow assumptions and fragmented tooling as persistent cybersecurity challenges. He said traditional approaches are hindered by linear response models that cannot keep pace with modern ransomware attacks, which often rely on lateral movement to bypass defences.

In Singapore, 79% of security and IT leaders report being overwhelmed by the sheer number of issues flagged across an expanding range of tools and assets — a sign that today’s security stacks remain fragmented and hard to manage.

Moreover, siloed tools create inefficiencies. As workloads spread across hybrid and multi-cloud environments, security postures become more complex, opening up dangerous windows of vulnerability, Mazal cautioned.

Signature-based detection methods also struggle to keep up with the speed and complexity of ransomware. Organisations can explore detection strategies that incorporate automation, behavioural analysis, and real-time insights from network activity, he suggested.

Mazal asserted that many organisations still hold a fundamental misconception about securing hybrid cloud infrastructure — viewing security and observability as separate, siloed functions. In reality, he said, they must work together to provide complete visibility across multi-cloud infrastructures.

“Combining MELT data with network-derived telemetry gives security teams a broader view of their digital environment, supporting real-time threat detection and faster incident response,” he noted.
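Combining MELT data with network-derived telemetry, as in the quote above, amounts to correlating log events with network flows by host and time window, so an endpoint alert can be checked against what the network actually saw. A hedged sketch, with all records, field names, and the 60-second window chosen purely for illustration:

```python
WINDOW = 60  # seconds either side of a log event (assumed value)

def correlate(log_events, flows):
    """Pair each log event with flows from the same host near the same time."""
    matches = []
    for ev in log_events:
        related = [
            f for f in flows
            if f["src"] == ev["host"] and abs(f["ts"] - ev["ts"]) <= WINDOW
        ]
        if related:
            matches.append((ev, related))
    return matches

logs = [{"host": "10.0.1.5", "ts": 1000, "msg": "suspicious process spawned"}]
flows = [
    {"src": "10.0.1.5", "dst": "203.0.113.7", "ts": 1020},  # within the window
    {"src": "10.0.1.5", "dst": "10.0.2.8", "ts": 5000},     # too far away in time
]

hits = correlate(logs, flows)
print(len(hits[0][1]))  # prints 1: one flow corroborates the endpoint alert
```

The payoff is context: an endpoint log line alone says a process started, while the correlated flow shows that the same host immediately reached out to an external address.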

Mazal argued that adopting a zero-trust architecture (ZTA), which challenges traditional perimeter-based models, is a critical step in this transformation. In Singapore, 84% of respondents believe deep observability is key to successful ZTA implementation, underscoring its importance in strengthening overall security.

“Enhancing threat detection starts with complete visibility into data in motion. When organisations combine network-derived telemetry with MELT data, they can improve observability, reduce blind spots, and detect threats that may otherwise go unnoticed. This approach helps security teams respond more effectively to complex challenges across hybrid cloud environments,” he remarked.