Analysing AI-driven cyberthreats through an Olympic lens

The Olympic and Paralympic Games showcase the world’s best athletes. However, these highly anticipated sporting fixtures have also become a focal point for another reason: a surge in cybercrime.

Individuals and organisations continue to ramp up their AI adoption, further propelled by the arrival of generative AI tools like ChatGPT and Microsoft Copilot. These tools are often described as ‘search engines on steroids’ because of the speed at which they process large volumes of data. It’s no surprise that threat actors are now targeting the new attack surfaces created by generative AI adoption, which offer more opportunities to infiltrate organisations and exfiltrate data.

A not-so-fun fact: the number of reported incidents during the Paris Olympics was nearly ten times greater than at the Tokyo event, where a staggering 450 million individual cyberattacks were reported. Based on these numbers, that equates to roughly four billion individual cyberattacks during the Paris event, a cybersecurity risk unprecedented in scale. As technology and security leaders, we need to be ready.

Understanding the Olympic cybersecurity risk and why this matters

Identity-based threats and email compromise are already pressing concerns for security professionals; however, the Paris Olympics saw these threats reach new levels. In the run-up to the Games, cybercriminals were expected to use generative AI to create everything from fake travel documents and event tickets to fraudulent accommodation and holiday offers, luring unsuspecting individuals.

Many employees use their work devices for personal tasks such as booking flights or event tickets, and in doing so could unknowingly put their organisation’s security at risk. Further complicating matters, many employees use Microsoft 365 collaborative tools on their mobile devices, so any instance of business email compromise (BEC) or phishing has the potential to impact the entire enterprise ecosystem.

What makes these Olympic-branded attacks harder to detect is the growing availability of malicious generative AI tools on the dark web. These tools allow users, even those without technical skills, to create and polish phishing emails at scale, with no need to write a malicious macro or even register an account. They provide step-by-step instructions and multiple suggestions generated by large language models to craft convincing BEC messages that appear authentic. The widespread use of such tools poses a significant risk for large-scale events like the Olympic Games if not properly contained.
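While AI-polished lures read fluently, many still carry structural tells that defenders can check for automatically. The sketch below is a minimal Python illustration that flags a few classic BEC indicators in a raw email; the trusted domains, impersonated brand terms, and lure keywords are hypothetical placeholders, and a production filter would combine far richer signals such as SPF/DKIM/DMARC results, sender history, and trained classifiers.

```python
from email import message_from_string
from email.utils import parseaddr

# Illustrative values only; a real deployment would use its own domain
# allow-list, brand terms, and lure vocabulary.
TRUSTED_DOMAINS = {"example-corp.com"}
IMPERSONATED_BRANDS = {"olympics", "ticketing", "it support", "payroll"}
LURE_KEYWORDS = {"olympic", "ticket", "urgent", "verify", "prize"}

def bec_indicators(raw_message: str) -> list[str]:
    """Return human-readable phishing/BEC indicators found in a raw email."""
    msg = message_from_string(raw_message)
    findings: list[str] = []

    display_name, address = parseaddr(msg.get("From", ""))
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""

    # A familiar-looking display name paired with an untrusted sending domain.
    if domain and domain not in TRUSTED_DOMAINS:
        if any(brand in display_name.lower() for brand in IMPERSONATED_BRANDS):
            findings.append(f"'{display_name}' impersonation from external domain {domain}")

    # A Reply-To that diverts responses away from the apparent sender.
    _, reply_to = parseaddr(msg.get("Reply-To", ""))
    if reply_to and reply_to.lower() != address.lower():
        findings.append(f"Reply-To ({reply_to}) differs from From ({address})")

    # Event-themed lure keywords in the subject line.
    subject = (msg.get("Subject") or "").lower()
    hits = sorted(k for k in LURE_KEYWORDS if k in subject)
    if hits:
        findings.append("Lure keywords in subject: " + ", ".join(hits))

    return findings

if __name__ == "__main__":
    sample = (
        "From: Paris Olympics Ticketing <no-reply@tickets-0lympics.win>\n"
        "Reply-To: claims@different-domain.example\n"
        "Subject: URGENT: verify your Olympic ticket\n\n"
        "Click here to confirm your seat."
    )
    for finding in bec_indicators(sample):
        print("-", finding)
```

Heuristics like these catch the cheap, mass-produced lures; the harder, AI-tailored messages are exactly why the layered detection discussed below matters.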

Strengthening threat detection to counter lateral movement

Despite advancements in technologies and AI, one thing remains constant: the human element. Humans are fallible, and threat actors know this, frequently exploiting that weakness through phishing and social engineering campaigns to gain a foothold in their victim’s network.

While many breaches can be prevented with basic cyber hygiene, most organisations continue to invest in protecting their network perimeter rather than in the security controls that can effect positive change against what attackers rely on once inside: lateral movement.

CISOs should consider adopting a layered approach that includes not only preventative measures and monitoring of known behaviours, but also the ability to identify and respond to emerging threats. This requires visibility, contextual understanding, and robust controls. AI-based detection techniques can aid in spotting unknown threats and identifying attackers employing new, evasive methods.
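To make the detection point concrete, here is a minimal sketch, assuming scikit-learn and a toy feature set derived from authentication logs, of how unsupervised anomaly detection can surface behaviour consistent with lateral movement. The features, training data, and thresholds are illustrative assumptions rather than a production design.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-user, per-session features derived from authentication logs:
# [logon_hour, distinct_hosts_accessed, failed_logons]. Real pipelines engineer
# far richer behavioural features than these three.
rng = np.random.default_rng(0)
baseline = np.column_stack([
    rng.integers(8, 18, 500),   # business-hours logons
    rng.integers(1, 4, 500),    # a handful of usual hosts
    rng.integers(0, 2, 500),    # the occasional failed attempt
])

# Train on baseline behaviour; contamination is the assumed outlier rate.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A 3 a.m. session fanning out to 25 hosts with repeated failures resembles
# credential abuse and lateral movement, so it should score as anomalous.
suspicious = np.array([[3, 25, 6]])
print(model.predict(suspicious))            # -1 => flagged as an outlier
print(model.decision_function(suspicious))  # lower scores are more anomalous
```

The value of this approach is that nothing about the suspicious session needs to match a known signature; it is flagged simply because it deviates from learned baseline behaviour, which is precisely the gap that new, evasive methods exploit.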

Promoting safer employee behaviour using relatable threat scenarios

Effective cybersecurity awareness campaigns consider the psychological aspect of human behaviour. They aim to engage users by addressing cognitive biases, employing behavioural psychology principles, and using relatable examples to promote safer online practices.

For example, simply reminding employees about the threats posed by generative AI may not be sufficient to create the desired awareness and behaviour change. However, providing context and real-life examples can have a much greater impact. Employers could use examples like this in their messaging:

“There have been numerous cases recently where individuals fell victim to Olympic-related scams such as phishing emails or other fraudulent activities. Often, this occurred while using their work devices, which subsequently exposed their workplace to a cyberthreat. It could happen to you next time, so please stay vigilant and take preventive steps.”

By training employees, users, and customers to recognise these biases and develop strategies to mitigate their effects, cybersecurity professionals can help people make more accurate judgements and decisions, ultimately improving the security and resilience of the organisation’s digital assets.

The road to success: battling AI-powered cyberthreats in a generative AI era

The Paris Olympics may have been a battle for sporting dominance, but AI was at the heart of the security battle. We now live in a world where generative AI tools are widely available, and cyberattackers are developing AI-based capabilities to commit crimes faster, more efficiently, and with minimal skill required.

Taking the necessary steps to defend your organisation against the growing threat of AI-powered attacks can help prevent costly long-term security breaches, protect against evolving threats, and ensure that we can continue to enjoy and celebrate significant events like the Olympic Games.