How to harness AI to advance cybersecurity

Organisations are engaged in a high-stakes chess match against cybercriminals who are using AI. While AI offers immense potential to drive innovation and efficiency across industries, it also introduces a new breed of cyberthreats. According to Dell Technologies’ Innovation Catalyst Research, 75% of respondents in Singapore say they have been impacted by a security attack in the past 12 months, and 44% cited data privacy and cybersecurity concerns as challenges they face in driving innovation.

Generative-AI-enabled attacks present unique challenges for organisations. Automated phishing campaigns are becoming more sophisticated, and generative AI is enabling scammers to better mimic human behaviour. In Singapore, malicious actors have used the technology to enhance attacks, including creating deepfake scams that bypass biometric authentication. On a software level, we’re also seeing autonomous malware that adapts and evolves to evade detection. How should organisations respond?

Strengthening security hygiene for AI adoption

While there’s no ‘silver bullet,’ good security hygiene is essential, especially as organisations accelerate AI adoption.

It starts with ensuring that the environment and IT estate are secure by design, from product development through to deployment. Embedding security features such as multi-factor authentication and role-based access controls adds another layer to minimise vulnerabilities, while continuous monitoring is critical to detect and respond to attacks.
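To make the multi-factor idea concrete, here is a minimal sketch in Python. It is illustrative only: the code derivation is simplified (real systems follow RFC 6238 TOTP), and the secret and function names are hypothetical.

```python
import hashlib, hmac, time

# Sketch of multi-factor authentication: a login succeeds only when BOTH a
# password check and a time-based one-time code pass.

SECRET = b"per-user-shared-secret"  # illustrative; never hard-code in practice

def one_time_code(secret, timestep=30, now=None):
    # Derive a short code from the current 30-second window
    counter = int((now if now is not None else time.time()) // timestep)
    digest = hmac.new(secret, str(counter).encode(), hashlib.sha256).hexdigest()
    return digest[:6]  # the code the user reads from their device

def login(password_ok, submitted_code, now=None):
    # Both factors must pass; either alone is insufficient
    return password_ok and hmac.compare_digest(
        submitted_code, one_time_code(SECRET, now=now))

t = 1_700_000_000  # fixed timestamp so the example is deterministic
code = one_time_code(SECRET, now=t)
print(login(True, code, now=t))      # both factors valid
print(login(True, "000000", now=t))  # wrong second factor: denied
```

The point is that compromising one factor (a phished password, say) is not enough on its own, which is why MFA removes a whole class of vulnerabilities.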

Logging and monitoring tools are also crucial. Security professionals rely on data from these tools to identify behavioural outliers that could pose risks to the organisation.
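As a toy illustration of how monitoring data surfaces behavioural outliers, a simple statistical baseline can flag a user whose activity deviates sharply from their own history. Real tooling uses far richer models; the data and threshold here are made up.

```python
from statistics import mean, stdev

def flag_outliers(daily_logins, threshold=3.0):
    """Flag users whose latest daily login count is more than `threshold`
    standard deviations above their own historical mean."""
    flagged = []
    for user, counts in daily_logins.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) < 2:
            continue  # not enough history to form a baseline
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (latest - mu) / sigma > threshold:
            flagged.append(user)
    return flagged

logs = {
    "alice": [10, 12, 11, 9, 10, 11],  # stable behaviour
    "bob":   [8, 9, 10, 9, 8, 95],     # sudden spike in activity
}
print(flag_outliers(logs))  # ['bob']
```

The flagged account is not proof of compromise, but it is exactly the kind of signal a security team would investigate.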

In the event of a security incident, having a recovery plan in place is vital to restoring operations securely and efficiently, reducing disruption.

Organisations are increasingly adopting zero-trust architectures to fortify their environments. This approach operates on the principle that no entity within or outside the network is trusted by default, and verification is required to access resources on the network. Implementing zero trust effectively reduces the risk of cyberattacks by allowing only verified and necessary activity.
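The "never trust, always verify" principle can be sketched in a few lines: every request must carry a verified identity and an explicit grant for the resource it targets, regardless of where it originates. The token and permission stores below are stand-ins for a real identity provider and policy engine.

```python
# Zero-trust request checking, illustrative only: no request is trusted by
# default, even if it comes from "inside" the network.

VALID_TOKENS = {"token-abc": "alice"}          # identity provider stand-in
PERMISSIONS = {("alice", "payroll-db"): True}  # explicit per-resource grants

def authorize(token, resource):
    user = VALID_TOKENS.get(token)
    if user is None:
        return False  # unverified identity: deny
    # No implicit trust: access requires an explicit grant for this resource
    return PERMISSIONS.get((user, resource), False)

print(authorize("token-abc", "payroll-db"))  # verified and granted
print(authorize("token-abc", "hr-db"))       # verified, but no grant: denied
print(authorize("token-xyz", "payroll-db"))  # identity not verified: denied
```

Because the default answer is "deny", only verified and necessary activity gets through, which is the property the architecture is designed to enforce.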

Implementing AI requires strong control over enterprise data, especially for AI systems leveraging the public cloud. Robust data security and governance are prerequisites for a comprehensive AI security strategy.

The power of security enabled by AI

Once a strong security foundation is in place, organisations can embrace the very technology that threat actors use against them: AI. Adopting AI-enabled security solutions can help organisations build cyber resilience and stay ahead of cybercriminals.


Security enabled by AI refers to AI-powered solutions that organisations can use both proactively and reactively to identify and respond to threats. Equipping security teams with tools that offer machine learning, self-learning, and adaptive defence capabilities strengthens the overall security posture across the organisation.

In terms of proactive defence, AI can continuously monitor network traffic, user behaviour, and system logs to identify anomalies and suspicious patterns that may indicate malicious activity. This early detection and prevention capability is crucial for minimising potential damage from cyberattacks. AI can learn and adapt when detecting new challenges, helping IT and security teams outmanoeuvre attackers who refine their tactics and exploit new vulnerabilities. It also enables businesses to create a bespoke security response that is effective against specific threats in their industry.
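The "learn and adapt" behaviour described above can be sketched with a toy detector that keeps an exponentially weighted baseline of a traffic metric, flags large deviations, and then folds each observation back into the baseline so that "normal" evolves over time. Production systems use far more sophisticated models; the numbers here are invented.

```python
class AdaptiveDetector:
    """Toy adaptive anomaly detector over a single traffic metric."""

    def __init__(self, alpha=0.1, tolerance=0.5):
        self.alpha = alpha          # how quickly the baseline adapts
        self.tolerance = tolerance  # allowed relative deviation
        self.baseline = None

    def observe(self, value):
        if self.baseline is None:
            self.baseline = value
            return False
        deviation = abs(value - self.baseline) / self.baseline
        anomalous = deviation > self.tolerance
        # Adapt: learn from every observation so gradual drift is absorbed
        self.baseline = (1 - self.alpha) * self.baseline + self.alpha * value
        return anomalous

det = AdaptiveDetector()
traffic = [100, 102, 101, 103, 300, 104]  # requests/min; 300 is a burst
flags = [det.observe(v) for v in traffic]
print(flags)  # only the burst at 300 is flagged
```

Because the baseline updates continuously, slow drift in legitimate behaviour is absorbed rather than flagged, while abrupt changes still stand out.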

Unfortunately, attackers can still get past even the best-protected systems. In these cases, AI can also support recovery by automating incident response processes. Threat containment, data recovery, and forensic analysis supported by AI can reduce the business impact of attacks and accelerate recovery.
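An automated response playbook of the kind described can be sketched as an ordered sequence of steps that runs without waiting on manual intervention. The step functions and host name below are hypothetical placeholders for real containment, forensics, and recovery tooling.

```python
# Illustrative automated incident-response playbook: contain first,
# preserve evidence second, recover last.

def isolate_host(host):
    return f"isolated {host} from the network"

def collect_forensics(host):
    return f"captured memory and disk image of {host}"

def restore_from_backup(host):
    return f"restored {host} from last clean backup"

PLAYBOOK = [isolate_host, collect_forensics, restore_from_backup]

def respond(host):
    # Run every step in order and keep an audit trail of actions taken
    return [step(host) for step in PLAYBOOK]

for action in respond("web-01"):
    print(action)
```

Encoding the order in a playbook matters: containing the threat before recovery prevents re-infection, and capturing forensics before restoration preserves evidence that a rushed rebuild would destroy.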

The human element in security for AI

On top of building a strong security foundation, organisations must recognise that employees are their first line of defence. Every employee needs a basic understanding of how AI is making threats more sophisticated, how to spot them, and what to do when something doesn’t seem right. This will become more important as attackers deploy advanced spoofing attacks created by deepfakes that add a convincing façade to well-practised social engineering techniques. Security practitioners also require role-specific training in AI so they have the knowledge and skills to understand how bad actors might use the technology.

The cybersecurity landscape is in constant flux. Organisations that prioritise AI-enabled security and a culture of continuous learning are best positioned to navigate the evolving threat landscape. By embracing a proactive and adaptive approach to security, businesses can confidently harness the transformative power of AI and build a more resilient and secure future.