AI security is a risk we can’t ignore

The AI debate is raging, and skepticism is high. But AI is here to stay.

While some headlines criticise tech giants for AI-driven social media or questionable consumer tools, AI itself is becoming indispensable. Its efficiency is unmatched, promising gains that no business or government can ignore.

Very soon, AI will be as integral to our lives as electricity — powering our cars, shaping our healthcare, securing our banks, and keeping the lights on. The big question is: are we ready for what comes next?

The public conversation around AI has largely focused on ethics, misinformation, and the future of work. But one vital issue is flying under the radar: AI security. In Gartner’s 2024 survey of senior enterprise risk executives, AI-enhanced malicious attacks and AI-assisted misinformation ranked as the top two emerging risks.

There’s reason to be concerned. Asia-Pacific is heavily invested in AI — Singapore is racing to triple its AI workforce over the next three to five years as part of a broader economic development strategy to stay globally relevant. That future, however, cannot be achieved without collective action — governments and businesses must collaborate to build resilient AI ecosystems.

As we give AI more control over tasks, the fallout from a cyberattack grows exponentially. Disturbingly, some AI systems are as fragile as they are powerful.

How cyberattacks exploit AI

There are two primary ways to attack AI systems:

  • The first is to steal data, compromising everything from personal health records to sensitive corporate secrets. Hackers can trick models into divulging protected information, whether by exploiting medical databases or by fooling chatbots into bypassing their own safety nets.
  • The second is to sabotage the models themselves, skewing results in dangerous ways. An AI-powered car tricked into misreading a “Stop” sign as “70 mph” illustrates just how real the threat can be. And as AI expands, the list of possible attacks will only grow.
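The second attack class — often called an adversarial example — can be illustrated with a toy model. The Python sketch below uses entirely made-up weights and features (not any real sign-recognition system): a tiny, targeted nudge to the input is enough to flip a simple linear classifier’s decision, even though the input still looks almost unchanged.

```python
# Toy illustration of an adversarial perturbation. The "model" is a
# hypothetical linear classifier with invented weights, not a real system.

def classify(features, weights, bias):
    """Return 'stop' if the weighted score is positive, else 'speed-limit'."""
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return "stop" if score > 0 else "speed-limit"

weights = [0.9, -0.4, 0.2]      # made-up learned weights
bias = -0.1
clean_input = [0.5, 0.3, 0.4]   # features of a clean "Stop" sign image

# Craft a small perturbation that pushes each feature against the
# direction of its weight, driving the score below the decision boundary.
epsilon = 0.3
perturbed = [f - epsilon * (1 if w > 0 else -1)
             for f, w in zip(clean_input, weights)]

print(classify(clean_input, weights, bias))  # → stop
print(classify(perturbed, weights, bias))    # → speed-limit
```

Real attacks on image classifiers work on the same principle, just in millions of dimensions — which is why the perturbation can be imperceptible to a human while still flipping the model’s output.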

Yet abandoning AI due to these risks would be the biggest mistake of all. Sacrificing competitiveness for security would leave organisations dependent on third parties, lacking experience and control over a technology that’s rapidly becoming essential.

So how do we reap AI’s benefits without gambling on its risks? Here are three critical steps:

Choose AI wisely

Not all AI is equally vulnerable to attacks. Large language models, for example, are highly susceptible because they rely on vast data sets and statistical methods. But other types of AI, such as symbolic or hybrid models, are less data-intensive and operate on explicit rules, making them harder to crack.

Hybrid AI models combine symbolic reasoning with machine learning, offering both accuracy and explainability. They can dramatically reduce false positives in fraud detection while enhancing transparency in decision-making — an advantage for financial services and fintech businesses in the region.
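As a minimal sketch of that hybrid idea — every rule, threshold, and score below is invented for illustration — a fraud check might combine explicit symbolic rules with a learned statistical score, so that every flag comes with a human-readable reason:

```python
# Illustrative hybrid fraud check: explicit symbolic rules plus a
# statistical risk score. All rules and thresholds are hypothetical.

def statistical_score(txn):
    """Stand-in for a learned model: higher means more suspicious."""
    score = 0.0
    if txn["amount"] > 10_000:
        score += 0.5
    if txn["country"] != txn["home_country"]:
        score += 0.3
    return score

def hybrid_check(txn):
    """Return (is_flagged, reasons) so every decision is explainable."""
    reasons = []
    # Symbolic component: explicit, auditable conditions.
    if txn["amount"] > 50_000:
        reasons.append("rule: amount exceeds hard limit")
    if txn["hour"] < 5:
        reasons.append("rule: unusual transaction hour")
    # Learned component: a numeric risk score with a threshold.
    if statistical_score(txn) >= 0.7:
        reasons.append("model: high statistical risk score")
    return (len(reasons) > 0, reasons)

txn = {"amount": 60_000, "hour": 3, "country": "SG", "home_country": "SG"}
flagged, reasons = hybrid_check(txn)
print(flagged, reasons)
```

The design point is the `reasons` list: because the symbolic half operates on explicit rules, a flagged transaction can always be traced back to the exact conditions that triggered it — the explainability advantage described above.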

Deploy proven defences

Tools like digital watermarking, cryptography, and customised training can fortify AI models against emerging threats. In Asia-Pacific, these tools are essential for sectors such as banking, critical infrastructure, and healthcare, where digital transformation is accelerating. Stress-testing solutions can help cybersecurity teams identify and address vulnerabilities in AI models before attackers can exploit them.
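One of these cryptographic defences can be sketched in a few lines. The example below (with simplified key handling and an assumed weight format, purely for illustration) computes a keyed HMAC tag over serialised model parameters, so that any tampering with the weights is detected before the model is deployed:

```python
import hashlib
import hmac
import json

# Sketch of a keyed integrity check over model parameters.
# Key storage is simplified here; in practice the key would live
# in a secrets manager or hardware security module.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def sign_weights(weights):
    """Produce an HMAC-SHA256 tag over a canonical serialisation."""
    payload = json.dumps(weights, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_weights(weights, tag):
    """Constant-time comparison against the stored tag."""
    return hmac.compare_digest(sign_weights(weights), tag)

weights = {"layer1": [0.12, -0.7], "layer2": [1.5]}
tag = sign_weights(weights)

print(verify_weights(weights, tag))   # True: weights untouched
weights["layer1"][0] = 9.9            # simulate sabotage of a parameter
print(verify_weights(weights, tag))   # False: tampering detected
```

This guards against the model-sabotage attacks described earlier: an attacker who modifies even one parameter invalidates the tag, and the altered model is rejected before it can serve skewed results.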

Level up organisational cybersecurity

AI doesn’t operate in isolation — it’s part of a larger information ecosystem. Traditional cybersecurity measures must be strengthened and tailored for the AI era. This starts with training employees; human error, after all, remains the Achilles’ heel of any cybersecurity system.

The availability of cybersecurity talent across Asia-Pacific is a growing challenge. Businesses must invest in upskilling employees to secure their AI systems. For instance, Singapore’s Cybersecurity Talent Development Scheme offers a model for bridging skill gaps across the region. 

Some might think the battle over AI is just another chapter in the ongoing clash between bad actors and unwitting victims. But this time, the stakes are higher than ever. If businesses and governments in the region fail to prioritise AI security, they risk not only data breaches but also setbacks to economic development and innovation.

By making AI security a strategic priority and working collectively, we can ensure that AI delivers its full potential — driving new growth, enabling innovation, and securing societies.