Navigating AI’s thin line between progress and peril

With all the buzz about AI, the technology itself is neither inherently good nor bad. While it holds immense potential to drive a qualitative leap for human society, it also carries risks that could have severe consequences for all of us. Navigating this delicate balance becomes trickier as we venture deeper into machine learning and autonomous systems.

One of the biggest concerns about AI is how malicious actors can use it to target people and businesses. AI-powered automation, for instance, can be used to mount more sophisticated and sustained attacks. Threat actors have also leveraged AI to bypass long-established security measures.

Overcoming the AI paradox

If you aren’t worried yet, think again. Last year, senior security professionals overwhelmingly told CyberArk that despite AI being widely used to fortify defences, they were bracing for an influx of new AI-enabled cyber threats. Increasingly popular generative AI tools have opened a can of security worms, and the potential exploits are cause for grave concern. For instance, CyberArk’s research found that security experts were particularly wary about chatbot security.

Meanwhile, 37% of Singaporean respondents were concerned that generative technologies will embolden cyberattackers to exploit vulnerabilities and inject malware, impersonate employees through deepfakes, and conduct ever-more sophisticated phishing campaigns. Then there are cases of malicious actors using generative AI to create emails for phishing campaigns that appear authentic, or even to generate malware that bypasses facial recognition authentication or evades detection altogether. Such techniques were revealed in CyberArk’s research, which uncovered how ChatGPT could be used to write malicious code and create polymorphic malware that is highly elusive to most anti-malware products.

However, cybersecurity teams from Singapore have also reported early wins from deploying AI to counter threats. In particular, generative AI has become a linchpin in efforts to spot behavioural anomalies and improve cyber resilience. As a result, respondents said they had more time to upskill and strengthen defences in the face of evolving threats and increasingly innovative cyberattacks. Survey participants also said that AI offers a way to bridge talent shortages; ISC2 estimates a shortfall of 4 million security professionals worldwide.

Guardrails for unleashing AI’s full potential

In the dynamic landscape of AI, establishing robust guardrails is pivotal, and governments have certainly been in the vanguard. Singapore’s National AI Strategy 2.0, for example, stresses the need to prevent AI models from being used in malicious or harmful ways. Clearly, the priority is to maintain Singapore’s position at the forefront of digital transformation.

To ensure AI delivers holistic benefits, businesses too must play their part. This entails establishing AI-specific company guidelines, publishing usage policies and updating employee cybersecurity training curricula. Before introducing AI-led tools, due diligence is needed to mitigate risk and reduce vulnerabilities. Appropriate identity security controls and malware-agnostic defences are essential for containing innovative threats at high volumes. Just as data is the wellspring of AI’s value, credentials are what threat actors seek most, because they grant access to an organisation’s most valuable assets.

Organisations must also clearly define their position on AI from the very beginning. Perhaps a company is already using generative AI at enterprise scale; maybe it is just launching a proof of concept. Whatever an organisation’s AI maturity, a clearly defined stance ensures unity of action and minimises risk.

At the same time, open lines of communication are key to making such a stance stick across the business, though leaders should be wary of lopsided conversations. Organisations could consider forming a cross-functional team of experts who meet regularly to review and respond to AI developments, identify high-value use cases, and work to create secure models in line with organisational policy. This also gives the business an organised unit with the capabilities to tackle emerging challenges and maximise benefits.

Ultimately, AI is just a tool, and it will be up to organisations to take charge and maximise its benefits. By embracing a proactive stance and working closely with governments and industry bodies, we can harness the transformative power of AI to usher in a brighter and more secure digital future for all.