Generative AI is increasingly being used to simplify complex tasks such as meeting summaries, database categorisation, and even code generation. However, IT and security experts are closely monitoring how the technology could be exploited by hackers to bypass security measures and subvert authentication protocols.
This issue was a key focus of an industry panel during the FIDO APAC Summit 2024 in Kuala Lumpur, where speakers debated the extent of the threat generative AI poses to authentication and what steps can be taken to mitigate the risk.
Growing concern
Passwords remain a hacker’s best friend because phishing is an easy way to steal credentials. To strengthen authentication, enterprises have adopted multi-factor methods such as one-time passwords (OTPs). However, this approach is costly and still susceptible to phishing: a user who can be tricked into typing a password can just as easily be tricked into typing a code.
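That weakness is what FIDO-style passkeys are designed to close. As a minimal sketch (not taken from the panel), the browser-side WebAuthn call below shows why: the credential is scoped to a relying-party domain ("example.com" here is a hypothetical placeholder), so a look-alike site on another domain cannot obtain a valid sign-in assertion.

```typescript
// Minimal sketch of a passkey (WebAuthn) sign-in in the browser.
// "example.com" and the locally generated challenge are illustrative;
// in practice the challenge is a fresh nonce issued by the server.
async function signInWithPasskey(): Promise<Credential | null> {
  const challenge = crypto.getRandomValues(new Uint8Array(32));

  return navigator.credentials.get({
    publicKey: {
      challenge,                     // nonce the authenticator signs
      rpId: "example.com",           // credential is bound to this domain
      userVerification: "preferred", // biometric or PIN where available
      timeout: 60_000,
    },
  });
}
// Because the browser only releases credentials whose rpId matches the
// current origin, there is no shared secret a user could be phished into
// retyping on an attacker's site, unlike a password or OTP.
```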
For Christiaan Brand, Product Manager at Google, the priority is staying one step ahead of attackers.
“I don’t want to say that we completely solved authentication, but I think we’ve gone a long way in addressing the problem of proving user identity when signing on to a resource. Attackers are adapting and shifting their focus elsewhere,” he remarked.
However, generative AI presents a growing threat to authentication, and many users are unaware of who they are actually interacting with online.
“Generative AI is concerning because it enables social engineering as a service. It is now much easier to craft complex commands or messages in any language and format to coerce users into taking actions they shouldn’t. Our number one concern right now is ensuring that users can trust that the person they are interacting with is who they claim to be,” Brand explained.
Even for connected vehicles, weak authentication measures present significant security risks, observed Tin Nguyen, Director of Automotive Cybersecurity Services at VinCSS.
“For the automotive ecosystem, we’re more concerned about issues like hacking the charging infrastructure, which could leave car owners unable to charge their vehicles,” he said.
In supply chains, a persistent challenge is the varying levels of security and authentication protocols across different segments. The rise of generative AI further complicates this landscape.
“Attackers usually target the weakest link. Once they identify this weak point in the supply chain, they can use generative AI to profile an enterprise or user, making it easier to launch an attack,” said Roland Atoui, Managing Director at Red Alert Labs.
Risky business
A recent study found that 46% of Singapore IT leaders struggle to distinguish phishing emails from legitimate ones, while only 36% believe that cybersecurity is a shared responsibility. The research also found that phishing emails are becoming harder to spot due to generative AI tools.
Another report indicated that 89% of global IT leaders believe generative AI flaws could put their organisations at risk, despite their own deployment of the technology.
According to Brand, access to data has never been easier for malicious actors.
“Generative AI is commoditising access to information. It is lowering the bar for attackers to infiltrate systems and do whatever they intend to do,” he said.
Across Southeast Asia, governments play a crucial role in combating online fraud and authentication-based attacks. Malaysia, for example, is taking proactive steps with its new cybersecurity law and its commitment to passwordless authentication, setting a precedent for securing enterprises and industries.
“We always return to the fundamentals of security, which is awareness. Understanding how technologies like generative AI work is crucial. While it is easy to use, do people fully grasp the risks? We cannot stop people from adopting new technologies, but we must educate them enough so they understand the risks and prepare accordingly,” said Mohamed Kheirulnaim Bin Mohamed Danial, Senior Assistant Director at the National Cyber Coordination and Command Centre (NC4) & National Cyber Security Agency (NACSA) in Malaysia.
Shared responsibility
At present, large-scale use of generative AI for hacking remains in its early stages, but enterprises should not become complacent, Brand cautioned.
“Our advantage is that these attacks are still extremely difficult to perfect. Successful execution requires deep expertise in electrical engineering and cryptography, along with the ability to interpret data and turn it into a real exploit. We’re not there yet, but I can see the trajectory we’re on: Given enough raw data, anyone can become proficient in interpreting that data and weaponising it. We’re starting to see this with basic threats like phishing emails, and that threat will only grow,” he said.
Nguyen pointed to industry regulation as a positive step in countering cybercriminals, as it ensures consistency across the sector rather than relying on individual companies to take action. This is particularly relevant in the automotive sector, where the adoption of the FIDO Device Onboard (FDO) protocol is helping to streamline the onboarding of IoT devices and edge nodes.
“Now, discussions around FDO extend beyond the automotive sector to aviation, maritime, and trade. Regulations are being implemented across these industries, and soon, security will no longer be optional — it will be a requirement. That presents a significant opportunity for those of us working in this space, because the focus has traditionally been on finance, healthcare, and other well-established sectors,” he said.
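FDO’s core idea is that a device ships knowing only how to find a rendezvous service, and ownership is transferred via a signed voucher rather than a factory-set password. The sketch below simulates that hand-off in memory; the names and types are hypothetical stand-ins rather than a real FDO SDK, and the actual protocol (the TO0, TO1, and TO2 exchanges) adds signed voucher chains and mutual authentication.

```typescript
// Conceptual, in-memory simulation of the FIDO Device Onboard hand-off.
// All identifiers here are illustrative; real FDO uses CBOR messages,
// signed ownership-voucher chains, and authenticated channels.

interface OwnershipVoucher {
  deviceGuid: string;
  ownerUrl: string; // where the rightful owner's onboarding service lives
}

// Stand-in for the rendezvous server: maps device GUIDs to owner locations.
const rendezvous = new Map<string, string>();

// TO0: the new owner registers its location against the device's GUID.
function registerOwner(voucher: OwnershipVoucher): void {
  rendezvous.set(voucher.deviceGuid, voucher.ownerUrl);
}

// TO1: on first boot, the device asks the rendezvous server where to go.
function locateOwner(deviceGuid: string): string | undefined {
  return rendezvous.get(deviceGuid);
}

// TO2 (abridged): the device contacts the owner and is provisioned with
// its operational credentials, so nothing ships with a default password.
registerOwner({ deviceGuid: "dev-1234", ownerUrl: "https://owner.example/onboard" });
console.log(locateOwner("dev-1234")); // -> https://owner.example/onboard
```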
Ultimately, going back to the basics of authentication — a person’s own credentials — helps instil a culture of shared responsibility in cybersecurity, Danial noted.
“Sometimes, even with all the technologies available, security is not enough unless we do our own due diligence in protecting our endpoints,” he concluded.