The age of AI has arrived, sparking important discussions at every level, from governments to individual organisations. A recent survey by Ping Identity on AI-powered threats found that 99% of businesses in Singapore struggle with identity verification in the AI era, while 85% of Singaporean organisations expect cybercriminals' adoption of AI to increase the number of identity-based threats over the next 12 months.
At the governmental level in Singapore, multiple reviews of responsible AI development and use are underway. Concurrently, entire industries and the companies within them are trying to find their own equilibrium. The rules they're creating for themselves reflect individual risk appetites and, more often than not, an assessment of how safe it is to interact with AI platforms, particularly those that are freely available.
This level of activity is very much warranted. One of the risks AI presents today is unmanaged proliferation. Without proper guardrails, its adoption is evolving much as cloud and as-a-service software did in their early stages: as 'shadow' or unsanctioned deployments. It will take time to bring this under control. In fact, organisations should assume AI is in use even where its application has been expressly forbidden; the technology generates such interest that usage is almost impossible to police or control.
As for the use cases themselves, a key risk posed by AI is its potential to challenge or disrupt established methods and norms, in domains ranging from work processes to security.
We are just beginning to see how advanced AI and machine learning might be used to penetrate established identity verification methods.
AI stands to sharpen one of the primary vectors through which threat actors obtain identity credentials: phishing attacks. AI-generated phishing emails are far less likely to contain the obvious spelling and grammatical errors that make most such emails easy to spot.
AI also poses risks to emerging methods of establishing and verifying identity. The cybersecurity industry has long counted on newer identity controls, such as biometrics, to augment or replace passwords. However, there is growing evidence of cybercriminals using AI and machine learning to circumvent advanced controls such as face and voice verification.
Finally, if an attacker successfully uses AI to bypass a basic identity system and gain entry to a business's environment, they may also be able to deploy AI-powered malware. That malware can persist within the system, collect data, and observe user behaviour until the attacker is ready to launch the next phase of the attack or exfiltrate what has been gathered, all with a relatively low risk of detection.
So, AI presents multi-layered threats to current identity systems. If it continues on its present trajectory, organisations will soon be forced to confront and radically reassess their future identity options and the protections wrapped around those systems.
Considering how rapidly things are escalating with AI, a new approach to securing digital identity is likely warranted.
According to IDC, the growing complexity of the threat landscape means spending on security hardware, services, and software in Asia-Pacific is expected to reach US$36 billion in 2024, an increase of 12.3% over the previous year, as organisations recognise the critical importance of safeguarding against emerging risks.
A combination of identity threat detection and response (ITDR) and decentralised identity (DCI) practices is emerging as one of the best ways to keep data and identities safe in this new paradigm. In this two-pronged approach, users take a greater role in managing their own identity data, while organisations back them up by continuously monitoring the IT environment.
A strong response
ITDR helps organisations detect and respond to cyberattacks, while DCI enhances security and privacy by reducing reliance on centralised data systems.
Ping Identity's survey also found that 99% of organisations believe adopting DCI is valuable for their customers. However, only 41% have implemented a strategy to use DCI as a protection against fraud, though more are beginning to offer it. Additionally, 45% of respondents ranked reducing financial and reputational losses from fraud and breaches among their top priorities.
ITDR practices involve closely monitoring the IT environment for suspicious and anomalous activity. By focusing on identity signals in real time and understanding the permissions, configurations, and connections between accounts, ITDR can proactively reduce the attack surface while also detecting identity threats.
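To make this concrete, the sketch below shows, in Python, the kind of real-time signal correlation an ITDR pipeline might perform. It is a minimal illustration under simplified assumptions: the baseline fields, event attributes, and two-signal alert threshold are hypothetical, not any vendor's actual detection logic.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AccountBaseline:
    """What 'normal' looks like for one identity (illustrative fields)."""
    known_devices: set = field(default_factory=set)
    known_countries: set = field(default_factory=set)
    typical_hours: range = range(7, 20)  # assumed working hours, UTC

@dataclass
class LoginEvent:
    account_id: str
    device_id: str
    country: str
    timestamp: datetime
    privileged: bool = False  # did the session request elevated rights?

def score_event(event: LoginEvent, baseline: AccountBaseline) -> list[str]:
    """Return the identity-risk signals this event trips."""
    signals = []
    if event.device_id not in baseline.known_devices:
        signals.append("unrecognised device")
    if event.country not in baseline.known_countries:
        signals.append("login from new country")
    if event.timestamp.hour not in baseline.typical_hours:
        signals.append("activity outside typical hours")
    if event.privileged:
        signals.append("privileged access requested")
    return signals

# Example: a privileged login from an unknown device at 3 a.m.
baseline = AccountBaseline(known_devices={"laptop-01"}, known_countries={"SG"})
event = LoginEvent("alice", "phone-99", "SG", datetime(2024, 5, 1, 3, 0), privileged=True)
signals = score_event(event, baseline)
if len(signals) >= 2:  # correlating multiple weak signals, not reacting to one
    print(f"ITDR alert for {event.account_id}: {', '.join(signals)}")
```

In a real deployment, signals like these would feed a response playbook, such as stepping up authentication or revoking the session, rather than a simple alert.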
Centralised IAM data stores increase the risk of large-scale data breaches through AI-powered cyberattacks. With DCI, identity verification is based on presenting a cryptographically verified credential instead of storing personal information in a centralised IAM database. DCI not only empowers individuals to manage their own digital identities, but also offers a secure, tamper-evident way for people to authenticate themselves.
Moreover, the attractiveness of a hack is significantly reduced, as a breach would likely result in the compromise of an individual’s records rather than the sensitive data of millions.
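As a rough illustration of the mechanism, the Python sketch below (assuming the third-party `cryptography` package is installed) verifies a credential against an issuer's public signing key rather than querying a central database. The DID strings and claim structure are simplified, hypothetical stand-ins, not a full W3C Verifiable Credentials implementation.

```python
# pip install cryptography
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuer (e.g. a government agency or bank) signs a credential once.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()  # published, e.g. in a registry

credential = {
    "subject": "did:example:alice",   # hypothetical decentralised identifier
    "claim": {"over_18": True},       # only the claim, not a full identity record
    "issuer": "did:example:issuer",
}
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# The holder keeps (credential, signature) in their own wallet.
# A verifier checks authenticity locally; no central IAM database is queried.
def verify(credential: dict, signature: bytes) -> bool:
    data = json.dumps(credential, sort_keys=True).encode()
    try:
        issuer_public_key.verify(signature, data)
        return True
    except InvalidSignature:
        return False

print(verify(credential, signature))    # True: credential is authentic
credential["claim"]["over_18"] = False  # any tampering...
print(verify(credential, signature))    # False: ...breaks the signature
```

Because verification requires only the issuer's public key, no central repository of personal records needs to exist for the check to succeed, which is precisely what removes the large-scale target.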
With DCI serving as a frontline defence alongside ITDR, IAM best practices across the industry are being revised and refined, making it far more difficult for cybercriminals to turn AI against organisations to execute identity takeovers and fraud.