As Singapore sets its sights on becoming the world’s most AI-powered economy, enterprises across industries are turning to AI to boost efficiencies, enhance decision-making, and gain a competitive edge. The potential gains are tremendous: AI is projected to increase global corporate profits by up to US$4.4 trillion annually, according to McKinsey, while the IMF estimates that AI could boost global economic growth by 0.8% per year.
Yet alongside these opportunities, AI also poses significant cybersecurity risks, particularly in the payments industry. While AI can be leveraged to improve security measures, it also offers cybercriminals powerful new tools to carry out sophisticated attacks. This duality makes AI a double-edged sword in the evolving cybersecurity landscape.
The emerging threat of AI-powered cybercrime
AI is already being used by malicious actors to automate and scale cyberattacks, including phishing schemes, malware development, and even deepfake scams. These technologies make it easier for attackers to bypass traditional security systems, and the payments industry is especially vulnerable because of the volume, value, and sensitivity of the data it handles.
For enterprises, the repercussions of such breaches are severe — ranging from financial losses to reputational damage. As cybercriminals continue to exploit AI for financial gain, enterprises must adopt robust strategies to stay ahead of these evolving threats.
4 tips for strengthening payments cybersecurity in an AI-driven world
Given the increasing risks, organisations must adopt robust cybersecurity strategies that leverage AI while mitigating the vulnerabilities associated with AI-driven attacks. Here are several key areas where enterprises can focus their efforts:
- Strengthen authentication protocols
AI can be used to create convincing deepfakes, making it easier for attackers to impersonate legitimate users. To counter this, businesses should strengthen their access control measures by implementing multi-factor authentication and ensuring that authentication processes are continuously monitored and regularly reviewed. Organisations can also explore phishing-resistant authentication factors to enhance security (see the first sketch after this list).
- Regularly review AI-generated code for security
While AI can accelerate the software development process, it may not prioritise security. As AI-generated code becomes more common, businesses need to implement stringent security reviews during the development cycle. These reviews are especially critical for identifying vulnerabilities in AI-generated code that could be exploited by cybercriminals (the second sketch after this list shows one way to automate such a check). Security should be built into the entire development process, not treated as an afterthought.
- Monitor AI systems with human oversight
AI-driven cybersecurity tools can automate threat detection and incident response, but human oversight remains essential. Businesses should maintain a team of skilled cybersecurity professionals to monitor AI systems, review alerts, and intervene manually when necessary (see the third sketch after this list). AI can be a valuable tool, but it should not replace human judgement entirely.
- Protect payment data with latest security standards
When AI models process sensitive information such as payment data, they must adhere to rigorous security standards. For businesses in the payments sector, adopting the latest versions of standards such as the PCI Data Security Standard (PCI DSS) is a crucial step in keeping payment data secure in an AI-powered world, as these updates introduce enhancements aimed at the evolving threat landscape (see the fourth sketch after this list).
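On the first tip, the sketch below shows one way to add a second authentication factor, using the open-source pyotp library for time-based one-time passwords. It is a minimal illustration rather than a recommended architecture: the account name, issuer name, and storage approach are placeholders, and phishing-resistant factors such as FIDO2/WebAuthn passkeys offer stronger protection than one-time codes.

```python
# Minimal TOTP second-factor sketch using the open-source pyotp library.
# Illustrative only: secrets must be stored encrypted, and phishing-resistant
# factors (e.g. FIDO2/WebAuthn passkeys) resist relay attacks that one-time codes do not.
import pyotp

def enroll_user() -> str:
    """Generate a per-user TOTP secret; persist it in an encrypted secrets store."""
    return pyotp.random_base32()

def verify_second_factor(secret: str, submitted_code: str) -> bool:
    """Check the six-digit code a user submits after their password."""
    totp = pyotp.TOTP(secret)
    # valid_window=1 tolerates one 30-second step of clock drift.
    return totp.verify(submitted_code, valid_window=1)

if __name__ == "__main__":
    secret = enroll_user()
    # The account and issuer names below are placeholders for illustration.
    uri = pyotp.TOTP(secret).provisioning_uri(name="user@example.com", issuer_name="ExamplePay")
    print("Scan this URI with an authenticator app:", uri)
```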
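On the second tip, one practical control is to run static analysis over AI-generated code before it can be merged. The sketch below assumes the open-source Bandit scanner is installed and that generated code lands in a directory named generated_src (both are assumptions); it fails the pipeline when high-severity findings appear.

```python
# Sketch of a CI gate for AI-generated code: run the Bandit static analyser and
# block the build on high-severity findings. The directory name and severity
# policy are assumptions; adapt them to your own pipeline.
import json
import subprocess
import sys

def scan_generated_code(path: str = "generated_src") -> int:
    """Return a non-zero exit code if Bandit reports high-severity issues."""
    result = subprocess.run(
        ["bandit", "-r", path, "-f", "json"],
        capture_output=True,
        text=True,
    )
    report = json.loads(result.stdout or "{}")
    high = [r for r in report.get("results", []) if r.get("issue_severity") == "HIGH"]
    for issue in high:
        print(f"{issue['filename']}:{issue['line_number']}: {issue['issue_text']}")
    return 1 if high else 0

if __name__ == "__main__":
    sys.exit(scan_generated_code())
```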
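On the third tip, human oversight can be made concrete by letting automation act only on high-confidence detections and sending everything else to analysts. The sketch below is a simplified illustration: the Alert fields and the 0.95 threshold are assumptions, not a reference to any particular product.

```python
# Simplified human-in-the-loop triage: AI-scored alerts above a confidence threshold
# are auto-handled, the rest are queued for analyst review. Fields and threshold
# values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    id: str
    description: str
    model_confidence: float  # 0.0 to 1.0 score from the detection model

AUTO_ACTION_THRESHOLD = 0.95  # act automatically only on very high confidence

def triage(alerts: list[Alert]) -> tuple[list[Alert], list[Alert]]:
    """Split alerts into an auto-handled queue and an analyst-review queue."""
    auto, review = [], []
    for alert in alerts:
        (auto if alert.model_confidence >= AUTO_ACTION_THRESHOLD else review).append(alert)
    return auto, review

if __name__ == "__main__":
    incoming = [
        Alert("a1", "Known malware hash seen on payment gateway host", 0.99),
        Alert("a2", "Unusual volume of card-data queries from a service account", 0.72),
    ]
    auto, review = triage(incoming)
    print(f"{len(auto)} alert(s) auto-handled, {len(review)} routed to analysts")
```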
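On the fourth tip, one basic safeguard is to ensure card numbers never reach an AI model in the clear. The sketch below masks anything that looks like a primary account number before a prompt is logged or sent on; the regular expression and masking rule are simplified assumptions, and production systems should follow the applicable PCI requirements for truncation and tokenisation.

```python
# Sketch: mask primary account numbers (PANs) before text reaches an AI model,
# keeping only the last four digits. The pattern is a naive 13-19 digit match and is
# not a substitute for the truncation and tokenisation rules in the PCI standards.
import re

PAN_PATTERN = re.compile(r"\b(?:\d[ -]?){12,18}\d\b")

def _mask(match: re.Match) -> str:
    digits = re.sub(r"\D", "", match.group(0))
    return "*" * (len(digits) - 4) + digits[-4:]

def redact_for_ai(text: str) -> str:
    """Replace anything that looks like a card number with a masked value."""
    return PAN_PATTERN.sub(_mask, text)

if __name__ == "__main__":
    prompt = "Customer disputes a charge on card 4111 1111 1111 1111 made on 3 March."
    print(redact_for_ai(prompt))
```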
Building resilience in an AI-powered world
AI is transforming the cybersecurity landscape, offering both opportunities and challenges for businesses that handle payments. By adopting best practices, such as strengthening authentication protocols, securing AI-generated code, and maintaining human oversight of AI systems, businesses can leverage AI to their advantage while minimising the risks associated with AI-powered cybercrime.
Financial institutions and other members of the PCI Community are approaching AI integration with caution, particularly due to concerns over how AI models handle sensitive data. Companies should prioritise understanding where this data is stored and processed. When sensitive information passes through AI systems, security assessments must thoroughly cover these interactions to ensure comprehensive data protection.
For businesses, adhering to global security standards will be critical for protecting sensitive payments data and maintaining consumer trust. As Singapore and other countries move toward AI-powered economies, enterprises must ensure that cybersecurity measures keep pace with the rapid evolution of AI. Cyber resilience will be the defining factor that enables businesses to navigate the risks and reap the rewards of AI in the years ahead.