As AI reshapes Singapore and the rest of the world, organisations must address the array of risk management challenges this transformative technology brings. They are not alone in this focus — regulators and governments are also crafting AI governance frameworks to address the specific risks and concerns faced within their jurisdictions or sectors.
For example, the OECD AI Policy Observatory tracks more than 1,000 AI policy initiatives from 69 countries, territories, and the EU. Approaches also differ on how far regulatory reach should extend in governing the potential risks of AI.
Regardless of regulatory measures, AI risks are inevitable. As a result, a standardised approach incorporating global consensus is helpful in providing guidance to organisations seeking to balance innovation and agility with good risk management.
The AI risk matrix: Why it’s not all new
AI and traditional software share many risk management practices, such as development cycles and tech stack hosting. However, the unpredictability of AI and its dependence on data introduce unique risks on top of the existing technology risks that still need to be managed.
First, with the rise of generative AI, far more people are adopting and using the technology, increasing the attack surface and risk exposures. Second, as generative AI models take in more enterprise data, the risks of accidental disclosure of information are rising, particularly where access controls have not been correctly implemented. Third, AI carries risks in areas like privacy, fairness, explainability, and transparency.
Finding balance in a time of constant change
When it comes to challenges, perhaps the greatest is that AI is evolving so fast that risk management becomes a moving target. This puts organisations in a quandary: fail to adopt AI quickly enough and they fall behind their competitors; press ahead too fast and they could encounter ethical, legal, and operational pitfalls.
The balance to be struck, then, is tricky — and this applies not just to business behemoths but to firms large and small in every industry, where deploying AI into core business operations is becoming routine. How, then, can organisations manage the risks better without slowing down innovation or being overly prescriptive?
This is where standardisation efforts such as ISO/IEC 42001:2023 provide guidance for organisations to establish, implement, maintain, and continually improve an Artificial Intelligence Management System (AIMS). Developed by the ISO/IEC JTC 1/SC 42 subcommittee for AI standards, which has 45 participating member nations, the standard represents a global consensus and gives organisations a structured approach to managing the risks associated with deploying AI.
Rather than being tightly coupled to a specific technology implementation, such guidance emphasises setting a strong “tone from the top” and implementing a continuous risk assessment and improvement process — aligning with the Plan-Do-Check-Act model to foster iterative, long-term risk management rather than one-time compliance. It provides a framework for organisations to build the necessary risk management components, taking into account the scale and complexity of their implementations.
As a certifiable standard, ISO/IEC 42001:2023 is also independently verifiable. Organisations can pursue formal certification or simply adhere to it as best practice. Either way, certification or alignment helps organisations demonstrate to stakeholders their ongoing efforts to manage the risks associated with adopting or developing AI solutions.
Standardisation: The AI pain panacea
Following a standard like ISO 42001 is helpful in other ways. Its approach helps to address the fragmentation of AI adoption within firms, where AI work was previously siloed within data science teams. The broad adoption of generative AI solutions has resulted in an implementation sprawl that puts pressure on firms to manage their AI risks on a much larger scale.
With this come three significant pain points: a lack of clear accountability for the reliance on AI decisions; the need to balance speed and caution; and, for firms with cross-jurisdictional operations, the challenges of navigating fragmented guidance from different regulators.
Again, a standardised approach works best. ISO 42001’s unified, internationally recognised framework for AI governance tackles each of these pain points. It establishes clear accountability structures and — instead of dictating the use of specific technologies or compliance steps — offers guiding principles focused on processes that organisations can follow when establishing an AI risk management programme. This principles-based approach also addresses two key concerns about AI risk management: that it will stifle innovation, and that overly prescriptive standards will quickly become irrelevant.
In a world where AI is becoming increasingly woven into the fabric of business, organisations must ensure they are prepared for its risks. Standardising their approach ensures they can position themselves to navigate future AI regulations more easily, mitigate compliance risks, and innovate responsibly. In these ways, AI can remain a force for good — for organisations themselves and for society more broadly.