AI governance: Powering ASEAN’s next growth leap

Image created by DALL·E 3.

The ASEAN region has a tremendous opportunity to advance its AI ambitions by leaps and bounds, provided that businesses, from the largest corporations to individual entrepreneurs, fully grasp the importance of governance and ethics in the technology.

In Europe, the new AI Act is seen as a positive step towards the democratisation of artificial intelligence. In ASEAN, the ASEAN Guide on AI Governance and Ethics can serve as a roadmap for organisations in the absence of a law similar to the EU's. But where and how do enterprises start on their AI journey, especially when cybercriminals are also leveraging AI to prey on unsuspecting victims?

In an online panel titled “Building AI Ethics & Governance in ASEAN: Framework, Best Practice and Checklist for ASEAN Organisations,” as part of IBM’s “Future of Responsible AI & Governance in ASEAN” masterclass, experts shared how enterprises can effectively jump on the AI train by building a solid governance and ethical framework.

Starting point

Lee Wan Sie, Director, Development of Data-Driven Tech, Infocomm Media Development Authority (IMDA), Singapore. Image courtesy of IMDA.

One limitation of the ASEAN Guide on AI Governance and Ethics is that it focuses far more on traditional AI than on generative AI, although the plan is to update it soon based on inputs from member states, revealed Lee Wan Sie, Director, Development of Data-Driven Tech, Infocomm Media Development Authority (IMDA), Singapore.

For organisations, the guide includes a risk impact assessment template that both developers and deployers can use.

“What we’re recommending is not just applying all kinds of controls but instead using risk assessments to guide the implementation. This approach ensures that controls are applied in a way that is directly proportional to the level of risk,” she said.
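The risk-proportional approach Lee describes can be illustrated with a small sketch. This is not code from the ASEAN Guide; the scoring scale, thresholds, and control names are all hypothetical examples of how a risk impact assessment might drive which controls get applied:

```python
def controls_for_risk(severity: int, likelihood: int) -> list[str]:
    """Return governance controls proportional to assessed risk.

    severity and likelihood are scored 1 (low) to 3 (high), a common
    pattern in risk-impact templates. All thresholds and control names
    below are illustrative, not taken from the ASEAN Guide.
    """
    score = severity * likelihood  # simple risk matrix: 1..9
    controls = ["document model purpose and data sources"]  # baseline for all
    if score >= 3:
        controls.append("pre-deployment bias and accuracy testing")
    if score >= 6:
        controls.append("human-in-the-loop review of outputs")
    if score >= 9:
        controls.append("board-level sign-off and ongoing audits")
    return controls

# A low-risk use case triggers only the baseline documentation control;
# a high-risk one accumulates every control in the ladder.
low = controls_for_risk(severity=1, likelihood=1)
high = controls_for_risk(severity=3, likelihood=3)
```

The point of the sketch is the shape of the logic: controls are layered on as assessed risk rises, rather than applied uniformly to every system.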

For Stephen Braim, VP, Government and Regulatory Affairs, IBM APAC, such an AI guide for organisations is a good place to start, especially since ASEAN is in a unique position to outpace its neighbours.

“ASEAN is wedged between China, India, and Japan, and all the work that’s going on in those countries. As a group, it really does have this economic opportunity and competitive priority to keep growing, and proper deployment and support for AI in ASEAN economies is very much a ticket to global economic competitiveness,” he remarked.

AI has also been a transformative force for service delivery in ASEAN, particularly around banking, health, education, government services, and tourism, the IBM executive added.

“When you travel across ASEAN, and you dig into some of these more innovative government agencies and banks, you’ll find that the adoption and the innovative approach that AI brings is really quite startling, and a great platform for ASEAN to leap forward,” Braim noted.

In the case of Indonesia, the ASEAN AI guide has proven beneficial, particularly in filling regulatory gaps, said Dedy Permadi, Digital Policy Advisor to the Minister of ICT, Indonesia.

“The ASEAN AI guidelines can coexist with Indonesia’s Financial Services Authority guidelines on trust and responsible AI in the finance industry, providing additional clarity for financial industry players in terms of AI governance. Similarly, our ministry’s issuance of a circular letter on the ethics of AI complements these efforts by providing further guidance for private and public organisations in implementing AI,” he explained.

Competitive edge

Dedy Permadi, Digital Policy Advisor to the Minister of ICT, Indonesia. Image courtesy of Syahsam Ihza (licenced under CC-BY 4.0).

To maximise AI’s potential, ASEAN businesses must be conscious of the ethical issues associated with the technology, or they risk being left behind. Southeast Asia’s digital economy is expected to reach US$100 billion, while the gross merchandise value is expected to surpass US$300 billion by 2025, according to Permadi.

“As part of the industries utilising AI, the e-commerce, financial, and banking sectors are expected to be among the earliest adopters of the ASEAN AI guidelines. This is because their business models, which benefit from the processing of personal data that may be catalysed by AI, will gain from having clear rules. These rules provide a strong and trusted protection system for their users as part of AI governance,” he said.

One area where AI has been leveraged for malicious intent is deepfake technology, as in the incident in Hong Kong where a multinational was scammed by cybercriminals impersonating company executives.

IBM, for its part, released recommendations for policymakers to mitigate the risks posed by deepfakes. The three key priorities are:

  • Protecting elections.
  • Protecting creators.
  • Protecting people’s privacy.

“The time is now to focus on AI safety and governance because generative AI has introduced some new risks and amplified some existing ones. It’s clear that AI will offer significant benefits, so balancing those benefits with the potential risks is more important than ever. With the rise of generative AI, new risks like those related to deepfakes can harm people and generate more misinformation and disinformation,” noted Christina Montgomery, VP and Chief Privacy & Trust Officer at IBM.

Long-term development

Stephen Braim, VP, Government and Regulatory Affairs, IBM APAC. Image courtesy of IBM.

For organisations still uncertain about how to begin with AI, Purushothama Shenoy, CTO, APAC Ecosystem Technical Leader, IBM, outlines three initial steps:

First, establish an AI governance board to oversee all regulations pertinent to a specific industry or enterprise.

Second, adopt a human-centric approach to AI, ensuring the technology augments people's work rather than replacing people entirely.

Third, organisations should build a trusted AI platform founded on open principles.

As AI has already revolutionised numerous industries, the conversation should also encompass how broader society can contribute to shaping the future economy.

For Lee Wan Sie, upskilling is a must, especially in relation to AI.

“We’re talking about not just training AI engineers or scientists, but really everyone across society to be able to use and understand what the tech can and cannot do, to be aware of the risks. For workers, it’s about how they can use the technology safely and ensuring they have the right skills to augment their current work,” she explained.

Aside from this, championing greater transparency is essential to mitigate the risks posed by AI, especially generative AI.

“I think with generative AI, a key aspect of increasing transparency is to ask, ‘What exactly do we want to be transparent about?’ Many model developers and organisations are beginning to release information about how their models work and how they were trained. It’s also crucial for enhancing transparency that evaluations are conducted openly, allowing the outcomes to be shared,” she concluded.