AI rapidly expands in Asia: Risk & compliance practitioners should embrace ongoing education

Artificial intelligence (AI) promises to deliver extraordinary opportunities. In the context of the ongoing pandemic, AI is being used for contact tracing and for monitoring physical distancing. Hospitals are also using AI to manage bed assignments and to improve patient care through predictive modelling.

In Asia, AI research and commercialisation are flourishing. Japan and South Korea lead in patent filings, while China invests heavily in AI research and development. In Southeast Asia, Singapore is leading the charge and recently announced its AI roadmap, which aims to make the city-state the regional AI hub within the next 10 years. With global manufacturing shifting to Southeast Asia, AI deployments in places like Indonesia, Malaysia, Thailand, and Vietnam are likewise expected to grow tremendously.

What this trend means for us as compliance practitioners is an upsurge of significant risks that we must understand, anticipate, and respond to. These risks are real and rapidly evolving, yet the rules, laws and codes governing AI are not always able to keep up with the pace of technological change, creating a considerable challenge for compliance.

In order to address this megatrend, the International Compliance Association (ICA) recently held a series of online roundtable events aimed at exploring the practical challenges of AI from the perspective of risk and compliance professionals. The discussion was led by Janet Adams, a disruptive tech advocate and long-term AI enthusiast with over two decades of experience in banking risk and technology, who shared with participants some of the research from her recently completed master’s degree in AI.

Senior executives working within the financial services sector across the globe were in attendance, including Heads of Compliance, Money Laundering Reporting Officers (MLROs), Heads of Monitoring, Chief Risk Officers (CROs), and Conduct Risk Managers. They represented organisations at varied stages in their AI journey. Some common themes emerged, with “Decision Making and Governance” the most commonly cited concern, followed by “Monitoring RegTech and AI”, “Accountability and Explainability”, and “Training”.

Training

Given the complexity of AI technologies and the speed at which they have developed, participants reported that levels of understanding of how AI technologies work, and of their outputs, varied considerably across industries and organisations, as well as within compliance teams. This created practical hurdles both in selecting and implementing suitable technological solutions and in establishing AI-related training needs.

In addition, diverse workforces have a corresponding diversity of training needs with regard to AI. For example, according to one participant: “There’s something of a divide between those that ‘get’ AI and those that don’t. And often that divide is between the younger and the older employees.” Given that AI has the features of a general-purpose technology – i.e. it is expected to impact all aspects of the world around us – it was agreed that adopting a ‘head in the sand’ approach to AI training is simply not an option. All business functions will need at least some understanding of AI and what it means to the organisation and their role within it.

Many participants also highlighted the need for much greater understanding of risk and compliance issues amongst technology providers. As Janet Adams put it: “There is a huge communication piece here. The data scientists and tech specialists generally don’t have strong risk awareness. Training needs to work both ways. We as Risk and Compliance need to understand the tech, but the tech specialists need to understand the risk and compliance considerations too.” It was also suggested that risk and compliance teams may increasingly need to include a risk and compliance data scientist as a matter of course.

Accountability, explainability and monitoring

Concerns regarding shortcomings in knowledge and understanding of AI and, more broadly, of the challenges of vendor selection and engagement, also featured strongly in the discussion around the accountability for, and explainability of, AI-derived decisions. How can you select the right solution for your organisation if you don’t understand how AI solutions work? Further, how can you be accountable for the results of an AI solution if you can’t explain how it operates?

Participants agreed on the importance of selecting the right technology partner when implementing AI, with an emphasis on the word ‘partner’. One individual highlighted “the difficulty of trying to select the right provider for an AML solution given that there are over a hundred providers in the market”. Another suggested that vendors must get better at explaining how their products work, particularly as solutions are now running ahead of the understanding of regulators, businesses, and often the vendors themselves. “Solutions can feel like a ‘black box’,” they suggested. “I have had discussions with vendors where they weren’t keen to explain how the underlying technology works, but if you can’t understand how the technology works, how can you implement it?”

Adams suggested: “I propose that a risk-based approach to explainability should be adopted, with the degree of risk depending upon the use case. If you’re dealing with a high-risk use case that directly impacts a customer, you would need extremely high confidence in the explainability. You therefore need a case-level understanding of the technology in order to select a technology partner and understand the level of associated risk to be managed.”
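
To make the idea concrete, the sketch below maps use cases to a required level of explainability. The tier names, fields, and example use cases are hypothetical illustrations, not an established standard.

```python
# A minimal sketch of a risk-based explainability policy. Tier names,
# fields, and example use cases are hypothetical, not an industry standard.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    customer_impact: bool  # does the output directly affect a customer?
    automated: bool        # is the decision made without human review?

def required_explainability(uc: UseCase) -> str:
    """Map a use case to the level of explainability confidence it demands."""
    if uc.customer_impact and uc.automated:
        return "full"      # every individual decision must be explainable
    if uc.customer_impact:
        return "high"      # human in the loop, but reasons still required
    return "standard"      # aggregate validation and testing may suffice

# A customer-facing, automated use case sits in the highest tier.
print(required_explainability(UseCase("client onboarding", True, True)))       # full
print(required_explainability(UseCase("internal alert triage", False, False)))  # standard
```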

Given the limitations of both compliance practitioners’ technological know-how and of tech providers’ grasp of risk and compliance, finding a common language is essential. As one participant suggested: “For those who aren’t tech-savvy, there is a halfway house conversation that can be had around the logic of decision making. I understand the logic of decision making, so the vendor should be able to explain to me the logic of the technology’s decision-making process.”
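
One way to picture that ‘halfway house’ conversation is an interpretable model whose learned rules can be printed as plain if/then statements. This is a minimal sketch using scikit-learn; the feature names and toy data are invented for illustration.

```python
# A minimal sketch of the "logic of decision making" conversation, using
# scikit-learn. Feature names and toy data are invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy onboarding data: [monthly_transaction_volume, high_risk_jurisdiction]
X = [[10, 0], [500, 1], [20, 1], [800, 0], [900, 1], [15, 0]]
y = [0, 1, 0, 0, 1, 0]  # 1 = escalate for enhanced due diligence

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the learned rules as plain if/then statements:
# decision logic a non-technical reviewer can follow and challenge.
print(export_text(model, feature_names=["transaction_volume",
                                        "high_risk_jurisdiction"]))
```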

Decision making and governance

These concerns about explainability and accountability were underlined by a broader unease regarding potential loss of control, associated in particular with the use of deep learning algorithms for decision making. Some participants were particularly uncomfortable about the possibility that algorithms could amplify bias, for example in a client onboarding context. 

In practice, adopting AI presents a great opportunity for a step change in the eradication of bias. This points to the need for compliance by design at the outset of an implementation project: choose the most appropriate algorithm and then apply rigorous governance in the design of every feature of the system. Bias needs to be considered at the very outset and then continuously monitored and audited throughout the system lifecycle.
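
As an illustration of what continuous bias monitoring might look like, the sketch below compares approval rates across applicant groups in a hypothetical onboarding log. The metric (a demographic parity ratio) and the 0.8 threshold are common illustrative choices, not regulatory requirements.

```python
# A minimal sketch of a continuous bias check, assuming onboarding outcomes
# are logged per applicant group. Group labels and data are illustrative.
from collections import Counter

def approval_rate(decisions: list[tuple[str, bool]], group: str) -> float:
    """Share of applicants in `group` whose application was approved."""
    totals = Counter(g for g, _ in decisions if g == group)
    approvals = Counter(g for g, ok in decisions if g == group and ok)
    return approvals[group] / totals[group]

def parity_ratio(decisions, group_a, group_b) -> float:
    """Ratio of approval rates; values well below 1.0 flag potential bias."""
    return approval_rate(decisions, group_a) / approval_rate(decisions, group_b)

# Example log of (applicant group, approved?) pairs from the live system.
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
ratio = parity_ratio(log, "B", "A")
if ratio < 0.8:  # the "four-fifths" rule of thumb, used here for illustration
    print(f"parity ratio {ratio:.2f} below threshold; escalate for audit")
```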

The message is that good governance must underpin all aspects of an AI project, from procurement to implementation and monitoring. This should include:

  • Auditable risk assessment and cost/benefit analysis taking into consideration key principles of AI design
  • Monitoring and testing of AI outcomes and data input against key risks and principles (a minimal monitoring sketch follows this list)
  • Appropriate oversight of AI systems and clear accountability of senior management with sufficient understanding of AI.
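
On the monitoring point above, one simple form of outcome testing is to compare the live system’s alert rate against the rate observed during validation. This is a minimal sketch; the tolerance, window, and rates are illustrative assumptions.

```python
# A minimal sketch of ongoing output monitoring: flag when the live alert
# rate drifts away from the rate seen when the model was validated.

def alert_rate(outcomes: list[bool]) -> float:
    return sum(outcomes) / len(outcomes)

def check_drift(baseline: list[bool], live: list[bool],
                tolerance: float = 0.10) -> bool:
    """True when the live rate drifts beyond tolerance from the baseline."""
    return abs(alert_rate(live) - alert_rate(baseline)) > tolerance

# Validation period: roughly 20% of cases raised an alert.
baseline = [True] * 20 + [False] * 80
# Live window: the rate has jumped to 40%, a signal to investigate model
# behaviour or upstream data quality before trusting the outputs.
live = [True] * 40 + [False] * 60

if check_drift(baseline, live):
    print("output drift detected; trigger review under the governance policy")
```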

Data governance, in particular, was flagged as an often-underestimated component of any AI project, not least because the output of an algorithm will only ever be as good as the input data. According to Janet Adams: “An AI project in financial services is likely to be 10% AI; 40% data gathering, cleansing, normalising, checking, and de-biasing; and 50% compliance and governance. It’s easy to knock together an algorithm, but getting all your data together and in good shape can be a challenge, and then testing, monitoring and governing it safely throughout its journey could be as big an effort as the other two elements, depending on the risk level of the use case.”
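
The ‘40% data’ effort Adams describes can be pictured as routine gathering, cleansing, and normalising checks applied before any record reaches the model. The field names and rules below are invented for illustration.

```python
# A minimal sketch of pre-model data quality checks: reject records with
# missing mandatory fields and normalise the rest to a consistent shape.

def clean_record(raw: dict) -> dict | None:
    """Return a normalised record, or None if it fails basic quality checks."""
    # Cleansing: reject records with missing mandatory fields.
    if not raw.get("customer_id") or raw.get("amount") is None:
        return None
    # Normalising: consistent casing and types across source systems.
    return {
        "customer_id": str(raw["customer_id"]).strip().upper(),
        "amount": float(raw["amount"]),
        "country": str(raw.get("country", "UNKNOWN")).strip().upper(),
    }

raw_feed = [
    {"customer_id": " abc123 ", "amount": "250.00", "country": "sg"},
    {"customer_id": "", "amount": 90.0},      # fails: missing customer ID
    {"customer_id": "XYZ9", "amount": None},  # fails: missing amount
]
cleaned = [r for r in map(clean_record, raw_feed) if r is not None]
print(f"{len(cleaned)} of {len(raw_feed)} records passed quality checks")
```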

Put your hand up

Above all, the takeaway message for participants was not to be daunted by the pace of change in this field and, equally, not to be fearful of asking questions of their organisations and product providers.

“There is a reluctance among banking professionals to say ‘what is AI?’,” suggested Janet Adams, “but my single biggest tip is: put your hand up and say when you don’t understand something. Keep saying that you don’t understand until you, as a compliance professional, are happy that you have the answer that you need. AI is massively complex, so nobody looks stupid asking questions about it. Now is the time to invest in yourself and really learn.”