At the SuperAI Conference in Singapore, the Monetary Authority of Singapore (MAS), DBS Bank, and AWS confronted the financial sector’s central dilemma with AI: how to embrace rapid innovation without undermining regulation and trust.
Their discussion showed that the industry is moving past experimentation into real deployment, but always under the shadow of governance and risk.
Industry collaboration
For MAS, AI is considered an innovation enabler for the sector, and the agency is determined to collaborate with industry stakeholders to identify the best approaches to use cases, said Kenneth Gay, Chief FinTech Officer at MAS.
“AI is a really big part of our growth agenda for the financial sector. I see a lot of interest among Singapore financial institution start-ups in adopting AI, and we work very closely with them,” he said.
At DBS, the bank began intensifying its focus on AI in 2018 to reduce time to value.
“It used to take 18 months to build use cases, from idea to deployment, and we wanted to drive economic outcomes. To do this, we decided to review our tech platforms, our processes, and how to bring the data community together from a transformation perspective,” shared Rajeev Hassamal, Head of Generative AI and FoW at DBS Bank.
DBS has since produced over 300 successful use cases and deployed 1,500 AI models across corporate banking, retail banking, and finance risk functions throughout the organisation.
“We have reduced time to value from 18 months to two to three months to build use cases,” Hassamal said.
Annabel Lee, Director APJ Strategic Policy and Campaigns and ASEAN at AWS, recalled that before the AI boom, about a decade ago, financial institutions were hesitant to explore data opportunities.
“Back in 2016, I remember sitting in rooms in Singapore with the financial services industry, with banks and insurers. People were asking at the time, ‘Why should I even do anything with data? I don’t want to use data. I don’t want to share data. I’m collecting data for very narrow purposes, and that’s all I want to do with it,’” said Lee, who previously worked for the Singaporean government.
By 2018, however, interest in AI surged and has remained strong, she added.
“There was strong interest in the use of traditional AI, and a lot of people at the time were saying, ‘Now that we have the compute power, we can do a lot more with data.’ Suddenly, the possibilities looked very interesting,” Lee observed.
Still, she noted, it was not until generative AI came to prominence that the broader public began to grasp its potential.
“For the average consumer, it suddenly became clear what AI could do,” she said.
She cautioned, however, that while organisations are excited about AI’s possibilities, regulators and governments are increasingly concerned with issues such as governance and explainability.
“If we can’t strike the right balance between responsible use of AI and innovation, it won’t be adopted,” she said.
Building use cases
Within MAS, several generative AI use cases have proved successful. Gay, who previously worked on internal data and digital transformation, helped build some of the applications that are still operational today.
“We built an internal version of ChatGPT. We used a self-hosted LLM and we kept upgrading it. We kept iterating and experimenting to see which among the set of models best fit our colleagues’ use cases, and we combined that with a variety of retrieval methods and generation tools connected to our information stores,” he said.
MAS also created a horizon scanning and risk surveillance tool, an information pipeline that draws from news aggregators into a centralised repository and message feed.
“This message feed allows us to monitor news over a certain time horizon, identify key implications, and summarise them for our stakeholders. Every day, the management in MAS receives a set of news alerts that are generated automatically in the morning, so they can see the most relevant information,” Gay explained.
At DBS, AI use cases are divided into horizontal and vertical applications.
“For horizontal use cases, we apply generative AI to unstructured data, similar to what MAS does. It’s our general tool. We keep upgrading the models to the latest versions from any family of models that provide read, write, summarise, and transcribe capabilities, as well as access to company knowledge,” Hassamal said.
Vertical applications, on the other hand, are designed for specific roles.
“In call centres, for example, the tool provides live transcript access and automatically generates summaries and service requests when needed. This means customer service officers can focus on the conversation instead of scribbling notes or toggling screens. Since deployment, interaction quality has improved and resolution rates have increased,” he added.
In defence of guardrails
As early as 2018, MAS had already begun shaping AI governance. Gay noted that the regulator issued broad guidance called FEAT, which stands for “fairness, ethics, accountability, and transparency.”
“After that, we started hearing a lot of feedback from financial institutions. ‘How do I adopt these principles within my institution?’ We then came up with a FEAT toolkit, an open source toolkit that financial institutions could adapt and run in their own institutions,” he said.
With generative AI, MAS released a whitepaper explaining the technology, outlining potential risks, and setting out considerations for financial institutions.
“Financial institutions are now building and deploying generative AI applications, and they want more clarity around the guardrail features required. To address this, we’re developing a playbook this year for use across different parts of the financial sector, including banking, insurance, and capital markets. We are gathering key use cases and related risk considerations, and testing how far we can go in giving institutions clarity on what to consider when deploying their applications,” Gay said.
For DBS, working closely with regulators like MAS to develop a risk-based approach to AI regulation is critical, Hassamal remarked.
“Technology keeps evolving at a pace never seen before, so finding that sweet spot is hard. For financial services, trust is at the centre of everything for our customers. They give us their money to save and keep, so it’s important to take good care of that trust, while also finding the balance between governance and speed of innovation,” he said.
Future of human-AI interaction
Looking ahead, Gay predicts greater clarity in work processes as human-AI collaboration becomes more defined.
“You see it in coding: if you give the AI only a few lines, the output isn’t going to be great. But if you let it work across a full code repository, with all the system specifications included, the results are better and you can run automated testing as well. The same applies in the policy space. When AI is exposed to a wider pool of information, supported by standard operating procedures and work processes, it improves not only the AI itself but also supports new joiners to the team,” he explained.
Meanwhile, as enterprises explore agentic AI, Hassamal argued that it will still take time before financial institutions deploy fully autonomous customer-facing agents.
“From a banking perspective, trust is essential, and we are highly regulated. One wrong answer could cause a lot of trouble for the organisation. We’re not there yet, but the path starts with copilot functions, then exposing simple knowledge, followed by simple transactions and queries on those transactions. On top of that, we can consider low-risk advisory. These building blocks are necessary to establish security before opening up to a wider spectrum of use cases,” he said.
Lee agreed, suggesting that AI agents will likely first be deployed for routine tasks such as filing claims or scheduling.
“Externally, probably the key enabler is having that regulatory certainty, knowing that you can implement these use cases without running afoul of requirements. That’s where MAS has done well in ensuring that conversations are ongoing. Where there are higher-risk use cases, there are sandboxes that people can experiment in as well,” she said.