DBS has spent nearly a decade building the data, systems, and governance structures that now support its AI work. According to Group CIO Eugene Huang, much of the effort involved preparing the organisation for sustainable, scalable use of AI long before generative models emerged. In this interview, he explains the decisions behind that groundwork and how DBS is approaching generative and agentic AI.
What has DBS learned from scaling AI, and what proved more difficult than expected?
DBS began experimenting with AI in 2014, starting with a pilot using IBM Watson for wealth management. Through those early efforts, we saw that data would be central to any AI work, and we spent several years putting in place the architecture needed to support it.
We worked on our technology stack to make it more scalable and stable, with automation built into our processes. This involved moving from legacy systems to open-source technologies and investing in a hybrid, multi-cloud infrastructure to increase available compute resources.
At the same time, we developed two in-house capabilities: ADA, a self-service platform that serves as a single source of truth for data governance, discoverability, quality, and security; and ALAN, our AI protocol and knowledge repository that provides a standardised and repeatable approach to deploying AI models in use cases.
Today, DBS deploys over 2,000 AI models across more than 430 use cases in different parts of the bank. While it was no easy feat to build this foundation, it has helped us scale generative AI use cases and prepare for agentic AI use cases.
In 2021, we began quantifying and disclosing the economic impact of our AI work. In 2025, that impact reached approximately SG$1 billion across our AI/machine learning and data analytics initiatives.
While we built the processes and technology that support our AI strategy, the people aspect was just as important. We focused on making sure employees were included in the transformation.
How is the bank using generative AI today, and which use cases have had the most impact?
Having built the foundation for our AI work over several years, we were able to deploy generative AI use cases in 2023 as the technology matured. These use cases now support different customer and employee workflows across areas such as sales, advisory, servicing, processing, and software development.
For corporate customers, our virtual assistant DBS Joy uses generative models to answer queries, respond to common requests, and support routine servicing. When customers require more complex assistance, the system connects them to a service specialist, who uses an internal co-pilot to help provide more complete responses.
We are also using generative models to support employees. Customer service officers use the CSO Assistant for transcription, call summarisation, and post-call documentation. This has reduced call handling time by up to 20%. More than 90% of our workforce have access to DBS-GPT, our in-house platform used for writing, brainstorming, summarisation, translation, and retrieving information from the bank’s knowledge base.
Generative AI is also used throughout the software lifecycle. One example is JIRA Assist, which helps developers and business analysts refine code, create documentation, and reduce time spent on bug fixes.
These tools aren’t just about automation. They free up our people to focus on work that requires more judgement and interaction with customers.
How are AI and large language models being deployed across your systems?
One part of our approach is a modular AI architecture. We have expanded ADA to support generative AI use cases by building a generative AI marketplace that provides LLMs as a service to applications across the bank, under defined controls and governance. This approach means we are not dependent on any specific LLM or technology provider, whether on-premises or in the cloud.
Instead, our architecture allows us to integrate and swap LLMs with minimal effort. The marketplace includes safety guardrails, audit controls, cost control, and pre-approved patterns, together with reusable APIs that support AI deployments.
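The provider-agnostic pattern described above can be sketched in a few lines. This is an illustrative sketch only, not DBS's actual marketplace code: the names (`LLMBackend`, `Marketplace`, `EchoBackend`) and the guardrail hook are assumptions introduced here to show how swapping models behind a common interface keeps applications decoupled from any one provider.

```python
# Illustrative sketch of an LLM-as-a-service marketplace with swappable
# backends. All class names here are hypothetical, not DBS's real tooling.
from abc import ABC, abstractmethod


class LLMBackend(ABC):
    """Common interface every model provider adapter must implement."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class EchoBackend(LLMBackend):
    """Stand-in backend so the sketch runs without any real provider."""

    def complete(self, prompt: str) -> str:
        return f"[echo] {prompt}"


class Marketplace:
    """Routes requests to a registered backend behind one stable API."""

    def __init__(self) -> None:
        self._backends: dict[str, LLMBackend] = {}

    def register(self, name: str, backend: LLMBackend) -> None:
        # Swapping providers is a re-registration; callers are unchanged.
        self._backends[name] = backend

    def complete(self, backend_name: str, prompt: str) -> str:
        if backend_name not in self._backends:
            raise KeyError(f"No backend registered as {backend_name!r}")
        # Guardrail hook: audit logging, cost tracking, and content
        # filters would wrap the model call here.
        return self._backends[backend_name].complete(prompt)


market = Marketplace()
market.register("default", EchoBackend())
print(market.complete("default", "Summarise this ticket"))
```

Because every application calls `Marketplace.complete` rather than a vendor SDK directly, a model can be integrated or replaced by registering a new adapter, which is one plausible reading of how "swap LLMs with minimal effort" is achieved.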
This work has reduced time to value for AI and machine learning from 18 months to about 2 to 3 months, and contributed to an economic outcome of approximately SG$1 billion in 2025.
How do you balance AI development with governance requirements?
For us, responsible AI means deploying AI in a way that is ethical, transparent, and aligned with clear principles. We recognise the potential of AI in areas such as customer experience and operations, but its use must be guided by shared guidelines.
All our AI and machine learning use cases are reviewed through the PURE framework, which sets out that data use must be purposeful, unsurprising, respectful, and explainable.
PURE is not viewed only as a compliance checklist. It is applied at the level of each data use case and built into our AI and machine learning processes, so that use case owners consider whether data use is ethical in addition to whether it is legally or technically permissible.
To support this, all new employees are introduced to PURE during orientation, where the principles are explained as part of how we approach data use.
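One way to picture PURE being "built into" AI/ML processes is as a per-use-case review record in which all four principles must hold. This is a hypothetical sketch, not DBS's actual tooling; the `PureReview` structure and pass/fail rule are assumptions made for illustration.

```python
# Hypothetical sketch of a PURE review record; not DBS's real system.
from dataclasses import dataclass, fields


@dataclass
class PureReview:
    """One review record per data use case, covering the four principles."""

    purposeful: bool    # the data use serves a clear, stated purpose
    unsurprising: bool  # the use would not surprise the customer
    respectful: bool    # the use respects the customer's interests
    explainable: bool   # the outcome can be explained to the customer

    def passes(self) -> bool:
        # A use case proceeds only when every principle is satisfied;
        # a single failing principle blocks it.
        return all(getattr(self, f.name) for f in fields(self))


review = PureReview(purposeful=True, unsurprising=True,
                    respectful=True, explainable=False)
print(review.passes())  # False: one failing principle blocks the use case
```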
With the rise of generative AI and more agentic systems, we are extending our responsible data use framework with guidelines specific to these technologies. Insights from each use case review feed into subsequent updates of the framework.
What role could agentic AI play in a regulated bank, and where are its limits?
With agentic AI likely to become more common for customers, people may come to rely on these systems to find information, procure products, and manage payments. This creates new possibilities for how we handle certain tasks, although key decisions will continue to involve human oversight for the foreseeable future.
Our goal is to provide customers a controlled and convenient way to conduct these transactions. We have a working group studying different use cases and examining areas such as observability, spend controls, accountability, and liability to balance convenience with the responsibilities involved.
By automating routine tasks within defined guardrails, agentic AI can reduce manual work and allow employees to focus on activities that require judgement and more complex decision-making.
How are teams being prepared to work differently as AI changes the technology stack?
We have stepped up our upskilling efforts to help employees stay relevant as AI reshapes operating models. DBS has identified over 11,000 employees whose roles are affected by AI for deeper, role-specific capability building.
To support this, we are making product and customer experience design more data-driven and developing data analytics and governance specialisation roles. Tech employees will also be trained to take on changed roles that incorporate generative AI tools into their work.
Editor’s note: This interview was first published in Frontier AI 2026.