
In global banking, the success of AI doesn’t just depend on model performance; it hinges on whether systems can scale, stay compliant across jurisdictions, and deliver traceable outcomes. Yet despite growing regulatory attention and organisational investment, many banks still struggle with operational blind spots and legacy assumptions.
Dr David R Hardoon, Global Head of Artificial Intelligence Enablement at Standard Chartered, spoke with Frontier Enterprise about how the bank approaches responsible AI, what explainability really means in a financial context, and why the build-versus-buy debate remains unresolved.
Where do you see the biggest disconnect between AI testing and operationalisation in global banking?
I would say the biggest challenge in moving from testing to operationalisation is achieving reliable, scalable, and governed deployment. Experimentation typically focuses on model accuracy and proof of concept, but production systems must handle scale — specifically, processing large volumes of data and transactions without performance degradation.
Integration with legacy systems is another critical factor, as fragmented architecture can introduce operational risks. Equally important is the need for robust AI monitoring and governance: ongoing model performance and compliance must be tracked to meet regulatory requirements and to detect data drift early. Without operational discipline in these areas, AI initiatives may fail to deliver value or could expose the organisation to compliance, reputational, or monetary risks, ultimately undermining the transformative potential of AI in financial services.
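A common concrete form of the drift monitoring described above is the Population Stability Index (PSI), which compares a feature's distribution at training time against the live population. The sketch below is illustrative only — the thresholds conventionally cited (0.1 for "monitor", 0.25 for "investigate") are industry rules of thumb, not Standard Chartered's actual policy, and the credit-score data is synthetic.

```python
# Minimal PSI sketch for drift monitoring. All data and thresholds are
# illustrative assumptions, not any bank's production configuration.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two samples of a feature; a larger PSI means more drift."""
    # Bin edges come from the expected (training-time) distribution
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range live values
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid log(0) / division by zero
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(600, 50, 10_000)  # scores seen at training time
live_scores = rng.normal(620, 60, 10_000)   # live population has shifted
psi = population_stability_index(train_scores, live_scores)
print(f"PSI = {psi:.3f}")  # values above ~0.25 typically trigger review
```

In practice a check like this would run on a schedule for every monitored feature and model score, with breaches routed into the governance process rather than acted on automatically.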
What kinds of legacy thinking still get in the way of responsible AI deployment at scale?
Candidly, the only “legacy thinking” that I believe still gets in the way of responsible AI deployment is the notion that responsible AI is something new and unique — that it has sprung up because of AI. Organisational conduct and culture lay the foundation for the values and ethics an organisation adheres to.
For instance, our Board has adopted a Group Code of Conduct and Ethics relating to the lawful and ethical conduct of business, supported by our valued behaviours. The only time ‘being responsible’ gets in the way of scaling AI is when responsibility is applied in the wrong way.
For added context, we’ve been adopting AI and digital tools across various functions for a number of years to remain competitive in financial services. Our approved AI use cases are deployed in domains such as client engagement, operational efficiency, risk management, client onboarding, employee engagement, management reporting, and talent acquisition.
We’ve had a formal Responsible AI governance framework in place for several years, led by a dedicated team within the Chief Data Office that centrally oversees all AI use cases. Our approach aligns with established industry standards, including the MAS FEAT and HKMA BDAI guidelines, which are benchmarks among banking regulators. This alignment helps ensure compliance with ethical and regulatory standards, and supports readiness for evolving industry requirements.
How do you decide which AI use cases to pursue internally versus sourcing externally?
Choosing whether to develop AI use cases internally or source them externally revolves around the classic build-versus-buy decision: weighing factors such as intellectual property, speed to market, ownership, support, and commercial implications. The choice is highly contextual and must consider the strategic importance of the use case, data sensitivity, and regulatory complexity.
Internal development may be preferred for core capabilities that provide a competitive advantage or require close integration with proprietary data and systems. Externally sourced solutions may be more suitable where requirements are less unique, rapid deployment is critical, or where leading-edge innovation is needed that would be difficult to replicate internally.
Organisations should evaluate each use case based on criteria such as alignment with business strategy, control requirements, total cost of ownership, scalability, and the ability to maintain or adapt the solution over time.
In what scenarios is explainability still non-negotiable, and where is that bar shifting?
Explainability remains non-negotiable in scenarios involving material impact, explicit legal or regulatory requirements, and where decisions affect customers or markets. The bar is shifting, as explainability now spans a spectrum tailored to different stakeholders rather than being a fixed requirement.
Increasingly, we are recognising that explainability in financial services ultimately refers to the ability to identify, explain, and manage risk, and not just to provide technical insights into model behaviour.
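One long-standing example of stakeholder-tailored explainability in financial services is the “reason code” given to a declined credit applicant. The sketch below assumes a simple linear scoring model with hypothetical weights and portfolio averages (none of these are a real model's parameters): for a linear model, each feature's contribution to a single decision is just weight × (value − population mean), and the most negative contributions become the reasons communicated to the customer.

```python
# Illustrative reason-code sketch for a hypothetical linear credit model.
# Weights, means, and features are assumptions for demonstration only.
import numpy as np

feature_names = ["utilisation", "missed_payments", "account_age_years"]
weights = np.array([-1.5, -2.0, 0.8])        # hypothetical model weights
population_mean = np.array([0.3, 0.2, 6.0])  # hypothetical portfolio averages

def reason_codes(applicant, top_n=2):
    """Rank features by how much they pulled this applicant's score down."""
    contributions = weights * (applicant - population_mean)
    order = np.argsort(contributions)  # most negative contributions first
    return [(feature_names[i], round(float(contributions[i]), 2))
            for i in order[:top_n]]

applicant = np.array([0.9, 3.0, 1.5])  # high utilisation, missed payments
print(reason_codes(applicant))
# → [('missed_payments', -5.6), ('account_age_years', -3.6)]
```

The same decision can then be explained differently to different stakeholders: the customer sees the top reason codes, the model validator sees the full contribution vector, and the regulator sees how both are produced and governed.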
What blind spots do you think regulators and banks still share when it comes to AI governance?
We are continuously uncovering new insights brought forward by AI, which highlights the importance of focusing not only on the technology itself, but also on the insights it reveals and the impact it may have on underlying processes. Jurisdictional risk also arises from diverging international regulations, including differences in the pace and scale of adoption, conflicting rules, and extraterritorial or localisation requirements related to data, AI, capital, and revenues. This makes it challenging for regulators and banks to strike the right balance: capturing the real substance of AI without over-prescribing methods. It is not necessarily just about the ‘how’, but also the ‘what.’
We mitigate this risk by actively monitoring regulatory developments, including those related to AI, digital assets, sustainable finance, and ESG, and by responding to consultations either bilaterally or through well-established industry bodies. We track evolving country-specific requirements and work closely with regulators to support key initiatives, such as the recent MAS Pathfinder Programme. We also engage with policymakers and regulators on AI-related issues as part of our broader regulatory dialogue.
