To provide services more efficiently, agencies are turning to AI to process unstructured data and unlock more use cases that benefit the public.
One such agency utilising AI is the MOH Office for Healthcare Transformation (MOHT), an innovation office established in 2018 by Singapore’s Ministry of Health (MOH). The office enforces safeguards to ensure AI is deployed successfully without compromising sensitive healthcare data.
At the “Transforming Government with AI” panel, hosted by NetApp and Logicalis and organised by Jicara Media, government and technology experts explored the complexities of deploying AI in public agencies. Thisum Buddhika, Assistant Director and Software Architect at MOHT, shared how MOHT approaches AI implementation challenges, while Ong Poh Seng, Head of Solutions Engineering for ASEAN at NetApp, provided insights into technical solutions for AI and cloud deployments.
Thisum mentioned that MOHT adopts a generally cautious approach to AI.
“We don’t jump into deep learning or sophisticated large language models (LLMs) straightaway, because there are certain guidelines when you talk about AI,” he said.
Data strategy
MOHT’s approach to AI begins with a well-defined data strategy aimed at ensuring safety, fairness, and privacy. In every AI application, MOHT carefully considers these factors:
- First and foremost, patient safety is paramount: no use of AI may put patients in harm’s way.
- Secondly, patient privacy is a top priority, and MOHT takes measures to protect sensitive data at every stage of AI deployment.
- Finally, MOHT ensures that its use of AI is fair and ethical.
According to Thisum, MOHT deploys AI methodically, which could mean adopting a simple rule-based approach and then gradually incorporating more sophisticated machine learning algorithms that assist in decision-making.
“When making a prediction, we also try to provide enough evidence to substantiate why that prediction was made, because we have this requirement of explainability. We show graphs, historical trends, and data to support how this decision came about. This gives confidence to both care providers and patients,” he elaborated.
In terms of data privacy, Thisum described how MOHT protects sensitive data: “For example, when dealing with GPS coordinates, we use a method called obfuscation. We add random XY coordinates to the latitude and longitude and do not store the delta. As a result, the coordinates retrieved from the user’s application are altered irreversibly.”
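The obfuscation Thisum describes can be sketched in a few lines. This is an illustrative reconstruction, not MOHT’s actual implementation; the offset magnitude is an assumption. The key property is that the random delta is discarded immediately, so the original coordinates cannot be recovered.

```python
import random

def obfuscate(lat: float, lon: float, max_offset_deg: float = 0.01) -> tuple[float, float]:
    """Shift coordinates by a random delta that is never stored,
    making the transformation irreversible."""
    d_lat = random.uniform(-max_offset_deg, max_offset_deg)
    d_lon = random.uniform(-max_offset_deg, max_offset_deg)
    # d_lat and d_lon go out of scope here and are never persisted,
    # so the original position cannot be reconstructed later.
    return lat + d_lat, lon + d_lon
```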
Additionally, the agency refrains from using sensitive clinical data to train its AI models.
“In certain applications, we don’t even use clinical information; that data stays in the clinical system. Instead, we gather data such as digital behavioural patterns and work with it based on the specific needs of the project. We carefully select which data to collect,” Thisum added.
As organisations work through the challenges of deploying AI, technology providers like NetApp are offering solutions that tackle these complexities, particularly in overcoming the limitations of rule-based models.
Deployment challenges
One of the challenges with a rule-based approach, Ong noted, is that some of the information a user seeks cannot be found.
“When I try to use the chatbot in some government web pages, 90% of the time, I cannot find the information I want, and I end up in a loop. The chatbot brings me back to the first question, just because it’s rule-based,” Ong said.
He highlighted that the introduction of retrieval-augmented generation and generative AI offers significant convenience to organisations and external users, provided that the source data is accurate.
“What we can do is use pre-trained data sets, pre-trained LLMs, and your own private data. For instance, if you want healthcare information, you can access it from MOH or any medical institution, but for more accurate predictions or responses to user requests, you need to use your own data sets,” Ong explained.
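The retrieval-augmented pattern Ong describes can be sketched minimally: retrieve the most relevant private documents for a query, then combine them with the question before passing the prompt to a pre-trained LLM. This sketch uses naive keyword overlap for ranking; a production system would use vector embeddings, and the function names here are illustrative.

```python
def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank private documents by keyword overlap with the query.
    Real RAG systems use embedding similarity instead."""
    q_words = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Ground the pre-trained LLM in your own data by prepending
    the retrieved context to the user's question."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"
```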
Internally, NetApp has a tool that allows users to ask questions and receive immediate answers.
“For example, when responding to a tender, we need quick answers to questions like, ‘Can this product perform a certain functionality?’ or ‘Does this product have a specific feature set?’ I don’t have to go through the documentation. I just ask a question, and I get an answer. However, the usefulness of this chatbot is only as good as the data source,” he remarked.
Ong identified two main reasons why most AI deployments fail.
The first, he pointed out, is due to data limitations: “Many customers we’ve spoken to started their AI journey with a small pilot or a limited data set. When they moved on to training the model, they found the data was insufficient to make accurate predictions.”
The second reason, which extends beyond AI, is data security. Combining a pre-trained LLM with internal data sets introduces challenges in protecting data from attackers, Ong noted.
“Ransomware as a service is real, and attackers have become more intelligent. They don’t just target your primary data or storage anymore. Now, they’re also going after your backups. When they steal data, they can resell it, expose it, or delete it entirely,” he warned.
Eliminating roadblocks
As more organisations adopt a hybrid multi-cloud approach, it’s essential for solutions providers like NetApp to meet their customers where they are. For any cloud migration, a key question is: How can organisations ensure they don’t expose sensitive data?
Ong outlined how NetApp’s technology can identify sensitive data within files before they are used for AI training models.
“Our technology helps identify whether the data set you’re using for AI training contains any sensitive information. It can detect sensitive information in your files and automate the process in real time. This means you don’t have to manually check for sensitive data. If you have thousands of files and miss even 1%, that could result in exposing sensitive data,” he said.
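The core idea of automated sensitive-data detection can be sketched with pattern matching. The patterns below are illustrative assumptions (an email regex and the Singapore NRIC format), not NetApp’s classification rules; a real classifier covers far more data types.

```python
import re

# Illustrative patterns only; a production classifier recognises
# many more categories of sensitive data.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "nric": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),  # Singapore NRIC format
}

def scan_text(text: str) -> dict[str, list[str]]:
    """Return any sensitive matches found in the text, keyed by category.
    Running this over every file replaces the manual check Ong describes."""
    hits = {name: pattern.findall(text) for name, pattern in PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}
```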
Whether on-premises or in the cloud, NetApp ensures consistent data and interface management, as well as a unified approach to software and hardware.
“We’ve partnered with hyperscalers like AWS, Azure, and Google to provide a common management control plane. This makes data mobility between on-prem and cloud environments seamless,” Ong added.
Ong elaborated on how NetApp optimises its customers’ data: “When you upload your data to the cloud, the next time you upload a different set, we don’t upload the entire data set again. We upload only the delta, making the process much more efficient. With our caching technology, we can bring the data closer to your compute resources, maximising efficiency.”
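Delta-only uploads of the kind Ong describes typically work by hashing fixed-size chunks of the data and comparing against the hashes of the previous upload, so only changed chunks cross the wire. This is a generic sketch of that idea, not NetApp’s implementation; the tiny chunk size is for illustration only.

```python
import hashlib

def chunk_hashes(data: bytes, chunk_size: int = 4) -> list[str]:
    """Split data into fixed-size chunks and hash each one.
    Real systems use much larger chunks (e.g. megabytes)."""
    return [hashlib.sha256(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)]

def delta_chunks(old: bytes, new: bytes, chunk_size: int = 4) -> list[int]:
    """Indices of chunks that changed and therefore need re-uploading."""
    old_h = chunk_hashes(old, chunk_size)
    new_h = chunk_hashes(new, chunk_size)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]
```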
He emphasised that these capabilities are supported by NetApp’s long-standing partnership with Logicalis, which provides complete AI project solutions for enterprises. Ong concluded by highlighting the Singaporean government’s commitment to using AI for the public good — not just to improve local services, but also to elevate homegrown businesses on the global stage. He stressed that increased collaboration between the public and private sectors would ensure all industries can leverage the transformative power of AI while addressing the challenges of data security and ethical deployment amid its rapid advancement.