Why does AI adoption still seem like a pipe dream?

Singapore, with its vision to attract the world’s top AI talent, aims to leverage technology for urban planning and environmental sustainability. The 2023 launch of its National AI Strategy 2.0 highlights a future driven by responsible and accessible AI. Yet despite significant advancements, AI, particularly generative AI, remains largely in the “hobbyist” stage.

The reason lies in the current state of AI development. While impressive, many AI solutions resemble high-powered gadgets: remarkable feats of engineering that remain impractical for everyday business use. According to IBM’s 2023 Global AI Adoption Index, only 28% of business leaders in Singapore said their company has a holistic strategy for AI adoption across the organisation.

With businesses facing numerous challenges and mainstream AI solutions often falling short, what does the future hold, and how can companies scale their AI adoption effectively?

Imagine a compact yet powerful and efficient AI

Generative AI has evolved to a stage where massive models are no longer the only answer. For large language models (LLMs), particularly in domain-specific applications, larger parameter counts may no longer deliver proportional improvements.

While large models have fuelled the AI golden age, they also come with several drawbacks. For example, only the largest companies can afford to train and maintain these energy-intensive models, which contain hundreds of billions of parameters. One estimate suggests that a single day of ChatGPT queries rivals the daily energy consumption of 33,000 households.

Conversely, smaller language models are democratising AI by lowering the barriers to entry for individuals and organisations with limited resources. This is especially critical in Singapore, where small and medium enterprises account for 99% of businesses. These models require less computational power and memory to fine-tune and deploy. Furthermore, their ability to run locally on smaller devices may help address privacy and cybersecurity concerns across edge computing and Internet-of-Things applications. Processing data on-device, rather than sending it to centralised servers, minimises the risk of data exposure and unauthorised access.
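
To illustrate what on-device deployment can look like, here is a minimal sketch using the Hugging Face transformers library. The model identifier is a placeholder, not a specific product recommendation; the point is that inference runs locally, so the prompt and any customer data it contains never leave the device.

```python
# Minimal sketch: running a small open-weight language model entirely on-device.
# MODEL_ID is a placeholder; substitute any small model cached locally.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "path/to/local-small-model"  # placeholder: a locally downloaded small model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = "Summarise this customer enquiry: my card was charged twice for one order."
inputs = tokenizer(prompt, return_tensors="pt")

# Generation happens locally; no data is sent to a centralised server,
# which is the privacy benefit of edge deployment described above.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```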

Until now, generative AI applications have required substantial resources and energy. However, downsizing doesn’t necessarily compromise performance. In fact, smaller models excel in specialised fields such as finance and human resources.

For a nation like Singapore, balancing the benefits of AI with environmental sustainability may seem contradictory, but it is not a pipe dream: it becomes achievable as smaller language models enter the market. Despite their reduced size, some of these models have demonstrated competitive performance across multiple tasks, particularly in specialised sectors such as financial services.

The era of one-size-fits-all AI is over

Tied to the idea of small language models is the notion of specialisation. The evolution of AI suggests that no single model can cater to all needs. This signals the end of the generic, one-size-fits-all approach to AI and points to a future where every enterprise can deploy custom models that align precisely with their goals and regulatory requirements.

Enterprises are recognising that understanding foundation models is crucial for optimising AI initiatives, as these models form the backbone of AI systems. More businesses are also realising the importance of personalising these models to reflect their unique values and operational contexts.

For example, AI Singapore’s SEA-LION LLM is an open-source model focused on Southeast Asian languages and cultures. Imagine a bank in Vietnam developing its customer service virtual assistant. Using SEA-LION LLM and enriching it with data from its own customer service department, the bank can better understand customer needs and deliver personalised responses. This is possible because the bank has tailored the foundation model to comprehend local languages, cultural nuances, and the needs of its current and potential clients.
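
As a rough sketch of how such customisation might be done in practice, the snippet below attaches lightweight LoRA adapters to an open foundation model so that only a small fraction of parameters needs training on the bank’s own data. The model identifier and the attention-module names are placeholders and will vary by checkpoint and architecture; this is one common approach, not the specific method used by AI Singapore or any bank.

```python
# Minimal sketch: parameter-efficient fine-tuning (LoRA) of an open foundation
# model on in-house customer-service data. Identifiers below are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "path/to/open-foundation-model"  # placeholder checkpoint identifier

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base_model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA trains small adapter matrices instead of all base weights, which is what
# keeps customisation affordable for smaller organisations.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # assumption: module names differ per architecture
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically a small fraction of the full model

# From here, the organisation would fine-tune `model` on its own customer-service
# transcripts (for example with transformers' Trainer) and keep that data in-house.
```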

AI that respects privacy, dignity, and diversity

Finally, the future of AI starts and ends with better governance. Only through responsible oversight can AI adoption become widespread and trusted.

People remain wary of AI. Well-known risks include increasingly sophisticated privacy breaches and the potential for bias in AI’s recommendations and outcomes. Developing ethical AI policies (55%), safeguarding data privacy throughout the entire lifecycle (51%), and tracking data provenance and changes to data and model versions (51%) are the most common ways Singaporean enterprises are working to ensure trustworthy AI.

Singapore’s latest initiative, Project Moonshot, exemplifies this commitment by promoting global standards for AI testing and safety. Developed in collaboration with industry leaders, Project Moonshot aims to enhance the reliability and security of AI technologies through comprehensive testing methodologies.

It cannot be overstated: without ethical principles embedded from the outset, AI has no future. This culture doesn’t belong only to IT teams; it must start at the top with every CEO.

Smaller language models, greater specialisation, and responsible AI at every step are three ways in which businesses in Singapore can adopt AI at scale to build competitive advantage, while society as a whole moves toward a better future.