Singapore has established itself as an early adopter of AI, being among the first countries to publish a National AI Strategy, followed by the Model AI Governance Framework. The government's commitment to fostering responsible AI (AI that is trustworthy, safe, less biased, more reliable, and auditable) is evident. Coupled with a balanced regulatory approach, these initiatives arrive at a critical juncture in AI adoption, as organisations navigate new complexities to realise business value.
Responsibility for AI adoption lies with both regulators and the organisations seeking to benefit from it. Leaders must align AI implementation with their strategic goals while updating security and privacy policies to keep pace. When implemented strategically, AI can enhance business functions ranging from software development to marketing and finance.
While many organisations are rushing to incorporate AI into their workflows, the most successful adopters will be those that take a deliberate and strategic approach. Ensuring that AI implementation considers its impact on consumers has become not only a best practice but also a moral imperative.
Taking a privacy-first approach
Establishing safeguards for responsible AI use is critical. According to GitLab’s 2024 Global DevSecOps Report, nearly half (48%) of respondents expressed concern that AI-generated code might not receive the same copyright protections as human-generated code, and 42% worried that AI-generated code could introduce security vulnerabilities.
Without carefully addressing how AI tools store and protect proprietary corporate, customer, and partner data, organisations risk exposing themselves to security breaches, regulatory fines, customer attrition, and reputational harm. In Singapore, sector-specific guidelines for AI have been issued by various government agencies in fields such as financial services, healthcare, infocomm, and media, aiming to build trust and address accountability and security challenges.
Organisations must implement strict policies governing the use of AI-generated code to safeguard intellectual property. When incorporating third-party AI platforms, a thorough due diligence assessment is critical to ensure that data, both the model prompt and its output, will not be used for AI/ML model training and fine-tuning; otherwise, proprietary information could inadvertently be exposed to other businesses.
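Those due-diligence findings are most useful when they are enforced automatically rather than left in a report. The sketch below shows one illustrative way to gate outbound requests on a vendor's recorded data-use terms; the vendor names, policy fields, and `check_vendor` helper are hypothetical placeholders, not any specific provider's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VendorPolicy:
    """Recorded due-diligence findings for one third-party AI platform."""
    name: str
    trains_on_customer_data: bool  # do submitted prompts/outputs feed model training?
    retains_prompts: bool          # are prompts stored after the request completes?

# Hypothetical outcomes of a due-diligence review.
APPROVED_VENDORS = {
    "vendor-a": VendorPolicy("vendor-a", trains_on_customer_data=False, retains_prompts=False),
    "vendor-b": VendorPolicy("vendor-b", trains_on_customer_data=True, retains_prompts=True),
}

def check_vendor(vendor_id: str) -> VendorPolicy:
    """Refuse to send data to vendors whose terms permit training on it."""
    policy = APPROVED_VENDORS.get(vendor_id)
    if policy is None:
        raise PermissionError(f"{vendor_id} has not passed due diligence")
    if policy.trains_on_customer_data:
        raise PermissionError(f"{policy.name} may train on submitted data")
    return policy
```

A gate like this turns a one-off legal review into a living control: when a vendor's terms change, updating one policy record changes behaviour everywhere the vendor is used.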
Although some companies behind popular AI tools are less than forthcoming about their model training data sources, transparency will be foundational to AI’s long-term viability. When models, training data, and acceptable use policies are vague or inaccessible, it becomes increasingly difficult for organisations to use these models safely and responsibly.
Starting small
To safely benefit from AI's efficiencies, organisations can avoid pitfalls such as data leakage and security vulnerabilities by identifying the lowest-risk areas of the business. This approach allows them to build best practices in a low-risk area before enabling additional teams to adopt AI, so that adoption scales safely.
Leaders can start by facilitating conversations between their technical teams, legal teams, and AI service providers. Setting a baseline of shared goals is critical to determining where to focus and how to minimise risk with AI. From there, organisations can begin establishing guardrails and policies for AI implementation, such as employee use, data sanitisation, in-product disclosures, and moderation capabilities. Businesses must also be willing to participate in well-tested vulnerability detection and response programs.
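Data sanitisation, for instance, can be enforced in code before a prompt ever leaves the organisation. The following is a minimal sketch, assuming simple regex-based redaction; a production system would rely on vetted secret- and PII-detection tooling rather than these illustrative patterns.

```python
import re

# Simplified example patterns, for illustration only.
REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "nric": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),        # Singapore NRIC format
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def sanitise_prompt(prompt: str) -> str:
    """Replace likely sensitive values before a prompt is sent to an AI service."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

print(sanitise_prompt("Contact jane.doe@example.com, NRIC S1234567A"))
# Contact [REDACTED-EMAIL], NRIC [REDACTED-NRIC]
```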
Finding the right partners
Just as Singapore’s AI strategy focuses on collaboration and international partnerships, organisational leaders should also consider partners who can help them securely adopt AI and ensure they are building on security and privacy best practices. This approach will enable them to adopt AI successfully without sacrificing adherence to compliance standards or risking relationships with their customers and stakeholders.
Businesses’ concerns about AI and data privacy usually fall into three categories: what data sets are being used to train AI/ML models, how proprietary data will be used, and whether proprietary data — including model output — will be retained. The more transparent a partner or vendor is, the more informed an organisation can be when assessing the business relationship.
Developing proactive contingency plans
Finally, leaders can create security policies and contingency plans regarding the use of AI and review how AI services handle proprietary and customer data, including the storage of prompts sent to and outputs received from their AI models.
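One concrete starting point is an audit trail of AI interactions with an explicit retention window, so stored prompts and outputs do not accumulate indefinitely. The sketch below is a hypothetical illustration; the field names, file format, and 30-day retention period are assumptions to be set by each organisation's own legal and compliance review.

```python
import json
import time
from pathlib import Path

RETENTION_DAYS = 30                 # assumed policy, not a recommendation
LOG_PATH = Path("ai_audit.jsonl")   # hypothetical log location

def log_interaction(vendor: str, prompt: str, output: str) -> None:
    """Append an auditable record of one AI request/response pair."""
    record = {"ts": time.time(), "vendor": vendor, "prompt": prompt, "output": output}
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(record) + "\n")

def purge_expired() -> None:
    """Drop records older than the retention window (run on a schedule)."""
    if not LOG_PATH.exists():
        return
    cutoff = time.time() - RETENTION_DAYS * 86400
    kept = [line for line in LOG_PATH.read_text().splitlines()
            if json.loads(line)["ts"] >= cutoff]
    LOG_PATH.write_text("\n".join(kept) + ("\n" if kept else ""))
```

With records like these in place, incident responders can answer the basic questions a contingency plan assumes: what was sent, to whom, and when.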
Without these safety measures, the consequences can seriously undermine the future adoption of AI within organisations. Although AI has the potential to transform companies, it comes with real risks, and technologists and business leaders alike are accountable for managing them.
To achieve Singapore’s goal of ensuring responsible AI adoption and its subsequent societal benefits, businesses and individuals must take intentional action today. How we adopt AI technologies will significantly impact AI’s role in business and society moving forward. Organisations can achieve more business value by strategically identifying priority areas to incorporate AI while ensuring they stay secure, compliant, and trusted by their most important stakeholders.