Businesses across Singapore are embracing AI-driven tools to enhance productivity and decision-making. However, “shadow AI”, the use of AI technologies that have not been approved or sanctioned by the organisation, could be putting your business at risk.
The term “shadow AI” derives from “shadow IT”, where employees use devices, cloud services, or applications without the knowledge or consent of the IT department. Shadow AI is equally concerning: employees using generative AI tools like ChatGPT without IT knowledge or oversight could inadvertently expose sensitive information or leave the company vulnerable to attack.
Consider this hypothetical scenario: Clara, a marketing specialist at a Singapore-based firm, starts using a free, third-party AI tool to generate creative content and automate social media posts. This is shadow AI: Clara is using the tool without formal approval or oversight from her organisation.
While Clara finds the tool helpful, she is unknowingly putting her company at risk in several ways. First, she uploads sensitive client data, including customer demographics and marketing strategies, to generate content. Since the tool hasn’t been vetted by her IT department, it’s unclear where this data is stored or how it’s protected. Mishandling client data in this way could breach Singapore’s Personal Data Protection Act (PDPA) and expose the firm to legal repercussions.
Second, shadow AI tools may lack robust security measures, leaving them vulnerable to cyberattacks. By using this unapproved software, Clara may inadvertently expose the firm to increased cyber risk: cybercriminals could exploit weaknesses in the tool’s infrastructure, leading to data breaches, unauthorised access to client information, and even ransomware attacks, all of which could have severe financial and reputational consequences for the company.
This scenario may not be far-fetched in Singapore, where AI use in the workplace has become increasingly common. A joint study by KPMG and the University of Queensland found that 75% of Singaporeans are already using AI tools at work, with 60% expressing trust in AI’s capabilities and 59% believing the benefits outweigh the risks. According to a global study by Salesforce, nearly half (48%) of generative AI users in Singapore workplaces admit to using platforms banned by their employers. Worryingly, this study also showed that only 37% of respondents were able to identify ethical and safe practices when using generative AI tools, underscoring the need for more education.
So, what are the risks for businesses?
Put simply, employees using generative AI tools for tasks such as coding, drafting copy, or creating visuals can inadvertently expose their companies to risk. Unauthorised AI use may breach regulations such as the PDPA or the European Union’s General Data Protection Regulation (GDPR), leading to fines, legal issues, and reputational damage. There is also the risk of data exposure: confidential information shared with AI tools increases the likelihood of cyberattacks or leaks, as demonstrated by OpenAI’s 2024 ChatGPT breach.
What’s the solution?
- Establish clear governance around AI usage: To mitigate the risks associated with shadow AI, organisations should implement a centralised governance framework that sets out which AI tools are approved and how they may be used. The framework should include detailed data usage policies that ensure compliance with both internal and external regulations, making it easy for employees to understand their responsibilities and limitations.
- Education is critical: Once a governance framework is established, educating your team is the next step. As mentioned, the Salesforce report reveals a lack of awareness in Singapore regarding the risks and consequences of shadow AI. Training programs are essential to close this gap: they should cover best practices and compliance when using generative AI tools, or any other AI technology, in the workplace, so that employees operate within the boundaries of company policies and regulations.
- Implement access controls to identify shadow AI use: Monitoring tools and access controls help IT teams spot shadow AI and can significantly reduce risk. For example, requiring administrator approval before AI applications can be installed, or blocking access to websites known for unauthorised AI services, can prevent shadow AI from taking hold in the workplace. A minimal sketch of this kind of check follows this list.
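To make the monitoring idea concrete, here is a minimal sketch, in Python, of the kind of check an IT team might run: scanning web proxy logs for traffic to well-known generative AI services that are not on the organisation’s approved list. The domain lists, log columns, and file name below are illustrative assumptions, not references to any specific monitoring product or real log schema.

```python
import csv
from collections import Counter

# Hypothetical allowlist: AI services the organisation has formally approved.
APPROVED_AI_DOMAINS = {"copilot.approved-vendor.example"}

# Hypothetical watchlist of well-known generative AI endpoints.
KNOWN_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}

def find_shadow_ai(log_path: str) -> Counter:
    """Count requests per (user, domain) to AI services not on the allowlist.

    Assumes a CSV proxy log with at least 'user' and 'domain' columns.
    """
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower()
            if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
                hits[(row["user"], domain)] += 1
    return hits

if __name__ == "__main__":
    # "proxy.log" is a placeholder path for an exported proxy log.
    for (user, domain), count in find_shadow_ai("proxy.log").most_common():
        print(f"{user}: {count} requests to {domain} (not on the approved list)")
```

In practice, most organisations would enforce this through an existing secure web gateway or similar control rather than a standalone script, but the underlying logic is the same: compare observed AI traffic against the approved-tools list defined in the governance framework.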
As generative AI tools become more integrated into business operations, organisations must remain vigilant in managing their use. By educating employees, establishing clear AI governance policies, and implementing robust monitoring and access controls, companies can mitigate the risks posed by shadow AI. Taking these proactive steps will not only help protect against data breaches and regulatory violations but will also foster a more secure, compliant, and innovative work environment.