More than one-third of Australian professionals are regularly uploading sensitive company data — including strategy documents, financials, and customers’ personally identifiable information (PII) — into AI platforms, often without any formal oversight.
This is one of the key findings of a report from Josys. It warns that a surge in “shadow AI” — employee use of unauthorised AI platforms that bypass security protocols — is exposing Australian companies to serious compliance and data risks.
The report is based on research by Josys, in collaboration with independent research firm Censuswide. It involved a survey of 500 Australian technology decision makers across a range of sectors and company sizes.
Among respondents, 324 belonged to firms with fewer than 250 employees and 176 belonged to companies with more than 250 employees.
Despite the growth in AI adoption, 70% of organisations have moderate to no visibility into what AI tools are being used, creating massive blind spots. Smaller businesses are particularly vulnerable, with only 30% of companies with fewer than 250 employees feeling fully equipped to assess AI risks, compared to 42% of larger organisations.
As economic uncertainties and job pressures mount, users eager to capitalise on AI’s productivity gains are unintentionally creating backdoors for data leaks and compliance violations.
Jun Yokote, COO and president of Josys International, said that while Australia is racing to harness AI for increased productivity, without governance, that momentum quickly turns into risk.
“Productivity gains mean nothing if they come at the cost of trust, compliance, and control,” said Yokote. “What’s needed is a unified approach to AI governance which combines visibility, policy enforcement, and automation in a single, scalable framework.”
Among respondents, just 1 in 3 organisations (33%) are fully prepared to assess AI risks, while nearly 20% are not prepared at all.
Nearly two-thirds (63%) of professionals lack confidence in using AI securely, exposing a major readiness gap.
Despite operating in highly regulated environments, only 52% of finance, 55% of IT/telecom, and 62% of healthcare teams report full preparedness.
As AI adoption accelerates, critical sectors and smaller businesses in Australia are becoming overwhelmed. Without effective technology oversight for policy enforcement and training, many organisations risk falling into a cycle of reactive governance and compliance failures.
Nearly half (47%) of respondents cite upcoming AI model transparency requirements and Privacy Act amendments as top compliance hurdles. Despite the growing complexity, 50% still rely on manual policy reviews, while a third (33%) have no formal AI governance processes in place.
Even among those with some level of oversight, only 25% believe their current enforcement tools are highly effective. This highlights a widespread gap between regulatory compliance and organisational readiness.