DPEX finds most AI apps substandard


An eighth (12%) of 113 generative AI desktop applications, popular predominantly among startups and individual developers, lack a published privacy policy, according to the Data Protection Excellence (DPEX) Centre.

The research arm of Singapore-based Straits Interactive made the finding in a study conducted as the consumer world jumps on the generative AI bandwagon.

Conducted from May to July this year, the study focused on apps primarily from North America (48%) and the European Union (20%). 

The study underscores the potential risks to which users of generative AI apps might unwittingly expose their data.

The apps were categorised as core apps (industry leaders in the generative AI sector), clone apps (typically startups or individual developers/developer teams, created using core apps’ APIs), and combination apps (existing apps that have incorporated generative AI functionalities).

Of the apps with published privacy policies, 69% identified a legal basis (such as consent and contract performance) for processing personally identifiable information (PII). 

Only half of the apps meant for children considered age restrictions and aligned with child privacy standards such as the Children’s Online Privacy Protection Act (COPPA) in the United States and/or the General Data Protection Regulation (GDPR) in the European Union (EU). 

Though 63% cited the GDPR, only 32% were apparently within the GDPR’s purview. The majority, which are globally accessible, alluded to the GDPR without understanding when it applies outside the EU. 

Of those where GDPR seemed to be relevant, a mere 48% were compliant, with some overlooking the GDPR’s international data transfer requirements. 

In terms of data retention, 35% of the apps did not specify retention durations in their privacy policies, as required by the GDPR or other laws, even though users often share proprietary or personal data with these apps. 

Also, transparency regarding the use of AI in these apps was found to be limited. Fewer than 10% transparently disclosed AI use or model sources. 

Out of the 113 apps, 64% remained ambiguous about their AI models, and only one clarified whether AI influences decisions about user data. 

Apart from renowned players like OpenAI, Stability AI, and Hugging Face that disclose the existence of their AI models, the remainder primarily relied on established AI APIs, such as those from OpenAI, or integrated multiple models. 

The study shows a tendency among apps to collect excessive user PII, often beyond what their primary function requires. With 56% using a subscription model and 31% leaning on advertising revenue, user PII becomes invaluable. 

The range of collected data – from specific birth dates, interaction-based inferences, and IP addresses to online and social media identifiers – suggests potential ad-targeting objectives. 

Kevin Shepherdson, CEO of Straits Interactive, said the study highlights the pressing need for clarity and regulatory compliance in the generative AI app sphere. 

“As organisations and users increasingly embrace AI, their corporate and personal data could be jeopardised by apps, many originating from startups or developers unfamiliar with privacy mandates,” said Shepherdson.

Lyn Boxall, legal privacy specialist at her eponymous fintech advisory firm and a member of the research team, said it is significant that 63% of the apps reference the GDPR without understanding its extraterritorial implications. 

Boxall said many developers seem to lean on automated privacy notice generators rather than actually understanding their app’s regulatory alignment. 

“With the EU AI Act on the horizon, the urgency for developers to prioritise AI transparency and conform to both current and emerging data protection norms cannot be overstated,” she said.