DeepSeek shakes up AI, but trust holds back action

The last two years have very much been about laying the groundwork for AI. Globally, billions of dollars continue to be poured into training AI models and the infrastructure needed to run them — think data centres and GPU chip manufacturing.

But in the very first month of 2025, the industry was shaken by the newest AI model on the block: DeepSeek. Its R1 large language model was developed at a fraction of the cost of comparable models, using significantly fewer GPU chips.

Unsurprisingly, this has got the world talking: how much infrastructure do we actually need for the AI age?

There is, as yet, no definitive answer. What we do know is that with more models available, the adoption of informational AI will inevitably increase in 2025.

But will these developments impact the use of its lesser-known cousin, actionable AI? Or will it remain at the periphery?

A turning point for informational AI

Informational AI, characterised by large language models (LLMs) and generative AI agents, was widely popularised by ChatGPT’s launch and has gradually made its way into business. Research suggests that Australian organisations are among its early adopters, seeing time savings of around 30% across all AI initiatives.

As it stands, according to All About AI, the AI market in Australia is expected to see an annual growth rate of 28.55% from 2024 to 2030, reaching a market volume of AU$20.34 billion by 2030. But as more organisations implement the technology, and more data becomes available on its positive business impact, year-on-year growth will likely exceed forecasts — even more sceptical companies will be swayed.

The less well-defined actionable AI is a different beast, and apprehension will hamstring its adoption throughout 2025. Building on informational AI, actionable AI translates insights into automated actions. For example, it can not only inform an organisation of a market trend but also use the data it has gathered to implement changes that mitigate risk or help the business capitalise on an opportunity.

Currently, the vast majority of Australian organisations and enterprises lack trust in actionable AI, which makes it difficult for the technology to play a role in specialist tasks. For actionable AI to reach its potential, organisations need to be more open to training the tool and allowing it to adapt, in addition to consistently refining the governance frameworks that underpin it.

But if DeepSeek’s disruption has taught us anything, it’s that forward-thinking organisations need to be agile to keep up. Unlike a marathon, there are no pacesetters in the AI race, and the success of an organisation will likely come down to its willingness to adapt.

Early trials ahead, but hesitation remains

With this in mind, in 2025 we might see some organisations begin to trial the use of actionable AI. The most common, everyday example is in customer service, where AI can analyse preferences and past interactions to suggest items, recommend offers, and provide individualised support.

But not all organisations, or industries, will see the industry’s rapid progress as a chance for innovation. Where trust in informational AI was building in 2024, the arrival of a new competitor has highlighted just how fast the landscape can change. For instance, if LLMs can be trained with less infrastructure, government agencies and organisations globally will need to adjust the resources allocated to data centre expansion before supply outstrips demand.

With this considered, it is understandable if organisations aren’t willing to invest resources into training and trialling actionable AI in 2025. Smaller organisations will likely find more value in waiting for the ‘big players’ to invest in training and implementing this technology before testing it out themselves.

At the end of the day, the uptake of actionable AI comes down to two factors: trust in the technology and the resources it will require. Realistically, the technology needs a much higher level of trust from users and organisations than informational AI.

Because there are still reservations over how much we can trust it to perform critical, or even non-critical, tasks, we don’t expect to see actionable AI products released into the mainstream in the next 12 months.

That level of trust will develop over time — it’s a medium- to long-term play — and technologically advanced companies and industries will be first to take action. The reality is, for most, trust in actionable AI just isn’t there yet.