You’ve heard pundits and politicians say, “It’s the economy, stupid,” when discussing what matters most in any given election.
As enterprises and organisations grapple with an ever-evolving technology landscape — with decisions to be made around generative AI, cloud, data, security, quantum, and myriad other advances emerging in a short span of time — a similar adage can help cut through the hype: “It’s the application, stupid.”
There’s no debating the impact of AI, but it’s also rapidly becoming boring. In a similar vein, the wheel was once revolutionary; today, people pay attention to the car, not the tyres. AI is on its way to becoming just another utility: embedded everywhere but rarely the main event.
We can assume AI and large language models will be standard in most, if not all, new applications. For example, the 2025 Nutanix Enterprise Cloud Index report found that more than 80% of organisations had already implemented a generative AI strategy. Only 2% said they had not started one.
In a world where AI is simply expected, competitive advantage won’t come from which LLM you choose, whether it’s from OpenAI, DeepSeek, Anthropic, or Meta’s Llama family. Few will care about a couple of extra points on a benchmark test.
Think of it this way: does using Microsoft 365 or Google Workspace make your business any more competitive than others in your industry? My view is ‘no’. It’s just assumed that you’d use some application for email, messaging, and content creation. It’s hardly worth mentioning.
This is where AI is heading. In this “AI-everywhere” world, competitive differentiation will be found in its application — both in how it is used, and in the app through which employees and customers engage with it. Hence, “It’s the application, stupid!”
Build for scale or be left behind
AI applications aren’t just being built; they’re being engineered for scale, speed, and resilience. That means they must be cloud-native. No more clunky, monolithic AI deployments that choke under heavy workloads. AI-powered applications require on-demand computing power, rapid data processing, and agile deployment cycles — all of which call for cloud-native architectures.
The challenge is that AI isn’t a one-size-fits-all solution. Some workloads demand massive computing power for training complex models, while others need low-latency, real-time inference at the edge.
The difference is critical. AI that runs in the wrong environment either wastes resources or fails to deliver fast, accurate insights when it matters most. Cloud-native architectures bridge this gap, dynamically scaling AI applications up or down to meet business needs without overspending on unused capacity.
But scalability alone isn’t enough; AI applications must also be resilient. They can’t afford unplanned downtime, and a single failure shouldn’t take the entire system offline. That’s why cloud-native AI is increasingly modular, containerised, and distributed: if one component fails, the rest continue running. AI needs to be always on and always available, especially in mission-critical industries like healthcare, finance, and security.
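For readers who want to see the fail-soft principle in miniature, the toy sketch below (hypothetical backend names, Python standard library only, not any particular vendor’s API) routes a request across redundant inference backends and degrades gracefully rather than going offline:

```python
# Toy sketch of fail-soft routing across redundant inference backends.
# Both backends are hypothetical stand-ins for real model endpoints
# (e.g. containers behind a load balancer or service mesh).

def primary_backend(prompt: str) -> str:
    # Simulate an outage of the primary inference node.
    raise ConnectionError("primary inference node is down")

def replica_backend(prompt: str) -> str:
    # A healthy replica serves the request.
    return f"answer({prompt})"

def infer(prompt: str, backends) -> str:
    """Try each backend in order; degrade gracefully if all fail."""
    for backend in backends:
        try:
            return backend(prompt)
        except ConnectionError:
            continue  # one failed component shouldn't take the system offline
    return "service degraded: please retry"  # last-resort fallback

print(infer("2+2", [primary_backend, replica_backend]))
```

In production this routing belongs in the platform layer (load balancers, orchestrators such as Kubernetes), not in application code, but the principle is the same: redundancy plus graceful degradation keeps the service always on.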
AI needs a platform that moves
Cloud-native architectures make AI applications more agile, but agility only works if AI can be moved freely. AI workloads need to shift seamlessly — across public clouds, private infrastructure, and edge environments — without friction, costly migrations, or vendor lock-in.
The problem? Most AI applications remain tied to a single hyperscaler. That limits businesses’ control over where and how their AI operates. Cost efficiency, compliance flexibility, and long-term innovation are all stifled if AI can’t adapt to changing needs. AI should be deployed where it makes the most sense — on-premises for security-sensitive workloads, in the public cloud for scalability, or in a hybrid set-up that evolves with business demands.
But moving AI isn’t as simple as lifting and shifting workloads. AI applications are living, evolving systems that require ongoing monitoring, security updates, and compliance management.
A well-designed AI platform should manage this complexity. It should allow businesses to run AI applications anywhere, without constant infrastructure reconfiguration or performance trade-offs. AI should be as flexible as the businesses using it.
Applications are where AI hits the road
There will always be die-hard enthusiasts obsessing over tyre tread patterns, PSI, and rubber composition. In the AI world, these will be the ones pushing LLMs to greater levels of efficiency and accuracy.
The world still needs them. But for most business leaders, the focus must now shift to how AI is applied, how it improves employee experience, and how it creates more value for customers.
This will happen through applications — the interfaces through which people engage, create, and innovate with AI. It is through applications that the rubber will finally hit the road.