On-premises vs public cloud: What’s best for AI workloads?

As independent software vendors (ISVs) accelerate their adoption of artificial intelligence (AI), a critical question has emerged: Where should AI workloads be deployed — on-premises, in the public cloud, or through a hybrid model?

Each approach offers distinct advantages: public cloud promises speed and scalability, while on-premises provides control and data security. But as AI matures and its demands grow more complex, many enterprises are rethinking which infrastructure strategy makes the most sense.

Public cloud: Fast to start, but harder to scale

Public cloud platforms offer clear benefits for AI experimentation and prototyping. For organizations at the start of their AI journey, these environments provide quick access to out-of-the-box tools, elastic compute resources, and minimal upfront investment. This is particularly advantageous for smaller organizations or individual teams working on pilot projects.

However, as AI projects scale and mature, the public cloud can introduce significant cost and complexity. “Public cloud makes sense at the prototyping stage, but it becomes commercially challenging to scale enterprise AI in an all-in public cloud model,” explained Deepak Waghmare, Chief Technology Officer, Dell Technologies APJ. One major issue is data access: AI systems need to process vast volumes of data, often stored outside the cloud. Moving and maintaining that data in the cloud becomes an ongoing operational and financial burden.

Moreover, regulatory constraints and concerns around data sovereignty are increasingly influencing deployment decisions. In regions like Asia-Pacific, where cloud infrastructure is unevenly distributed and data localization laws are tightening, many enterprises are turning to private or on-premises AI deployments to retain control and ensure compliance. With these constraints in mind, many organizations in the region are also adopting a hybrid cloud model that combines public cloud flexibility with the security of private infrastructure to meet operational and compliance needs.

On-premises and private cloud: Control meets complexity

On-premises AI infrastructure offers a strong value proposition in terms of data control, compliance, and cost predictability. Enterprises with existing data centers or regulatory mandates often find it more secure and efficient to bring compute to the data rather than the other way around.

This is particularly important for industries where data is the core intellectual property, such as healthcare, finance, or manufacturing. “Increasingly, we are seeing customers prefer to run AI in a sovereign environment to avoid vendor lock-in and maintain tighter control over data,” explained Waghmare.

However, building and maintaining on-premises AI infrastructure comes with its own set of challenges, including upfront capital expenditure, hardware lifecycle management, and the need for skilled IT personnel. As a result, hybrid models, in which critical workloads remain on-premises while the cloud is used for burst capacity or specific services, have increasingly become the preferred strategy.

ISVs and the shift to cloud as an operating model

One of the most important shifts in thinking, according to Waghmare, has been redefining “cloud” not as a location but as an operating model. “Ten years ago, ‘cloud’ meant public cloud. Today, it refers to a way of consuming IT infrastructure: one that is intuitive and scalable regardless of where it runs,” he noted.

This mindset has enabled organizations to decouple their application architecture from the underlying infrastructure. ISVs are encouraged to design their applications with portability and interoperability in mind. “ISVs should focus on being infrastructure-agnostic and help customers across any landing zone — public, private, or hybrid,” Waghmare advised.

This flexibility is especially critical for AI workloads, which must operate across multiple data sources and environments. Building rigid, single-stack AI solutions, no matter how optimized, risks isolating the application from the broader enterprise IT ecosystem, which could diminish its functionality.
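
To make this concrete, here is a minimal sketch, in Python, of what infrastructure-agnostic design can look like at the application level: the inference endpoint is supplied by the deployment environment rather than hard-coded to one provider, so the same build can land in a public cloud, a private cluster, or at the edge. The MODEL_ENDPOINT variable and the /v1/predict path are hypothetical names used only for illustration, not a specific product API.

    import json
    import os
    import urllib.request

    # Hypothetical illustration: the endpoint comes from deployment
    # configuration, not from provider-specific code, so the same
    # application can run in any landing zone.
    MODEL_ENDPOINT = os.environ.get("MODEL_ENDPOINT", "http://localhost:8080")

    def predict(features: dict) -> dict:
        """Send a prediction request to whichever endpoint this deployment provides."""
        payload = json.dumps({"inputs": features}).encode("utf-8")
        request = urllib.request.Request(
            f"{MODEL_ENDPOINT}/v1/predict",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.load(response)

    if __name__ == "__main__":
        # The same call works whether MODEL_ENDPOINT points at a managed
        # cloud service, an on-premises cluster, or an edge node.
        print(predict({"text": "example input"}))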

Multi-cloud and hybrid: A strategic imperative

As organizations grow more sophisticated in their use of AI, the need for a multi-cloud architecture becomes more pronounced. AI workloads are inherently horizontal, cutting across business functions and data silos. This requires seamless data access and integration across different cloud services, SaaS platforms, and edge devices.

The challenge lies in managing the complexity of such a distributed landscape. High egress costs, inconsistent security policies, and data mobility issues can quickly spiral out of control. To address this, many enterprises are building a common IT fabric: one that allows applications and data to move freely across environments without redesigning the architecture each time.
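
At the application layer, that fabric often begins with something as simple as addressing data by URI instead of by location-specific code. The sketch below, assuming a hypothetical DATA_URI setting, shows the idea: callers never know or care whether a dataset lives on a local volume, an on-premises object store, or a public cloud bucket.

    import os
    from pathlib import Path

    # Hypothetical illustration of location-independent data access.
    # DATA_URI is supplied per deployment; the reader dispatches on the
    # URI scheme so application code never hard-codes where data lives.
    DATA_URI = os.environ.get("DATA_URI", "file:///tmp/training/batch.csv")

    def read_bytes(uri: str) -> bytes:
        """Read a dataset from wherever the current environment keeps it."""
        scheme, _, path = uri.partition("://")
        if scheme == "file":
            return Path(path).read_bytes()
        if scheme in ("s3", "abfs", "gcs"):
            # A real system would delegate to the relevant object-storage
            # SDK here; this branch is deliberately left as a stub.
            raise NotImplementedError(f"add an object-storage client for {scheme}://")
        raise ValueError(f"unsupported storage scheme: {scheme}")

    if __name__ == "__main__":
        data = read_bytes(DATA_URI)
        print(f"read {len(data)} bytes from {DATA_URI}")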

Waghmare advocates a flexible, open architecture approach grounded in four pillars:

  1. Silicon flexibility: The ability to choose and optimize hardware for specific workloads.
  2. Enterprise data strategy: Treating data as a strategic asset with clear governance.
  3. Cloud-native design: Leveraging containerization and orchestration across cloud environments.
  4. Tool and ecosystem integration: Supporting a wide range of ISV tools for diverse use cases.

How to create an AI-ready infrastructure

For organizations and ISVs planning their infrastructure roadmap, three key recommendations from Waghmare stand out:

  1. Design for portability
    Avoid being tied to a single environment. Build AI applications that can run on any platform — public cloud, on-premises, or edge. This ensures broader market reach and future-proofs your technology against infrastructure shifts.
  2. Prioritize data control and security
    AI success depends on access to enterprise data, but access must be earned. Demonstrating compliance, security, and data isolation — especially in private deployments — will build trust and open doors. A minimal sketch of this default-deny posture appears after this list.
  3. Adopt open, scalable, and validated platforms
    Customers want solutions that scale and integrate easily, not “black box” AI systems. Validating your solutions across platforms reduces friction and accelerates deployment cycles.
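
On the second recommendation, the posture an ISV needs to demonstrate can be expressed very simply in code. The sketch below uses a hypothetical ALLOW_EXTERNAL_TRANSFER deployment flag to illustrate a data-residency gate: unless the customer explicitly opts in, raw records never leave their environment and only non-identifying aggregates are shared.

    import os

    # Hypothetical deployment flag: external transfer is off by default,
    # which is the posture most private and sovereign deployments expect.
    ALLOW_EXTERNAL_TRANSFER = (
        os.environ.get("ALLOW_EXTERNAL_TRANSFER", "false").lower() == "true"
    )

    def export_payload(records: list[dict]) -> dict:
        """Return only what this deployment is permitted to send outside its boundary."""
        if ALLOW_EXTERNAL_TRANSFER:
            return {"records": records}
        # Default posture: share aggregate counts, never the raw records.
        return {"record_count": len(records)}

    if __name__ == "__main__":
        sample = [{"id": 1, "value": 0.42}, {"id": 2, "value": 0.87}]
        print(export_payload(sample))  # {'record_count': 2} unless opted in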

Ultimately, the debate between on-premises and public cloud for AI is not a binary one. The right answer lies in flexibility, control, and alignment with business priorities. AI is not just another IT workload; it’s a horizontal enabler that touches every part of the enterprise stack. The infrastructure strategy must be equally dynamic and inclusive. Adopting a cloud-smart approach, rather than a cloud-first mindset, will be critical to thriving in the AI era.