Red Hat leaders map out open-source AI direction

Open-source AI development is reshaping how enterprises build and run modern infrastructure.

Red Hat Summit 2025 featured a series of announcements, including Red Hat Enterprise Linux 10, updates to OpenShift Virtualization, and new AI-related initiatives. Senior leaders presented these releases within the context of ongoing developments in open-source software and enterprise AI adoption.

In an exclusive virtual media roundtable, President and CEO Matt Hicks, Chief Product Officer Ashesh Badani, Chief Technology Officer Chris Wright, and Chief Revenue Officer Andrew Brown discussed the technical and operational considerations behind the announcements, as well as their observations on AI use cases emerging in Asia-Pacific.

Open-source roots

According to Hicks, balancing AI commercialisation and enterprise adoption with the company’s open-source principles is achievable because Red Hat positions itself as an infrastructure provider focused on supporting a range of application requirements with choice and flexibility.

“Take vLLM (virtual large language model) as an example. It is unusual in open source to see something with the same level of collaboration and standardisation that shaped Linux or Kubernetes. Having open source as the default lets people run these models themselves and build on top of them,” he said.

In the early days of open-source technology, security was a primary concern, noted Wright.

“There were also concerns around robustness and reliability for software expected to support critical infrastructure. With transparency and openness in the development process, those security concerns quickly moved into the background,” he recalled.

According to Wright, a core value of open source is visibility into the software as it is being built, allowing users to examine how it works.

“In most cases, we’d argue that open source is more secure, similar to how cryptographic algorithms are expected to be open for detailed scrutiny of how they are constructed and operated,” he said.

In the context of LLMs, there is still debate over what open source means, Wright said.

“The models are built from large, curated data sets through a comprehensive pre-training process that produces the model. The models are then licensed as an artifact, and access to them may or may not be open. What we call open models typically carry an open-source licence for the artifact, often the same licence used in open-source software. This allows people to use the model, tune it, change its weights, and redistribute it,” he said.

Security concerns surrounding AI models involve separate issues, Wright noted, including bias and hallucination. Despite these challenges, he said the open-source approach remains a practical path.

“Our view is that the more open the process for generating, sharing, and tuning models, the better we can improve every dimension of security. I don’t think we are moving into a world where the only viable solution is a proprietary model,” he said.

APAC outlook

In terms of regional potential for AI, Brown said Asia-Pacific represents a significant share of the market, accounting for about 35%.

“It’s normally more heavily weighted towards North America, Latin America, and Europe, so this is a distinct opportunity for APAC, and the level of innovation is substantial,” he said.

There is also strong AI activity in India, China, and Southeast Asia, observed Stefanie Chiras, Red Hat’s SVP, Partner Ecosystem Success.

“When I met with a group of ISVs, the amount of AI they were integrating into their offerings was notable. We formed a co-creation team in Asia-Pacific to work with ISVs, connect their work with Red Hat technology, and bridge it into systems integrators and other partners who bring it to customers,” she said.

Brown also highlighted the importance of aligning with customers’ progression in their AI adoption.

“We’re providing them with platform choice to start small and scale as customer demands or partner requirements evolve,” he said.

Enhancing the ecosystem

Open source, in its simplest form, is only a licence unless a broader community sustains it, Badani noted. According to him, this is why Red Hat continues to work with organisations such as Google, AMD, and Nvidia.

“It’s important that when we talk about open source, we do so within a community, not solely from Red Hat’s perspective,” he said.

One of the key announcements this year was the launch of the llm-d Community, which aims to make production generative AI more widely accessible. The open-source project is developed with founding contributors CoreWeave, Google Cloud, IBM Research, and Nvidia, and joined by industry participants AMD, Cisco, Hugging Face, Intel, Lambda, and Mistral AI, along with university supporters at the University of California, Berkeley, and the University of Chicago.

Built for generative AI inference at scale, llm-d combines a native Kubernetes architecture, vLLM-based distributed inference, and AI-aware network routing to support LLM inference clouds designed to meet demanding production service-level objectives.
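To give a flavour of what "AI-aware network routing" can mean in practice, the toy sketch below routes each request to the inference replica most likely to already hold the prompt's prefix in its KV cache, so repeated system prompts hit a warm cache. This is purely illustrative: the replica names and the longest-prefix policy are assumptions for the example, not llm-d's actual implementation.

```python
# Toy sketch of prefix-affinity routing: pick the replica whose most
# recently served prompt shares the longest leading substring with the
# incoming prompt (a stand-in for real KV-cache-aware scheduling).

def common_prefix_len(a: str, b: str) -> int:
    """Length of the shared leading substring of a and b."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

class PrefixAwareRouter:
    def __init__(self, replicas):
        # Map replica name -> last prompt it served; a crude proxy for
        # what each replica's KV cache is likely to contain.
        self.last_prompt = {r: "" for r in replicas}

    def route(self, prompt: str) -> str:
        # Choose the replica with the longest cached-prefix match.
        best = max(self.last_prompt,
                   key=lambda r: common_prefix_len(self.last_prompt[r], prompt))
        self.last_prompt[best] = prompt
        return best

router = PrefixAwareRouter(["pod-a", "pod-b"])
router.route("You are a helpful assistant. Summarise:")
# A second request sharing the system-prompt prefix lands on the same pod.
target = router.route("You are a helpful assistant. Translate:")
```

A production scheduler would also weigh queue depth, memory pressure, and service-level objectives rather than prefix affinity alone, but the sketch captures why routing for LLM inference needs to be model-aware rather than round-robin.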

“When competitors, partners, and others work around a shared core, that is where open source tends to do really well,” Hicks concluded.
