VMware CTO charts course through unexplored AI waters

Image created by DALL-E 3.

For many enterprises today, multi-cloud is the preferred choice for several reasons, including agility, scalability, and operational efficiency. This is why companies like VMware, which specialises in cloud computing and virtualisation technology, are continually exploring ways to improve the multi-cloud experience and address growing concerns about emerging technologies like generative AI.

Kit Colbert, the company’s Chief Technology Officer, sat down with Frontier Enterprise to discuss VMware’s journey over the years, the threats of generative AI, and how VMware sees itself in the near future.

You’ve been with VMware for 20 years. How would you describe the company’s evolution?

I’ve been super lucky to have had a bunch of different opportunities within the company: different roles, moving around to different teams and technical domains. So it’s been really enriching to get those experiences and learn.

Then, in September 2021, I became the overall CTO for the company. That was a significant jump up for me, leading a team of over 2,000 people. There’s a lot of work that we’re trying to do around standardisation and driving the transition to SaaS and subscription models. At the same time, we’re bringing out many innovations and incubating new technologies.

Could you tell us a bit about how you navigated the impending arrival of Kubernetes in 2014?

We watched dotCloud release Docker around 2013. In 2014, the noise around containers started to build. Some folks were calling for the death of virtualisation, saying, “We don’t need VMs anymore because of containers.” Obviously, that wasn’t true. But we were concerned at the time, so I was very much focused on the company’s strategic position. I looked into what was happening within the company concerning containers and found several pockets of activity across different business units. I helped pull that together, and eventually, we got approval from our CEO to create a new business unit focused specifically on cloud-native apps.

At that time, it was still early days, so we were experimenting with various approaches. We engaged more with open-source communities and tried different extensions of our products to see what would stick. It took us a few years to land on our current strategy, which revolves around Tanzu as the application platform embracing Kubernetes.

Kit Colbert, Chief Technology Officer, VMware. Image courtesy of VMware.

One of the big lessons I learned was that we did well in identifying this emerging or potential threat. At the time, I thought things would play out much more quickly. I expected we’d have maybe 12, 18, or 24 months tops before facing an existential threat to VMware. In reality, it took much longer for the cloud-native apps market to take shape. If I were to do it again, I’d approach it a bit differently. I wouldn’t exactly take my time, but perhaps go a bit slower. We moved really fast initially because we thought the threat was imminent. In hindsight, I’d probably spend more time establishing a solid foundation and focusing on core technologies, and then building up from there.

It’s interesting when you look at the generative AI inflection point that’s happening in the industry. It’s very similar to what we saw with cloud-native apps. There’s a clear industry inflection point, and it’s evident that the architecture for applications will change. However, it’s still unclear exactly how that will happen and what it will look like. Similar to our experience with cloud-native and containers, we’ve been aware of this change and have been watching it closely, particularly since the release of ChatGPT last year. The market is still very early, and while there’s a lot of hype, the value is clearly there. Most customers are interested, but many are not yet ready to take full advantage of it. The big lesson I’ve learned is that we can afford to be more thoughtful, spending a bit more time to get the foundations right so that we can execute more effectively once customers are ready.

A pressing concern among enterprise end users experimenting with AI is that these experiments are often done with very poor guardrails. Do you think there’s going to be a standard architecture for AI adoption outside the productised stuff that companies use?

Possibly. What we’re currently seeing, and what we’re actively doing at VMware, is that people are trying to set up some form of internal governance. There’s a lot of concern about the various risks associated with AI, such as data privacy and IP contamination. Often, you’ll find technologists or practitioners diving in headfirst without giving much regard to these risks. Many companies are now putting governance structures in place, essentially saying, “We want to maximise the benefits of these technologies for our business, but we also want to mitigate the downside risks.”

At VMware, we’ve implemented responsible-use guidelines for AI, and generative AI in particular. These guidelines specify what is and isn’t allowed. Generally, we discourage the use of external large language models, although we do make exceptions when our internal models aren’t yet up to that quality. Many companies are currently prohibiting any use of these technologies until they have a better handle on them, which is similar to what VMware did initially as well.

There’s a strong appetite for this kind of internal regulation to help manage AI use. The goal is to enable people to reap the benefits while reducing the risks. It’s still early days, but I think we’ll see more and more best practices revolving around that.

As the leader of a technology team, what most excites you about what’s happening in your labs?

Looking at the larger arc, I think the trend toward multi-cloud and the cloud-smart architecture we’ve talked about will continue. In many ways, generative AI will help to accelerate this trend because it’s about uniting compute and data. You have data everywhere: at the edge, in the cloud, and on-prem. You absolutely need an architecture where you can place the compute and carry out training, fine-tuning, or inferencing anywhere.

I think we’ll see many businesses continue to move toward that cloud-smart architecture. At the same time, there will be significant work on solidifying generative AI architecture; many startups are receiving funding for this. The jury is still out on where the durable sources of value lie in the technology stack. Some say it’s in the foundation models, and maybe it is; GPT-4 is ridiculously powerful. But we’re also seeing remarkable innovation in the open-source model space. Many businesses will dive into this, seeing it as a cash-cow opportunity, but many probably won’t get it quite right. It’s going to be hit-and-miss.

A lot of folks will be watching this space as a consensus starts to form. We’ll see more individuals jumping in, validating that consensus, and contributing to the standardisation around it. That’s our focus for the next few years. We’ve announced several products but haven’t released them yet. Now, we’re going to get them into customers’ hands, from an alpha or beta standpoint, work with them, gather feedback, and continue to evolve our products—both from a multi-cloud and a generative AI perspective. Honestly, it’s a three-to-five-year journey before we achieve some level of standardisation. It’s an exciting time to be part of this, as we have another chance to help people take advantage of this new technology.