Inside GreenNode’s AI cloud upgrade with VAST Data

Scalable AI infrastructure relies on high-performance storage solutions to handle increasing workloads. Image created by DALL·E 3.

Demand for GPU cloud storage in Asia-Pacific is rising as AI investments, particularly in generative AI, continue to grow. IDC predicts that by 2028, the region’s AI and generative AI funding will reach US$110 billion, expanding at a compound annual growth rate (CAGR) of 24% from 2023 to 2028.
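For context on what those figures imply (a rough back-of-envelope reading of the stated numbers, not an IDC-published figure), a 24% CAGR ending at US$110 billion in 2028 works back to a 2023 baseline of roughly US$37–38 billion:

```python
# Back-of-envelope check (illustrative only): what 2023 baseline does a
# 24% CAGR ending at US$110 billion in 2028 imply?
target_2028_usd_b = 110
cagr = 0.24
years = 5  # 2023 -> 2028
baseline_2023_usd_b = target_2028_usd_b / (1 + cagr) ** years
print(round(baseline_2023_usd_b, 1))  # ~37.5 (billion USD)
```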

Singapore-based GPU cloud provider GreenNode is aware of this, and needed to ensure that its infrastructure could handle the projected growth in AI deployments. As an official Nvidia cloud partner in APAC, the company required a storage solution to underpin its GPU cloud platform.

“Finding a general-purpose, high-speed storage for a GPU cloud platform is not an easy task. Our R&D team has been working with many solutions, but each of them is just good for a specific use-case,” observed Tung Vu, Head of AI GPU Cloud at GreenNode.

Scalability challenges

Prior to founding its AI cloud business as part of its partnership with Nvidia, GreenNode had been operating a traditional public cloud business for nearly a decade, providing customers with high-performance block, object, and file storage offerings.

Tung Vu, Head of AI GPU Cloud, GreenNode. Image courtesy of GreenNode.

“While we have experience in building large-scale cloud systems and serving thousands of enterprise clients, we hadn’t handled generative AI workloads before founding GreenNode. Generative AI workloads require a completely different type of storage for checkpoints and data sets while following Nvidia reference architecture. Because of this, we had to conduct extensive research from the ground up to determine which new storage solution would be most suitable for our AI cloud architecture,” Vu said.
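To make that storage pressure concrete: a large-model training job periodically writes multi-gigabyte checkpoint files while streaming training data in parallel, so the backing storage sees bursts of large sequential writes alongside sustained reads. The snippet below is a minimal, hypothetical illustration of that checkpointing pattern using PyTorch; the paths and interval are placeholders, not GreenNode’s actual configuration.

```python
import torch

CHECKPOINT_DIR = "/mnt/shared-storage/llm-job-42"  # hypothetical shared-storage mount
CHECKPOINT_EVERY = 1000  # steps between checkpoints (placeholder value)

def maybe_checkpoint(step, model, optimizer):
    """Write a full training-state snapshot. For large models this is a
    burst of many gigabytes of sequential writes to shared storage."""
    if step % CHECKPOINT_EVERY == 0:
        torch.save(
            {
                "step": step,
                "model": model.state_dict(),
                "optimizer": optimizer.state_dict(),
            },
            f"{CHECKPOINT_DIR}/step_{step}.pt",
        )
```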

After evaluating multiple technology vendors, GreenNode decided to partner with VAST Data for its storage needs.

“VAST Data stood out because of its maturity — it’s already in production use by other AI cloud providers in the United States. It supports multi-tenancy and multi-protocol storage, which aligns with GreenNode’s variety of tenants and their storage requirements. Additionally, among the competing options, VAST Data satisfies the comprehensive list of technical requirements from one of our largest customers,” Vu explained.

During the integration phase, GreenNode encountered several technical hurdles. One key challenge was ensuring that the new storage platform worked seamlessly with its GPU-optimised AI infrastructure and existing AI workflows.

“GreenNode required a data platform that could enhance operations without disrupting their established policies, such as access control lists (ACLs), attributes, traceability, and auditability. VAST Data addressed this by offering a multi-protocol platform that integrates file and object services, ensuring compliant, secure, and high-performance data services for GreenNode’s tenants and its parent company, VNG,” said Jeffrey Tay, Regional Director (AI & HPC) at VAST Data.
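In practical terms, “multi-protocol” means the same data set can be reached both as files (for example over an NFS mount) and as objects (for example over S3), so existing pipelines keep their access paths. The sketch below is a generic illustration of that idea in Python; the endpoint, bucket, credentials, and mount path are hypothetical placeholders rather than GreenNode’s or VAST Data’s actual values.

```python
import boto3

# Object protocol: read a checkpoint through an S3-compatible endpoint
s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example.internal",  # hypothetical endpoint
    aws_access_key_id="TENANT_KEY",                   # placeholder credentials
    aws_secret_access_key="TENANT_SECRET",
)
obj = s3.get_object(Bucket="tenant-a-checkpoints", Key="llm/step_1000.pt")
checkpoint_bytes = obj["Body"].read()

# File protocol: the same data exposed as a path on an NFS mount
with open("/mnt/tenant-a/llm/step_1000.pt", "rb") as f:
    same_bytes = f.read()

assert checkpoint_bytes == same_bytes  # one namespace, two access protocols
```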

Beyond this, GreenNode’s tenants, which operate at various stages of maturity, required consistent performance, robust quality of service (QoS), and secure multi-tenancy. VAST Data addressed this challenge through its Disaggregated Shared Everything (DASE) architecture, which reportedly enables GreenNode to scale seamlessly and future-proof its infrastructure to accommodate growing workloads without downtime.

According to Vu, one of the hardest technical challenges during the integration with VAST Data was designing a future-proof hardware and software stack to match the growing demand for storage speed within a short time frame. To tackle this, GreenNode worked closely with VAST Data’s team to fine-tune the deployment so that it met the performance requirements of the entire system.

Bigger picture

As enterprise AI continues to expand, the accompanying challenges require both tech vendors and end users to realign their business strategies to keep up with the rapid demands of AI adoption.

Jeffrey Tay, Regional Director (AI & HPC), VAST Data. Image courtesy of VAST Data.

Some of the key challenges arising from AI advancements include unpredictable performance, cost management, regulatory compliance, and persistent data silos. Addressing these issues requires enterprises to build a robust infrastructure that unifies data across disparate systems, ensures consistent performance, adheres to data regulations, and optimises total cost of ownership (TCO), Tay noted.

Meanwhile, emerging trends such as agentic AI and autonomous agents are increasing the demand for real-time data processing and large-scale decision-making. According to Tay, infrastructure must be designed to support continuous, low-latency, and highly reliable AI operations.

“VAST Data has been used by companies like ServiceNow to support AI agent development, and GreenNode is now applying similar approaches to enhance its AI services and meet the growing customer demands,” he shared.

Additionally, AI traceability is becoming an increasingly critical consideration for enterprises.

“Organisations are prioritising traceability in AI models and data sets to ensure compliance, reliability, and auditability. VAST Data’s platform supports GreenNode in implementing traceability measures that align with industry requirements,” Tay said.

For its part, GreenNode plans to work more closely with Nvidia and VAST Data to integrate its existing cloud services, aiming to build a full-stack AI ecosystem — from the infrastructure layer to the platform layer, and ultimately, to the application layer.

“Solutions like VAST Data and Nvidia NIM will help us to provide advanced data and database platform management for the next agentic AI wave,” Vu said.