Akamai has acquired thousands of Nvidia Blackwell GPUs to bolster its global distributed cloud infrastructure.
The deployment creates a unified platform for AI R&D, fine-tuning, and post-training optimization that intelligently routes AI inference workloads to optimized compute resources across Akamai’s global network.
The architecture is designed to support rapid inference by reducing the latency and data egress issues associated with centralized data centers.
While the first wave of AI focused on model training in centralized hubs, the industry has reached a tipping point where inference matters as much as training. The MIT Technology Review recently reported that 56% of organizations cite latency as the primary barrier preventing AI deployment at scale.
By treating the globe as a single, low-latency backplane, Akamai said it is bridging this gap and providing the foundational infrastructure for physical and agentic AI where decisions must happen at the speed of the real world.
“While hyperscalers continue to push the boundaries of AI training, Akamai is focused on meeting the unique demands of the inference era,” said Adam Karon, COO and general manager of Akamai’s Cloud Technology Group.
Karon said centralized AI factories remain essential for building models, but bringing those models to life at scale requires a decentralized nervous system. By distributing inference-optimized compute across its global fabric, he said, Akamai isn’t just adding capacity.
“We’re providing the scale, at minimal latency, that is required to move AI from the laboratory to the street corner and the hospital bed – where the work happens, where the data lives, and where the ROI is realized,” said Karon.
Akamai’s adoption of Blackwell GPUs advances its efforts to build a globally distributed AI compute grid for the inference era. By extending AI processing beyond centralized AI factories to high-density distributed infrastructure, Akamai allows AI to interact with physical systems — from autonomous delivery and smart grids to surgical robotics and critical fraud prevention — without the geographic or cost limitations of traditional cloud architecture.
The integration of Nvidia Blackwell AI infrastructure enables Akamai to process AI workloads on dedicated GPU clusters for rapid responses; to optimize large language models (LLMs) on-site in support of data privacy and regional compliance needs; and to fine-tune and adapt foundation models on proprietary data to improve accuracy for specific tasks.
This announcement follows Akamai’s recent initiatives to expand its AI inference and generalized compute capabilities. In October 2025, the company announced Akamai Inference Cloud, redefining where and how AI is used by bringing AI inference closer to users and devices.