AI and the chain reaction on data centre power and cooling

The past year has seen a quantum leap in the world of generative AI, with use cases for the technology now extending beyond the confines of the tech community. With generative AI’s benefits now being reaped by wider society, the outlook is bullish. In fact, Global Market Insights forecasts that the data centre GPU market will quadruple between now and 2028 and be worth US$63 billion before the decade ends. Meanwhile, citing growing cloud storage demand and larger volumes of data, Mordor Intelligence expects the high-performance computing market to be worth nearly US$100 billion by 2030.

However, behind the scenes, the sheer computational capacity needed to power AI may be a sticking point. Humming with ceaseless activity to crunch the algorithms that enable AI’s magic, data centres produce copious amounts of heat. As data centres sizzle away, energy consumption balloons to keep things running optimally. This isn’t just a consequence of AI, either: The broader adoption of cloud computing and the ever-increasing application of AI technologies — from edge devices to core data centres — will collectively push the boundaries of computing power. This will inevitably strain power and thermal transfer resources.

Diverting more heat away from the data centre might mean turning on more cooling systems, which increases power draw and operational costs. Meanwhile, a failure to address overheating can shorten component lifespans and degrade computing performance, which not only adds user dissatisfaction to the equation but also puts service level agreements at risk. However, this should not dampen enthusiasm for AI. Instead, it should inspire ingenuity through careful planning that puts environmental stewardship at the centre of efforts to cultivate AI’s immense potential.


The challenge of high-density thermal loads

For years, traditional air conditioning has been the go-to option for cooling servers and equipment in the Asia-Pacific region. While this method continues to prevail across the region, it is far from ideal, especially as much of the region is tropical. Air cooling has persisted largely because it was the simpler option before the recent surge in compute demands. Now that this is changing, pivoting to flexible data centre infrastructure becomes imperative.

Consider this: Electricity costs increased by 26% globally last year, according to the Asian Development Bank. There is no question of data centres’ role in this, with consumption typically running at ten times that of the average household’s 1,000 kWh. This is a stark wake-up call because, evidently, the data centre cannot hold, neither from an operational perspective nor from a sustainability one.

Just as AI is rapidly becoming central to communities, data centre operators must pivot to meet these new demands. High-performance CPUs and GPUs, for example, generate significant heat during operation. Emerging cooling options, such as hybrid cooling and direct liquid cooling, can be tailored to deliver optimal cooling for both types of processors, matched to each data centre’s specific needs.

Because liquid can be roughly three thousand times more effective than air at removing heat, the world is waking up to liquid cooling’s potential. For instance, a study by Nvidia and Vertiv found that liquid cooling improved energy usage effectiveness by 15.5%. Furthermore, by condensing footprints, immersion cooling cuts into the construction costs of US$7 million to US$12 million per megawatt of IT load, or US$600 to US$1,100 per square foot, typical of air-cooled facilities.
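As a rough back-of-envelope check on that three-thousand-times figure (the constants below are approximate textbook values, not figures from the studies cited above), comparing the volumetric heat capacity of water with that of air shows why liquid carries heat away so much more effectively:

    # Rough comparison of how much heat a given volume of coolant can
    # absorb per degree of temperature rise (illustrative values only).
    AIR_KJ_PER_M3_K = 1.2       # ~1.2 kg/m^3 x ~1.0 kJ/kg.K at room temperature
    WATER_KJ_PER_M3_K = 4180.0  # ~1,000 kg/m^3 x ~4.18 kJ/kg.K

    ratio = WATER_KJ_PER_M3_K / AIR_KJ_PER_M3_K
    print(f"Water absorbs roughly {ratio:,.0f}x more heat than air per unit volume")
    # Prints roughly 3,500x, the same order of magnitude as the figure above.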

Unsurprisingly, Asia-Pacific’s data centre operators are moving in this exact direction, with Dell’Oro Group forecasting that liquid cooling revenue will surge past US$2 billion through 2027. But challenges are slowing a transition that could be far smoother and swifter than it currently is. This is why data centre operators need high-density power to go with high-density cooling.

Future-proofing high-density data centres with power and cooling technologies 

To address the demands AI places on critical infrastructure, enterprises need to embrace alternative cooling approaches such as hybrid cooling, which combines liquid cooling and air cooling. Hybrid cooling can be a stepping stone for those looking to expand their cooling capacity without building a new data centre, and it offers a way to scale critical infrastructure dynamically to accommodate high-performance computing and AI.

With ever-denser racks and surging energy costs, collaborating with experienced teams can help data centre operators identify bottlenecks, understand key issues and deploy hybrid cooling and alternative power approaches. This could look like:

  • Replacing the rear door of racks with passive or active liquid heat exchangers. Rear-door heat exchanger systems like these can be used in conjunction with air-cooling systems to cool environments with mixed rack densities.
  • Mounting direct-to-chip cold plates atop the board’s heat-generating components to draw off heat through single-phase cold plates or two-phase evaporation units. This can shift the cooling load as much as 75/25 in liquid cooling’s favour (see the sketch after this list).
  • Immersing servers and other rack components in single-phase or two-phase immersion cooling systems. The thermally conductive dielectric fluid eliminates the need for air cooling, making the most of liquid’s heat transfer properties in the most energy-efficient form of liquid cooling available.
  • Supporting high-density servers running high-performance computing applications, which draw huge amounts of incoming power from the grid. Power protection systems such as UPS with dynamic grid support or fuel cell integration, together with rack PDUs working seamlessly as one power train, are ideal approaches to support the demands of GPUs.
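To make the 75/25 split mentioned above concrete, here is a minimal sketch using hypothetical numbers (the 40 kW rack load, 75% liquid capture ratio and 10°C coolant temperature rise are assumptions for illustration, not vendor specifications) of how the heat load divides between the liquid loop and the room air, and the coolant flow that implies:

    # Hypothetical split of a dense rack's heat load between a
    # direct-to-chip liquid loop and room air cooling.
    RACK_LOAD_KW = 40.0     # assumed rack IT load
    LIQUID_SHARE = 0.75     # ~75/25 split in liquid cooling's favour
    COOLANT_DELTA_T = 10.0  # assumed coolant temperature rise, in K
    WATER_CP = 4.186        # specific heat of water, kJ/kg.K

    liquid_kw = RACK_LOAD_KW * LIQUID_SHARE   # heat removed by the cold plates
    air_kw = RACK_LOAD_KW - liquid_kw         # heat left for the air-cooling system

    # Q = m_dot * cp * dT, so m_dot = Q / (cp * dT); ~1 litre of water per kg
    flow_l_per_min = liquid_kw / (WATER_CP * COOLANT_DELTA_T) * 60
    print(f"Liquid loop: {liquid_kw:.0f} kW, air: {air_kw:.0f} kW, "
          f"coolant flow: ~{flow_l_per_min:.0f} L/min")

Under these assumptions, the cold plates carry about 30 kW and need roughly 43 litres of water per minute, leaving the air system with a far more manageable 10 kW.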

Enterprises need alternatives to today’s air cooling and power infrastructure technologies, and fast. Data centres have highly specific requirements and considerations, especially when it comes to introducing liquids into the rack and ensuring the existing power infrastructure can support them.