The field of high-performance computing (HPC) has seen many innovations, particularly alongside the accelerated digital transformation among enterprises during the pandemic. Whether on-prem, in the cloud, or hybrid, advancements in HPC continue to open new opportunities for a wide range of industries.
While many businesses have yet to embrace HPC, recent developments may move decision makers to give it a second glance.
During a podcast organised by Jicara Media and hosted by Lenovo, Sinisa Nikolic, Director of HPC and AI at Lenovo Asia-Pacific, discussed common misconceptions about HPC, and why he thinks enterprises that are still on the fence will soon be more receptive to the technology.
For one, deploying HPC takes up less real estate now than it did a couple of years ago.
“What we’ve seen over the last half decade or so, is this massive increase in speed of technology. Supercomputers, which previously needed a football field-sized data centre, are now becoming smaller and smaller, and (are) small enough to fit into smaller organisations (and) data centres. So this democratising of supercomputing, you’re dropping it into very small data centres,” Sinisa said.
According to Sinisa, the major industries that traditionally rely on supercomputers include the following:
- Oil and gas companies
- Climate research
- Weather analytics
- Government and government research
- Higher education
- Heavy basic sciences
However, smaller markets are opening up for HPC.
“The smaller organisations that surround these industries are really picking up its pace in adoption. And then you loop AI technology into that— and so AI now will go into manufacturing, retail, insurance, finance, and banking,” he explained.
In addition, the simplification of massive amounts of complex data is proving not only beneficial but essential for every enterprise looking to save time, money, and manpower, Sinisa stressed.
“For HPC, the things that you run on a supercomputer or HPC infrastructure are heavy, complicated mathematical equations, so it’s always going to be scientific in nature. (But) how does one dumb those pieces down? Those same verticals I was talking about, they continue to drive (usage). When you’re doing weather analysis, if you’re doing oil and gas, which is seismic processing, you thump the ground, you look for an image back, and then, you look for fractures in rock, or sometimes you look for those imagery, right? Those level of visualisations, they’ll continue, however, with more compute that’s available today. So (you have) faster processes that spin GPUs that are available in vast quantities today, and (with) reasonable pricing as well. With those inclusions, these complicated math equations run a lot faster. So your time to data shrinks,” he said.
One interesting use case of HPC is in the field of precision medicine, where the combination of HPC, genomics, and AI is being leveraged to customise medical care, and flip the paradigm of healthcare from reactive to predictive.
Through Lenovo’s GOAST, or Genomics Optimisation And Scalability Tool, medical researchers can significantly reduce the time it takes to analyse one genome: from between 60 and 150 hours down to 53 minutes.
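A quick back-of-the-envelope calculation puts those figures in perspective. The sketch below uses only the numbers quoted above (60–150 hours versus 53 minutes) to derive the implied speedup factor:

```python
# Implied speedup from the figures quoted above: a conventional analysis
# takes 60-150 hours per genome, while GOAST is quoted at 53 minutes.
GOAST_MINUTES = 53

for baseline_hours in (60, 150):
    baseline_minutes = baseline_hours * 60
    speedup = baseline_minutes / GOAST_MINUTES
    print(f"{baseline_hours} h baseline -> roughly {speedup:.0f}x faster")
```

In other words, the quoted numbers correspond to a speedup of roughly 68x at the low end and about 170x at the high end.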
“From an AI infrastructure standpoint, you’re seeing a lot more data usage, a lot more visualisation, and better ability for a business using machine learning, and deep learning, and edge-based infrastructures to do the inferencing components to derive data insight from vast amounts of data that they’ve trained the systems on,” Sinisa explained.
“You’re seeing this speed, you’re seeing this transfer of data, you’re seeing more complicated workloads across all of those industries— whether it’s fraud detection in a banking environment, or an insurance environment, or even a bank using even more complicated algorithms in Monte Carlo, which is what they used in the past, i.e. scenarios simulation. You’ll just see that happen more and more in these organisations,” he added.
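The Monte Carlo scenario simulation Sinisa mentions can be sketched in a few lines. The example below is illustrative only, not from the podcast: it projects a portfolio value one year ahead by simulating many random price paths under an assumed geometric Brownian motion, with hypothetical parameters (starting value, drift `mu`, volatility `sigma`).

```python
import random
import statistics

# Minimal Monte Carlo scenario simulation (all parameters are
# illustrative assumptions): project a portfolio value one year ahead
# under geometric Brownian motion, one path per simulated scenario.
def simulate_final_values(start=1_000_000, mu=0.05, sigma=0.2,
                          steps=252, n_paths=10_000, seed=42):
    rng = random.Random(seed)
    dt = 1 / steps  # one trading day as a fraction of a year
    finals = []
    for _ in range(n_paths):
        value = start
        for _ in range(steps):
            # daily return = drift + volatility-scaled random shock
            value *= 1 + mu * dt + sigma * (dt ** 0.5) * rng.gauss(0, 1)
        finals.append(value)
    return finals

finals = simulate_final_values()
print(f"mean outcome:   {statistics.mean(finals):,.0f}")
print(f"5th percentile: {sorted(finals)[len(finals) // 20]:,.0f}")
```

Each path is an independent scenario, which is why these workloads parallelise so naturally across the large GPU and CPU counts an HPC cluster provides.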
In terms of physical infrastructure, maintaining a data centre incurs not just the cost of rental space but also the cost of power, a common point of hesitancy among businesses.
“As you pump heat into a data centre, you have to remove that heat. Now to remove that heat, you add more air conditioners, which use more power, et cetera. It’s just a cycle, and the cycle today is very noticeable,” Sinisa pointed out.
The question, then, is how to reduce the carbon footprint of data centres, and whether doing so is even possible.
According to Sinisa, Lenovo already has a solution handy.
“What Lenovo has been working on, and the next big thing is called direct water cooling, and it’s a technology called Neptune. This DWC, or direct water cooling infrastructure that we have in our top-of-the-line systems, cools each component with water. We cool the CPU, memory, storage, and network. We have some very, very cool looking technologies,” he said.
Sinisa elaborated further: “We cool each of these components with ambient water. We pump this through a system, and through the laws of thermodynamics, transfer heat, from hot to cool. We pump this water around; it has algaecides but it’s just plain water. Then we cool down that water through the CPUs, which brings us back to ambient temperatures. Then we recycle it again. So it becomes a very efficient, relatively inexpensive method to increase heat dissipation by close to 90%.”
Through this solution, Lenovo’s HPC customers save between 30% and 50% on power costs, Sinisa revealed.
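To get a feel for what a 30–50% saving means in practice, the sketch below applies that range to a hypothetical annual bill; the consumption and tariff figures are assumptions for illustration, not Lenovo data:

```python
# Illustrative annual savings from the 30-50% range quoted above.
# The consumption and tariff figures are hypothetical assumptions.
ANNUAL_KWH = 2_000_000   # assumed data-centre power draw per year
PRICE_PER_KWH = 0.15     # assumed electricity tariff, USD per kWh

baseline_cost = ANNUAL_KWH * PRICE_PER_KWH
for saving in (0.30, 0.50):
    print(f"{saving:.0%} saving -> ${baseline_cost * saving:,.0f} per year")
```

Under these assumptions, a facility spending 300,000 USD a year on power would keep between 90,000 and 150,000 USD of that annually.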
“If you’re spending less on power, what does that really mean? You’re not producing as much carbon, or you’re not part of that carbon problem. We have a number of facilities across the world that have reduced this into negative carbon. We’re utilising the hot water that we’re pumping out of the data centre before we have to recall it to heat another building, as an example. Then it’ll come back around and it will cool it down again. So we’re actually recycling heat and generating something else from that,” he said.
Aside from this, Lenovo has also recently introduced TruScale HPC as a service, which combines the power of on-premises supercomputing with a cloud-like experience. Data remains secure from installation through utilisation and decommissioning, delivered as a highly cost-effective offering with no data egress or ingress charges.