The enterprise world eagerly anticipates the commercialisation of quantum computing, but with much of the technology still in its exploratory stages, it could be a long wait.
Hewlett Packard Labs, the advanced research arm of HPE, has set its sights on quantum computing, and according to its Chief Architect, Kirk Bresniker, the timeline for mainstream adoption might be shorter than expected.
In a conversation with Frontier Enterprise, Bresniker shared insights on the current state of quantum research and its potential future impact on the enterprise world.
Could you share a bit about your role as Chief Architect of Hewlett Packard Labs?
Hewlett Packard Labs is not just an open-ended research lab. We’re always focused on understanding where Hewlett Packard Enterprise can gain a business advantage through technology. This year marks my 35th anniversary with the company, and when I think back to 30 years ago, I could come down to Palo Alto and meet the team designing the PA-RISC (Precision Architecture-Reduced Instruction Set Computing) architecture, the semiconductor process, and the basic manufacturing methods.
Value-added manufacturing has always been a hallmark of Hewlett Packard. Above that, the operating systems, middleware, and libraries, as well as HP printers, HP disk drives, and HP terminals, are all vertically integrated. This integration has always meant that innovations from the labs could flow directly to our design, manufacturing, and global service and support teams.
Today, innovation has become much more diverse and complex. Now, when someone at Hewlett Packard Labs comes up with a wonderful idea, we need a fab to make it, a semiconductor partner to adopt it, and an open-source Linux kernel or open AI framework to integrate it. Only then is it ready for a Hewlett Packard Enterprise product.
As Chief Architect, my role is to bring all these communities together, explore the possibilities that technology offers, and build the case for investment and collaboration.
Moving on to quantum computing, how long do you think it would be before we see it on a commercial scale?
We actually had a team at Hewlett Packard Labs that started about 18 years ago, working on a very low-level qubit. That project involved what’s called a nitrogen vacancy in a diamond lattice. Essentially, you take a lab-grade diamond chip, implant a nitrogen atom to displace one of the carbon atoms, and the extra electron pair freed up in the diamond lattice forms the basis of an artificial atom that can be manipulated as a qubit. From there, you can do the incredible things that make a qubit so different and special compared to an ordinary bit, like entanglement, superposition, and interference.
About 12 years ago, they shelved the work because they didn’t see a clear path to enterprise value. Again, Hewlett Packard Labs has always focused on linking research with business outcomes. So, they shifted to photonics, as moving data — whether chip to chip or across countries at picojoules per bit — proved to be critical for enterprise needs.
Then, about four years ago, we acquired SGI and Cray, which expanded our role in supercomputing. As we designed and delivered exascale supercomputers to the customers we gained through those acquisitions, they expressed interest in what we were doing and asked, “For the next system, where do we integrate the qubits?” This raised the question of how we could re-engage with the quantum community.
We’re not going to dust off our old nitrogen-qubit project — although that work is still ongoing and is one of about six potential quantum modalities. Instead, we’re focusing on a new opportunity. The key question is, now that we have this supercomputing capability, can we provide better tools to solve real-world problems in areas like solid-state and exotic condensed matter physics, quantum chemistry, and industrial applications? For example, could we use quantum computing to find the best material to prevent corrosion on aircraft, or to synthesise important chemicals for industrial use? What would it take to actually deliver the optimisation quantum has been promising for quite some time, and to achieve it at industrial scale? That’s really what we’ve been devoting ourselves to — beginning to answer those questions of where and when quantum can make a real impact.
Right now, our approach is holistic co-design. We partner with others who are developing different kinds of qubits and quantum software, and we’re using our ability to simulate quantum systems to tackle real chemistry problems.
One of the first problems we worked on involved a chemical called benzyne. You might think of the benzene ring, which has six carbons and six hydrogens. If you remove two of those hydrogens and create a triple bond on one side, you get this exotic chemical, benzyne. It’s a precursor and can only exist for attoseconds. The only way to know it exists is by studying what happens before and after. From there, researchers figured out how to create a sarcophagus of a larger organic molecule to keep it stable long enough to capture faint signals of information.
When we initially tackled this problem with our co-design partners, the solution required 100 million qubits for 5,000 years — that’s a lot of time and qubits. Considering we’re currently working with qubits in the tens, maybe hundreds, 5,000 years is a long time to run a calculation. However, using error correction codes and our simulation methodology, we reduced that requirement to 1 million qubits for one year — a 500,000x reduction. Now, instead of talking about a solution that might take generations, we’re looking at something that could happen in the next 10 years.
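For readers checking the arithmetic: the 500,000x figure compares the total space-time budget of the two estimates, qubits multiplied by runtime, rather than either dimension alone. A minimal sketch of that calculation, using only the figures quoted above:

```python
# Back-of-the-envelope check of the reduction quoted above, comparing the
# two estimates as space-time volumes (number of qubits x years of runtime).

before_qubits, before_years = 100_000_000, 5_000   # original estimate
after_qubits, after_years = 1_000_000, 1           # with error correction and simulation

before_volume = before_qubits * before_years       # 5e11 qubit-years
after_volume = after_qubits * after_years          # 1e6 qubit-years

print(f"{before_volume / after_volume:,.0f}x reduction in qubit-years")  # 500,000x
```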
This isn’t RSA-2048 encryption cracking, but it’s a problem with real industrial merit. What we’re doing now is figuring out how to take a problem and break it down. Some parts might be perfect for a GPU, some for a superconducting qubit, and others for a trapped ion qubit. By combining quantum processing units as accelerators with traditional classical supercomputing, we create a hybrid environment — that’s the essence of holistic co-design.
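As a purely illustrative sketch of that hybrid decomposition, the idea is to split a workload into sub-problems and dispatch each one to whichever accelerator suits it best. The backend names and the routing below are hypothetical assumptions, not HPE's actual software stack:

```python
# Hypothetical sketch of the hybrid environment described above: break a
# workload into parts and route each part to the accelerator it suits best.
# Backend names and the routing table are illustrative assumptions only.

def run_on_gpu(task):
    return f"{task}: dense linear algebra on a classical GPU"

def run_on_superconducting_qpu(task):
    return f"{task}: fast, deep circuits on a superconducting QPU"

def run_on_trapped_ion_qpu(task):
    return f"{task}: slower but high-fidelity circuits on a trapped-ion QPU"

BACKENDS = {
    "gpu": run_on_gpu,
    "superconducting": run_on_superconducting_qpu,
    "trapped_ion": run_on_trapped_ion_qpu,
}

# A decomposed problem: each sub-task tagged with the backend it maps to best.
workload = [
    ("tensor contraction", "gpu"),
    ("sampling circuit", "superconducting"),
    ("long-coherence state preparation", "trapped_ion"),
]

for task, backend in workload:
    print(BACKENDS[backend](task))
```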
What are the roadblocks right now towards commercialisation?
The real question with quantum is coherence time — how long can a qubit remain functional before it goes, “poof!” Then, there are the error rates. A qubit in a superposition of one and zero isn’t simply some value between one and zero, like half of each; what it encodes is a pair of probabilities. The probabilistic nature of the qubit — the probability that when I measure it, I’ll get a zero or a one — how do those probabilities vary?
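A minimal sketch of that probabilistic behaviour, written in plain NumPy rather than against any quantum hardware or vendor SDK (the amplitude values are arbitrary examples):

```python
import numpy as np

# A single qubit is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Measuring it yields 0 with probability |alpha|^2
# and 1 with probability |beta|^2 -- the probabilistic nature described above.

alpha, beta = np.sqrt(0.7), 1j * np.sqrt(0.3)      # arbitrary example amplitudes
assert np.isclose(abs(alpha)**2 + abs(beta)**2, 1.0)

rng = np.random.default_rng(seed=0)
shots = rng.choice([0, 1], size=10_000, p=[abs(alpha)**2, abs(beta)**2])
print("P(0) ~", (shots == 0).mean(), "  P(1) ~", (shots == 1).mean())
```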
Another challenge with qubits is that they’re entangled, meaning they can interfere with each other. Different quantum technologies have varying error rates, coherence times, and capabilities. We need to figure out how these technologies interact, particularly in terms of managing interference and entanglement. Each quantum technology has unique strengths, so we might end up using multiple qubit technologies simultaneously, even on the same problem. Some may offer extremely long coherence times but have slower transition speeds.
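To make the entanglement point concrete, here is a small statevector sketch in plain NumPy (no quantum SDK assumed): a Hadamard followed by a CNOT puts two qubits into a Bell state, after which their measurement outcomes are perfectly correlated.

```python
import numpy as np

# Two-qubit statevector sketch: a Hadamard on the first qubit followed by a
# CNOT entangles the pair into the Bell state (|00> + |11>)/sqrt(2), so a
# measurement of one qubit fixes the outcome of the other.

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1.0, 0.0, 0.0, 0.0])     # start in |00>
state = CNOT @ (np.kron(H, I2) @ state)    # H on qubit 0, then CNOT(0 -> 1)

probs = np.abs(state) ** 2                 # probabilities for |00>, |01>, |10>, |11>
print({format(i, "02b"): round(float(p), 3) for i, p in enumerate(probs)})
# {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5} -- only correlated outcomes survive
```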
One major challenge for all quantum technologies is scale — how many qubits can we bring into an entangled quantum state? Take technologies that use trapped ions held by optical tweezers or lasers trapping individual rubidium atoms — they’re great. You might have 32 qubits inside a vacuum chamber the size of a phone, but we don’t need 32; we need a million. How do we scale from 10, 20, or 100 qubits to a million? For some technologies, like superconducting qubits that rely on standard semiconductor lithography techniques, the challenge is different. How do we scale them while keeping them cooled to four millikelvin in a dilution refrigerator? They all have interesting characteristics.
Each technology presents significant engineering challenges, and that’s even before we consider the integration of control systems over the course of a calculation, which could still take weeks, months, or even years.
Will the quantum computer have a significant impact on the way that AI is done today?
You can do linear algebra on a quantum system, and the HHL (Harrow-Hassidim-Lloyd) algorithm supports this. Now, is it better? That’s still an open research question. Can we theoretically utilise a quantum system to perform conventional AI algorithms — like all that linear algebra — more efficiently? That’s an area of active research, and we don’t know yet.
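The full quantum algorithm is well beyond a short snippet, but the linear-algebra core of HHL can be emulated classically: expand b in the eigenbasis of a Hermitian matrix A, rescale each component by the inverse eigenvalue, and renormalise. The sketch below does exactly that emulation in NumPy; it is not a quantum circuit, and the matrix and vector are arbitrary examples.

```python
import numpy as np

# Classical emulation of the linear-algebra core of HHL: for a Hermitian A,
# expand b in A's eigenbasis, divide each component by its eigenvalue, and
# renormalise. HHL prepares (amplitudes proportional to) this solution state
# |x> on a quantum register; this sketch is NOT a quantum circuit.

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])                  # arbitrary Hermitian example
b = np.array([1.0, 0.0])

eigvals, eigvecs = np.linalg.eigh(A)
b_components = eigvecs.T @ b                # b expressed in the eigenbasis of A
x = eigvecs @ (b_components / eigvals)      # apply A^{-1} eigenvalue by eigenvalue

x_state = x / np.linalg.norm(x)             # HHL outputs the normalised state |x>
print(x_state, np.allclose(A @ x, b))       # sanity check against A x = b
```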
Another interesting aspect is whether we’ll see novel, quantum-native algorithms to approach the same tasks. Right now, it’s incredible. Epoch AI maintains a website (epochai.org) with a fantastic collection of data on all the current language models, including their costs, energy consumption, data sizes, and parameters. If you draw that curve, with the steady growth in parameter count driving an exponential rise in the resources needed to train a model, you can see that in about three or four years, the cost to train a single model could surpass what we currently spend on global IT. That’s unlikely to happen — we’ll hit a hard ceiling.
The question is, what will that ceiling look like? Will we stop making bigger models and shift to smaller ones? Will we make GPUs more efficient or create application-specific accelerators? Or, could quantum computing provide a novel, yet unknown, form of acceleration?
That intersection of quantum and AI is an active area of research now. Circling back to our quantum program, what’s interesting for us is something we might call quantum machine learning, but it’s not about using quantum processors to run today’s conventional machine learning algorithms. It’s more about asking, “Can we train a machine learning algorithm to model quantum systems — systems that obey the laws of quantum mechanics — without actually needing to create a qubit?” For us, that could be one of the more interesting intersections.