
IBM Demonstrates Quantum Utility for Materials Discovery Use Case

Company uses error mitigation to push current quantum computers past classical supercomputers

Sam Lucero - Chief quantum computing analyst, Omdia

June 15, 2023

12 Min Read
Sam Lucero, chief quantum computing analyst at Omdia, speaking at The Quantum Computing Summit. Image: Informa


IBM has demonstrated that quantum computers can outperform classical computers at simulating new materials, producing accurate results at a scale of 100+ qubits.

In this blog post, Omdia chief quantum computing analyst Sam Lucero explains the significance of this development and the important distinction between quantum utility and quantum supremacy.

On 14 June 2023, IBM Quantum announced research showing that the company’s 127-qubit Eagle-class quantum computer produced more accurate computational results for a problem of commercial relevance than researchers could obtain by running the same calculation on a classical supercomputer. While IBM Quantum is careful to note that they expect the computation to eventually be replicated on a classical system, they rightly point to the research results as evidence that the industry is likely to achieve commercially relevant quantum advantage even on noisy, intermediate-scale quantum (NISQ)-era quantum computers.

The research, conducted jointly with a team from the University of California, Berkeley (UCB), examined the computation of a quantum Ising model, which is representative of molecular simulation use cases. These are of wide commercial interest, relevant to applications ranging from drug discovery to new materials development to computational fluid dynamics. This is a key point that differentiates the current results from various “quantum supremacy” results announced in the past by other organizations, which focused on problems that are not commercially useful.


IBM Quantum also agrees that the goal for quantum computing remains the achievement of fault tolerance, at which point quantum computers should be able to run arbitrarily long quantum circuits. This capability will be needed to run algorithms like Shor’s Algorithm for integer factorization. However, achieving large-scale, fault-tolerant quantum computers could be five or ten years (or much longer) away, and so there is a keen debate in the industry as to whether NISQ-era quantum computers can still prove commercially useful in the interim.

This announcement helps push the industry further down the “quantum advantage” spectrum. By “spectrum” we mean that advantage for quantum computers starts at a very low bar— “quantum commercial advantage”—where a quantum computer can provide a commercially-relevant advantage over the typical classical commercial solution that would have been used instead. This advantage could be based on speed, but also on cost, or on quality of results. The point here is not to beat any classical computer; running the computation on a supercomputer might well be faster, for example, but the user would not normally run the computation on a supercomputer, and so that option is moot.

Interestingly, a significant share of two global sets of quantum computing commercial adopters surveyed by Omdia, in 2022 and 2023, indicated that they already believed their organizations had achieved a measure of “advantage” by using quantum computers. In the 2023 survey, for example, 29% of respondents selected the answer “We already see an advantage” when asked, “When does your organization expect to see a ‘commercially-relevant’ competitive advantage to using QCs compared to classical computing?” In contrast, only 2% of respondents selected “Beyond 2033” as their answer to the question.

The next step in this spectrum is “quantum computational advantage,” and this is where the IBM Quantum announcement becomes very interesting. This is the realm where quantum computers start to outperform any classical computer, even a supercomputer, typically based on speed but perhaps on some other measure, such as the accuracy metric highlighted in the announcement. Quantum computational advantage will itself lie on a spectrum: IBM Quantum notes they “fully expect that the classical computing community will develop methods that verify the results we presented.” The point here is that, at least in the near term, “advantage” may not be strictly provable, but rather be an empirical result, subject to being matched by some new development in the classical computing community. IBM Quantum positions this as a healthy give-and-take furthering R&D on both sides of the quantum-classical divide.

The key importance of this announcement is that IBM Quantum is showing that quantum computational advantage for commercially relevant problems could very well be possible on NISQ-era quantum computers. This is very good news for the quantum computing industry, which faces a years-long climb to full fault tolerance and would like to have a strong commercial message about the benefits of NISQ-era quantum computers to push in the meantime.

This is also good news for IBM Quantum directly, in that it demonstrates that the full complement of 127 qubits in the Eagle-class processor can be used to good effect, and this will likely scale as well to the 433-qubit Osprey-class processors. IBM Quantum has done a good job in scaling the volume of qubits over successive generations of processors but has faced criticism about the usability of all these qubits. Namely, what’s the point of having many qubits if they can’t all be used during the circuit because of error effects? The current results, and IBM Quantum’s path to “100 x 100” computation (circuits that are 100 qubits wide and 100 gates long) running by the end of 2024, help to address such criticisms, though it will be interesting to see how this translates to comparison metrics such as “quantum volume.”

The final stage is what Omdia calls “quantum tractability advantage”—the point at which classical computers, even the fastest supercomputers we can imagine, can’t hope to perform the computation, at least on any reasonable human timescale. The industry commonly accepts that much larger volumes of qubits, and full quantum error correction (QEC), will be needed to achieve this level of advantage. (Omdia, and most industry stakeholders, believe this will be possible, albeit with common projections that 5, 10, or more years will be needed, as noted above. We should mention, however, that some prominent academic physicists and computer scientists do have doubts as to whether this full goal is achievable. So, quantum tractability advantage is certainly not a settled scientific question.)

Error mitigation techniques, like those that IBM Quantum highlighted in their announcement, will form an important interim toolset on the road to QEC. Specifically, in the announcement, the company stated they developed a way to amplify the noise in their quantum computers in a measured, progressive way, which then enabled extrapolation back to an “ideal” answer based on a linear regression of the noise data points during post-processing of the computation. As this scales up, it could mean that NISQ-era quantum computers show practical commercial value as the industry works towards QEC.
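For readers who want a concrete feel for the extrapolation step, the sketch below shows the general technique the description matches, commonly known as zero-noise extrapolation: expectation values measured at several noise-amplification factors are fit with a linear regression and extrapolated back to the zero-noise limit. The numbers are made up for illustration; this is not IBM Quantum’s data or code, and a real workflow would use a quantum SDK to run the amplified circuits.

```python
# Sketch of the noise-amplification-and-extrapolation idea (zero-noise
# extrapolation). The numbers below are invented for illustration; in a real
# experiment each value would come from repeated runs of an amplified circuit.
import numpy as np

# Noise amplification factors: 1.0 is the hardware's native noise level,
# larger factors mean the noise has been deliberately amplified.
noise_factors = np.array([1.0, 1.5, 2.0, 2.5, 3.0])

# Hypothetical measured expectation values of some observable at each noise level.
measured = np.array([0.82, 0.74, 0.67, 0.59, 0.52])

# Fit a straight line to the (noise factor, expectation value) pairs ...
slope, intercept = np.polyfit(noise_factors, measured, deg=1)

# ... and extrapolate back to the zero-noise limit (noise factor -> 0),
# which is simply the fitted line's intercept.
zero_noise_estimate = intercept
print(f"Extrapolated zero-noise expectation value: {zero_noise_estimate:.3f}")
```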

QEC itself, widely regarded as the ultimate path to fault tolerance, is essentially an algorithmic exercise with several inputs. The key idea is to spread the information of a single logical qubit across a collection of physical qubits (and to do this in a system that scales up to many logical qubits) such that if any one physical qubit experiences an error due to noise, the error can be detected and corrected, and the logical qubit itself will not experience an error (or at least will be much, much less likely to experience one). The QEC algorithm itself is basically a collection of “codes” for encoding the logical qubit onto the many physical qubits.
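As a toy illustration of the “spread the information, then detect and correct” idea, the sketch below uses a classical three-bit repetition code. It is far simpler than any real quantum code (it ignores phase errors and the constraints of quantum mechanics entirely), but it shows how redundancy plus majority voting can make the encoded bit more reliable than the underlying bits.

```python
# Toy illustration of the QEC idea using a classical 3-bit repetition code:
# one logical bit is spread across three physical bits, and a single bit flip
# can be corrected by majority vote. Real quantum codes (surface code, color
# code, ...) must also handle phase errors, so this is only an analogy.
import random

def encode(logical_bit):
    """Spread one logical bit across three physical bits."""
    return [logical_bit] * 3

def apply_noise(physical_bits, flip_probability):
    """Independently flip each physical bit with some probability."""
    return [b ^ 1 if random.random() < flip_probability else b
            for b in physical_bits]

def decode(physical_bits):
    """Recover the logical bit by majority vote (corrects any single flip)."""
    return 1 if sum(physical_bits) >= 2 else 0

# Compare raw vs. encoded error rates over many trials.
trials, p = 100_000, 0.05
raw_errors = sum(apply_noise([0], p)[0] != 0 for _ in range(trials))
logical_errors = sum(decode(apply_noise(encode(0), p)) != 0 for _ in range(trials))
print(f"raw error rate:     {raw_errors / trials:.4f}")
print(f"logical error rate: {logical_errors / trials:.4f}")
```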

The first input enabling QEC is reducing the error rate of the physical qubits themselves. This involves efforts to develop both more robust hardware and active quantum error suppression techniques. For example, there are efforts underway to create topological qubits at Microsoft Azure Quantum and Nokia Bell Labs and, in a different way, at Quantinuum and Google Quantum AI. Other techniques seek to shield the qubits from as much environmental noise as possible; for example, several types of quantum computers use cryogenic systems to cool the qubits down to near absolute zero.

Error suppression consists of techniques to make individual elements of the quantum computer less susceptible to errors. For example, the quantum control software vendor Q-CTRL characterizes the error profile of specific quantum computers using machine learning and then adjusts the control signals sent to the computer during operation in a way that suppresses many of the types of errors that can arise. These kinds of techniques can help the hardware perform closer to its theoretical ideal. This is important to note because hardware vendors often use measures like single-qubit and two-qubit gate error rates to characterize their systems’ performance, but these rates are typically not achievable during the actual computation of an algorithm unless error suppression techniques are used. (“Algorithmic benchmarks” that provide a more real-world assessment of error rates are becoming more prominent as a result.)
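The sketch below is a deliberately simplified cartoon of the suppression idea: a simulated single-qubit gate with a hidden systematic over-rotation is characterized from measurement data, and the control amplitude is then pre-compensated so the delivered gate lands closer to its ideal. The error model, the calibration routine, and all the numbers are assumptions invented for illustration; this is not Q-CTRL’s method or any vendor’s product.

```python
# Cartoon of control-level error suppression: characterize a systematic
# over-rotation in a simulated single-qubit X rotation, then pre-compensate
# the requested angle. Purely illustrative; not any vendor's technique.
import numpy as np

TRUE_OVERROTATION = 1.03  # hidden hardware miscalibration: rotations go 3% too far

def run_gate(requested_angle):
    """Simulated hardware: applies slightly more rotation than requested and
    returns the probability of measuring |1> after rotating |0> about X."""
    applied = requested_angle * TRUE_OVERROTATION
    return np.sin(applied / 2) ** 2

# Step 1: characterize the error by probing with a known rotation and comparing
# the observed excited-state population with the ideal expectation.
probe_angle = np.pi / 2
observed = run_gate(probe_angle)
estimated_applied = 2 * np.arcsin(np.sqrt(observed))
calibration = estimated_applied / probe_angle  # estimated over-rotation factor

# Step 2: suppress the error by scaling down the requested rotation.
naive_p1 = run_gate(np.pi)                     # uncorrected pi pulse
corrected_p1 = run_gate(np.pi / calibration)   # pre-compensated pi pulse
print("ideal pi-pulse P(|1>):       1.0000")
print(f"uncorrected pi-pulse P(|1>): {naive_p1:.4f}")
print(f"pre-compensated P(|1>):      {corrected_p1:.4f}")
```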

The second input is the QEC codes themselves and how good they may be at enabling error correction. The key idea is that these codes require that the underlying physical qubits meet a certain error correction threshold (at an algorithmic level) before they are useful. The threshold needs to be met for adding additional physical qubits to result in an overall reduction in system-level error. In other words, if the threshold is not met, then adding more physical qubits just adds more error from these new qubits, and the overall system becomes less, not more, accurate.
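The threshold behavior can be made concrete with the scaling relation commonly quoted for surface-code-style codes, in which the logical error rate goes roughly as A × (p/p_th)^((d+1)/2) for code distance d (roughly, more distance means more physical qubits per logical qubit). In the sketch below, the prefactor A and the roughly 1% threshold are placeholders; the point is only the qualitative switch in behavior on either side of the threshold.

```python
# Illustrative threshold behavior, using the commonly quoted surface-code-style
# scaling p_logical ~ A * (p_physical / p_threshold) ** ((d + 1) / 2).
# The prefactor A and the ~1% threshold are placeholder ballpark values.
P_THRESHOLD = 0.01
A = 0.03

def logical_error_rate(p_physical, distance):
    return A * (p_physical / P_THRESHOLD) ** ((distance + 1) / 2)

for p_physical in (0.005, 0.02):  # one value below the threshold, one above
    rates = [logical_error_rate(p_physical, d) for d in (3, 5, 7, 9)]
    trend = "improves" if rates[-1] < rates[0] else "gets worse"
    formatted = ", ".join(f"{r:.1e}" for r in rates)
    print(f"p = {p_physical}: logical error at d = 3, 5, 7, 9 -> {formatted} ({trend})")
```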

Currently, the industry hasn’t crossed this error correction threshold. Google Quantum AI published an interesting research paper in early 2023 showing they had just crossed over the line, but the results were quite nuanced. For example, the experimenters noted that it was possible that the error rate would in fact creep upwards again beyond a certain volume of qubits. But it is encouraging to see their actual experimental results demonstrating a “skirting” over the threshold. 

There are a small number of these QEC codes, like the “surface code”, the “color code”, and the “toric code”. It does appear that research into QEC codes is accelerating, and new codes are being introduced, with various operational characteristics and error correction thresholds. The threshold has increased by several orders of magnitude since the 1990s, and so it may be that further increasing the threshold at the code level (i.e., making the code more tolerant of physical errors), proves to be a useful path forward on the road to QEC.

The third and final input to QEC is the number of physical qubits in the quantum computer. Just crossing over the threshold (i.e., having physical-qubit algorithmic error rates only slightly below the point where adding more qubits means fewer, not more, errors) would require far too many physical qubits to realize one logical qubit. Estimates of exact numbers vary based on code type and error rates, but stakeholders have talked in ranges of several hundred thousand physical qubits per logical qubit. This is clearly not practical when the highest volume of physical qubits in a universal gate-based machine at present is the 433-qubit Osprey-class machine. And even if, say, a million-physical-qubit computer were suddenly available, if it yielded only five or ten logical qubits, not much could usefully be done with it, since most estimates are that we will need a thousand or several thousand logical qubits to run “exponential-class” quantum algorithms like Shor’s Algorithm. Basically, the physical error rate of your qubits needs to be a hundred thousand times, or a million times, below the code’s error threshold to get to a more practical ratio of physical to logical qubits, like 1000:1 or 50:1.
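To see how the overhead arithmetic plays out, the sketch below reuses the same illustrative scaling and placeholder constants as the previous snippet, together with the rough rule of thumb that a distance-d surface code consumes on the order of 2d² physical qubits per logical qubit. Every constant (the threshold, the prefactor, the target logical error rate) is a placeholder, so the absolute numbers are not a resource estimate; the takeaway is only the qualitative trend that the overhead falls steeply as the physical error rate moves further below the threshold.

```python
# Back-of-the-envelope physical-per-logical overhead, reusing the illustrative
# surface-code-style scaling from the previous snippet. All constants are
# placeholders; this is a qualitative trend, not a resource estimate.
P_THRESHOLD = 0.01
A = 0.03

def distance_needed(p_physical, target_logical_error):
    """Smallest odd code distance d with A*(p/p_th)**((d+1)/2) <= target."""
    assert p_physical < P_THRESHOLD, "only meaningful below the threshold"
    d = 3
    while A * (p_physical / P_THRESHOLD) ** ((d + 1) / 2) > target_logical_error:
        d += 2
    return d

def physical_per_logical(d):
    """Rough surface-code footprint: d*d data qubits plus d*d - 1 ancillas."""
    return 2 * d * d - 1

target = 1e-12  # roughly the per-step reliability long algorithms are thought to need
for p_physical in (9e-3, 1e-3, 1e-4):  # just under the threshold ... far under it
    d = distance_needed(p_physical, target)
    print(f"p = {p_physical:.0e}: distance {d}, "
          f"~{physical_per_logical(d):,} physical qubits per logical qubit")
```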

Omdia believes that these three inputs will in some fashion be solved and that fully fault-tolerant, large-scale (i.e., 1000+ logical qubits) quantum computers will be possible. However, the story doesn’t end there. Having such quantum computers is necessary, but not sufficient. The other big piece of the puzzle is having exponential-class quantum algorithms; that is, algorithms that provide an exponential speedup over classical algorithms, measured as the number of steps the algorithm takes to solve a problem.

Shor’s Algorithm, introduced in 1994, was a watershed moment for the quantum computing industry because it showed an exponential speedup for a computation of profound interest: factoring large numbers into primes, the difficulty of which is the cornerstone of modern cybersecurity protection. It led to intense interest in, and government funding of, quantum computing as a key national security concern. However, Shor’s Algorithm makes use of inherent structure in the factoring problem that lets it leverage the parallelism of quantum computers, and it is not yet clear what other problems have similarly useful characteristics. This is an important consideration because some computer scientists believe that without such structure, so-called “black box” calculations are limited to, at best, a quadratic speedup. A quadratic speedup in this case means reducing the number of steps a classical algorithm would take to achieve a result down to the square root of that number of steps. This is a speedup, but not an exponential one.

This looks like a key limitation for finding quantum algorithms that will definitively show a “quantum tractability advantage” over classical algorithms, particularly because quantum computers, on a per-operation basis, are several orders of magnitude slower than, say, modern GPUs. Microsoft Azure Quantum, for example, has published a “resource estimator” tool that compares the capability of a quantum computer to that of a GPU. Assuming the tool is accurate, it basically shows that unless the quantum computer (and algorithm) provides an exponential speedup, not merely a quadratic one, it will ultimately always be faster to run the computation on a classical computer (GPU), because of the quantum computer’s inherent per-operation overhead.
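A back-of-the-envelope version of that argument is sketched below. The per-operation times are made-up ballparks, not figures from Microsoft’s resource estimator or any specific hardware; the point is the structure of the comparison: with only a quadratic speedup, the break-even problem size grows with the square of the per-operation gap, so the speedup only pays off for computations that would already run for a very long time on the classical machine.

```python
# Back-of-the-envelope version of the per-operation-overhead argument.
# All timings are invented ballparks for illustration only.
T_QUANTUM = 1e-3     # assumed seconds per logical quantum operation (incl. QEC overhead)
T_CLASSICAL = 1e-12  # assumed effective seconds per operation on a highly parallel GPU

# With a quadratic speedup, the quantum algorithm needs ~sqrt(N) steps where the
# classical one needs N. Setting sqrt(N) * T_QUANTUM = N * T_CLASSICAL gives the
# break-even problem size and the wall-clock time at which the two approaches tie.
break_even_steps = (T_QUANTUM / T_CLASSICAL) ** 2
break_even_runtime_days = break_even_steps * T_CLASSICAL / 86_400

print(f"break-even problem size: ~{break_even_steps:.0e} classical steps")
print(f"runtime at break-even:   ~{break_even_runtime_days:.0f} days on either machine")
# With these placeholder numbers a quadratic speedup only pays off for jobs the GPU
# would already grind on for weeks; an exponential speedup avoids this trap because
# the quantum step count grows polynomially while the classical count explodes.
```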

Interestingly, Microsoft Azure Quantum believes that quantum simulation (that is, simulating the quantum behavior of molecular-level structures) is the one area likely to ultimately show the exponential speedup from both large-scale, fault-tolerant hardware and exponential-class quantum algorithms. Quantum simulation is also the area on which IBM Quantum’s current announcement focuses, with their quantum Ising model computations.

Ultimately, the best we can say is that it certainly looks possible that large-scale, fault-tolerant quantum computers will be realized. Even if they can only be used for quantum simulation, that would still be an enormous benefit for humanity in solving key challenges related to climate change, healthcare, and the creation of new materials and products. IBM Quantum’s announcement is a key step along that path, and a reminder that there could be tremendous commercial benefits even short of a revolutionary, exponential-class quantum tractability advantage.
