A team of researchers, led by IBM's Thomas J. Watson Research Center, has shown that a quantum computer can already help with practical scientific calculations that are inaccessible to today's classical computers. The "noise" and accumulated errors still limit the applications of such machines, but the new study shows that, after adding a protocol that mitigates these problems, a 127-qubit quantum computer is capable of simulating extremely complex physical states with high reliability. The results are published in the journal Nature.
A few years ago, Google's quantum computing team claimed to have achieved so-called "quantum supremacy" (solving, in a short time on a quantum computer, a problem that would take an intractable time on any imaginable classical computer) with a machine of about 50 quantum bits (qubits). The problem is that, in that case, the task was useless: it was specifically designed to be very hard for a classical computer but feasible for a quantum one. The open question is whether quantum computers can outperform classical computers on problems that do have some utility.
In theory, we know there are problems on which a quantum computer would outperform a classical one, such as factoring a large number into primes (the hardness on which current cryptography is based). But this would require many qubits and many operations (logic gates) performed on them. Although quantum computers can already carry out each individual operation with few errors (error probabilities below 1%), when so many operations must be performed in sequence, the probability of making a mistake somewhere, and of the result being unreliable, becomes very high.
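The scale of the problem is easy to see with a back-of-the-envelope calculation: if each gate fails independently with probability p, a circuit of N gates runs entirely error-free with probability roughly (1 − p)^N. A minimal sketch (the ~1% figure comes from the text; the circuit sizes are illustrative):

```python
# Probability that a circuit runs with no error at all, assuming
# independent gate failures with per-gate error probability p.
def error_free_prob(p: float, n_gates: int) -> float:
    return (1.0 - p) ** n_gates

# With a ~1% per-gate error rate, reliability collapses as circuits grow:
for n in (10, 100, 1000):
    print(n, error_free_prob(0.01, n))  # ~0.90, ~0.37, ~4.3e-05
```

Even at 1% error per gate, a thousand-gate circuit almost never runs cleanly, which is why simply running longer circuits is not an option.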
To avoid this, quantum computers should incorporate error correction mechanisms. Such mechanisms are known in theory, but they require increasing the number of qubits and operations considerably, so they would only pay off on a quantum computer whose per-operation error probability is much lower than today's. A quantum computer with that many qubits and such extremely low error rates is still far beyond current technological capabilities. So the question posed in the Nature paper by IBM's quantum computing team is: can we do anything useful with today's quantum computers, with their modest number of qubits and relatively high error probabilities?
The authors' answer is yes, but with a "trick" called error mitigation. If we understand well the sources of noise-induced error in an experiment, and how the results vary at different noise levels, we can infer the result we would obtain with no noise at all. This requires running several experiments and correcting the results a posteriori, usually on a classical computer. It is for these corrected ("mitigated") results that the authors claim an advantage over classical computers. They use a 127-qubit machine called Kyiv and run quantum circuits with 2,880 two-qubit logic gates. These operations are not random: they simulate the so-called Ising model, which was originally introduced to explain properties related to magnetism and has since found many applications across physics. Classical computers use various approximations and methods to solve this model in many circumstances but, as the article shows, with as many as 127 particles and certain values of the physical parameters, the structure of the generated physical states can become so complex that the usual approximations fail and classical machines cannot predict results with sufficient reliability.
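The extrapolation idea behind error mitigation can be sketched in a few lines: measure the same observable at deliberately amplified noise levels, fit the trend, and read off the value at zero noise. The following is a toy illustration with fabricated numbers, not the authors' actual protocol (which is considerably more sophisticated):

```python
import numpy as np

# Toy data: an observable "measured" at noise amplification factors
# lambda = 1, 1.5, 2 (lambda = 1 is the hardware's native noise level).
# In this fabricated example the true zero-noise value is 0.8 and the
# signal decays exponentially with the noise level.
lams = np.array([1.0, 1.5, 2.0])
true_value = 0.8
decay = 0.3
measured = true_value * np.exp(-decay * lams)

# Extrapolation to zero noise: fit a model in lambda and evaluate it
# at lambda = 0. Fitting log(measured) linearly is exact here because
# the toy decay is exponential.
slope, intercept = np.polyfit(lams, np.log(measured), 1)
zero_noise_estimate = np.exp(intercept)

print(zero_noise_estimate)  # recovers ~0.8
```

The point of the sketch is the logic, not the numbers: none of the individual measurements equals the noiseless answer, yet their trend lets a classical post-processing step recover it.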
This is related to the famous quantum entanglement. A two-qubit system has four possible basis states: 00, 01, 10 and 11, but the qubits may also be in a quantum superposition of these that cannot be decomposed into a state of each individual qubit; this is quantum entanglement. With three qubits there are eight possible states plus their superpositions. With 127 qubits there is an enormous number of states (2^127) plus their superpositions: classical computers do not have that much memory, but they can use approximations that assume that, of all those possible states, only some matter for the properties we are interested in, which reduces the memory required. The problem is that if the system we want to simulate is in a very complicated state, with a lot of entanglement, that assumption no longer holds and classical computers cannot produce accurate calculations. And this is where the usefulness of IBM's quantum computer comes in: in these situations, for certain values of the parameters of an Ising model, the authors show that their machine, after error mitigation, does provide reliable results when calculating the physical magnitudes of the system.
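To get a feel for those numbers: storing the full quantum state of n qubits requires 2^n complex amplitudes, each typically 16 bytes in double precision. A quick calculation:

```python
# Memory needed to store the full state vector of n qubits:
# 2**n complex amplitudes at 16 bytes each (two 64-bit floats).
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

print(state_vector_bytes(30) // 2**30, "GiB")  # 16 GiB: fits on a workstation
print(state_vector_bytes(50) // 2**50, "PiB")  # 16 PiB: beyond any supercomputer
print(f"{state_vector_bytes(127):.2e} bytes")  # ~2.7e+39 bytes for 127 qubits
```

Around 50 qubits the exact state vector already exceeds any existing machine's memory, which is why classical simulations of 127 qubits must rely on the approximations described above.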
If these results are confirmed (for example, by the competing team at Google), they would be a first step towards proving the usefulness of today's relatively small and noisy quantum computers when aided by error mitigation. This particular calculation admittedly has no direct practical application (the parameter values where quantum advantage is shown probably do not correspond to real physical systems), but the Ising model at least has a physical inspiration, so there may well be models of similar complexity with more immediate applications that could also be attacked with Kyiv-like machines and an approach based on error mitigation rather than correction.
The paper by Kim et al. is of excellent quality, demonstrating the computational power of IBM's 127-qubit quantum computer. Its content is quite technical, focusing on questions such as simulating physics problems on these computers, extracting quantitatively good predictions despite the machine's imprecision, and comparing against simulation techniques on conventional computers that quantum physicists have perfected in recent years.
The main message of the paper can be summarised in three statements:
- The quantum computers we have, although imprecise, can simulate highly complex problems of interest to physics. In particular, this work has focused on simulating what is known as the "transverse-field Ising model", a fundamental problem in the study of magnetism.
- Although the quantum computer makes errors (qubits lose coherence, operations are not 100% accurate, etc.), the structure of those errors is not arbitrary. Thanks to this, IBM has perfected a protocol that makes it possible to cancel out the errors introduced by the quantum computer, obtaining very precise quantitative predictions.
- There are other simulation techniques that physicists use to study this type of problem on large "classical" computers. In this paper, two such methods, based on so-called "tensor networks", were applied and found to produce less accurate results than the quantum computer.
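For readers curious what "simulating the transverse-field Ising model" means concretely, here is a minimal brute-force sketch for a handful of qubits: build the Hamiltonian H = −J Σ Z_i Z_{i+1} + h Σ X_i, evolve the all-zeros state exactly, and measure a magnetisation. The couplings, times and observable are illustrative choices, not the parameters of the paper, and the exact approach only works for small n, which is precisely why large instances need quantum hardware or tensor networks:

```python
import numpy as np

# Single-qubit Pauli matrices.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(site_ops: dict, n: int) -> np.ndarray:
    """Tensor product over n sites, placing the given 2x2 ops on their sites."""
    out = np.array([[1.0 + 0j]])
    for i in range(n):
        out = np.kron(out, site_ops.get(i, I))
    return out

def ising_hamiltonian(n: int, J: float, h: float) -> np.ndarray:
    """Transverse-field Ising chain: H = -J * sum Z_i Z_{i+1} + h * sum X_i."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        H += -J * op_on({i: Z, i + 1: Z}, n)
    for i in range(n):
        H += h * op_on({i: X}, n)
    return H

n, J, h, t = 4, 1.0, 0.5, 1.0
H = ising_hamiltonian(n, J, h)

# Evolve |0000> for time t: psi_t = exp(-i H t) |0000>,
# computed exactly via the eigendecomposition of the Hermitian H.
evals, evecs = np.linalg.eigh(H)
psi0 = np.zeros(2**n, dtype=complex)
psi0[0] = 1.0
psi_t = evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0))

# Measure the magnetisation <Z_0> of the first qubit after the evolution.
Z0 = op_on({0: Z}, n)
print((psi_t.conj() @ Z0 @ psi_t).real)
```

The matrices here grow as 2^n × 2^n, so this code becomes hopeless long before 127 qubits; the quantum processor runs an analogous evolution natively, and tensor networks try to compress the same state classically.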
This is extremely interesting work, and it helps to reinforce the usefulness of quantum computers in scientific applications, even in scenarios where we do not have perfect qubits and error correction. After this work, the existence of computers with 433 qubits, such as the Osprey chip, where this type of problem is currently impossible to tackle with tensor networks, becomes even more attractive. It also opens up important questions about which other physical problems can be studied on these processors and whether IBM's error mitigation techniques will work as well on them.
Is the result definitive? Not necessarily. Just as Google's work on quantum supremacy piqued the interest of scientists, who went on to develop new simulation methods now capable of reproducing that experiment, it is possible that others will improve the state of the art in tensor networks and match or surpass what this 127-qubit processor can do.