190

In the Nature paper published by Google, they say,

To demonstrate quantum supremacy, we compare our quantum processor against state-of-the-art classical computers in the task of sampling the output of a pseudo-random quantum circuit. Random circuits are a suitable choice for benchmarking because they do not possess structure and therefore allow for limited guarantees of computational hardness. We design the circuits to entangle a set of quantum bits (qubits) by repeated application of single-qubit and two-qubit logical operations. Sampling the quantum circuit’s output produces a set of bitstrings, for example {0000101, 1011100, …}. Owing to quantum interference, the probability distribution of the bitstrings resembles a speckled intensity pattern produced by light interference in laser scatter, such that some bitstrings are much more likely to occur than others. Classically computing this probability distribution becomes exponentially more difficult as the number of qubits (width) and number of gate cycles (depth) grow.

So, from what I can tell, they configure their qubits into a pseudo-randomly generated circuit which, when run, puts the qubits into a state vector that represents a probability distribution over the $2^{53}$ possible states of the qubits. That distribution is intractable to calculate, or even to estimate by sampling, with a classical computer simulation. But they sample it by "looking" at the state of the qubits after running the circuit many times.
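
To make the scaling part of my reading concrete, here is a toy sketch of my own (not Google's circuit; the qubit count, gate choices and depth below are arbitrary). A classical simulation has to store and update all $2^n$ amplitudes, which is exactly the thing that blows up, while the hardware just runs the circuit and reads out bitstrings:

```python
# Toy brute-force state-vector simulation of a small pseudo-random circuit.
# Illustration only: gate set and depth are arbitrary, not Google's circuit.
import numpy as np

rng = np.random.default_rng(0)
n = 10                                   # 10 qubits -> 2**10 amplitudes; 53 would need 2**53
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                           # start in |00...0>

def random_su2(rng):
    """A pseudo-random 2x2 unitary (QR trick), standing in for random single-qubit gates."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def apply_1q(state, gate, q):
    """Apply a 2x2 gate to qubit q of the full state vector."""
    psi = np.moveaxis(state.reshape([2] * n), q, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, q).reshape(-1)

def apply_cz(state, q1, q2):
    """Controlled-Z between qubits q1 and q2: flip the sign where both are 1."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

for cycle in range(8):                                # "depth": repeated cycles of gates
    for q in range(n):
        state = apply_1q(state, random_su2(rng), q)   # single-qubit layer
    for q in range(cycle % 2, n - 1, 2):
        state = apply_cz(state, q, q + 1)             # alternating two-qubit layer

probs = np.abs(state) ** 2                            # the "speckled" output distribution
probs /= probs.sum()
samples = rng.choice(2**n, size=5, p=probs)           # sample bitstrings, as the hardware does
print([f"{int(s):0{n}b}" for s in samples])
```

At $n = 10$ this runs instantly, but the `state` array alone would need roughly $2^{53} \times 16$ bytes, on the order of a hundred petabytes, at $n = 53$.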

Isn't this just an example of creating a system whose output is intractable to calculate, and then "calculating" it by simply observing the output of the system?

It sounds similar to saying:

If I spill this pudding cup on the floor, the exact pattern it will form is very chaotic, and intractable for any supercomputer to calculate. But I just invented a new special type of computer: this pudding cup. And I'm going to do the calculation by spilling it on the floor and observing the result. I have achieved pudding supremacy.

which clearly is not impressive at all. In my example, I'm doing a "calculation" that's intractable for any classical computer, but there's no obvious way to extrapolate this method towards anything actually useful. Why is Google's experiment different?

EDIT: To elaborate on my intuition here, the thing I consider impressive about classical computers is their ability to simulate other systems, not just themselves. When setting up a classical circuit, the question we want to answer is not "which transistors will be lit up once we run a current through this?" We want to answer questions like "what's 4+1?" or "what happens when Andromeda collides with the Milky Way?" If I were shown a classical computer "predicting" which transistors will light up when a current is run through it, it wouldn't be obvious to me that we're any closer to answering the interesting questions.

Urb
  • 2,608

5 Answers

54

To elaborate on my intuition here, the thing I consider "impressive" about classical computers is their ability to simulate other systems, not just themselves. When setting up a classical circuit, the question we want to answer is not "which transistors will be lit up once we run a current through this?" We want to answer questions like "what's 4+1?" or "what happens when Andromeda collides with the Milky Way?"

There isn't a real distinction here. Both quantum and classical computers only do one thing: compute the result of some circuit. A classical computer does not fundamentally know what $4+1$ means. Instead current is made to flow through various transistors, as governed by the laws of physics. We then read off the final state of the output bits and interpret it as $5$.
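
To make that concrete, here is a toy illustration of my own (nothing about real hardware, just the logical structure): the only primitive below is a NAND gate, and "computing $4+1$" is nothing more than pushing bit patterns through a fixed arrangement of those gates and then choosing to read the output pattern as the number $5$:

```python
# Everything is built from a single primitive, NAND. The "computer" only ever
# maps input bits to output bits; the interpretation as numbers is ours.
def NAND(a, b): return 1 - (a & b)

def XOR(a, b):
    t = NAND(a, b)
    return NAND(NAND(a, t), NAND(b, t))

def AND(a, b): return NAND(NAND(a, b), NAND(a, b))
def OR(a, b):  return NAND(NAND(a, a), NAND(b, b))

def full_adder(a, b, carry_in):
    s1 = XOR(a, b)
    return XOR(s1, carry_in), OR(AND(a, b), AND(s1, carry_in))  # (sum, carry_out)

def add(x_bits, y_bits):
    """Ripple-carry adder: bits in, bits out. No notion of 'numbers' anywhere."""
    out, carry = [], 0
    for a, b in zip(x_bits, y_bits):          # least-significant bit first
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

four = [0, 0, 1]    # binary 100, least-significant bit first
one  = [1, 0, 0]
result = add(four, one)
print(result)                                           # [1, 0, 1, 0]
print(sum(bit << i for i, bit in enumerate(result)))    # we read this pattern as 5
```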

The real distinction, which holds in both cases, is whether you can program it or not. For example, a simple four-function calculator is a classical system involving lots of transistors, but the specific things it can compute are completely fixed, which is why we don't regard it as a classical computer. And a pudding is a quantum system involving lots of qubits, but we can't make it do anything but be a pudding, so it's not a quantum computer.

Google can control the gates they apply in their quantum circuit, just like loading a different program can control the gates applied in a classical CPU. That's the difference.

knzhou
  • 101,976
  • 1
    Would it be accurate to say, then, that this is impressive as an engineering feat? Specifically, they showed that they can set up a 53-qubit quantum circuit and sample it quickly. But in order to extrapolate that we can do impressive calculations, we have to trust that there exist theoretical 53-qubit algorithms that can do important calculations that classical computers can't do efficiently. Is that the right way to interpret it? – Bridgeburners Oct 30 '19 at 17:31
  • 20
    It's impressive that there is any problem that we can solve easily on a quantum computer but not on a classical computer. To quote Scott Aaronson in the NY Times: "The calculation doesn’t need to be useful: much like the Wright Flyer in 1903, or Enrico Fermi’s nuclear chain reaction in 1942, it only needs to prove a point." – d_b Oct 30 '19 at 18:02
  • 9
    @Bridgeburners They did not merely show that they can set up a 53-qubit circuit (that's like showing you can implement a for loop in your CPU). They showed that they can calculate the probability distribution of the interference pattern of laser scattering - a specific program that happens to use 53 qubits. It's like saying you've written a ray-tracing program for a parallel cluster that uses some gigabytes of RAM. It was not just a random program they were running – slebetman Oct 31 '19 at 04:44
  • 28
    But can you eat your quantum computer? – Passer By Oct 31 '19 at 07:19
  • But no matter how well you program a classical computer, it's unable to solve a problem like the traveling salesman problem in a finite number of steps, while a quantum computer can solve it (trivially) in 1 step - if you can figure out how to ask it... – Stian Oct 31 '19 at 13:49
  • 10
    @StianYttervik That is not true. The traveling salesman problem is an NP-complete problem, and as such is certainly solvable by a classical computer. And the best known algorithms for NP-complete problems on a quantum computer are barely faster than on a classical computer. – Chris Oct 31 '19 at 16:18
  • Add reproducibility. They could spill their pudding over and over again and get the same result (up to the sampling precision). With a real-world pudding spill, you'd get a different spill every time – Jeffrey Oct 31 '19 at 16:54
  • 4
    The issue with NP-complete problems is not that they are "not solvable", but rather that they are not tractable. That is, that trivial increases in the size of the problem's inputs result in literally exponential (or worse) increases in the run-time to calculate a solution. – RBarryYoung Oct 31 '19 at 18:19
  • @RBarryYoung What you mean is "worse than exponential", exponential would be "just fine", almost... The traveling salesman needs exponential multiplied by the square of n. – Volker Siegel Oct 31 '19 at 20:15
  • 4
    @StianYttervik That's very wrong: The traveling salesman problem is solvable by any classical computer. It can be solved in a finite number of steps and finite time. The number of steps required is known, and mathematically proven. It just takes long because it's many steps. Same for any other NP complete problem. Very important is that all we are talking about is done in finite numbers of steps. (There are other things that are not, and my brain hurts even thinking of that) – Volker Siegel Oct 31 '19 at 20:22
  • @Chris and Volker Mea culpa, I had a brain fart about the "unable" part. The point was that for a quantum computer it can be done in 1 step, trivially, if you manage to arrange the system - compared to a classical computer, where the problem explodes in complexity. So certainly there are very exciting things in contemporary work, especially in how to program for the clusters – Stian Oct 31 '19 at 21:50
  • 9
    @StianYttervik No, it cannot. The traveling salesman problem isn't even known to be in BQP- under the best known algorithm it still can't be done in polynomial time even on a quantum computer. See this answer, for instance. The quantum speedup for NP-complete problems is only quadratic- it turns $2^N$ steps to $2^{N\over 2}$, basically. So problems that take exponential time still take exponential time. – Chris Oct 31 '19 at 22:13
  • @PasserBy Yes, you can, your question didn't ask whether you would die or not. –  Nov 01 '19 at 02:23
  • @William You are missing the important part: will it have a delicious skin on it? – JimmyJames Nov 01 '19 at 15:57
  • 3
    @VolkerSiegel Your correction is not quite right either - specifically, your claim that "The number of steps required is known, and mathematically proven." The minimum number of time steps to solve Traveling Salesman is not known, and in fact even just a proof that it's greater than polynomial would prove that $\textbf{P} \neq \textbf{NP}$, which most definitely has not been proven. – tparker Nov 01 '19 at 22:02
  • @tparker Oh, thanks for clarifying it - it was explicitly not a good place for not being exact. – Volker Siegel Nov 01 '19 at 22:11
    @VolkerSiegel Sorry, I missed this before... But NP-complete does not necessarily mean "worse than exponential". There are some NP-complete problems, such as the Change-Making problem, that can definitely be solved in exponential-only time. However, all NP-complete problems are at least O(n^k * 2^n), where k is some constant value (including zero). – RBarryYoung May 03 '20 at 18:24
49

The big difference between the quantum supremacy experiment and your pudding experiment is that the quantum supremacy experiment solved an unambiguous, well-posed mathematical problem. While people sometimes describe the computational task as "simulating the physical Sycamore computer", that's not right. The actual task was calculating the output of an abstract quantum logical circuit, of which the Sycamore computer was an approximate physical instantiation. The difference is subtle but crucial. From a computational perspective, the math came first and the physics came second.

Crucially, the quantum supremacy problem was mathematically well-specified, and so it could be checked on a classical computer. The parallel classical computation wasn't just there to provide a time benchmark, but also - crucially - to check the quantum computation for accuracy.
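
As a rough sketch of what such a check looks like (the function and variable names below are my own, and the tiny distribution is made up; Google's actual procedure is the linear cross-entropy benchmarking described in the paper): you classically compute the ideal output probabilities, then score the bitstrings the hardware actually produced against them.

```python
# Sketch of a cross-entropy-style check. The ideal_probs array stands in for a
# full classical simulation of the circuit; here it is just a made-up example.
import numpy as np

def xeb_fidelity(sampled_bitstrings, ideal_probs, n_qubits):
    """Score samples against the classically computed ideal distribution:
    roughly 1 for a faithful sampler of a genuinely 'speckled' (Porter-Thomas-like)
    distribution over a large space, and 0 for a device outputting uniform noise."""
    p = ideal_probs[np.asarray(sampled_bitstrings)]
    return 2**n_qubits * p.mean() - 1

# Tiny demo with a made-up "speckled" distribution over 3 qubits:
rng = np.random.default_rng(1)
n = 3
ideal_probs = rng.exponential(size=2**n)   # Porter-Thomas-like speckle, standing in for
ideal_probs /= ideal_probs.sum()           # the classically simulated circuit output

good_samples = rng.choice(2**n, size=100_000, p=ideal_probs)   # a "working" device
noise_samples = rng.integers(0, 2**n, size=100_000)            # a broken/noisy device

print(xeb_fidelity(good_samples, ideal_probs, n))   # converges to the reference value below
print(2**n * (ideal_probs**2).sum() - 1)            # reference: what a faithful sampler approaches
print(xeb_fidelity(noise_samples, ideal_probs, n))  # close to 0
```

Samples from a device that is just producing noise score near zero, while samples that really follow the ideal "speckled" distribution score well above it, so the slow classical calculation genuinely certifies the fast quantum one.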

There's no such "slower but equivalent" computation for the pudding experiment. In the pudding experiment, you need to specify exactly which of these two problems you're trying to solve:

  1. Simulate the pattern that will result if you knock a generic pudding cup off the table.
  2. Simulate the pattern that will result if you knock a particular pudding cup off the table [where you specify the initial conditions in enough detail to model its fall].

The first variant is obviously massively underspecified and doesn't have a unique answer. The second variant does in principle have a unique answer, but crucially, you can't actually capture all the necessary details about the initial condition in practice. So neither variant can actually be framed as a mathematically well-posed question.

In the quantum supremacy experiment, the abstract problem to be solved (which was solving the abstract logical circuit, not simulating the physical hardware) was simple enough to pose that you could (very slowly) solve it exactly on a classical computer as well as on a quantum one.

tparker
  • 47,418
  • 2
    Thanks! I tried to think of another example of a mathematically well specified problem that could be solved faster with a different system than with a classical computer (even if that system is meant to physically manifest that math problem), but I couldn't think of one. Would you say that this is the first historical example of such a thing? – Bridgeburners Nov 01 '19 at 14:19
  • 2
    @Bridgeburners Eh, hard to say, since the answer will hinge crucially on exactly how you define "classical computer" and "different system". Where do planimeters, integraphs, ball-and-disk integrators, and differential analyzers fall? Also, do you mean 2019 classical computers, or historical classical computers? – tparker Nov 01 '19 at 22:09
  • 1
    @Bridgeburners But at the most abstract, theoretical level, I believe that every computing device ever built before Sycamore could be effectively described using classical physics, and so I believe that Sycamore is the first device ever built in human history that can practically solve a large instance of a problem that lies outside of a computationally tractable classical complexity class. – tparker Nov 01 '19 at 22:13
  • Crucially, Google's computer is actually a computer. You can run programs on it and compute results. You can't do that with a pudding. – Al Nejati Nov 11 '19 at 20:08
32

I co-run an experimental research group in which, among other things, we develop the ability to control quantum bits so as to do (one day) quantum computing. In our lab we have the most precise quantum bits and operations, but we have only ever performed (or tried to perform) operations on two or three bits at a time. This is partly because we have taken an interest in other aspects of the problem, and partly because if we were to put ten or more qubits into our experiment (which would be easy to do), we would merely reproduce small-scale circuits rather than learn how to build a large computer.

I would say that the question asked by Bridgeburners is well asked, and it correctly characterises the limited nature of Google's calculation. However, one can look at Google's result from an experimental perspective, and then it does become, I think, very impressive.

From an experimental point of view, Google's achievement is that they have achieved sufficient experimental control over a circuit containing 53 qubits that it can generate entangled states involving all or most of the qubits, in such a way that the fidelity of the state is not immediately lost before the state can even be measured in some way. We would certainly not be able to achieve this in our lab today. If we devoted all our effort to doing the same thing, I think it would take us a year or more to implement the required extensions to our experimental equipment with the required precision. So it is indeed very impressive. (Meanwhile with our trapped ion methods we can also do some things which Google's machine could not do.)

Looking now to the near future, there are two main technologies showing promise for quantum computing. These are atomic ions confined in high vacuum, and superconducting circuits of the type employed by Google. A few years ago, a rough 'competition' was held, involving computations requiring only 10 or so qubits, and the trapped ion methods won because they could take advantage of the greater inter-connectivity of their qubits, and good general precision in operations and measurements. If such a competition were to be held now, it is not so clear which technology would win. Nor is it clear which is the better bet for complete control of 50 qubits in such a way that general-purpose computing could be done, answering questions people really want to know (as opposed to computer-science abstractions). But what is clear, I think, is that this stage will be reached by either or both paths, and this will happen on a timescale of years, not decades. What John Martinis and his colleagues have done is to show that the superconducting circuit approach is a very strong contender, and they have shown great expertise and mastery in overcoming many severe technical challenges to get this far.

Andrew Steane
  • 58,183
  • Great answer Andrew. You say this stage ... will happen on a timescale of years not decades. What type of general purpose computing do you believe will occur in years? I'm wondering about the practicality of these experiments and when will someone be able to market something 'useful'? – sfors says reinstate Monica Nov 04 '19 at 16:17
  • 1
    @sfors The first applications will be, I think, "quantum chemistry" or similar. That is, studying models of quantum systems, getting eigenvalues and info about wavefunctions, beyond what the chemists' toolbox can currently do. Time to market? I can't say beyond what I say above, i.e. that it seems to me we are no longer in the situation where we just smile and say it's interesting and worth trying. It would now be very surprising if it did not succeed. – Andrew Steane Nov 04 '19 at 18:21
14

It is (in isolation) no different from your pudding computer.

However, the context is very important. There are certain useful problems that we know how to solve much faster on a quantum computer than on a classical one. Those problems are miles out of reach, and probably will not be solved for another 10 or 20 years (I am more pessimistic than most, maybe too much so).

In the meantime, thousands of people all around the world are working on lots of different quantum computer ideas: superconducting ones like Google's and IBM's, but also optical ones of various types with quantum dots, down-conversion sources or nitrogen-vacancy centres.

These different communities have computers that work with fundamentally different hardware, so they are hard to compare against one another. This Google result indicates several things: (1) It shows that they are making some progress. (2) It is a way to "show off" how they are doing compared to the other approaches. For example, optics people are still at around 8 qubits, while superconducting hardware is at 53. This is not a useful comparison (the two machines are too different), but that won't stop people from showing off.

The real story is not the "quantum supremacy"; it's that they have a 53-qubit circuit that seems to kind of half-work some of the time. (This is where the optics people get to gloat: their stuff has fewer qubits but actually works most of the time.) In order to "sell" this machine as an interesting step along the way to something useful, they had to actually do something with it.

J.G.
  • 24,837
Dast
  • 1,796
-1

If I spill this pudding cup on the floor, the exact pattern it will form is very chaotic, and intractable for any supercomputer to calculate. But I just invented a new special type of computer: this pudding cup. And I'm going to do the calculation by spilling it on the floor and observing the result. I have achieved pudding supremacy.

You are right. But that is only half of the story.

The other half is that the spilled-pudding pattern would also have to be usable to solve a mathematical problem that is genuinely useful and impossible to solve with a conventional, non-pudding system. It would also have to be controllable, so that it deterministically solves that same problem any time you want. If your spilled pudding can do that, then it earns the label of supremacy over a conventional computer.

The point is that we need to solve problems by whatever means and methods are most efficient. If spilling pudding were genuinely the most efficient way to do it, it would reign supreme, and we would build pudding machines and pudding factories for solving problems.

Now, replace pudding with quantum.

Thaina
  • 898