## Quantum supremacy: the gloves are off

**Links:**

Google paper in *Nature*

*New York Times* article

IBM paper and blog post responding to Google’s announcement

Boaz Barak’s new post: “Boaz’s inferior classical inferiority FAQ”

Lipton and Regan’s post

My quantum supremacy interview with the BBC (featuring some of my fewest “uhms” and “ahs” ever!)

**NEW:** My preprint with Sam Gunn, On the Classical Hardness of Spoofing Linear Cross-Entropy Benchmarking

My interview on NPR affiliate WOSU (starts around 16:30)

When Google’s quantum supremacy paper leaked a month ago—not through Google’s error, but through NASA’s—I had a hard time figuring out how to cover the news here. I had to say *something*; on the other hand, I wanted to avoid any detailed technical analysis of the leaked paper, because I was acutely aware that my colleagues at Google were still barred by *Nature*‘s embargo rules from publicly responding to anything I or others said. (I was also one of the reviewers for the *Nature* paper, which put additional obligations on me.)

I ended up with Scott’s Supreme Quantum Supremacy FAQ, which tried to toe this impossible line by “answering general questions about quantum supremacy, and the consequences of its still-hypothetical achievement, in light of the leak.” It wasn’t an ideal solution—for one thing, because while I still regard Google’s sampling experiment as a historic milestone for our whole field, there *are* some technical issues, aspects that subsequent experiments (hopefully coming soon) will need to improve. Alas, the ground rules of my FAQ forced me to avoid such issues, which caused some readers to conclude mistakenly that I didn’t think there were any.

Now, though, the Google paper has come out as *Nature*‘s cover story, at the same time as there have been new technical developments—most obviously, the paper from IBM (see also their blog post) saying that they could simulate the Google experiment in 2.5 days, rather than the 10,000 years that Google had estimated.

(Yesterday I was deluged by emails asking me “whether I’d seen” IBM’s paper. As a science blogger, I try to respond to stuff pretty quickly when necessary, but I don’t—can’t—respond in Twitter time.)

So now the gloves are off. No more embargo. Time to address the technical stuff under the hood—which is the purpose of this post.

I’m going to assume, from this point on, that you already understand the basics of sampling-based quantum supremacy experiments, and that I don’t need to correct beginner-level misconceptions about what the term “quantum supremacy” does and doesn’t mean (no, it doesn’t mean scalability, fault-tolerance, useful applications, breaking public-key crypto, etc. etc.). If this is not the case, you could start (e.g.) with my FAQ, or with John Preskill’s excellent *Quanta* commentary.

Without further ado:

**(1) So what about that IBM thing? Are random quantum circuits easy to simulate classically?**

OK, so let’s carefully spell out what the IBM paper says. They argue that, by commandeering the full attention of Summit at Oak Ridge National Lab, the most powerful supercomputer that currently exists on Earth—one that fills the area of two basketball courts, and that (crucially) has **250 petabytes** of hard disk space—one could just barely store the entire quantum state vector of Google’s 53-qubit Sycamore chip in hard disk. And once one had done that, one could simulate the chip in ~2.5 days, more-or-less just by updating the entire state vector by brute force, rather than the 10,000 years that Google had estimated on the basis of my and Lijie Chen’s “Schrödinger-Feynman algorithm” (which can get by with less memory).

The IBM group understandably hasn’t actually run this simulation yet—even though IBM built Summit, the world’s #1 supercomputer isn’t just sitting around waiting for jobs! But I see little reason to doubt that their analysis is basically right. I don’t know why the Google team didn’t consider how such near-astronomical hard disk space would change their calculations; probably they wish they had.

I find this to be much, *much* better than IBM’s initial reaction to the Google leak, which was simply to dismiss the importance of quantum supremacy as a milestone. Designing better classical simulations is precisely how IBM and others *should* respond to Google’s announcement, and how I said a month ago that I hoped they *would* respond. If we set aside the pass-the-popcorn PR war (or even if we don’t), this is how science progresses.

But does IBM’s analysis mean that “quantum supremacy” hasn’t been achieved? No, it doesn’t—at least, not under any definition of “quantum supremacy” that I’ve ever used. The Sycamore chip took about 3 minutes to generate the ~5 million samples that were needed to pass the “linear cross-entropy benchmark”—the statistical test that Google applies to the outputs of its device.

(**Technical note added:** Google’s samples are extremely noisy—the actual distribution being sampled from is something like 0.998U+0.002D, where U is the uniform distribution and D is the hard distribution that you want. What this means, in practice, is that you need to take a number of samples that’s large compared to 1/0.002^{2}, in order to extract a signal corresponding to D. But the good news is that Google *can* take that many samples in just a few minutes, since once the circuit has been loaded onto the chip, generating each sample takes only about 40 microseconds. And once you’ve done this, what hardness results we have for passing the linear cross-entropy test—to be discussed later in this post—apply basically just as well as if you’d taken a single noiseless sample.)
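To make the sample-count arithmetic in that note concrete, here’s a quick back-of-the-envelope check (a sketch using only the figures quoted above; nothing here is new data):

```python
# Sanity check of the sample-count arithmetic in the technical note above,
# using the post's figures: fidelity ~0.002, ~40 microseconds per sample.

fidelity = 0.002                # weight of the hard distribution D in 0.998U + 0.002D
min_samples = 1 / fidelity**2   # need a number of samples large compared to this
print(min_samples)              # ~250,000

samples_taken = 5_000_000       # Google's actual sample count
time_per_sample = 40e-6         # seconds per sample, once the circuit is loaded
total_seconds = samples_taken * time_per_sample
print(total_seconds)            # ~200 seconds, i.e. about 3 minutes
```

So the ~250,000-sample statistical floor sits comfortably below the ~5 million samples Google actually took, and the whole run still fits in a few minutes of chip time.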

Anyway, you might notice that three minutes versus 2.5 days is still a quantum speedup by a factor of 1200. But even more relevant, I think, is to compare the number of “elementary operations.” Let’s generously count a FLOP (floating-point operation) as the equivalent of a quantum gate. Then by my estimate, we’re comparing ~5×10^{9} quantum gates against ~2×10^{20} FLOPs—a quantum speedup by a factor of ~40 billion.
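Both ratios can be re-derived in a couple of lines (a rough check on the quoted estimates; the ~5×10^{9} gate figure corresponds roughly to ~5 million samples times ~10^{3} gate operations per circuit run, which is an approximation on my part):

```python
# Back-of-the-envelope check of the two speedup figures quoted above.
quantum_seconds = 3 * 60              # Sycamore: ~3 minutes for ~5 million samples
classical_seconds = 2.5 * 24 * 3600   # IBM's Summit estimate: ~2.5 days

wall_clock_speedup = classical_seconds / quantum_seconds
print(wall_clock_speedup)             # 1200.0

quantum_gates = 5e9                   # ~5e9 elementary quantum gate operations
classical_flops = 2e20                # ~2e20 floating-point operations on Summit
op_speedup = classical_flops / quantum_gates
print(op_speedup)                     # 40000000000.0, i.e. ~40 billion
```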

For me, though, the broader point is that neither party here—certainly not IBM—denies that the top-supercomputers-on-the-planet-level difficulty of classically simulating Google’s 53-qubit programmable chip really *is* coming from the exponential character of the quantum states in that chip, *and nothing else*. That’s what makes this back-and-forth fundamentally different from the previous one between D-Wave and the people who sought to simulate *its* devices classically. The skeptics, like me, didn’t much care what speedup over classical benchmarks there was or wasn’t today: we cared about the *increase* in the speedup as D-Wave upgraded its hardware, and the trouble was that we never saw a convincing case that there would be one. I’m a theoretical computer scientist, and this is what I believe: that after the constant factors have come and gone, what remains are asymptotic growth rates.

In the present case, while increasing the circuit depth won’t evade IBM’s “store everything to hard disk” strategy, increasing the number of qubits will. If Google, or someone else, upgraded from 53 to 55 qubits, that would apparently already be enough to exceed Summit’s 250-petabyte storage capacity. At 60 qubits, you’d need 33 Summits. At 70 qubits, enough Summits to fill a city … you get the idea.
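The scaling claim is easy to make concrete. A sketch, assuming 8 bytes per amplitude and binary petabytes (both assumptions about conventions; they affect the answer only by small constant factors):

```python
# How state-vector storage scales with qubit count: 2^n amplitudes,
# here assumed stored at 8 bytes each (complex single precision -- an
# assumption; other encodings change things only by constant factors).

BYTES_PER_AMPLITUDE = 8
SUMMIT_PB = 250        # Summit's hard-disk capacity, in (binary) petabytes
PB = 2**50             # bytes per petabyte, binary convention

def summits_needed(n_qubits):
    """Number of Summit-sized disk systems needed for a full n-qubit state vector."""
    total_bytes = (2**n_qubits) * BYTES_PER_AMPLITUDE
    return total_bytes / (SUMMIT_PB * PB)

for n in (53, 55, 60, 70):
    print(n, summits_needed(n))
# 53 qubits fit on one Summit; 55 already don't; 60 need ~33 Summits;
# 70 need tens of thousands -- "enough Summits to fill a city."
```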

From the beginning, it was clear that quantum supremacy would not be a milestone like the moon landing—something that’s achieved in a moment, and is then clear to everyone for all time. It would be more like eradicating measles: it could be achieved, then temporarily unachieved, then re-achieved. For by definition, quantum supremacy is all about *beating* something—namely, classical computation—and the latter can, at least for a while, fight back.

As Boaz Barak put it to me, the current contest between IBM and Google is analogous to Kasparov versus Deep Blue—*except with the world-historic irony that IBM is playing the role of Kasparov!* In other words, Kasparov can put up a heroic struggle, during a “transitional period” that lasts a year or two, but the fundamentals of the situation are that he’s toast. If Kasparov had narrowly beaten Deep Blue in 1997, rather than narrowly losing, the whole public narrative would likely have been different (“humanity triumphs over computers after all!”). Yet as Kasparov himself well knew, the very fact that the contest was *close* meant that, either way, human dominance would soon end for good.

Let me leave the last word on this to friend-of-the-blog Greg Kuperberg, who graciously gave me permission to quote his comments about the IBM paper.

> I’m not entirely sure how embarrassed Google should feel that they overlooked this. I’m sure that they would have been happier to anticipate it, and happier still if they had put more qubits on their chip to defeat it. However, it doesn’t change their real achievement.
>
> I respect the IBM paper, even if the press along with it seems more grouchy than necessary. I tend to believe them that the Google team did not explore all avenues when they said that their 53 qubits aren’t classically simulable. But if this is the best rebuttal, then you should still consider how much Google and IBM still agree on this as a proof-of-concept of QC. This is still quantum David vs classical Goliath, in the extreme. 53 qubits is in some ways still just 53 bits, only enhanced with quantum randomness. To answer those 53 qubits, IBM would still need entire days of computer time with the world’s fastest supercomputer, a 200-petaflop machine with hundreds of thousands of processing cores and trillions of high-speed transistors. If we can confirm that the Google chip actually meets spec, but we need this much computer power to do it, then to me that’s about as convincing as a larger quantum supremacy demonstration that humanity can no longer confirm at all.
>
> Honestly, I’m happy to give both Google and IBM credit for helping the field of QC, even if it is the result of a strange dispute.

I should mention that, even before IBM’s announcement, Johnnie Gray, a postdoc at Imperial College, gave a talk (abstract here) at Caltech’s Institute for Quantum Information with a proposal for a *different* faster way to classically simulate quantum circuits like Google’s—in this case, by doing tensor network contraction more cleverly. Unlike both IBM’s proposed brute-force simulation, and the Schrödinger-Feynman algorithm that Google implemented, Gray’s algorithm (as far as we know now) would need to be repeated k times if you wanted k independent samples from the hard distribution. Partly because of this issue, Gray’s approach doesn’t currently look competitive for simulating thousands or millions of samples, but we’ll need to watch it and see what happens.

**(2) Direct versus indirect verification.**

The discussion of IBM’s proposed simulation brings us to a curious aspect of the Google paper—one that was already apparent when *Nature* sent me the paper for review back in August. Namely, Google took its supremacy experiments well past the point *where even they themselves knew how to verify the results*, by any classical computation that they knew how to perform feasibly (say, in less than 10,000 years).

So you might reasonably ask: if they couldn’t even verify the results, then how did they get to claim quantum speedups from those experiments? Well, they resorted to various gambits, which basically involved estimating the fidelity on quantum circuits that looked almost the same as the hard circuits, but happened to be easier to simulate classically, and then making the (totally plausible) assumption that that fidelity would be maintained on the hard circuits. Interestingly, they also cached their outputs and put them online (as part of the supplementary material to their *Nature* paper), in case it became feasible to verify them in the future.

Maybe you can now see where this is going. From Google’s perspective, IBM’s rainstorm comes with a big silver lining. Namely, by using Summit, hopefully it will now be possible to verify Google’s hardest (53-qubit and depth-20) sampling computations directly! This should provide an excellent test, since not even the Google group themselves would’ve known how to cheat and bias the results had they wanted to.

This whole episode has demonstrated the importance, when doing a sampling-based quantum supremacy experiment, of *going deep into the regime where you can no longer classically verify the outputs*, as weird as that sounds. Namely, you need to leave yourself a margin, in the likely event that the classical algorithms improve!

Having said that, I don’t mind revealing at this point that the lack of direct verification of the outputs, for the largest reported speedups, was my single biggest complaint when I reviewed Google’s *Nature* submission. It was because of my review that they added a paragraph explicitly pointing out that they *did* do direct verification for a smaller quantum speedup:

> The largest circuits for which the fidelity can still be directly verified have 53 qubits and a simplified gate arrangement. Performing random circuit sampling on these at 0.8% fidelity takes one million cores 130 seconds, corresponding to a million-fold speedup of the quantum processor relative to a single core.

(An earlier version of this post misstated the numbers involved.)

**(3) The asymptotic hardness of spoofing Google’s benchmark.**

OK, but if Google thought that spoofing its test would take 10,000 years, using the best known classical algorithms running on the world’s top supercomputers, and it turns out instead that it could probably be done in more like 2.5 days, then how much else could’ve been missed? Will we find out next that Google’s benchmark can be classically spoofed in mere milliseconds?

Well, no one can rule that out, but we do have some reasons to think that it’s unlikely—and crucially, that even if it turned out to be true, one would just have to add 10 or 20 or 30 more qubits to make it no longer true. (We can’t be more definitive than that? Aye, such are the perils of life at a technological inflection point—and of computational complexity itself.)

The key point to understand here is that we really are talking about simulating a *random* quantum circuit, with no particular structure whatsoever. While such problems *might* have a theoretically efficient classical algorithm—i.e., one that runs in time polynomial in the number of qubits—I’d personally be much less surprised if you told me there was a polynomial-time classical algorithm for factoring. In the universe where amplitudes of random quantum circuits turn out to be efficiently computable—well, you might as well just tell me that P=PSPACE and be done with it.

Crucially, if you look at IBM’s approach to simulating quantum circuits classically, *and* Johnnie Gray’s approach, *and* Google’s approach, they could all be described as different flavors of “brute force.” That is, they all use extremely clever tricks to parallelize, shave off constant factors, make the best use of available memory, etc., but none involves any deep new mathematical insight that could roust BPP and BQP and the other complexity gods from their heavenly slumber. More concretely, none of these approaches seem to have any hope of “breaching the 2^{n} barrier,” where n is the number of qubits in the quantum circuit to be simulated (assuming that the circuit depth is reasonably large). Mostly, they’re just trying to get down to that barrier, while taking the maximum advantage of whatever storage and connectivity and parallelism are there.

Ah, but at the end of the day, we only believe that Google’s Sycamore chip is solving a classically hard problem because of the statistical test that Google applies to its outputs: the so-called “Linear Cross-Entropy Benchmark,” which I described in Q3 of my FAQ. And even if we grant that calculating the output probabilities for a random quantum circuit is almost certainly classically hard, and sampling the output distribution of a random quantum circuit is almost certainly classically hard—still, couldn’t *spoofing Google’s benchmark* be classically easy?
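To make the benchmark concrete, here’s a toy simulation (my construction, not Google’s code): for a random circuit, the ideal output probabilities approximately follow an exponential (“Porter-Thomas”) distribution, and the linear XEB score F = 2^{n}·⟨p(x_i)⟩ − 1 comes out near 1 for samples drawn from the true distribution and near 0 for a uniform spoofer.

```python
# Toy demonstration of the Linear Cross-Entropy Benchmark (a sketch, not
# Google's pipeline). We fake a "random circuit" by drawing its 2^n output
# probabilities from the Porter-Thomas (exponential) shape, then compare
# the XEB score of a faithful sampler vs. a uniform "classical spoofer."
import random

random.seed(0)
n = 12
N = 2**n

# Ideal output distribution of our fake random circuit.
raw = [random.expovariate(1.0) for _ in range(N)]
total = sum(raw)
probs = [p / total for p in raw]

def xeb(samples):
    """Linear XEB: 2^n times the mean ideal probability of the samples, minus 1."""
    return N * sum(probs[x] for x in samples) / len(samples) - 1

k = 20000
faithful = random.choices(range(N), weights=probs, k=k)  # sampled from the true distribution
spoofed = [random.randrange(N) for _ in range(k)]        # uniformly random bitstrings

print(round(xeb(faithful), 2))  # close to 1
print(round(xeb(spoofed), 2))   # close to 0
```

In the real experiment the score is much smaller than 1 because of noise (the ~0.002 fidelity discussed above), which is exactly why so many samples are needed to distinguish it from 0.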

This last question is where complexity theory can contribute something to the story. A couple weeks ago, UT undergraduate Sam Gunn and I adapted the hardness analysis from my and Lijie Chen’s 2017 paper “Complexity-Theoretic Foundations of Quantum Supremacy Experiments,” to talk directly about the classical hardness of spoofing the Linear Cross-Entropy benchmark. Our short paper about this ~~should be on the arXiv later this week (or early next week, given that there are no arXiv updates on Friday or Saturday nights)~~ here it is.

Briefly, Sam and I show that if you had a sub-2^{n} classical algorithm to spoof the Linear Cross-Entropy benchmark, then you’d also have a sub-2^{n} classical algorithm that, given as input a random quantum circuit, could estimate a *specific* output probability (for example, that of the all-0 string) with variance at least *slightly* (say, Ω(2^{-3n})) better than that of the trivial estimator that just always guesses 2^{-n}. Or in other words: we show that spoofing Google’s benchmark is no easier than the general problem of nontrivially estimating amplitudes in random quantum circuits. Furthermore, this result automatically generalizes to the case of noisy circuits: all that the noise affects is the threshold for the Linear Cross-Entropy benchmark, and thus (indirectly) the number of samples one needs to take with the QC. Our result helps to explain why, indeed, neither IBM nor Johnnie Gray nor anyone else suggested any attack that’s specific to Google’s Linear Cross-Entropy benchmark: they all simply attack the general problem of calculating the final amplitudes.

**(4) Why use Linear Cross-Entropy at all?**

In the comments of my FAQ, some people wondered why Google chose the Linear Cross-Entropy benchmark specifically—especially since they’d used a different benchmark (*multiplicative* cross-entropy, which unlike the linear version actually *is* a cross-entropy) in their earlier papers. I asked John Martinis this question, and his answer was simply that linear cross-entropy had the lowest variance of any estimator they tried. Since I *also* like linear cross-entropy—it turns out, for example, to be convenient for the analysis of my certified randomness protocol—I’m 100% happy with their choice. Having said that, there are many other choices of benchmark that would’ve also worked fine, and with roughly the same level of theoretical justification.

**(5) Controlled-Z versus iSWAP gates.**

Another interesting detail from the Google paper is that, in their previous hardware, they could implement a particular 2-qubit gate called the Controlled-Z. For their quantum supremacy demonstration, on the other hand, they modified their hardware to implement a different 2-qubit gate called the ~~iSWAP~~ some weird combination of iSWAP and Controlled-Z; see the comments section for more. Now, this other gate has no known advantages over the Controlled-Z, for any applications like quantum simulation or Shor’s algorithm or Grover search. Why then did Google make the switch? Simply because, with certain classical simulation methods that they’d been considering, the simulation’s running time grows like 4 to the power of the number of these other gates, but only like 2 to the power of the number of Controlled-Z gates! In other words, they made this engineering choice purely and entirely to make a classical simulation of their device sweat more. This seems totally fine and entirely within the rules to me. (Alas, this choice has no effect on a proposed simulation method like IBM’s.)
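Since the gate zoo here can be confusing, here’s a small pure-Python check (my sketch; it takes as given the FSIM parametrization that Craig Gidney quotes in comment #9, with FSIM(theta, 0) = exp(i·theta·(XX + YY))) confirming that theta = pi/4 with phi = 0 is exactly the textbook iSWAP:

```python
# Verifying, in pure Python, that FSIM(pi/4, 0) = exp(i*(pi/4)*(XX+YY))
# equals the iSWAP gate (conventions as quoted in comment #9 -- an assumption).
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def expm(M, terms=40):
    """Matrix exponential via Taylor series (fine for these tiny matrices)."""
    n = len(M)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    for k in range(1, terms):
        power = matmul(power, M)
        result = [[result[i][j] + power[i][j] / math.factorial(k) for j in range(n)]
                  for i in range(n)]
    return result

theta = math.pi / 4
# i*theta*(XX + YY) in the computational basis |00>, |01>, |10>, |11>:
H = [[0, 0, 0, 0],
     [0, 0, 2j * theta, 0],
     [0, 2j * theta, 0, 0],
     [0, 0, 0, 0]]
U = expm(H)

ISWAP = [[1, 0, 0, 0],
         [0, 0, 1j, 0],
         [0, 1j, 0, 0],
         [0, 0, 0, 1]]
ok = all(abs(U[i][j] - ISWAP[i][j]) < 1e-9 for i in range(4) for j in range(4))
print(ok)  # True
```

The Controlled-Z, by contrast, is just the diagonal gate diag(1, 1, 1, −1); the simulation-cost asymmetry Google exploited (4 vs. 2 per gate) comes from the off-diagonal structure above, not from anything in this little check.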

**(6) Gil Kalai’s objections.**

Over the past month, *Shtetl-Optimized* regular and noted quantum computing skeptic Gil Kalai has been posting one objection to the Google experiment after another on his blog. Unlike the IBM group and many of Google’s other critics, Gil completely accepts the centrality of quantum supremacy as a goal. Indeed, he’s firmly predicted for years that quantum supremacy could never be achieved for fundamental reasons—and he agrees that the Google result, if upheld, would refute his worldview. Gil also has no dispute with the exponential classical hardness of the problem that Google is solving.

Instead, Gil—if we’re talking not about “steelmanning” his beliefs, but about what he himself actually said—has taken the position that the Google experiment must’ve been done wrong and will need to be retracted. He’s offered varying grounds for this. First he said that Google never computed the full histogram of probabilities with a smaller number of qubits (for which such an experiment is feasible), which would be an important sanity check. Except, it turns out they *did* do that, and it’s in their 2018 *Science* paper. Next he said that the experiment is invalid because the qubits have to be calibrated in a way that depends on the specific circuit to be applied. Except, this too turns out to be false: John Martinis explicitly confirmed for me that once the qubits are calibrated, you can run any circuit on them that you want. In summary, unlike the objections of the IBM group, so far I’ve found Gil’s objections to be devoid of scientific interest or merit.

**Update #1:** Alas, I’ll have limited availability today for answering comments, since we’ll be grading the midterm exam for my Intro to Quantum Information Science course! I’ll try to handle the backlog tomorrow (Thursday).

**Update #2:** Aaannd … timed to coincide with the Google paper, last night the group of Jianwei Pan and Chaoyang Lu put up a preprint on the arXiv reporting a BosonSampling experiment with ~~20 photons~~ 14 photons observed out of 20 generated (the previous record had been 6 photons). At this stage of the quantum supremacy race, many had of course written off BosonSampling—or said that its importance was mostly historical, in that it inspired Google’s random circuit sampling effort. I’m thrilled to see BosonSampling itself take such a leap; hopefully, this will eventually lead to a demonstration that BosonSampling was (is) a viable pathway to quantum supremacy as well. And right now, with fault-tolerance still having been demonstrated in *zero* platforms, we need all the viable pathways we can get. What an exciting day for the field.

Comment #1 October 23rd, 2019 at 11:25 am

This blog is always the first thing I check if there’s news on quantum computing. Thanks Scott!

Exciting times for computing.

Comment #2 October 23rd, 2019 at 12:02 pm

The claim that out of every 500 samples you would “expect one to be from the hard distribution” seems a bit unfair to me. In particular, 500 samples from 0.998 U + 0.002 D would likely still be close in total variational distance to 500 samples from U. In order to gain enough statistical evidence to show that the D part exists, you would need more like 500^2 or 250000 samples.

Comment #3 October 23rd, 2019 at 12:17 pm

So, what are the remaining (not yet fully addressed) loopholes, subtleties, qualifications, limitations, etc?

One is the private nature of “John Martinis explicitly confirmed for me …”; no one can yet independently verify this.

But surely the biggest is independent replication, a totally independent team making their own QC, running their own experiment/test, and submitting a paper to Nature, etc, right?

Comment #4 October 23rd, 2019 at 12:23 pm

IBM’s paper is actually a good illustration of how against the ropes classical computing is in this arena, where you have to resort to disk storage as your secret weapon.

I wouldn’t rule out a rebuttal of a future experiment using tape devices, and you might as well do it now.

Unlike you, I find Kalai’s arguments far more interesting. They may be wrong but at least they target the claim of quantum supremacy at its core, IBM is just playing word games.

Ok, so it’s supremacy over classical devices using RAM storage. Is that less significant? Not really.

Comment #5 October 23rd, 2019 at 12:23 pm

Another question, from what I can tell from the paper the Linear Cross-Entropy Benchmark involves considering the idealized probabilities for the outputs of your circuit and performing some simple computation on them. Is this correct?

If so, I’m not sure how Google’s claims that:

A) It would take 10,000 years to spoof this test on a classical computer

And

B) They have verified that their circuit passed this test

could simultaneously be true. In particular, any classical computer capable of computing the values p(x_i) should be able to use these values to spoof any statistical test involving them without much added difficulty.

Am I missing something here?

Comment #6 October 23rd, 2019 at 12:27 pm

Scott

Re: IBM paper. Your rear guard action to try to claim that nonetheless quantum supremacy has been established is not really tenable. It is true that the algorithm IBM describes requires massive computational power. But, as I said before, that is just one piece of code. They themselves say that their same algorithm could be sped up by a factor of 9 or so with a little more effort. It also seems plausible that a better algorithm would need less disk space etc. If 2 days is too slow, is 2 hours or 2 minutes?

Overall it is important to remember what is happening. Google are building a physical system that is hard to simulate classically. This is of course easy to do in general. Just take pretty much anything in computational chemistry and ask for a very high quality low level simulation. The reason why Google’s work may be interesting is that we hope it is a step towards building a programmable quantum computer. To the extent that it is, that’s exciting. But just building a physical system that needs a really huge computer to simulate is not.

Comment #7 October 23rd, 2019 at 12:37 pm

Daniel Kane #2: Yes, Boaz Barak made exactly the same argument. It’s true that you need ~500^{2} samples for the statistical test. On the other hand, how would you correctly generate 500 samples without running the huge simulation at least once in expectation? It seems to me that either comparison point could be defended.

Comment #8 October 23rd, 2019 at 12:40 pm

Daniel Kane #5: Yes, precisely that point was addressed in part (2) of my post—did you read it?

Comment #9 October 23rd, 2019 at 12:42 pm

One small correction.

The supremacy experiment didn’t use the iswap gate, it used a *variety* of gates *near* the iswap gate. Specifically, gates of the form FSIM(theta, phi) = e^(i theta (XX + YY) + phi ZZ) where in practice theta was pi/4 +- 10° and phi was pi/6 +- 10°. The iswap is theta=pi/4, phi=0. The gate that was used was whatever one was judged to have calibrated the best for each pair of qubits.

At first this really bothered me, because it means that it is not possible to re-run a historical circuit with the same fidelity. You’ll never see the “best gates” be the same. But there were a few things that soothed my mind:

1. The certified random number generation task is still possible under this constraint. It just adds another network round trip, where the circuit-generating client asks the randomness-generating server what the available gate set is today and does some sanity checks that it’s not purposefully weakened.

2. In terms of NISQ-y business, where after optimizing the 2-qubit interactions you want to do are almost always unstructured 4×4 unitaries or CPHASEs with weird angles, the decompositions you get with the perturbed ISWAPs are just as good as what you’d get with textbook CZs. You can CPHASE by any angle using two CZs, or two ISWAPs, or two perturbed ISWAPs. For arbitrary 4×4 unitaries you need three CZs and it looks like you usually only need three perturbed ISWAPs (though at the moment I only know an analytic decomposition that uses four).

3. If you really, really want CZs (e.g. you’re implementing a surface code cycle) the overhead factor is two. That’s bad, clearly you would fix this problem, but it’s not some sort of fundamental disaster. You could do fault tolerance on a quantum computer that only gave you perturbed gates, it would just have a lower threshold.

Disclaimer: I work on the google quantum team.

Comment #10 October 23rd, 2019 at 12:45 pm

“As far as we know today, the Summit supercomputer would need the full 2.5 days, or 2×10^{20} FLOPs, even to generate one sample from the output distribution of an ideal, randomly-generated 53-qubit quantum circuit.”

I thought that IBM’s proposed Summit simulation would produce the entire state vector, which can be sampled arbitrarily many times. So after 2.5 days, the time to produce additional samples would be small?

Comment #11 October 23rd, 2019 at 12:46 pm

dorothy #6: I’m just calling ‘em as I see ‘em. I’ve criticized Google in the past, I criticized them even in this post, and I’ll surely criticize them in the future. But I still don’t see how you’re going to scale IBM’s proposed approach to 70 qubits without building a supercomputer the size of a city.

In any case, the claims

(1) Google has not achieved quantum supremacy and

(2) quantum supremacy is not important / interesting anyway

are logically completely independent. Even if we accepted IBM’s arguments for (1), that would provide no support whatsoever for (2). Let’s not pretend otherwise.

Comment #12 October 23rd, 2019 at 12:49 pm

Craig Gidney #9: Ok, thanks!

Comment #13 October 23rd, 2019 at 12:50 pm

Tom Wong #10: Correct! That’s consistent with what I wrote.

Comment #14 October 23rd, 2019 at 12:59 pm

Scott #7: “On the other hand, how would you correctly generate 500 samples without running the huge simulation at least once in expectation?”

Well, what do you mean by “correctly generate”? By giving 500 samples from the uniform distribution, I can produce something that is close in total variational distance to the distribution you’re describing. In other words, this trivial procedure can be thought of as correctly generating 500 samples about 95% of the time.

Comment #15 October 23rd, 2019 at 1:03 pm

The time taken by the classical and quantum algorithms seems like the wrong comparison point. I’d be more interested in knowing, say, the energy consumption or the monetary cost of these algorithms. Do we have such figures? For example, how many kWh of electricity would Summit consume and how much energy is required to cool Sycamore to operating temperature.

I hope that there will be further quantum supremacy demonstrations from other groups. Given the hard-to-verify nature of the claims, it will be much more convincing when there are, say, three independent quantum supremacy demonstrations. (The first one is of course a huge achievement and I don’t mean to detract from it.)

Comment #16 October 23rd, 2019 at 1:03 pm

Scott@13: But doesn’t the “just” very strongly imply comparable difficulty in simulating further samples?

Comment #17 October 23rd, 2019 at 1:09 pm

Scott #11. They are logically independent in much the same sense that me both not being at the crime scene and also not having any motivation to commit the murder are. They are also both not exactly what I said/meant. If Google can scale their experiment to 80+ qubits then clearly IBM’s current algorithm will not do. But they haven’t done that. I am not sure it’s legitimate to talk about Google’s experiment as if it can be trivially scaled to arbitrary numbers of qubits. For the second part, if quantum supremacy means literally any physical experiment which is very hard to simulate on a very expensive PC, then that is worthless as I suggested before. Worthless largely because we have been able to do that for a long time. Surely the whole point of Google’s experiment is the suggestion that it is significant stepping stone to a full programmable quantum computer. So the main question is, is it?

Comment #18 October 23rd, 2019 at 1:10 pm

I would like to reiterate that Google did -not- use either a CZ or iSWAP gate for their two qubit interactions. It was in fact a superposition of those two interactions, and indeed a different superposition for each interacting pair. (These are certainly not Clifford operations.) This is important, I think, because it was almost certainly done this way for an important technical reason: to limit coherent crosstalk. By doing gates on resonance in this way, they are able to keep their interaction strength as low as possible and limit spurious interactions with next-nearest neighbors (etc.) which generally come in at a higher order. They also get to do the gate as fast as possible this way (using the full resonant strength), limiting errors due to T1.

I don’t think this negates the core result, but it does definitely limit the (even theoretical) usefulness of this hardware for executing actual quantum algorithms (at least the way it was tuned up here.) No actual algorithm could make use of these two qubit gates. Additionally, speaking as someone who works on a competing technology (silicon quantum dots), the fact that Google had to use this trick further supports the contention that superconducting qubits are facing a huge hurdle in dealing with their crosstalk / unwanted entanglement problems.

And not for nothing, but I would’ve hoped that a referee for this paper would have understood what Google did in this regard. It’s arguably the sketchiest thing involved in the actual experiment (e.g. not the analysis, c.f. IBM.)
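For concreteness, the fSim(θ, φ) family described here (θ the swap angle, φ the conditional phase on |11⟩) interpolates continuously between iSWAP-like and CZ-like gates, and neither endpoint was used on its own. A minimal numpy sketch of the matrix, following the convention in Google's supplementary material (the exact sign conventions here are my assumption):

```python
import numpy as np

def fsim(theta, phi):
    # fSim(theta, phi): partial iSWAP with swap angle theta, plus a
    # conditional phase phi on |11> (convention as in Google's paper)
    return np.array([
        [1, 0, 0, 0],
        [0, np.cos(theta), -1j * np.sin(theta), 0],
        [0, -1j * np.sin(theta), np.cos(theta), 0],
        [0, 0, 0, np.exp(-1j * phi)],
    ])

cz = fsim(0, np.pi)                    # theta=0, phi=pi recovers CZ exactly
iswap = fsim(np.pi / 2, 0)             # theta=pi/2, phi=0 is iSWAP (up to phase convention)
sycamore = fsim(np.pi / 2, np.pi / 6)  # the gate Google aimed to calibrate to
```

Every member of the family is unitary, but only isolated points coincide with standard textbook gates, which is the commenter's point about algorithmic usability.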

Comment #19 October 23rd, 2019 at 1:16 pm

Craig Gidney #9: thank you for this clarification. The fact that the gates used in the paper were not drawn from the usual gate set bothered me because I wondered if that fact implied that the Sycamore device was really more of a simulator than a fully programmable computer, but I see that this is not the case. I think it would have been nice to make this point in the paper.

Comment #20 October 23rd, 2019 at 1:24 pm

So, it takes 0.02 seconds for Sycamore to get 1 true sample from D (combined with 499 false samples). It takes 2.5 days for Summit to get “all the samples” from D, i.e., 2^{53} amplitudes. By that token, the “Summit speedup” is 2^{53} * (0.02 seconds / 2.5 days), or about 10^{9}.

Of course, it depends what you want, one sample or lots of them.
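A quick back-of-the-envelope check of that ratio (taking 2.5 days = 216,000 seconds):

```python
# Ratio of Summit's throughput (2^53 amplitudes per 2.5 days) to
# Sycamore's (one true sample per 0.02 seconds of wall time)
samples = 2**53
summit_seconds = 2.5 * 24 * 3600          # 216,000 s
sycamore_seconds_per_sample = 0.02

ratio = samples * sycamore_seconds_per_sample / summit_seconds
print(f"{ratio:.3g}")  # prints 8.34e+08, i.e. about 10^9
```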

Comment #21 October 23rd, 2019 at 1:29 pm

Scott@13: But doesn’t the “even to generate one” very strongly imply comparable difficulty in simulating further samples?

Comment #22 October 23rd, 2019 at 1:32 pm

I saw people talking about the IBM paper and my first response was “wait until the Scott post”. As always, a good move. Thanks for the great explanation of the disagreements, and I’m so grateful to have been introduced to QM by you so I can fully appreciate this exciting time in computing.

Comment #23 October 23rd, 2019 at 1:49 pm

How many qubits would a QC need in order to simulate the quantum state of IBM’s 250-petabyte hard drive?

Comment #24 October 23rd, 2019 at 2:01 pm

Daniel Kane #14: On reflection, you’re right. I was thinking that if the sampling process is effectively “with probability 1-ε output a uniformly random string, and with probability ε sample from D,” then the quantum computer should be penalized by a 1/ε factor, but you (along with Boaz Barak and others) have convinced me that it really does need to be 1/ε^{2}, so that the joint distribution over samples has at least Ω(1) variation distance from the uniform distribution. I retract that paragraph of my post and will update accordingly. Thanks!

Comment #25 October 23rd, 2019 at 2:02 pm

Scott,

Since you’ve reviewed the paper and seem to fully understand linear cross-entropy benchmarking:

As I understand it F_XEB is an estimate of a sort of divergence between a) the classically computed ideal distribution, and b) the non-ideal empirically observed distribution. It is calculated from the mean of the simulated probabilities of the bitstrings measured, i.e. the probability mass function P(x_i) that goes into equation 1 is that of the classically computed ideal distribution (a).

However, it is also claimed (right below equation 1) that if the sampled bitstrings in fact come from the uniform distribution then F_XEB will be zero. In section IV.C of the Supplemental Information it is explained that this happens because P(x_i) in this case will always be 1/2^n for all i. How can this be? It is clearly stated that P(x_i) is the PMF of the classically computed ideal distribution (a). How can it also be the PMF of the uniform distribution? The classically computed ideal distribution of most random quantum circuits should not be the uniform distribution, right?

I can’t escape the feeling that there is a contradiction here. Could you shed some light?
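One way to see that there is no contradiction: P(x_i) is always the ideal distribution's probability of the string x_i; what varies is where the samples x_i come from. If the x_i are uniform, the expected ideal probability of a sample is sum_x (1/2^n) P(x) = 1/2^n, so F_XEB = 2^n * mean(P(x_i)) - 1 = 0, even though P itself is far from uniform. A toy numpy check, using a random complex Gaussian state as a stand-in for a random circuit's Porter-Thomas output (an illustration only, not Google's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
dim = 2**n

# Stand-in for a random circuit's ideal output distribution:
# a normalized complex Gaussian state has Porter-Thomas statistics
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)
P = np.abs(psi)**2  # the ideal PMF, used for BOTH evaluations below

def f_xeb(samples):
    # linear XEB: 2^n times the mean ideal probability of the samples, minus 1
    return dim * P[samples].mean() - 1

ideal_draws = rng.choice(dim, size=200_000, p=P)  # device sampling ideally
uniform_draws = rng.integers(dim, size=200_000)   # device outputting noise

print(f_xeb(ideal_draws))    # close to 1 for Porter-Thomas statistics
print(f_xeb(uniform_draws))  # close to 0: same P, different samples
```

The same function P scores both sample sets; only the provenance of the samples differs, which is what makes F_XEB a useful benchmark.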

Comment #26 October 23rd, 2019 at 2:08 pm

Is it fair to ask a classical computer to produce a correct sample from a noise-free quantum circuit, or should we ask it to produce a correct sample from the noisy circuit Sycamore is actually implementing? In other words, if the distance between rho_sycamore and rho_ideal is x%, then will it still take 2.5 days for a classical computer to sample from the output distribution of rho_sycamore?

Comment #27 October 23rd, 2019 at 2:11 pm

Craig Gidney #9 and Matt Reed #18: Thanks for that correction! I’ve fixed the post. I confess that I don’t see how this changes anything substantive: yes, Google substituted a useful 2-qubit gate for a gate whose only real advantage is hardness of classical simulation, and that’s exactly what I wrote in the original post. It’s just that the new gates are a combination of Controlled-Z and iSWAP rather than iSWAP itself.

Comment #28 October 23rd, 2019 at 2:15 pm

Shruti Puri #26: That issue is already fully accounted for by the ~1/ε^{2} penalty that I now agree needs to be applied to a QC that samples with fidelity ε—please see the discussion above.

Comment #29 October 23rd, 2019 at 2:17 pm

Is there something non-obvious to experts about the nature, relevance, feasibility, or efficacy of the space/time tradeoff approach described by IBM? If not, I’m very puzzled. I can just see an isolated team of very bright people missing an obvious tradeoff such as that in the midst of all else that they were up to – but for basically the whole world to miss it for a month has me scratching my head.

Comment #30 October 23rd, 2019 at 2:32 pm

Just adding a few questions about the issue Craig Gidney and Matt Reed mentioned:

There were several aspects related to this where I hoped the paper would contain more information, and I couldn’t find it. Does anybody know the answers to the following questions?

1. Are the 5 real parameters they learned for each of their 430 fSim gates published anywhere?

2. Craig, I see you said that the deviations from ideal for theta and phi were on the order of +/-10 degrees. I found one comment on the paper saying this about theta, but I didn’t see anything about phi deviations in the paper. Did I miss this?

3. This one I wondered about quite a bit: When Google did their classical supercomputer computations of the true distributions, did they use the ‘idealized’ theta = pi/4, phi = pi/6 gate, or did they use the actual learned noisy versions of these gates? I couldn’t seem to find the answer to that question anywhere in the paper.

Comment #31 October 23rd, 2019 at 2:38 pm

Scott #27 This is one of the loopholes (etc) my #3 refers to. Not this specific thing, but an example of a class: statements which include detail need to be technically spot on, even if – in some sense or other – something demonstrably different is, effectively, no different.

Comment #32 October 23rd, 2019 at 2:40 pm

Regarding Gil’s objections: On one hand, I mostly don’t follow them. On the other hand, to be fair to him, I don’t think it’s accurate to say the Google group did publish the full histogram for a smaller number of qubits in their 2018 Science paper.

First of all, that paper had only 9 qubits, and in fact they restricted to the span of basis vectors with 4 0’s and 5 1’s, so it’s only a 126-dimensional space (= like 7 qubits).

Second, the setup is very different; the qubits are arranged on a line, and it’s really not at all similar to the later 53-qubit experiment.

Third… well, I currently don’t see where these histograms are. This is the paper, right? http://web.physics.ucsb.edu/~martinisgroup/papers/Neill2017.pdf

On page 2 I see a couple of histograms for the case of 5-qubit strings with 2 1’s, so it’s a distribution on 10 outcomes (~ 3 qubits). This does not really feel like it’s answering Gil’s objection in any way.

If you read the recent Google paper, it’s clear from studying their figures that they did experiments and computed full distributions for all even n between 12 and 38. It would really be great if we could see, e.g., one of the circuits they used for n = 12, together with the ideal 4096 probabilities (either in the ideal 2-qubit-gate model, or in their model where they estimate the empirical unitary of each of their 2-qubit gates), and also the results of a few hundred thousand draws from their quantum circuit.

Comment #33 October 23rd, 2019 at 2:43 pm

Scott #26 Is the statement that the time for the classical computer to sample from the noisy quantum computation is simply ~ ε^{2} * 2.5 days? I would have assumed that noise would simplify the calculations even more than this. One could imagine some clever tensor network contractions.

Comment #34 October 23rd, 2019 at 2:53 pm

Scott,

You said “if you had a sub-2^n classical algorithm to spoof the Linear Cross-Entropy benchmark, then you’d also have a sub-2^n classical algorithm that, given as input a random quantum circuit, could estimate a specific output probability…”

If I am interpreting Gil Kalai correctly, he alluded to the possibility in one of the comments of his blog that the number n=53 is too small for asymptotic results like this to apply. See https://gilkalai.wordpress.com/2019/09/23/quantum-computers-amazing-progress-google-ibm-and-extraordinary-but-probably-false-supremacy-claims-google/#comment-61122

Comment #35 October 23rd, 2019 at 3:01 pm

I agree that using these funny gates does not negate any specific claim of the paper. It is, however, important to understand what Google did (and speculate as to why) to assess the state of the experimental art represented here. As I mentioned in my first post, I suspect that the reason it was done this way was to reduce the effect of spurious entanglement with spectator qubits, which is a key engineering challenge for this technology. My expectation is that this will be a growing problem for large(r) superconducting quantum devices like the one used here. As we as a community try to push to solve real problems, understanding these challenges to scaling may end up being one of the most interesting parts of this result.

Comment #36 October 23rd, 2019 at 3:07 pm

Scott #13

The way you phrased it is ambiguous.

There’s a constant cost of 2.5 days to compute the entire system? While that’s dependent exponentially on the qubit count (*), it sounds like this cost is then independent of subsequent sample count? If so, what’s the additional time per sample?

(*) one could say that Google also needed some significant time to “setup” their QC, with that confusing business of selecting “good” gates and whatnot… how long did that take them?

Comment #37 October 23rd, 2019 at 3:07 pm

I have nothing useful to add to a technical discussion way above my head.

I’m a little surprised you revealed that you reviewed Google’s accepted paper on behalf of Nature. Doesn’t this give rise to concerns about conflict of interest?

Comment #38 October 23rd, 2019 at 3:15 pm

fred #36: Once you’ve computed the entire output state (as you can do if you have 250 friggin’ petabytes of hard disk), it’s then trivial to output as many samples as you want. I apologize if that wasn’t clear.

Comment #39 October 23rd, 2019 at 3:17 pm

Michael B #37: Apparently Nature today is taking the unusual step of revealing the names of the reviewers (with the reviewers’ consent, including mine). What exactly is the conflict of interest here? As a reviewer, I tried to evaluate the claims being made. As a blogger, I … also try to evaluate the claims being made.

Comment #40 October 23rd, 2019 at 3:19 pm

Most simulations suggested will result in fidelity that exceeds the fidelity of the real computer by orders of magnitude. Moreover, the way their fidelity / linear cross-entropy is defined, given a full simulation with full output probabilities, the achievable linear cross-entropy exceeds the value they assumed: you can cheat by outputting only those samples with the highest probability, and your computed linear cross-entropy will exceed the maximum assumed value.

So is it possible to ‘spoof’ the result by just having a small chance of giving an output with high probability?

Your hardness assumptions also reduce to saying it’s hard to give any high probability output of a random circuit with any advantage over random guessing.

But how certain are we of that assumption?

Let’s take for example, an algorithm which does this: for n gates, q qubits, choose m (m very small). Simulate by computing full probability after m gates, then choosing the highest probability output, then continue to the next gates, doing this n/m times.

m is taken to be small, so this is still possible to compute classically. But now the question is: what are the chances, for a random circuit, that the output we got from this isn’t a high-probability output? Because of the linearity of everything, the output probability of our result should also be high, unless, by chance, destructive interference from states we discarded after the measurements reduced it down. The bigger we allow the Hilbert space to grow before we measure it, the lower the chance that a random distribution from one set of m random gates has destructive interference with another set of m random gates.

The key question should be: how big does the Hilbert space need to grow so that the fidelity reaches the levels of Google’s experiment? It’s also important that this algorithm is ‘cheating’ the benchmarks even further because, instead of naively sampling according to calculated probabilities, it tries to sample high-probability outputs. So at m = number of gates, it doesn’t just reach the maximum fidelity / linear cross-entropy of the naive classical simulation, but exceeds it.
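The block-by-block procedure above can be prototyped in a few lines. The sketch below is purely illustrative and makes a strong simplification: each "block of m gates" is a full-dimension Haar-random unitary, in which case the collapse step destroys essentially all usable correlation, so this toy mainly shows the mechanics; whether such a spoofer beats random guessing on realistic shallow circuits is exactly the open question raised here:

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(d):
    # QR of a complex Gaussian matrix, with column phases fixed, is Haar-random
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

n, blocks = 8, 4
dim = 2**n
us = [haar_unitary(dim) for _ in range(blocks)]  # stand-ins for "blocks of m gates"

# Ideal evolution: apply every block to |0...0> and record output probabilities
psi = np.zeros(dim, complex); psi[0] = 1.0
for u in us:
    psi = u @ psi
ideal_P = np.abs(psi)**2

# Greedy spoofer: after each block, collapse onto the single most likely
# basis state before continuing with the next block
state = np.zeros(dim, complex); state[0] = 1.0
for u in us:
    state = u @ state
    x = int(np.argmax(np.abs(state)**2))
    state = np.zeros(dim, complex); state[x] = 1.0

# dim * ideal_P[x] > 1 would mean the spoofed string beats a uniform guess
print(dim * ideal_P[x])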

Comment #41 October 23rd, 2019 at 3:31 pm

Scott,

Two questions, which hopefully aren’t too off topic (if so feel free to leave this comment in the moderation queue). You mentioned that you were one of the reviewers for the Nature paper. Reviewers are generally anonymous, but reviewers often (as in your case) choose to be open about who they were. The logic for this being ok is that the primary point of anonymity is to protect the reviewer and so they should be free to do so. However, some people see it as a stronger ethical injunction and think that reviewers should not in general de-anonymize themselves. Can you talk about what made you decide to de-anonymize in this situation and what if any your general thoughts and approaches are on this sort of issue?

Second, regarding your paper with Gunn (it may be dangerous to ask questions about a paper that isn’t yet available): does that sort of result lead to any general conjecture? Something morally like “spoofing any benchmark which can distinguish whether one is non-trivially estimating genuinely random circuits is no easier than the general problem of non-trivially estimating amplitudes in random quantum circuits,” or something like that? I unfortunately don’t have a rigorous formulation of this statement.

Comment #42 October 23rd, 2019 at 3:34 pm

‘Spoof’ seems like a hugely hostile term for what appears to amount to running a different algorithm to solve the same problem.

“Spoof. A satirical imitation; a parody or send-up. A deception or ruse.”

Comment #43 October 23rd, 2019 at 3:40 pm

Scott #39: Nature took up the policy of naming reviewers (if they want) some time ago: https://www.nature.com/articles/d41586-019-01162-1 Maybe they pushed harder than usual in this case though.

Comment #44 October 23rd, 2019 at 3:51 pm

Is there an actual technological breakthrough with the qubits Google has used, e.g. do they open a novel path for scaling?

If not, this all boils down to a “the coolest tech” marketing pissing contest between IBM and Google, bickering over time constants, which almost no-one will remember in 2 years.

Not because whatever they both realized isn’t currently impressive, but because, by nature, computer tech is almost instantly obsolete, irrelevant, and happily forgotten.

Seriously, they could just as well have been using a fraction of 53 qubits and compared it against a ZX Spectrum 48K from 1982…

Comment #45 October 23rd, 2019 at 3:54 pm

On a different topic, the long supplementary paper seems to stop short of claiming that a poly-time classical simulation (of their benchmark or stronger) would collapse P^PP (or etc.) to AM. To claim this seems to need bridging either:

(i) 2^{-linear(n)} versus 2^{-poly(n)} simulation of random circuits (cf. page 62, left column), or

(ii) a stronger average-case/worst-case equivalence (cf. recent cited papers).

But their AM-TIME result certainly intends something stronger than your “bold” QUATH as the complexity-theoretic basis. Is there a source to clarify this?

Comment #46 October 23rd, 2019 at 3:56 pm

Hi Scott,

Thanks for your blog! I have a question regarding Google’s paper which you seem not to have addressed. The fidelity of their output is 0.2%. I’m not sure what the corresponding total variation distance is for that fidelity, but I think it implies quite a large sampling error. So my question is: with such a big sampling error, is it still reasonable to claim you are sampling from a distribution which was proved to be hard?

If you look at Bremner et al’s IQP paper, they chose the error to be 1/192. And in their description of a quantum supremacy demonstration, one of the conditions is “if the total effect of all errors can be demonstrated to remain consistently below 1/192 (in L1 distance) even as the complexity parameter increases…”. Of course it doesn’t need to be 1/192, but it has to be small, and clearly this is not the case for Google.

Comment #47 October 23rd, 2019 at 4:19 pm

So does Google’s supremacy experiment rest on complexity-theoretic assumptions as strong as the BosonSampling assumptions (namely, that otherwise P^#P would collapse to the third level, or something weaker)?

Comment #48 October 23rd, 2019 at 4:28 pm

SMBC reminded us (http://www.smbc-comics.com/comic/mathematicians) that there have been multiple advances in the asymptotic time complexity of matrix multiplication since your 2011 blog post (https://www.scottaaronson.com/blog/?p=839). Do you want to tell us anything about that?

Comment #49 October 23rd, 2019 at 4:31 pm

Scott #38

Thanks!

One obvious thing to remember is that the IBM system is truly general/programmable; it wasn’t created on purpose to annoy Google! In that respect QCs still have a long way to go.

Take those 250 freakin’ petabytes of storage – if one could use one atom per bit then a single grain of salt would be enough to store it all 😛

Comment #50 October 23rd, 2019 at 4:35 pm

Is it too early to ask how many qubits would be necessary to find the order of a binary quadratic form with a 1024-bit discriminant?

Comment #51 October 23rd, 2019 at 4:43 pm

Minor historical note re “If Kasparov had narrowly beaten Deep Blue in 1997, rather than narrowly losing, the whole public narrative would likely have been different (‘humanity triumphs over computers after all!’)”: it’s worth mentioning that in fact, Kasparov did (fairly) narrowly beat Deep Blue in their first match in 1996. I’m not sure what the public narrative was, since search results turn up vastly more coverage of their second 1997 match (which Deep Blue won).

Comment #52 October 23rd, 2019 at 4:46 pm

Wow, that new result that you’re talking about in point 3 about spoofing the indirect verification is really good news. I’ll be interested to see the details.

Comment #53 October 23rd, 2019 at 4:56 pm

They’re just using disk storage to extend the supercomputer’s ability to run the memory-intensive algorithm, which is much faster.

Other than having to adjust to the slower storage medium, I don’t think there’s anything substantial in it from a theoretical perspective. They’re just making use of the full capabilities of their system.

Nowadays it’s pretty common to have cloud machines without locally attached storage (it’s RAM and NAS), so I wouldn’t be surprised if disk was simply ignored.

For example, the paper that was discussed in a previous post, which placed the cost of simulating a 56-qubit QC (using CZ gates rather than iSWAP) at about $35K, focused exclusively on RAM. There’s no mention of disk or tape storage.

The QC has basically knocked the supercomputer off of primary and into secondary storage, and neither one is sustainable. With a few more qubits you could NAS the whole internet and it wouldn’t help very much.

Comment #54 October 23rd, 2019 at 6:09 pm

“And right now, with fault-tolerance still having been demonstrated in zero platforms, we need all the viable pathways we can get.”

This remark in your update #2 made me realize that quantum supremacy is still a useful thing to ask about even though it has been demonstrated once already. From here on it’s no longer a question about whether reality satisfies the Effective Church-Turing Thesis, but rather a benchmark for whether specific approaches to quantum computing are viable. I imagine that as these approaches race to achieve scalable quantum computing, novel approaches will aim to achieve quantum supremacy first.

Comment #55 October 23rd, 2019 at 6:12 pm

Wondering about the comments from Chris Monroe suggesting that they achieved quantum supremacy with their 53-ion experiment a while ago.

Comment #56 October 23rd, 2019 at 6:27 pm

Hi Scott,

In Sec.XI of the supplementary material of Google paper, they give “complexity theoretical justifications” of their supremacy.

I do not think the arguments there support their supremacy claim, for the following two reasons.

(1) As they write, their fidelity is not enough to use the existing additive-error sampling results (like Bouland et al Nature Phys.). If they could show that the existing results can be used, it would be a very strong support.

(2) They assume a specific noise model, and they show that exact classical sampling of the noisy distribution implies #P \subseteq AM; but what their quantum machine actually samples is an approximate version of the distribution they assume, and therefore the hardness of approximate, not exact, classical sampling should be shown.

In addition, in the second-to-last paragraph of Sec. XIC, they claim that the hardness of classical sampling can be based on SETH, but this is not clear to me. If you use Theorem 1, isn’t what you rely on not SETH but something like “#P is not in AMTIME[2^{(1-o(1))n}]”?

I also feel it is strange that the BosonSampling paper and the Bremner-Montanaro-Shepherd paper are treated as additive-error sampling results, while the Bouland et al. Nature Phys. paper is treated as if its result were for exact, not additive-error, sampling.

Finally, they say AM is in the 3rd level of PH, but I think it is in the 2nd level. Also, the text “there is no sub-exponential time algorithm for this task (unless #P collapses to P)” is not precise.

Thank you very much for your time for reading this comment.

Comment #57 October 23rd, 2019 at 6:49 pm

Scott #11: Yes, but a QC with 70 qubits does not yet exist. Sure, one can have a (high) degree of confidence that the construction of such a thing is “just” a matter of (brilliant) technical physics/engineering, but a possible “loophole” is that there’s a fundamental challenge which makes it impossible … “just” that that challenge has not yet been discovered.

Comment #58 October 23rd, 2019 at 6:51 pm

It seems a bit unnatural to cast this as a Google vs IBM battle a la Kasparov vs Deep Blue when in fact, IBM is trying exactly the same thing as Google with their quantum computer!

Comment #59 October 23rd, 2019 at 7:06 pm

Ryan O’Donnell #30:

Regarding question 2, does the integrated histogram of FIG. S12(a) on page 13 of the Supplementary Information help? Eyeballing it, that suggests that \phi was between ~21 and ~33 degrees while \theta was between ~83 and ~95 degrees.

Comment #60 October 23rd, 2019 at 7:32 pm

Jean Tate #3:

So, what are the remaining (not yet fully addressed) loopholes, subtleties, qualifications, limitations, etc?

One is the private nature of “John Martinis explicitly confirmed for me …”; no one can yet independently verify this.

Actually, John Martinis explained that this is all in their published papers. I just needed him to help me navigate those papers. 😀

Comment #61 October 23rd, 2019 at 7:35 pm

tas #15:

I’d be more interested in knowing, say, the energy consumption or the monetary cost of these algorithms. Do we have such figures? For example, how many kWh of electricity would Summit consume and how much energy is required to cool Sycamore to operating temperature.

I don’t know the answers to these questions—does anyone else (even ballpark)?

Comment #62 October 23rd, 2019 at 7:42 pm

dorothy #17:

Surely the whole point of Google’s experiment is the suggestion that it is significant stepping stone to a full programmable quantum computer. So the main question is, is it?

It seems kind of obvious that it is. As we discussed in the previous thread, the central feature of a device like this one, which separates it from any old hard-to-simulate quantum system in nature, is its programmability. Even more to the point (I’d say), you can feed the device an arbitrary instance of a mathematically well-defined problem (not just “simulate yourself”), and then independently judge whether it succeeds or fails on that instance. And assuming that it succeeds, you can study the computational complexity of the problem being solved on its own terms, abstracting away from the idiosyncrasies of the hardware.

Anyway, we’ve been over this ground many times before, and I didn’t feel like retreading it.

Comment #63 October 23rd, 2019 at 7:47 pm

matt #20:

So, it takes 0.02 seconds for Sycamore to get 1 true sample from D (combined with 499 false samples). It takes 2.5 days for Summit to get “all the samples” from D, i.e., 2^{53} amplitudes. By that token, the “Summit speedup” is 2^{53} * (0.02 seconds / 2.5 days), or about 10^{9}.

Of course, it depends what you want, one sample or lots of them.

Crucially, there’s no claim being made that quantum supremacy has been demonstrated for all problems—or even for more than one problem. In designing a quantum supremacy demo, we get to pick the problem where the QC will do best. And so we’ll naturally design the problem to require 5 million samples (or more generally: just enough to verify statistically what’s being done), not 5 trillion or 5 quadrillion samples.

Comment #64 October 23rd, 2019 at 7:59 pm

Imagine if IBM have accidentally discovered the approximate cutoff where both classical and quantum computers struggle wrt physics!

I mean isn’t the weak point of their paper the fact that moving from 53 to 54, 55… qubits has a very significant impact on the classical computation?

Comment #65 October 23rd, 2019 at 8:00 pm

Shruti Puri #33:

Is the statement that the time for the classical computer to sample from the noisy quantum computation is simply ~ ε^{2} * 2.5 days? I would have assumed that noise would simplify the calculations even more than this.

No, that’s not the statement at all. The statement is that the quantum computer needs to return ~1/ε^{2} samples before you can even distinguish its outputs from the uniform distribution (and more specifically, verify the outputs using a benchmark like Linear Cross-Entropy).

If you use an approach like IBM’s, then the classical time is going to be 2.5 days (or asymptotically, ~2^{n}), pretty much regardless of how many samples you want, so long as you want ~1/ε^{2} samples or more—in other words, sufficiently many samples that they really can be distinguished from the uniform distribution, and you do need to run the full simulation to generate the lot of them.

My result with Sam Gunn goes some way toward justifying this picture, in that we can start with an assumed fast classical algorithm for spoofing the Linear Cross-Entropy benchmark “merely” as well as an extremely noisy quantum circuit could do so, and end up with a fast classical algorithm for estimating amplitudes, slightly better than the trivial estimator, even for an ideal quantum circuit.
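The ~1/ε^{2} scaling can be checked numerically. Model the noisy device as "with probability ε output a sample from the ideal distribution, otherwise output a uniform string": its F_XEB concentrates around ε, while the standard error of the F_XEB estimate over k samples falls off like 1/sqrt(k), so k must reach well past 1/ε^{2} before the signal separates from zero. A toy sketch with a Porter-Thomas stand-in distribution (illustration only, with ε of the same order as the experiment's fidelity):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 12
dim = 2**n

# Porter-Thomas-like ideal distribution from a random Gaussian state
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)
P = np.abs(psi)**2

eps = 0.002  # fidelity, same order as the experiment's ~0.2%

def noisy_samples(k):
    # with probability eps draw from the ideal distribution, else uniform
    hits = rng.random(k) < eps
    out = rng.integers(dim, size=k)
    out[hits] = rng.choice(dim, size=int(hits.sum()), p=P)
    return out

def f_xeb(samples):
    # linear cross-entropy benchmark against the ideal PMF
    return dim * P[samples].mean() - 1

# Signal ~ eps = 0.002; standard error ~ 1/sqrt(k). At k = 1/eps^2 = 250,000
# they are comparable, so take k comfortably larger to resolve the signal.
k = int(40 / eps**2)  # 10^7 samples, standard error ~3e-4
val = f_xeb(noisy_samples(k))
print(val)  # close to eps, up to statistical noise
```

With k much below 1/ε^{2}, the same estimate is statistically indistinguishable from zero, which is the point of the penalty factor.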

Comment #66 October 23rd, 2019 at 8:12 pm

[…] Scott Aaronson gets into technical details of the disagreement between Google and IBM. He explains that the supercomputer in question is Oak Ridge National Labs’ Summit machine. It covers the area of two basketball courts and has 250 petabytes of disk storage. By exploiting its enormous disk capacity, Summit could simulate Google’s quantum calculation on classical hardware in “only” two and half days. In a nutshell, it seems Google didn’t consider that you could trade off a gargantuan amount of storage for processor power. But as Aaronson points out, if Google’s machine added just a couple more qubits, even Summit couldn’t keep up on this particular problem. […]

Comment #67 October 23rd, 2019 at 8:51 pm

Scott #60: Thanks for the clarification. My turn: I tried to give two, pretty concrete, examples at ~opposite ends, one trivially easy (even trivial), the other likely many years of very hard work, by a possibly (very) large team (not to mention the hard work of the referees of, likely, several papers ;-)).

Comment #68 October 23rd, 2019 at 9:02 pm

[…] responded with an estimate of 2.5 days, not 10,000 years. Scott Aaronson, an early reviewer of Google’s paper, called that […]

Comment #69 October 23rd, 2019 at 10:06 pm

Custom hardware, designed specifically for solving particular problems, would be much faster than the general-purpose computers that Google (and IBM) assumed in their classical estimates. An even higher speedup over conventional classical computers would be expected from more specialized (and more exotic) analog computing circuits running below 0.2 K, which must also be carefully calibrated, similar to Google’s chip. So Google’s comparison of runtimes is very unfair. With better classical algorithms (IBM’s 2.5 days compared to Google’s 10,000 years) and more representative classical hardware, Google’s estimates for classical runtimes might easily be off by 10-12 orders of magnitude. It would be more convincing to show lower bounds for classical computers for this specific task.

Comment #70 October 23rd, 2019 at 10:37 pm

Could you possibly address #47? What in complexity theory gives if classical computers can do whatever Google does with only a constant-factor (possibly $BB(10^9)$) slowdown?

Comment #71 October 23rd, 2019 at 10:55 pm

[…] Scott Aaronson gets into technical details of the disagreement between Google and IBM. He explains that the supercomputer in question is Oak Ridge National Labs’ Summit machine. It covers the area of two basketball courts and has 250 petabytes of disk storage. By exploiting its enormous disk capacity, Summit could simulate Google’s quantum calculation on classical hardware in “only” two and half days. In a nutshell, it seems Google didn’t consider that you could trade off a gargantuan amount of storage for processor power. But as Aaronson points out, if Google’s machine added just a couple more qubits, even Summit couldn’t keep up on this particular problem. […]

Comment #72 October 23rd, 2019 at 11:57 pm

Since supremacy should be confined to its own territory, I was wondering whether there are known algorithms or applications that could claim classical supremacy, i.e. tasks that a QC practically cannot perform. If the media hype around quantum supremacy becomes a problem, then classical supremacy might be a good way to balance the public’s misled expectations.

Comment #73 October 24th, 2019 at 12:55 am

Whilst it may be fair to say Google won this race, they weren’t exactly racing against anyone. IBM, IonQ, etc. never seemed much interested in this supremacy business (at least to me), despite having roughly comparable hardware capabilities. Also, it’s been public knowledge that Google planned to do something like this for at least a couple of years now, so they weren’t caught by surprise or anything. I wonder whether that was a scientific decision or just a business one?

Comment #74 October 24th, 2019 at 1:53 am

What a wonderful time to be alive. A very well deserved achievement by the Superconducting Qubit community.

One thing I am curious about, and not sure if you can answer, is that when reading the competing-interests statement in the paper, it says there are none. This feels somehow wrong, with a company backing them that obviously profits from this achievement and the news generated by it (see the Nature guidelines: https://www.nature.com/nature-research/editorial-policies/competing-interests)

Is this something that is not considered or just depends on the field of the scientific tradition?

Comment #75 October 24th, 2019 at 2:25 am

[…] computational complexity, the brilliant Scott Aaronson: “Quantum supremacy: the gloves are off,” Shtetl-Optimized, 23 Oct 2019. Although IBM wants to talk about “quantum advantage” instead of “suprema […]

Comment #76 October 24th, 2019 at 2:45 am

[…] Scott Aaronson, a peer reviewer of the Nature paper and long-standing quantum supremacy expert, highlights the underlying significance of Google’s achievement: “we’re comparing ~5×10^9 […]

Comment #77 October 24th, 2019 at 3:45 am

Tomiyuki Morimae #56: your points (1)-(2) are interesting ones that I also wondered about. From my point of view, the purpose of including these theoretical arguments in the paper is that they provide some evidence to rule out certain classes of classical simulation algorithms in the low-fidelity regime. For example, one might imagine that it could be easy to exactly simulate a low-fidelity approximation to the quantum circuit output distribution, perhaps using tensor network methods. But the paper’s argument rules this out (subject to complexity theory assumptions), even if we know that the low-fidelity approximation comes from simple depolarising noise. (By the way, from discussions with Mick Bremner and Dan Shepherd about this, it seemed like it should be possible to replace the argument used here with a somewhat more standard one based on Stockmeyer’s theorem.)

Of course it would be nicer to actually have an argument (or even a proof!) ruling out approximate simulations up to small additive error in this low-fidelity regime.

Comment #78 October 24th, 2019 at 4:53 am

“I’m a theoretical computer scientist, and this is what I believe: that after the constant factors have come and gone, what remains are asymptotic growth rates.”

But isn’t it right that their fidelity degrades exponentially in the number of qubits (see Fig 4 of the paper), and so since we agree that it is appropriate to apply a 1/eps^2 penalty, their apparatus also has a cost which is exponential in circuit size?

(Although to be fair since they are winning by so much at 53 qubits there will certainly be a sweet-spot where their experiment is feasible and IBM’s simulation isn’t. Another remark is that from eyeballing the graph it seems that their fidelity is going like roughly 2^{-n/8}, implying a cost of something like 2^{n/4}, whereas I guess the simulation is 2^n. But I don’t understand the fidelity measure well enough to say that these constant factors are correct)

Comment #80 October 24th, 2019 at 5:59 am

I’m still not clear on the manner in which the Google setup is programmable. Are they affecting gates in the way you would switch a classical logical gate from an OR to an AND?

And is this ever running in a regime where all the qubits are entangled at once?

Comment #81 October 24th, 2019 at 6:22 am

CTH #72, “I was wondering if there are known algorithms or applications which can claim classical supremacy, i.e. QC practically cannot perform”

No. Anything a classical computer can do efficiently, a quantum computer can also do efficiently. More precisely, BPP is contained in BQP.

Comment #82 October 24th, 2019 at 6:51 am

[…] Quantum computing expert Scott Aaronson believes Google’s quantum processor is indeed a major breakthrough. He points out that Google’s estimate failed to consider that IBM’s supercomputer Summit has an astonishing amount of storage, just enough to store the entire quantum state vector of all 53 qubits on its 250 petabytes of hard disk. […]

Comment #83 October 24th, 2019 at 7:36 am

CTH #72: If by “classical supremacy” you mean a problem that “QC cannot practically perform” using *today’s technology*, then that’s easy – literally every computational task other than random circuit sampling! If you mean “a problem for which QC cannot give a significant speedup, even in principle,” then there are very strong theoretical conjectures that (say) any NP-hard problem (of which there are many) falls into this category (depending on exactly how you define “significant”). If you mean “a problem where a QC necessarily performs *worse* than a classical computer, even in principle,” then this is impossible, because (reversible) classical computers are special cases of quantum computers.

David #73: If you search for “race for quantum supremacy” you’ll get lots of hits referencing a race between Google, IBM, and Alibaba, so at the very least there was a race in the popular narrative (if not in reality). It’s also conceivable that there would have been more of a real race if people hadn’t perceived Google to already be so far ahead.

fred #80: Google claims that they could have set up any circuit they wanted using gates drawn from their chosen gate set (up to the subtleties described above regarding the near-iswap gates, which needed to be chosen based on the specific qubits they were acting on).

Comment #84 October 24th, 2019 at 8:41 am

Scott, I’m unhappy about the answers you gave me in the previous post about the strong versus weak simulation stuff.

First of all, now it seems that the fastest weak simulation of the Sycamore circuit is actually a strong simulation, the brute-force Schrödinger approach proposed by IBM.

Secondly, even the Schrödinger-Feynman simulation Google used is a strong simulation at its heart: surely, it first does some smart sampling, but then it actually computes the amplitude of the sampled bitstrings.

The crucial point is that in both cases the classical simulation is doing something that is hard even for a quantum computer to do! I find this very disturbing. Can it really be that the optimal classical algorithm needs to do something that a quantum algorithm cannot do?

Comment #85 October 24th, 2019 at 8:53 am

tas #15, Scott #61: I find this a very interesting point. It is arguable whether it is fair to compare the time a single quantum chip takes against the time a hundred million GPUs running in parallel take. Maybe one should multiply the classical time by a hundred million?

The memory cost can also be argued. Sure, the quantum chip uses 53 qubits (plus some classical bits for the controller), whereas the classical simulation needs more than 10^16 bits, but can we really compare a quantum bit to a classical bit?

Energy, on the other hand, is a very solid quantity to compare. In the supplemental material of their paper, Google estimates the energy cost of one run of Sycamore to be around 1 kWh, whereas the partial simulation done by Summit (with depth 12) cost around 8,000 kWh. That’s brutal.

Keep in mind that depth 12 does not come even close to depth 20, given that the complexity of the classical simulation (with Schrödinger-Feynman) is roughly exponential in depth. Of course, much more interesting will be the energy consumption of the IBM simulation of the whole circuit, which I’m sure will also be in the thousands of kWh.

Comment #86 October 24th, 2019 at 9:09 am

“… has 250 petabytes of hard disk space—one could just barely store the entire quantum state vector of Google’s 53-qubit Sycamore chip in hard disk.”

Serious question: how long would just loading 250 petabytes of data onto hard disk take? I am not familiar with loading protocols and specs for Summit; my guess would be that in theory it can be pretty fast, but in practice partitioning the data and loading it up would take longer than the 2.5 days of computation.
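A rough back-of-envelope answer to the question above, assuming an aggregate sustained file-system bandwidth of about 2.5 TB/s for Summit’s storage (that bandwidth figure is my assumption for illustration, not taken from either paper):

```python
# How long to stream 250 PB to or from disk at an assumed sustained
# aggregate bandwidth of 2.5 TB/s? (Both numbers are rough.)
data_bytes = 250e15      # 250 petabytes
bandwidth = 2.5e12       # bytes per second (assumed)
hours = data_bytes / bandwidth / 3600
print(f"~{hours:.0f} hours at full sustained bandwidth")
```

Under that assumption the raw I/O alone is on the order of a day, i.e. comparable to, though not necessarily longer than, the quoted 2.5 days of computation.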

Comment #87 October 24th, 2019 at 9:25 am

Just to see if I understand the process correctly:

A quantum circuit is randomly arranged and presented to the Google quantum computer. Using |0> states as inputs, it outputs a binary string. It does this repeatedly and creates a histogram of those strings.

In order to check if the binary string samples that are outputted by the QC are drawn from the right distribution, a classical computer runs a quantum simulation (but much more “slowly”). If the two distributions match (statistically), then the QC gets the seal of approval that it did its job. The fact that it did it much more quickly because it’s a QC comes along naturally, and means that the process shows there was quantum supremacy. My understanding is that the classical checking process for this has not yet occurred.

My questions:

– Is IBM planning on doing their classical supercomputer run using the very same random circuit Google used?

– If so, does that mean that in an ironic twist they would essentially act as the very checking process Google needs to claim quantum supremacy?
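For what it’s worth, the statistical check described above (Google’s linear cross-entropy benchmark) is easy to sketch in code. This toy version uses a made-up Porter-Thomas-like distribution in place of a real circuit simulation; the qubit count, sample sizes, and names are all illustrative assumptions, not values from the paper:

```python
import random

random.seed(42)
n = 12
dim = 2 ** n

# Toy stand-in for the ideal output distribution of a random circuit:
# squared amplitudes of a random complex Gaussian vector (Porter-Thomas-like).
amps = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(dim)]
norm = sum(abs(a) ** 2 for a in amps)
p_ideal = [abs(a) ** 2 / norm for a in amps]

def xeb_fidelity(samples, p):
    """Linear cross-entropy benchmark: F = 2^n * mean(p(x_i)) - 1."""
    return len(p) * sum(p[x] for x in samples) / len(samples) - 1.0

ideal_samples = random.choices(range(dim), weights=p_ideal, k=100_000)
noise_samples = [random.randrange(dim) for _ in range(100_000)]

# F should come out close to 1 for a noiseless device and close to 0 for
# uniform noise; Google's reported ~0.002 sits just above the noise floor.
print(f"ideal device: F = {xeb_fidelity(ideal_samples, p_ideal):.2f}")
print(f"pure noise:   F = {xeb_fidelity(noise_samples, p_ideal):.2f}")
```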

Comment #88 October 24th, 2019 at 9:35 am

Hi again Scott,

Ignore my question #25. It turned out to be quite trivial, just some slightly unfortunate / misleading language in Supplementary information IV.C.

For anyone interested I wrote up an explanation here: https://quantumcomputing.stackexchange.com/a/8565/8704

Cheers,

Björn

Comment #89 October 24th, 2019 at 9:58 am

[…] The researcher Scott Aaronson compares this milestone not with the arrival on the Moon, which was achieved, period, but with the eradication […]

Comment #90 October 24th, 2019 at 10:23 am

[…] The researcher Scott Aaronson compares this milestone not with the arrival on the Moon, which was achieved, period, but with the […]

Comment #91 October 24th, 2019 at 10:45 am

Hi Scott. Greetings from England! Thank you for the original “Supremacy FAQ” and your current blog. Both save me enormous amounts of time, since I simply copy and paste the link to your site when, for the umpteenth time, I am asked to comment about the recent Google Sycamore project 😊

However, I can’t help but bring to your attention that the key real-world application of the experiments Google has completed, which is the generation of certifiably quantum entropy, is in fact something that has already been done with a device built to instantiate innovations expressed over the years in many papers on noise-tolerant randomness amplification (an example of a great paper is here: https://arxiv.org/abs/1310.4544 ). Ultimately I think what will really matter is when quantum devices can easily do something practical and repeatable that classical computers simply cannot, and in this context the Ironbridge device, which was constructed to experimentally prove previous theoretical work on device-independent ways of generating quantum randomness, is now living proof of quantum supremacy with a genuine and much-needed real-world application.

I don’t often contribute to your blog-page, but since I am here, I may as well put my own hat in the ring on the Google-IBM spat. As you know, the company where I work, Cambridge Quantum Computing (which built the Ironbridge device), is a partner with both Google and IBM, and we have benefitted from both of them immensely. I think the IBM commentary shows that perhaps the Google champagne has to be put on ice just a wee while longer, but that the flow of experiments showing that quantum computing has entered a whole new exciting phase will continue to come along at a rapid pace. In this context I like the spirit of Google’s work and their ambition. Ironbridge and Project Sycamore are just two examples of a great many experimental steps that together will prove “quantum supremacy.”
Thanks again for the amazing work you do and for the way you present it.

Comment #92 October 24th, 2019 at 10:49 am

Thanks for your post, Scott, I appreciate the time you spend on this blog.

Does anyone know how to actually download the 2GB data file? The Nature paper links here: https://datadryad.org/stash/dataset/doi:10.5061/dryad.k6t1rj8 , but I get an error “You do not have the permission to download the dataset.” when I put in my email address to download it.

Comment #93 October 24th, 2019 at 10:55 am

Arch1 #29:

Is there something non-obvious to experts about the nature, relevance, feasibility, or efficacy of the space/time tradeoff approach described by IBM? If not, I’m very puzzled. I can just see an isolated team of very bright people missing an obvious tradeoff such as that in the midst of all else that they were up to – but for basically the whole world to miss it for a month has me scratching my head.

As someone who fumbled this as well, maybe I should field this one. 🙂

When I read Google’s paper back in August, and saw that they used Schrödinger-Feynman to produce their estimates for the classical simulation time, I wondered about this too: is memory really so much more limited than time? Couldn’t you just store all 2^53 amplitudes in some gargantuan memory and then do a naïve Schrödinger simulation? But then I figured, well, I’m just a theorist who sometimes knows about asymptotic scaling behavior, Google knows huge constant factors if any organization on earth does, so they must’ve costed it out and concluded that that 2^53 memory would be ridiculously prohibitive. Asking about this would only “out” myself further as a comically out-of-touch complexity theorist.

Broader lessons left as exercises for the reader…
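Since this thread keeps returning to the memory numbers, here is the bare arithmetic, assuming 8 bytes per complex amplitude (the byte count per amplitude is my assumption for illustration; the actual precision and compression choices are discussed in IBM’s paper):

```python
# Storage for a full 53-qubit state vector at an assumed 8 bytes/amplitude.
amplitudes = 2 ** 53
bytes_needed = amplitudes * 8
print(f"{bytes_needed / 1e15:.0f} PB")  # comfortably under Summit's ~250 PB of disk
```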

Comment #94 October 24th, 2019 at 11:14 am

Joshua Zelinsky #41:

Can you talk about what made you decide to de-anonymize in this situation and what if any your general thoughts and approaches are on this sort of issue?

What made me decide to do it was that Nature asked me if I’d consent to being de-anonymized and I was like yeah, sure, why not? So probably this question should be punted to them. 🙂

Regarding your paper with Gunn, which it may be dangerous to ask questions about when it isn’t yet available, but does that sort of result lead to any sort of general conjecture? Something morally like “spoofing any benchmark which can distinguish whether one is non-trivially estimating genuinely random circuits is no easier than the general problem of nontrivially estimating amplitudes in random quantum circuits” or something like that?

I don’t unfortunately have a rigorous formulation of this statement. It seems plausible to me that something in that vicinity should be true, because our reduction strategy is an extremely simple and natural one. In a nutshell, suppose there exists an efficient classical algorithm, A, to spoof your benchmark. And suppose you have a randomly-generated quantum circuit, C, for which you’d like to estimate some specific amplitude. Then first “cloak” which of the 2^n amplitudes is the one that you care about, by randomly adding NOT gates into the circuit or whatever. Next, run A on your cloaked circuit, and see whether the (now cloaked) basis state that you care about is one of the ones output by the spoofing algorithm. If yes, then congratulations! You now have strong evidence that the amplitude you care about is likely to be larger than average. Even if not, you now have extremely slight statistical evidence (but still, better than we know how to obtain otherwise) for the amplitude you care about being smaller than average. QED.

In practice, what we found is that as you vary the statistical test, you also need to vary the type of estimation that you’re assuming to be hard, in order to get this reduction strategy to work. Thus, for my and Lijie Chen’s “HOG” test, we assumed it’s hard to guess whether a given amplitude is larger or smaller than the median of all 2^n amplitudes, with non-negligible advantage over random guessing. For the linear cross-entropy test, by contrast, we assume it’s hard to estimate an amplitude with non-negligibly better variance than that of the trivial estimator.

Because of this issue, I’m not sure how one would formulate a general statement, but it’s certainly a question worth thinking about. Thanks!
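As a toy illustration of the cloaking step, here is a sketch with an omniscient stand-in for the spoofer A, which simply reads off the heaviest outputs of the distribution (a real spoofer would of course only see the circuit; everything here is an illustrative assumption):

```python
import random

random.seed(0)
n = 10
dim = 2 ** n

# Toy stand-in for a random circuit's output probabilities (Porter-Thomas-like).
amps = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(dim)]
norm = sum(abs(a) ** 2 for a in amps)
probs = [abs(a) ** 2 / norm for a in amps]

def spoofer(p, k):
    """Toy 'benchmark spoofer': outputs the k heaviest basis states of p."""
    return set(sorted(range(len(p)), key=p.__getitem__)[-k:])

target = 123                  # basis state whose amplitude we want to estimate
mask = random.randrange(dim)  # random layer of NOT gates relabels x -> x ^ mask

# The spoofer sees only the cloaked circuit; its output probabilities are a
# relabeling of the originals, so it can't tell which state we care about.
cloaked = [0.0] * dim
for x in range(dim):
    cloaked[x ^ mask] = probs[x]

hits = spoofer(cloaked, k=50)
if (target ^ mask) in hits:
    print("strong-ish evidence the target amplitude is above average")
else:
    print("very slight evidence the target amplitude is below average")
```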

Comment #95 October 24th, 2019 at 11:19 am

Onlooker #42:

‘Spoof’ seems like a hugely hostile term for what appears to amount to running a different algorithm to solve the same problem.

In CS, “spoof” is a totally accepted term for intentionally passing a verification test, in any way other than the way that the test’s designer had in mind. It carries no connotation whatsoever of the spoofer being evil, although it might carry a connotation of their being clever. 🙂

Comment #96 October 24th, 2019 at 11:25 am

fred #44:

Is there an actual technological breakthrough with the qubits Google has used, e.g. do they open a novel path for scaling?

I don’t think there’s any single breakthrough that one can point to as having made this achievement possible. Rather, it’s “just” many smaller ideas plus a huge amount of terrifyingly complicated engineering—engineering of which the Google/Martinis group is arguably the world’s leader right now (though that could of course change), and almost all of which is directly and straightforwardly relevant to the challenge of scaling up further.

Comment #97 October 24th, 2019 at 11:29 am

As someone who uses Summit and has managed to perform parallel calculations on >95% of the system simultaneously, I think IBM may have made a mistake in their paper. They estimate the performance of Summit by rescaling Cori II estimates, and compute the performance of Summit based on the available number of IBM Power 9 sockets. However, the vast majority of the computational power of Summit is in NVIDIA GPUs, not Power 9 CPUs. All the real and peak performance estimates (148.6 Pflops/s and 191 Pflops/s) are based on using all the GPUs and not just IBM CPUs. Tables 1 and 2 of the papers assign “Tensor ranks per socket”, but unless the GPUs are implicitly included I’m not sure how they achieve the quoted numbers. While they state in the paper that “there is room to potentially improve upon these estimates by leveraging the capabilities of Summit’s NVIDA[sic] GPUs” the performance numbers they quote I think *already* include the GPUs. I assume I’m missing something because it seems like a real howler of a mistake to make.

Comment #98 October 24th, 2019 at 11:56 am

HYX #46:

I have a question regarding Google’s paper which you seem not to have addressed. The fidelity of their output is 0.2%. I’m not sure what the corresponding total variation distance is for that fidelity, but I think it implies quite a large sampling error. So my question is: with such a big sampling error, is it still reasonable to claim you are sampling from a distribution that was proved to be hard?

Since some variant of this question came up many times on this thread, I added a new “technical note” to the post about it. Let me know if it clarifies things!

Comment #99 October 24th, 2019 at 12:09 pm

Ken Regan #45, A #47, Tomoyuki Morimae #56 and others: Personally, I feel that results of the form “if an efficient classical algorithm could exactly simulate this quantum process, then the polynomial hierarchy would collapse” are conceptually relevant to the current generation of sampling-based quantum supremacy experiments. Certainly, they were historically relevant in giving us the idea for such experiments in the first place. But the relevance is a lot like the relevance that NP-completeness had for the development of public-key cryptography. I.e., it’s more at an inspirational level than at a technical one.

And just like in the example of NP-completeness and public-key cryptography, this is true for multiple reasons. Firstly, what’s relevant to real-world experiments is not the hardness of exact simulation, but the hardness of approximate simulation (indeed, simulation that’s mostly noise and only an ε fraction of signal). And secondly, you can’t characterize the full distribution being sampled—you can only apply some statistical test to a relatively small number of samples, and then argue about the classical hardness of spoofing that test.

It’s for these reasons that Lijie Chen and I, in our 2017 CCC paper, advocated switching the focus to the classical hardness of spoofing the test, and made an initial step in that direction (with Sam Gunn and I having taken a small further step). But certainly, in this setting, we don’t yet know how to base classical hardness on anything remotely as secure as the non-collapse of PH. I view getting better theoretical evidence for the classical hardness of spoofing as probably the central theoretical open problem in this area. (Along with the problem of designing polynomial-time verification procedures, for supremacy experiments that can be run on near-term devices.)

Comment #100 October 24th, 2019 at 12:15 pm

fred #49:

One obvious thing to remember is that the IBM system is truly general/programmable, it wasn’t created on purpose to annoy Google! In that aspect QC still have a long way to go.

I honestly don’t understand why so many people insist on describing Sycamore as “not truly general/programmable.” Like, obviously it’s severely limited in both the number of qubits (53) and the circuit depth (~20). But every real-world computer has some finite limitations of time and memory. And there’s no further relevant restriction here on the allowed gates or whatever: as the number of qubits and the circuit depth went to infinity, Sycamore would converge to being a universal quantum computer.

Comment #101 October 24th, 2019 at 12:18 pm

Bram Cohen #50:

Is it too early to ask how many qubits would be necessary to find the order of a binary quadratic form with a 1024-bit discriminant?

Probably. It all depends on the required overhead for fault-tolerance, which in turn depends both on how much the hardware improves and on how much the fault-tolerance protocols improve. Current estimates are in the millions of qubits, but that could change.

Comment #102 October 24th, 2019 at 12:25 pm

Bert #55:

Wondering about the comments from Chris Monroe suggesting they achieved quantum supremacy with their 53 ion experiment a while ago.

Well, under certain definitions of “quantum supremacy” they probably did! But my understanding is that their device was far from being fully programmable in the sense that Sycamore is. (And also, that they never tried to lay out the evidence for supremacy in a way that theoretical computer scientists could evaluate.) But if anyone understands their 53-ion experiments in more detail, let me strongly encourage them to chime in!

Comment #103 October 24th, 2019 at 12:34 pm

James Gallagher #64:

isn’t the weak point of their paper the fact that moving from 53 to 54, 55… qubits has a very significant impact on the classical computation?

I mean, sure, but only in the sense that you could take any experimental QC paper involving n qubits, and ask “but isn’t the weak point that they didn’t do n+1?” 🙂

There’s nothing magical about the numbers 53, 54, 55—the cost of classical simulation simply gets exponentially more expensive with every qubit that you add, until it exceeds the amount of money that anyone in our current civilization is willing to spend to prove that they could do it. And the broader picture is that our civilization is now finally at that transition point, even if we haven’t yet firmly crossed to the other side.
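To put some flesh on the “exponentially more expensive with every qubit” point, here is a toy tabulation of state-vector sizes, assuming 8 bytes per amplitude (a precision assumption made purely for illustration):

```python
# Each extra qubit doubles the classical state-vector size; a couple more
# qubits push past Summit's ~250 PB of disk. 8 bytes/amplitude assumed.
for q in range(53, 58):
    pb = (2 ** q) * 8 / 1e15
    print(f"{q} qubits: {pb:,.0f} PB")
```

So 53 qubits needs roughly 72 PB, 54 about 144 PB, and 55 already exceeds the quoted 250 PB of disk.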

Comment #104 October 24th, 2019 at 12:38 pm

DBS #69:

It would be more convincing to show lower bounds for classical computers for this specific task.

Alas, as I explained in my original supremacy FAQ, computational complexity theory is unbelievably far from being able to prove unconditional lower bounds of that kind. And even if we could prove asymptotic statements, it would probably be a huge further endeavor to extract concrete numbers for particular input lengths. The best we can do, when we’re lucky, is to prove conditional hardness results (along with other evidence, like lower bounds in the black-box model and lower bounds against specific classes of algorithms).

Comment #105 October 24th, 2019 at 1:13 pm

I have a question which seems to me crucial for someday getting a really useful quantum computer based on Google’s qubits.

Can the Google Sycamore processor scale in principle?

I mean,

a) can the qubits be made significantly smaller by a silicon shrink process?

b) can qubits on one silicon die be entangled with qubits on another silicon die?

Comment #106 October 24th, 2019 at 1:13 pm

You could argue that protein folding also converges to being universal as the “circuit depth” goes to infinity. It would have to be a pretty constrained set of proteins that don’t approach universality after an arbitrary number of operations. 🙂

How about this as a metric for programmability. We consider all possible circuits with the same width and depth as the supremacy circuits, using the closest standard gate set and the same qubit connectivity (that way we focus on gate set).

Then, we transform these circuits to use the available gates, and estimate how many of them actually fit in the QC.

The resulting percentage reflects the programmability of the system. A computer based on protein folding would have a really low score (despite approaching universality at large depths). How does the QC score on this?

I’m wondering how the varying 2-qubit gates impact the process of embedding a standard circuit into the QC. What’s a good ballpark score, 50%? It’s a genuine question.

If it’s too low (1% or something) then it’s like an otherwise fine QC is being turned into a protein folder for the purpose of the experiment. Especially considering that a QC using standard gates would have a score of 100%.

Comment #107 October 24th, 2019 at 1:18 pm

First, Scott — thanks for your exposition on all of this; it has been illuminating. But I have a question that I don’t think I’ve seen addressed, concerning the low fidelity.

I agree with the discussion that if the distribution is (1-e)*U + e*D (where U = uniform and D = “desired” and e<<1), then we can extract D to whatever precision we like with a number of samples that is polynomial in 1/e. But how do we know the “noise” is really a simple uniform distribution? (For all I know it could even depend on the circuit instance in some way.) Is there some way to estimate/subtract this “background”? [Maybe this is addressed more properly in the “spoofing” preprint you mention.]


Comment #110 October 24th, 2019 at 1:35 pm

My understanding is that six of the 20 photons from the linear optics experiment you mention are lost to the environment. If that’s correct, I’d say that 14 seems a fairer number to quote, especially as the exact classical algorithm for the same problem involves computing permanents of (no larger than) 14×14 matrices.

Comment #111 October 24th, 2019 at 1:42 pm

I reacted to your quote of Greg Kuperberg on IBM’s counterpoint: “I’m not entirely sure how embarrassed Google should feel that they overlooked this. I’m sure that they would have been happier to anticipate it, and happier still if they had put more qubits on their chip to defeat it.”

I’m not an expert in QC, but I think they did anticipate it. In their 2018 Science paper (https://arxiv.org/pdf/1709.06678.pdf) I believe they talk about what size of Hilbert-space corresponds to what can be simulated on a classical machine in Section IV of the supplementary materials (page 7 on the PDF above). In particular, they estimate that a 1-petabyte supercomputer would have the storage necessary to simulate a 47 qubit machine, so a >47 qubit machine is the real starting point for demonstrating supremacy.

At least, that’s what I think they’re saying. I don’t have the depth of knowledge to put what they’re saying there in the context of this recent paper.

Comment #112 October 24th, 2019 at 1:43 pm

Scott #93:

Thanks for your answer to my #29, Scott. Unrelated question: you said

“Instead, Gil—if we’re talking not about ‘steelmanning’ his beliefs, but about what he himself actually said—has taken the position that the Google experiment must’ve been done wrong and will need to be retracted.”

…which raises the question: if you *were* talking about “steelmanning” Gil’s beliefs, what would you say?

Comment #113 October 24th, 2019 at 1:50 pm

Mateus #85, tas #15, Scott #61:

Perhaps energy usage should be part of the headline? My initial impression was that there is no way for this technology to spread if it requires cooling to milli-Kelvin temperatures with liquid helium. However, as noted in the Supplementary Information, Section X.H, they estimate that one 200-second run of the experiment used only about 1 kWh of energy (i.e. ~13 cents of electricity). And the cooling method apparently does not require any liquid helium or liquid nitrogen; it’s “just” a “cryogen-free dilution refrigerator.” Available soon at your nearest supermarket?

Comment #114 October 24th, 2019 at 2:07 pm

[…] minutes versus 2.5 days is still a quantum speedup by a factor of 1200,” he wrote in a blog post, discussing the […]

Comment #115 October 24th, 2019 at 2:12 pm

Minor unrelated comment: The link to the Nature paper in this previous post includes the UT proxy stuff and is broken. No need to approve this comment — just a heads-up.

Comment #116 October 24th, 2019 at 2:14 pm

Scott #62: Even more to the point (I’d say), you can feed the device an arbitrary instance of a mathematically well-defined problem (not just “simulate yourself”)

If the chip can’t simulate another chip that was fabricated by the same process, it can only simulate itself. Being reconfigurable doesn’t change that fact.

Scott #100: And there’s no further relevant restriction here on the allowed gates or whatever

But the allowed two-qubit gates are restricted to one specific calibrated combination of CZ and iSWAP.

What is the difference between the classical simulations in arXiv:1807.10749 (Markov et al.) where similarly complex quantum circuits were simulated for $35k, and the Nature result? The arXiv paper discusses CZ gates for 56 qubits at a depth of 40, and states that they could simulate half the depth with iSWAP. The Nature paper describes 53 qubits at a depth of 20 with approximate iSWAP gates.

Comment #117 October 24th, 2019 at 2:35 pm

Scott,

your comment is as always very insightful in many dimensions.

The wording of your praise of IBM, however, seems to be a bit of an insult in disguise, no? IBM is not the dinosaur, stubbornly clinging to classical computing and defending it with its teeth, but the pioneer of quantum computing. They did lose this round to Google – and Google’s work is truly awesome – but that should not diminish IBM’s contribution to the field of quantum computing.

(by contrast, Garry’s contribution to AI is basically non-existent, and he certainly was not a pioneer or even an early proponent in that field)

Thanks for considering

Comment #118 October 24th, 2019 at 2:43 pm

A #70:

What in complexity theory gives in if classical computers do whatever google does in a constant factor (possibly $BB(10^9)$) slowdown?

Nothing does—except for complexity theory’s ever-present background assumption, that the constant factors are not so utterly insane as to sever any possible connection between asymptotic behavior and real life. 😀

(Incidentally, this background assumption, while of course not 100% reliable, has proven itself more reliable than many people might imagine.)

Comment #119 October 24th, 2019 at 3:19 pm

CTH #72: As several others already pointed out, there’s no such thing as “classical supremacy” (at least not in a formal sense), simply because classical computation is a subset of quantum.

Comment #120 October 24th, 2019 at 3:26 pm

David #73:

Whilst it may be fair to say Google won this race, they weren’t exactly racing against anyone. IBM, IonQ etc never seemed much interested in this supremacy business (at least to me), despite having roughly comparable hardware capabilities. Also it’s been public knowledge Google planned to do something like this for at least a couple of years now so they weren’t caught by surprise or anything. I wonder whether that was a scientific decision or just a business one?

To me, that’s one of the strangest aspects of this “race.” Google won it partly because of their hardware capabilities, but partly also because they seemed to be the only ones who cared enough. When I talked to experimenters who I like and respect, in certain unnamed competing groups, they would sometimes dismiss quantum supremacy as “just a bunch of hype”—but then, ironically, they would explain that their focus was squarely on the “real-world applications” of QC, like machine learning, neural nets, big data, optimization, QAOA, etc.—all subjects that have been maybe 10^{11} times more hype-infested than the scientific quest to prove quantum supremacy. 🙂

Comment #121 October 24th, 2019 at 3:32 pm

[…] Scott Aaronson notes that this clearly shows that quantum supremacy is clearly emerging. […]

Comment #122 October 24th, 2019 at 3:34 pm

Perfecto #116 – I can answer the question with regard to previous quantum circuit simulations. The new definition of depth used for the Google sycamore layout is essentially twice as dense as the old circuits. I think previously it was tricky to drive all qubits with 2-qubit gates in parallel (another nice technical improvement they have made!) so the circuits defined e.g. here https://github.com/sboixo/GRCS only have gates acting on half the qubits per round.

Roughly speaking, with regard to their algorithm, 20 new ISWAP ‘cycles’ is like 80 old CNOT ‘depths’.

Also the rectangular lattice geometry as compared to bristlecone / sycamore is slightly easier to simulate.

Comment #123 October 24th, 2019 at 3:35 pm

Thanks, Scott (#99)! We’ll go by what you say at the end and mention the results with Gunn too.

Regarding Deep Blue and Kasparov (including #51 and #58), my feeling before the 1997 match was that computers would see diminishing returns and humans would stay competitive for a while. But what actually happened over two decades was that software advanced even faster than hardware did.

Comment #124 October 24th, 2019 at 4:05 pm

Thanks for the excellent explanation, Scott.

One question about the part where you said that Google could increase the number of q-bits to make it exponentially harder to simulate: what if it’s also exponentially expensive to add q-bits? If that’s the case then I’ll agree with IBM: you’ll get the same speedup with the same amount of investment, up to some constant. (I was hoping the IBM paper would make that point based on the structure of the Sycamore chip, as they couldn’t just base their opposition on simulating the current set of q-bits knowing full well the exponential explosion of simulation, but from your post, it seems that they didn’t.)

The Google paper is beyond my level of understanding, so I’ll just straight out ask you: from your understanding, how hard is it to increase the number of q-bits in the Sycamore chip?

Comment #125 October 24th, 2019 at 4:36 pm

DavidM #78:

But isn’t it right that their fidelity degrades exponentially in the number of qubits (see Fig 4 of the paper), and so since we agree that it is appropriate to apply a 1/eps^2 penalty, their apparatus also has a cost which is exponential in circuit size?

That’s certainly true, and everyone in the field knows it. The usual way of stating the point is that unless and until you can implement fault-tolerance, QC doesn’t scale. Hence the tremendous emphasis placed on fault-tolerance—including in my FAQ, where I stressed that Google had *not* achieved it, and that it was an obvious milestone that all of the players would be racing toward next.

(Although to be fair since they are winning by so much at 53 qubits there will certainly be a sweet-spot where their experiment is feasible and IBM’s simulation isn’t.)

Right, this is a key point. Vacuum tube computers with no fault-tolerance also weren’t particularly scalable, and yet they often worked well enough to be useful. Over the next decade, Google, IBM, IonQ, Rigetti, and the others will be trying hard to achieve useful quantum speedups with 100-200 qubit “NISQ” devices. There’s no guarantee that they’ll succeed at this. But if they do, to me it seems perfectly fair to say that the *explanation* for their success really will involve the asymptotic exponentiality that’s inherent to quantum mechanics (and that’s the thing we care about in QC).

Comment #126 October 24th, 2019 at 4:39 pm

Probably the first time Ivanka Trump is quoted in SO:

“It’s official the USA has achieved quantum supremacy! US in collaboration w/ the Trump Admin & UC Santa Barbara, Google quantum computer Sycamore has completed a calculation in 3 min 20 sec that would take about 10,000 years for a classical comp.

#QIS is a critical industry of the future. That’s why @POTUS signed the National Quantum Initiative Act-record level funding for quantum R&D – into law 2018. We’re proud to have contributed to this major milestone, quantum supremacy, and usher in the next gen of quantum tech in America.”

Comment #127 October 24th, 2019 at 4:50 pm

I don’t understand why the milestone of “quantum supremacy” is solely based on speed ups. The fact that we are able to implement and measure from a state built on 53 qubits that would otherwise take 250 petabytes to store, well that seems pretty supreme to me, independent of speed ups for simulating updates to that state.

Comment #128 October 24th, 2019 at 4:58 pm

Raoul Ohio #126: When someone first sent me that Ivanka tweet—crediting the Trump administration for a Google effort that was already well underway before Trump took office (and for a bill that passed Congress unanimously, that would’ve become law even had Trump vetoed it, and that’s almost certainly much too recent to have had any effect whatsoever on Google’s effort)—I assumed it must be some Ivanka parody account.

But it’s real.

What a time we live in.

Comment #129 October 24th, 2019 at 5:00 pm

fred #80:

I’m still not clear on the manner in which the Google setup is programmable. Are they affecting gates in the way you would switch a classical logical gate from an OR to an AND?

Yes.

And is this ever running in a regime where all the qubits are entangled at once?

Yes.

Comment #130 October 24th, 2019 at 5:08 pm

Mateus Araújo #84: It’s absolutely true that the best known classical algorithms for simulating quantum circuits typically do more than the quantum circuits themselves do. E.g., they explicitly calculate one or more final amplitudes. Or, in the case of the Schrödinger algorithm, they explicitly calculate the entire final state vector.

The part I don’t understand is why you find this “disturbing,” or why you’re unhappy with my answers about this. What if this is just the way things are?

By analogy, think about the CHSH game. Classically, if Alice and Bob want to win the game with more than 3/4 probability, then they’ll have to do something that they would *not* have to do quantumly: namely, communicate. That’s how it is.
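For readers who haven’t seen the CHSH game before, the classical 3/4 bound is easy to verify by brute force: since shared randomness is just a convex combination of deterministic strategies, it suffices to enumerate all deterministic no-communication strategies. A minimal sketch:

```python
from itertools import product

# CHSH game: the referee sends bits x, y uniformly at random; Alice answers
# a(x), Bob answers b(y), with no communication.  They win iff a XOR b = x AND y.
# Enumerate all 4 x 4 deterministic strategies; shared randomness can't beat
# the best deterministic strategy (it's a convex combination of them).
best = 0
strategies = list(product([0, 1], repeat=2))   # a function {0,1} -> {0,1} as a lookup table
for alice in strategies:
    for bob in strategies:
        wins = sum((alice[x] ^ bob[y]) == (x & y) for x in (0, 1) for y in (0, 1))
        best = max(best, wins)

print(best / 4)   # 0.75 -- the classical bound; entangled strategies reach cos^2(pi/8) ~ 0.854
```

No deterministic pair of response tables wins on all four input pairs, so 3/4 is the best any classical (non-communicating) strategy can do.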

Comment #131 October 24th, 2019 at 5:11 pm

Plamus #86:

Serious question: how long would just loading 250 petabytes of data from hard disk take? I am not familiar with loading protocols and specs for Summit; my guess would be that in theory it can be pretty fast, but in practice partitioning the data and loading it up would take longer than the 2.5 days of computation.

I don’t know the exact answer, but a key point is that (of course) this is all being done in parallel by tens of thousands of processors.

Comment #132 October 24th, 2019 at 5:13 pm

lewikee #87:

Is IBM planning on doing their classical supercomputer run using the very same random circuit Google used?

I don’t know, but I sure hope they will! (Though note that it’s not “IBM’s supercomputer”—they installed it, but Oak Ridge National Lab now controls it.)

If so, does that mean that in an ironic twist they would essentially act as the very checking process Google needs to claim quantum supremacy?

Yes, it would mean exactly that! Ironic, right? 🙂

Comment #133 October 24th, 2019 at 5:16 pm

Ian Hincks #92:

Does anyone know how to actually download the 2GB data file? The Nature paper links here: https://datadryad.org/stash/dataset/doi:10.5061/dryad.k6t1rj8 , but I get an error “You do not have the permission to download the dataset.” when I put in my email address to download it.

FWIW, I just tried and was able to download the files (though I haven’t unzipped them to see what’s there).

Comment #134 October 24th, 2019 at 5:18 pm

Brant Robertson #97: Thanks for the extremely interesting insight from an actual user of Summit! Let me encourage you, in the strongest terms, to email your question to the authors of the IBM paper, and to let us know what they say.

Comment #135 October 24th, 2019 at 5:38 pm

[…] Quantum Supremacy: The Gloves are Off (Scott Aaronson) — Google demonstrated a quantum system that will be exponentially faster, as the number of qubits increases, than a classical system simulating the problem. IBM proposed a way of using the world’s most powerful classical supercomputer (Summit) to brute force a solution faster than the method listed in Google’s paper—but the classical computer solution will still be exponentially slower, as the number of qubits increases linearly. If Google, or someone else, upgraded from 53 to 55 qubits, that would apparently already be enough to exceed Summit’s 250 petabyte storage capacity. At 60 qubits, you’d need 33 Summits. At 70 qubits, enough Summits to fill a city … you get the idea. Amusingly enough, Google made a particular engineering choice purely to extend the gap between quantum and the classical simulations they foresaw (missing IBM’s “just brute force it, bro” solution). […]

Comment #136 October 24th, 2019 at 6:17 pm

Scott #130: I’m unhappy because you said that the fastest simulation of the quantum circuit is a weak simulation, and there’s an interesting sense in which this is not true, as in all known algorithms one has to actually compute the amplitudes. (I had understood your answer to mean that SFA does sampling without computing amplitudes).

I find this disturbing because with strong simulation you can’t even simulate classical stochastic circuits, for the simple fact that a probability distribution over n bits needs 2^n real numbers to be stored.

Instead, to simulate them you just apply the stochastic gates to the n bits one after another, updating the bit string according to the probabilities, and you end up with a sample.

(Of course, this doesn’t work for quantum circuits, because of interference, and I don’t know how to write a purely weak simulation better than SA or SFA, otherwise I’d be writing a paper, not a blog comment.)
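The procedure above is worth making concrete: a weak (sampling) simulation of a classical stochastic circuit pushes a single n-bit string through the gates, never touching the 2^n-entry distribution. A toy sketch, with a stochastic gate set invented purely for illustration:

```python
import random

# Weak (sampling) simulation of a classical stochastic circuit: update a
# single n-bit string gate by gate, using O(n) memory, instead of storing
# the full 2^n-entry probability distribution.  The gate set is made up
# for illustration.
def noisy_flip(bits, i, p):
    """Flip bit i with probability p."""
    if random.random() < p:
        bits[i] ^= 1

def noisy_cnot(bits, ctrl, tgt, p):
    """With probability p, XOR bit ctrl into bit tgt."""
    if random.random() < p:
        bits[tgt] ^= bits[ctrl]

def sample(n, depth, seed=None):
    """Draw one sample from the circuit's output distribution."""
    random.seed(seed)
    bits = [0] * n
    for _ in range(depth):
        for i in range(n):
            noisy_flip(bits, i, 0.1)
        for i in range(n - 1):
            noisy_cnot(bits, i, i + 1, 0.5)
    return bits

print(sample(n=53, depth=20))   # one sample; never touches the 2^53-entry distribution
```

The point is that interference is exactly what blocks the analogous trick for quantum circuits: amplitudes can cancel, so you can’t just push one string through the gates.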

You might have a point with the CHSH analogy. What makes quantum computers so good is precisely that they don’t need to compute exponentially many amplitudes, but rather they just have n qubits that are updated by the quantum gates.

Comment #137 October 24th, 2019 at 6:52 pm

Mateus #136: I see. What’s true is that, if you’re doing a weak simulation of an extremely *noisy* quantum circuit, then you can get a scaling advantage by usually outputting a uniformly random string, and only occasionally (with probability equal to the fidelity) doing a strong simulation. And I believe there are a few other situations (for example, in the work of Bravyi, Gosset et al on quantum circuits with limited numbers of T gates?) for which the fastest known weak simulation beats the fastest known strong simulation, but I’m on my phone and I don’t feel like looking it up. 🙂

Comment #138 October 24th, 2019 at 6:55 pm

Scott #137: Thanks, I’ll look it up!

Since you’re in the mood of answering questions, there’s something irrelevant that annoyed me when I was reading the paper. Just after equation (1) they claim that in the ideal case F = 1. Why did you let that pass (since you were the referee)?

I would have told them that no, the equality sign is sacred. Either use \approx, or the correct expression, F = (2^n – 1)/(2^n + 1).

Comment #139 October 24th, 2019 at 6:58 pm

Mateus #138: It sounds like they should’ve asked *you* to review the thing. 🙂

Comment #140 October 24th, 2019 at 7:08 pm

[…] researcher Scott Aaronson compares this milestone not to the Moon landing, which was achieved and that was that, but to the eradication of […]

Comment #141 October 24th, 2019 at 8:26 pm

Rainer #105:

Can the Google Sycamore processor scale in principle?

“In principle,” sure. 🙂

In practice, my understanding is that it looks feasible to scale superconducting chips up to a few hundred qubits, but going beyond that is going to be really, really hard without new ideas. Maybe someone who knows the engineering side better could explain what the issues are.

Comment #142 October 24th, 2019 at 8:33 pm

Jeff #107:

But how do we know the “noise” is really a simple uniform distribution? (for all I know it could even depend on the circuit instance in some way) Is there some way to estimate/subtract this “background”? [Maybe this is addressed more properly in the “spoofing” preprint you mention]

The noise doesn’t need to be uniform. Whatever it is, it just needs to let you still pass whichever statistical test you’re applying (in this case, linear cross-entropy). And if you believe a hardness assumption that talks only about *ideal* quantum circuits, then you can deduce the hardness of classically spoofing the linear cross-entropy benchmark, even at a threshold where a very noisy quantum circuit can still easily pass. (Or at least, we know that a quantum circuit with uniform noise could pass, and the actual quantum circuit is *empirically observed* to pass, which renders largely irrelevant the question of whether the noise could also have deviated from uniform in some way where it wouldn’t have passed.) Some of this will indeed be explained in my preprint with Sam Gunn.

Comment #143 October 24th, 2019 at 8:39 pm

arch1 #112:

If you *were* talking about “steelmanning” Gil’s beliefs, what would you say?

I would say: I considered it vitally important to science that *someone* try as hard as possible to prove that quantum speedups were impossible, so much so that I consider it a disgrace that so few people besides me worked on it. But I now admit that I failed. 😀

Comment #144 October 24th, 2019 at 8:46 pm

Alex Mellnik #115: Thanks! Fixed.

Comment #145 October 24th, 2019 at 8:55 pm

Perfecto #116:

But the allowed two-qubit gates are restricted to only allow one specific calibrated combination of CZ and iSWAP.

But by varying the 1-qubit gates, you could still get any 53-qubit unitary that you wanted, were there no restriction on the depth. So it’s indeed a “universal QC,” except for the limitations on depth and number of qubits. And it’s doing something that’s classically hard. Now, you might complain that it was *already* a universal QC, before it was beefed up (by moving beyond CZ gates) to also do something that’s classically hard.

Will it make you happier if Google, or someone else, builds a universal QC for which the quantum supremacy “just happens,” without the designers having aimed at quickly reaching that specific milestone? How would they convince you that they hadn’t aimed at it? 🙂

Comment #146 October 24th, 2019 at 9:04 pm

Philipp Gerbert #117:

The wording of your praise of IBM, however, seems to be a bit of an insult in disguise, no?

I mean, it seemed too obvious to state that IBM is Google’s closest competitor in superconducting QC, and that their criticism of Google’s supremacy announcement can and will be understood in that context.

If you really, *really* want to know what I think—well, alright. I’m actually a huge, longtime fan of IBM’s QC efforts. I have wonderful friends in Yorktown Heights, I’ve always loved visiting there, I’ve already placed a student there and I hope to place more. And I’m rooting for my friends in IBM’s QC group who want to continue doing great science, to win out over those within IBM who seem to want to rush to monetize things.

Comment #147 October 24th, 2019 at 9:12 pm

TNguyen #124:

One question about the part where you said that Google could increase the number of q-bits to make it exponentially harder to simulate: what if it’s also exponentially expensive to add q-bits?

Fortunately, a QC doesn’t need “q-bits,” only qubits. 🙂

Seriously, we all know that the fidelity currently goes down exponentially with the depth, while the depth increases polynomially with the number of qubits, assuming that the gates are spatially local and every qubit needs to affect every other one. In that sense, it’s *true* that it’s “exponentially expensive to add qubits” … but *only* until you cross the threshold of quantum fault-tolerance, at which point the fidelity no longer goes down exponentially and you can add as many qubits as you want. And if quantum fault-tolerance turns out to be impossible, then there’s something extremely deep that we don’t yet understand about quantum mechanics itself. This is the real answer to your question.

Comment #148 October 24th, 2019 at 9:19 pm

Nicholas Teague #127:

I don’t understand why the milestone of “quantum supremacy” is solely based on speed ups. The fact that we are able to implement and measure from a state built on 53 qubits that would otherwise take 250 petabytes to store, well that seems pretty supreme to me, independent of speed ups for simulating updates to that state.

There’s actually a clear answer to this. Namely: if you insist on listing all the amplitudes explicitly in memory, then *an equal superposition over 53-bit strings* (in other words, 53 diagonally-polarized photons) would also take petabytes to store! Yet no one, presumably including you, would want to call that “quantum supremacy.” And if you instead allow implicit descriptions of quantum states, then the output state of the Sycamore chip could *also* be described in an extremely compact way—by simply listing the sequence of gates that was used to produce it starting from the all-0 state. So we conclude that, to see a huge quantum advantage here, you really do need to talk about computation cost, or at any rate about *something* beyond the sheer number of bits needed to describe the final state.

Comment #149 October 24th, 2019 at 9:22 pm

[…] The researcher Scott Aaronson compares this milestone not to the Moon landing, which was achieved and that was that, but to the eradication of […]

Comment #150 October 24th, 2019 at 9:30 pm

Everyone: I’ve now spent at least 12 hours over 2 days answering comments here, and finally reached the end of the queue. Alas, with 2 small kids, an undergrad course to teach, 6 PhD students, 5 postdocs, 7 unwritten rec letters, and (right now) at least 10 journalists per day asking me for interviews about quantum supremacy, I no longer have the time that I used to for blog-question-deluges. So please get in any final thoughts tonight and tomorrow morning, and then I’ll shut this thread down. Thanks for understanding!!

Comment #151 October 24th, 2019 at 9:52 pm

Hi Ashley #77: Thank you very much for your comment. I don’t think the additive-error hardness for their noisy distribution can be shown for a large enough additive error, because their noisy distribution is 2F-close to the uniform distribution in the L1 norm.

More precisely, let

$q_z = F p_z + (1-F)/2^n$,

where $p_z$ is the probability that a quantum computer outputs an n-bit string $z$, and $F$ is the fidelity. Then,

$\sum_z |q_z - 1/2^n| = \sum_z |F p_z + (1-F)/2^n - 1/2^n| = F \sum_z |p_z - 1/2^n| < 2F.$
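The bound is also easy to check numerically; here is a quick sketch for a small n and a randomly chosen distribution p (parameters picked only for illustration):

```python
import random

# Numerical check of the bound above: if q_z = F*p_z + (1-F)/2^n, then the
# L1 distance between q and the uniform distribution is at most 2F, since
# sum_z |q_z - 1/2^n| = F * sum_z |p_z - 1/2^n| <= 2F.
n = 10                      # small n so we can enumerate all 2^n strings
N = 2 ** n
F = 0.002                   # fidelity, same order of magnitude as the Google experiment

# Random probability distribution p over n-bit strings.
raw = [random.random() for _ in range(N)]
total = sum(raw)
p = [x / total for x in raw]

q = [F * pz + (1 - F) / N for pz in p]
l1 = sum(abs(qz - 1 / N) for qz in q)

print(l1 <= 2 * F)   # True
```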

Comment #152 October 24th, 2019 at 10:22 pm

Scott #103

Scientifically speaking, you do have to check the n+1 case, you know, just sayin’ 🙂

But my comment was a (weak) attempt at humour: that perhaps IBM’s gargantuan classical simulation of 53 qubits suggested this was also about the number of qubits that Nature herself was starting to struggle with, and Google just got lucky.

But seriously, I really think that even if we can get to just a few-hundred-qubit QCs, there are enough imaginative people to invent amazing applications of the technology.

Comment #153 October 24th, 2019 at 10:54 pm

Scott #143: You really are a professional.

Comment #154 October 24th, 2019 at 11:26 pm

Scott #134:

I sent the following email. I’ll let you know if they respond.

Dear Dr. Pednault,

I read with interest your arXiv posting 1910.09534 about the potential of using Summit, including its main storage system, to enable a simulation of the 54-qubit Sycamore circuit. As a user of Summit who has experience using most of the system’s nodes (>4400) simultaneously, I wanted to inquire about the performance metrics listed in your paper.

Section 5 of your paper provides the estimated running time for your simulation. You state:

“These averages can be used to estimate execution times for arbitrary numbers of aggregate gates, assuming simulations are performed on Cori II. To obtain corresponding time estimates for Summit, we scale the estimates by the ratio of the High Performance Linpack (HPL) benchmark figure for Cori II (14,014.70 TeraFLOPs/sec) versus Summit (148,600.00 TeraFLOPs/sec); this accounts for the substantially greater performance of Summit when performing, e.g., the matrix-vector calculations entailed by the use of gate aggregation.”

And then later:

“As can be seen, the time estimates for gate operations yield results that are all near or below 11% of the 191 PetaFLOPs/sec peak double-precision performance expected across 8,192 sockets. Therefore, there is room to potentially improve upon these estimates by leveraging the capabilities of Summit’s NVIDA [sic] GPUs: this would allow the simulation of individual gate operations, without resorting to gate aggregation, and the use of cuBLAS routines to implement the corresponding matrix-vector operations.”

Any of these double-precision performance numbers for Summit seemingly require the use of the system’s NVIDIA GPUs, as would any of the “Achieved PFLOPS” performance numbers in Table 1 or 2 in excess of ~10 PFLOP/s given the expected double precision floating point performance of 8192 IBM Power 9 CPUs. Tables 1 and 2 describe the “Tensor ranks per socket” which “indicate the sizes of the corresponding tensors in terms of the effective number of local qubits per socket”, however these numbers are not divisible by 3 (the number of NVIDIA GPUs per IBM Power 9 socket) and it’s therefore unclear how the work is to be distributed across the GPUs. The available GPU memory is “only” 96GB per node (16GB per GPU, and much less than 256 GB per socket of RAM main memory) and it wasn’t clear whether your performance estimates accounted for the more restricted amount of simultaneously available GPU memory. Lastly, the bandwidth of the NVLINK2 connections between the GPU and CPU will add an additional communications overhead, and it was unclear whether the performance cost of this overhead was incorporated into your estimates. Can you clarify how the GPU capabilities of Summit are used in your proposed calculations to achieve your projected performance in simulating the Sycamore circuit?

A clarification of how you leverage the GPU capabilities of Summit, which are seemingly required to reach your claimed performance estimates, would strengthen your argument. I am sure you are quite busy given the attention these calculations have deservedly received, but I would appreciate hearing more about how the specific Summit architecture would be leveraged in your proposed calculations.

Sincerely,

Brant Robertson

Comment #155 October 24th, 2019 at 11:27 pm

Johnnie #122: Thanks for the explanation. That makes the number of gates in the two papers easier to connect.

Scott #145: As a practical-minded person, I find that the term “Quantum Supremacy” just rubs me the wrong way, because this quantum processor is truly a feeble machine. And “gaming the benchmark” is frowned upon in other contexts.

Nevertheless, the idea behind the term is a milestone. But has the Google team really reached it? Taking their word for it plus a little peer review is the standard for recognizing achievement in science, but would be laughed at in other domains of life, especially when hundreds of millions of Dollars are on the line. Should we lower our standards just because the people making these claims are scientists?

https://www.wsj.com/articles/google-claims-breakthrough-in-quantum-computing-11571843751

To answer your question about what would convince me: For this kind of made-up problem where most classical computer scientists probably don’t even care, the quantum team would have to really blow regular computers out of the water with 70 or so qubits, standard gates, and a nice circuit depth.

Comment #156 October 25th, 2019 at 12:21 am

Thanks Scott for a very thorough post!

It was a lot of fun to read and it helped me better understand the milestone that was achieved, what it is and what it isn’t (unlike popular science articles which are usually terrible)

Thanks for sharing your knowledge, I enjoy your blog a lot!

Comment #157 October 25th, 2019 at 12:56 am

I’d suggest showing quantum supremacy by implementing Shor’s algorithm, but so far that seems like a nonfactor.

Comment #158 October 25th, 2019 at 1:00 am

[…] Scott Aaronson notes that this clearly shows that quantum supremacy is clearly emerging. […]

Comment #159 October 25th, 2019 at 2:04 am

Mateus #84 – it’s not so surprising that the classical computer does more than a QC to get a result. Maybe the following is a reasonable comparison. A hypothetical NP box will return you a Hamilton cycle in a graph, if one exists, in linear time. Classically we don’t know how to do anything like that; we need exponential time to brute-force it.

But what we then actually get is the lex-first Hamilton cycle, which we would not know how to get from our NP box in subexponential time either.
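A toy sketch of the brute-force side (exponential time, of course): because the search scans vertex orderings in lexicographic order, the first cycle it finds is the lex-first one, which is strictly more information than an NP box promising merely *some* cycle would hand back.

```python
from itertools import permutations

# Brute-force search for a Hamiltonian cycle.  Scanning vertex orderings in
# lexicographic order means the first cycle found is the lex-first one --
# more than an "NP box" returning an arbitrary cycle promises.
def lex_first_hamiltonian_cycle(n, edges):
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    for perm in permutations(range(1, n)):    # fix vertex 0 first, to skip rotations
        cycle = (0,) + perm
        if all((cycle[i], cycle[(i + 1) % n]) in adj for i in range(n)):
            return cycle
    return None

# A 4-cycle 0-1-2-3-0 plus a chord 0-2.
print(lex_first_hamiltonian_cycle(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
# (0, 1, 2, 3)
```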

Comment #160 October 25th, 2019 at 3:24 am

[…] the quantum-simulation strategy, moreover, quickly reaches its limits, writes computer scientist Scott Aaronson on his blog. Even a chip with 55 qubits would exceed the capacity of the Summit supercomputer. For the […]

Comment #161 October 25th, 2019 at 3:37 am

It is the end of the queue, so no answer to Ryan O’Donnell’s questions? Or did I overlook something?

Comment #162 October 25th, 2019 at 3:43 am

How will P=NP or ETH failure change all this hype?

Comment #163 October 25th, 2019 at 5:02 am

Thanks for the nice post.

What are the assumptions for the calculation that 60 qubits would require 33 Summits? Is it just that the amplitude of each state takes 8 bytes, and since there would be 2^60 of them, this would imply that 33*250 petabytes will be needed?

Comment #164 October 25th, 2019 at 6:51 am

[…] Quantum Supremacy: The Gloves are Off (Scott Aaronson) — Google demonstrated a quantum system that will be exponentially faster, as the number of qubits increases, than a classical system simulating the problem. IBM proposed a way of using the world’s most powerful classical supercomputer (Summit) to brute force a solution faster than the method listed in Google’s paper—but the classical computer solution will still be exponentially slower, as the number of qubits increases linearly. If Google, or someone else, upgraded from 53 to 55 qubits, that would apparently already be enough to exceed Summit’s 250 petabyte storage capacity. At 60 qubits, you’d need 33 Summits. At 70 qubits, enough Summits to fill a city … you get the idea. Amusingly enough, Google made a particular engineering choice purely to extend the gap between quantum and the classical simulations they foresaw (missing IBM’s “just brute force it, bro” solution). […]

Comment #165 October 25th, 2019 at 7:05 am

[…] a quantum computing researcher at the University of Texas at Austin, said on his blog that even IBM’s alternative is 1,200 times slower using the world’s most powerful classical machine, a supercomputer called […]

Comment #166 October 25th, 2019 at 8:07 am

Scott #120: Is it possible that you and they are working from different definitions of “hype”, and different perspectives on how badly different claims need to be checked in general?

My inclination, and I think yours, is to apply great skepticism to claims of practical applications of scientific research, and arguably less to claims of theoretical importance (except in the case where someone claims credit for ideas already known). I guess this is because we don’t want to mislead people into thinking something is useful when it actually isn’t, which could have negative consequences in the real world, but on the other hand even if a new experiment only disproves some highly dubious theory, or otherwise is of limited value, we have still learned SOMETHING new about the universe.

A different perspective might apply more skepticism to claims of theoretical importance rather than practical applications. I think this can be justified as well – for instance we can say that even if the supporters of a purely theoretical result are right, it will have no practical benefits, but only lesser theoretical ones, so we had better make sure they are right. On the other hand, if we build something practical and the practical application fails, the fact that we tried to make it usable and low-cost might mean that it finds a different practical application.

So while for you the word “hype” conjures up an image of an engineer claiming their magic box will cure cancer and solve Sudokus, for them it might conjure up the image of a physicist claiming to have uncovered the secrets of the universe without producing anything useful.

Comment #167 October 25th, 2019 at 8:10 am

mateus #85

“It is arguable whether it is fair to compare the time a single quantum chip takes against the time a hundred million GPUs running in parallel take. Maybe one should multiply the classical time by a hundred million?”

Sure, but one could also take the reverse view.

This Google quantum system, referred to as programmable, can’t even do the simple math that any 1970s calculator wristwatch can do. Go explain that to the general public…

And it’s not like we needed this to understand the meaning of exponential cost.

Again, on the flip side, even the most badass QC built 100 years from now will spin its wheels when subjected to a trivial NP-hard problem.

Comment #168 October 25th, 2019 at 8:15 am

Ferdinand #161:

It is the end of the queue, so no answer to Ryan O’Donnell’s questions? Or did I overlook something?

Ryan had extremely detailed questions that seemed best directed to the Google team, though I was happy to leave them here in case anyone else happened to know the answers.

Comment #169 October 25th, 2019 at 8:20 am

A #162:

How will P=NP or ETH failure change all this hype?

If P=NP in a practical sense, that would certainly remove a good part of the motivation to build scalable quantum computers—along with changing the world in dozens of more obvious and consequential ways!

And yet, extremely ironically, P=NP would have almost no effect on the current sampling-based quantum supremacy experiments—which are not targeting problems known or believed to be in NP! A complexity collapse that ruled out sampling-based quantum supremacy would need to be an even stronger one than P=NP, like P=P^{#P} or P=PSPACE.

Comment #170 October 25th, 2019 at 8:25 am

Daniel #163:

What are the assumptions for the calculation that 60 qubits would require 33 Summits?

As a PhD in computer science, I used the following advanced methodology: I

(1) took how many petabytes they said they needed to simulate 53 qubits,

(2) multiplied by 2^{60-53} = 128, and then

(3) divided by 250 (the number of petabytes in Summit). 😀
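In code, the same back-of-envelope estimate looks like this. The 64-petabyte figure for 53 qubits is an assumption on my part (2^53 single-precision complex amplitudes at 8 bytes each, which is also consistent with the 33-Summit answer):

```python
# Back-of-envelope: each extra qubit doubles the state vector.
# Assumptions: ~64 PB to store the 53-qubit state, 250 PB of storage per Summit.
def summits_needed(n_qubits, pb_for_53=64, pb_per_summit=250):
    petabytes = pb_for_53 * 2 ** (n_qubits - 53)
    return petabytes / pb_per_summit

print(summits_needed(60))   # ~33 Summits
```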

Comment #171 October 25th, 2019 at 8:48 am

Will Sawin #166: Thanks for the interesting reflections. For my part, though, I’m perfectly willing to admit that there can be egregious hype in claims of practical relevance, and also egregious hype in claims of theoretical significance!

I was trying to claim something more narrowly tailored to this topic. Namely, that with machine learning, optimization, and so on, there often haven’t been strong theoretical grounds to believe that the claimed quantum speedups actually exist—beyond, of course, the square-root speedup of Grover’s algorithm (and some closely related speedups), which we all agree about, but which seem virtually impossible to realize before you get error-correction and fault-tolerance. Claims of quantum speedups for optimization and machine learning that you can realize in the near term have been incredibly shaky.

In the worst cases, no theoretical justification whatsoever is offered—just a vague hope that QCs will magically provide huge speedups for whatever application domain the customer or funder cares about. In the best cases, an interesting theoretical justification is given, but then sometimes it’s collapsed on attempts to make it more rigorous. Two famous examples were

(1) the beating of Farhi et al.’s QAOA by a classical approximation algorithm, on the special instances of Max-Cut where it had seemed to have an advantage—a development directly inspired by a post on this blog, and

(2) my then-student Ewin Tang’s dequantization of the Kerenidis-Prakash quantum algorithm for recommendation systems (which inspired several further dequantizations of quantum machine learning algorithms, to the point where not much is left).

With sampling-based quantum supremacy benchmarks, by contrast, virtually nobody—not even “skeptics” like the IBM group or Gil Kalai—has ever doubted that, at least theoretically, the task admits an asymptotic exponential quantum speedup. So, that’s the sense in which this stuff is on much more solid ground than the quest for near-term quantum speedups for optimization and machine learning problems!

Comment #172 October 25th, 2019 at 8:52 am

I’ll add my thanks for all that you’ve done and written here, Scott; well done! 🙂

For me, you’ve dotted a lot of i’s, and crossed many a t. But as you’d be the first to acknowledge, no doubt, there’s a great deal more work to be done before this can be nailed down, scientifically. History will record this day (well, the papers’ publication dates anyway) as the milestone … if that extra work does deliver independent verification or replication.

Care to guess when a 60, or 70, qubit QC will be built and run successfully? A modest fault-tolerant QC?

Comment #173 October 25th, 2019 at 8:53 am

For the record, this is the weak simulation paper Scott mentioned. Intuitively, I would expect their sum-over-Cliffords approach to be an excellent way of simulating quantum circuits. Apparently this is only the case if the number of non-Clifford gates is small, though.

Comment #174 October 25th, 2019 at 8:53 am

Everyone: By request of Ashley Montanaro, I’m going to leave this thread open a while longer, just in case people want to continue using it as a clearinghouse to discuss the Google and IBM papers. But please no more questions directed at me (not even implicitly so 🙂 ). Thanks!!

Comment #175 October 25th, 2019 at 8:56 am

I have a physicist’s question concerning the assertion “if Google, or someone else, upgraded from 53 to 55 qubits, that would apparently already be enough to exceed Summit’s 250-petabyte storage capacity. At 60 qubits, you’d need 33 Summits.” Adding new qubits is a decoherence-prone process. Great progress on this issue has been made since the beginning of quantum computing, but still one cannot exclude that some “hard decoherence wall” will be met one day by quantum computing. Has Google given an estimate of decoherence scalability on their chip?

Comment #176 October 25th, 2019 at 8:58 am

JeanTate #172:

Care to guess when a 60, or 70, qubit QC will be built and run successfully? A modest fault-tolerant QC?

No. 😀

Comment #177 October 25th, 2019 at 8:58 am

Will #166: I have found that people – especially in scientific fields – curate their personal definition of hype with more granularity than they will publicly admit. I think it’s better if these definitions are made public, even at the cost of being abrasive, so that society can collectively seek out harmful patterns and work towards eliminating them.

Comment #178 October 25th, 2019 at 9:09 am

Jean-Claude Garreau #175: Yes, decoherence is clearly the central issue here—it’s the reason why we don’t have million-qubit universal QCs already. Having said that, the biggest driver of decoherence is not the sheer number of qubits, but the circuit depth, which is indirectly related to the number of qubits in that with 2D nearest-neighbor gates, you obviously need larger depth to interact more qubits. I’d expect replicating the fidelity of Sycamore with 60-70 qubits to take at least another year or two (indeed, fabricating and calibrating a new chip typically takes that long even if there are no interesting new engineering challenges). But I could turn things around and ask you: do you seriously believe at this point that Nature is going to tell the experimenters, “building a QC with 53 qubits is totally fine—but 60? 70? no, that’s too many!”

Comment #179 October 25th, 2019 at 9:18 am

It seems to me that IBM’s refutation is conceptually similar to saying that we can sort in linear time using counting sort, provided that we have a sufficiently large amount of memory—where the amount of memory needs to scale with the size of the input problem (in this case, the range of integers being sorted). Is this an oversimplification of their rebuttal?

Comment #180 October 25th, 2019 at 9:34 am

Did the Google experiment really minimize cross-talk, or did they, to some extent, bake cross-talk into their fitted representation of the executed 1-qubit plus 2-qubit stages that included all 53 qubits?

Another question the paper raised for me: did they run the 1-qubit plus 2-qubit stage calibration runs and the supremacy circuit runs within a certain time window to minimize drift, then post-execution fit the stage calibration, then compare expected results to measured results? Or did they really calibrate the 2-qubit operations before running them in the supremacy circuit?

Either way, it seems like they proved that single-stage fitting was a good model for multiple stages of similar 1-qubit/2-qubit operations.

Comment #181 October 25th, 2019 at 10:15 am

[…] a quantum computing researcher at the University of Texas at Austin, said on his blog that even IBM’s alternative is 1,200 times slower using the world’s most powerful classical machine, a supercomputer called […]

Comment #182 October 25th, 2019 at 10:47 am

Pontus #179: Maybe a closer analogy—and here I’m borrowing from something Greg Kuperberg mentioned to me—is precomputing a gigantic, exponential-sized lookup table for some Boolean function f you care about. Once you’ve paid that upfront cost, calculating f(x) for any given x is then a simple matter of table lookup, just like it would be trivial for Summit to generate as many samples as desired once it had stored the final quantum state to hard disk. Of course, this approach has the inherent problem of exponential explosion in the resources needed to precompute the lookup table in the first place.
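As a minimal sketch of that trade-off (the function f here is an arbitrary placeholder, not anything from the papers):

```python
from itertools import product

# Placeholder Boolean function on n bits; imagine something expensive to compute.
def f(bits):
    return sum(bits) % 2

n = 16

# Upfront cost: 2^n evaluations (and 2^n storage) to build the table...
table = {x: f(x) for x in product((0, 1), repeat=n)}

# ...after which every query is a cheap lookup, just as Summit could emit
# samples cheaply once the final quantum state were written to disk.
print(table[(0,) * n])  # 0
```

The exponential blow-up is all in building `table`, not in using it—which is exactly the objection to counting the simulation as "2.5 days" while ignoring the storage.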

Comment #183 October 25th, 2019 at 11:01 am

[…] criticisms comes from computer science researcher Scott Aaronson of the University of Texas at Austin. On his blog, he points out that IBM bases its scenario on its Summit computer at Oak Ridge National Lab. […]


Comment #185 October 25th, 2019 at 11:24 am

What would a picture of the experimental results look like?

My understanding is that it’s supposed to look like a speckle pattern, something like this:

https://en.wikipedia.org/wiki/Speckle_pattern#/media/File:Laser_speckle.jpg

But we have to factor in the noise, so it’s more like 0.2% speckle, 99.8% cat pictures? 🙂

And we can only tell speckle from cat statistically? So the speckle is not just green.

Is that basically what the blurry donut for the quantum supremacy experiment would look like?

If we ran a smaller, less noisy experiment could we actually get a nice speckle picture out of a 53 qubit QC? That would be so cool.

It’s like we skipped that.

Comment #186 October 25th, 2019 at 11:25 am

JeanTate #172:

“Care to guess when a 60, or 70, qubit QC will be built and run successfully? A modest fault-tolerant QC?”

Depends on when the first Artificial General Intelligence reaches the singularity.

Comment #187 October 25th, 2019 at 11:33 am

Job #185: After you’d filtered out the noise, and also extrapolated the results to the entire probability distribution (rather than the mere few million samples that were observed), it would just look like a speckle pattern big enough to fill several basketball courts’ worth of hard drives. 🙂

Comment #188 October 25th, 2019 at 11:39 am

Scott #182

“Maybe a closer analogy—and here I’m borrowing from something Greg Kuperberg mentioned to me—is precomputing a gigantic, exponential-sized lookup table for some Boolean function f you care about.[…] Of course, this approach has the inherent problem of exponential explosion in the resources needed to precompute the lookup table in the first place.”

Take the so-called “Von Neumann’s probe”:

“Von Neumann proved that the most effective way of performing large-scale mining operations such as mining an entire moon or asteroid belt would be by self-replicating spacecraft, taking advantage of their exponential growth.”

Shouldn’t be too hard to turn an entire moon/solar system/galaxy into a gigantic lookup table?

Comment #189 October 25th, 2019 at 11:39 am

Is it bigger than a black hole? 🙂

Actually, maybe it is. But that’s my point, the experiment is too large, we could get a nice speckle pattern for a smaller circuit.

I think I would trade the complexity implications of a supremacy experiment for cat-free speckle.

Comment #190 October 25th, 2019 at 11:51 am

I understand that P=NP with P≠PP is possible. However, if we have a linear-time algorithm for SAT, and PP is not P and requires exponential time classically, then still what good is a practical application of sampling, generally and as done here for supremacist reasons?

ALSO I DO NOT GET IT. This experiment has NO implication on complexity theory. Therefore there could be a linear time algorithm to do the same on a classical computer. WHAT GOOD IS THIS SUPREMACIST EXPERIMENT?

Comment #191 October 25th, 2019 at 12:04 pm

[…] a quantum computing researcher at the University of Texas at Austin, said on his blog that even IBM’s alternative is 1,200 times slower using the world’s most powerful classical machine, a supercomputer called […]

Comment #192 October 25th, 2019 at 12:36 pm

[…] Aaronson, director of the Quantum Information Center at the University of Texas at Austin, in a blog post about Google’s demonstration and the different responses. After all, no research group […]

Comment #193 October 25th, 2019 at 1:13 pm

A #190:

ALSO I DO NOT GET IT. This experiment has NO implication on complexity theory. Therefore there could be a linear time algorithm to do the same on a classical computer. WHAT GOOD IS THIS SUPREMACIST EXPERIMENT?

I mean, do you have any ideas for a linear-time algorithm? If you don’t, then maybe you should lay off the all-caps and think about it more.

Incidentally, quantum simulation problems are neither known nor believed to be in NP. So even in a hypothetical world where P=NP, it’s entirely plausible that those would still be useful applications for QCs. Indeed, quantum simulation seems very likely to be the most important application for QCs even in our world—a world where (I’d say) almost certainly P≠NP, but even if not, we obviously have no idea how to solve NP-complete problems efficiently now.

Comment #194 October 25th, 2019 at 1:30 pm

I do not have a subexponential algorithm. However, isn’t the whole point of complexity theory about foundations and NOT tidbits?

Comment #195 October 25th, 2019 at 2:22 pm

A #194: Is it your view that no one should even try to build quantum computers, until we complexity theorists have succeeded in proving conjectures like P≠NP and P≠PSPACE, which we’ve tried and failed to prove for the past half-century? I’m almost charmed by the vast importance such a view would assign to us—but in any case, I’m just trying to clarify. 🙂

Comment #196 October 25th, 2019 at 2:48 pm

Scott #193

“quantum simulation seems very likely to be the most important application for QCs even in our world”

QCs will probably be really useful for simulating complex “human-made” materials, like near-room-temperature superconductors, or … for simulating the qubits of potentially better QC candidates!

So the line between the simulation and the realization of complex materials will become blurry: any progress in scaling up QCs will simultaneously spur progress for all nanotechnologies beyond computation: telecommunications, radar, sensors, photonics, fabs for digital chips, batteries, solar panels, …

Comment #197 October 25th, 2019 at 3:10 pm

Now that the race to produce the first QC-generated speckle picture is on, I’m trying to understand how I would do this.

I assume that any random supremacy circuit is good enough (that’s like the one thing it can do, right?), but it wouldn’t need 20 cycles because it doesn’t need to be classically difficult.

What’s the minimum circuit size that would still produce a nice speckle pattern? Is that an answerable question?

I know how I would render it. For n bits, just sample 2^n times or so. Each distinct sample corresponds to a distinct pixel in the image, and we adjust the brightness of the pixel to reflect the number of hits.

With 16 qubits we can get a small 256×256 speckle image, with 65K samples or so. What do you think the fidelity would be? It’s doable, right?

You know what, I’m going to put all the complexity implications on hold until I see that image.
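A quick sketch of that rendering in Python. Since there’s no circuit simulator here, an exponentially distributed stand-in for the ideal output probabilities (the Porter-Thomas-like statistics of random circuits) is assumed in place of real amplitudes:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16                    # qubits: 2^16 outcomes -> one pixel each, a 256x256 image
N = 2 ** n

# Stand-in for the simulator's ideal output distribution (assumption:
# Porter-Thomas-like, i.e. exponentially distributed probabilities).
p = rng.exponential(size=N)
p /= p.sum()

samples = rng.choice(N, size=65_536, p=p)     # ~2^n shots, as proposed above
counts = np.bincount(samples, minlength=N)

# Each distinct outcome is a pixel; brightness = number of hits.
image = counts.reshape(256, 256)
# To view: matplotlib's plt.imshow(image, cmap="gray")
```

Whether the result visually stands out from the same picture made with uniform samples is exactly the open question here.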

Comment #198 October 25th, 2019 at 3:37 pm

Scott #133, thanks, it’s working for me now, too.

Comment #199 October 25th, 2019 at 4:22 pm

Scott- what are we talking about here? Google generated 1,000,000 random numbers in 200 seconds. Then they claim it would take 10,000 years to do something else entirely. Have a nice day. My phone can generate 1,000,000 random numbers in one second. Google’s quantum solution is 200x slower than my phone. Some supremacy. The rest of the paper – is garbage. Thanks. (corrected)

The idea that we should simulate the entire “quantum space” is absurd, like saying a conventional computer must enumerate the entire range of numbers before spitting out one value.

Comment #200 October 25th, 2019 at 4:44 pm

Suppose that the state-of-the-art for classical computers was still mechanical relays and paper tape for storage, but Google was still able to produce this QC chip (or an ancestor thereof). In that world wouldn’t supremacy be demonstrated at maybe 15-20 qubits, instead of the 40-50+ where Google has achieved it in this one? Given that, I’m not clear why this supremacy is so much more interesting than being able to do the same kind of experiment with selectable gates on 9 qubits as they had demonstrated previously. Clearly it’s a remarkable achievement, but isn’t it just a scaling/”engineering” problem, so to speak?

I do admire your dedication to answering both very well-informed and otherwise questions in your blog! Thanks in particular for your Supremacy FAQ.

Comment #201 October 25th, 2019 at 5:26 pm

JRStern #199: When you were writing that comment, did you even once think to yourself—“there must be more to this than just generating purely random numbers; otherwise all these experts wouldn’t be so excited about it”? Was your self-satisfied ignorance ever troubled in the least? Did it ever occur to you to look into it more?

tl;dr: The distribution being sampled is not uniform. It has tiny deviations from uniformity. With huge effort, those deviations are statistically extracted from the millions of samples. They’re what prove that, under reasonable computational assumptions, a quantum computer must have been generating the samples—not your iPhone, and not even the largest supercomputer that currently exists on earth.
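A toy version of that statistical extraction, using the linear cross-entropy benchmark. The exponentially distributed “ideal” probabilities below are an assumption standing in for the Porter-Thomas statistics of real random circuits:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12                    # toy size; the real experiment used 53 qubits
N = 2 ** n

# Stand-in "ideal" output distribution of a random circuit (assumption:
# exponentially distributed probabilities, i.e. Porter-Thomas-like).
p = rng.exponential(size=N)
p /= p.sum()

def xeb(samples):
    # Linear cross-entropy benchmark: F = N * E[p(sample)] - 1.
    # Near 1 for samples drawn from p, near 0 for uniform samples.
    return N * p[samples].mean() - 1

ideal   = rng.choice(N, size=200_000, p=p)     # the "quantum" samples
uniform = rng.integers(0, N, size=200_000)     # your phone's RNG

print(xeb(ideal))      # close to 1
print(xeb(uniform))    # close to 0
```

The estimator needs only the samples plus the classically computed ideal probabilities; no individual sample looks any less random than a phone’s output.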

Comment #202 October 25th, 2019 at 5:33 pm

Scott Says:

Comment #178 October 25th, 2019 at 9:09 am

But I could turn things around and ask you: do you seriously believe at this point that Nature is going to tell the experimenters, “building a QC with 53 qubits is totally fine—but 60? 70? no, that’s too many!”

Well, what I said is that this cannot be excluded with present knowledge. Decoherence is an issue that presently can be studied with precision only in relatively small systems, so the situation might be like classical mechanics: Nature did not appear to say that there was a limit to increasing the velocity of a material body, but finally there *is* one. That said, what is more likely is that there is no fundamental limit, but a technological one that could stand for decades or more. So having an estimate of how decoherence increases in Google’s chip (with the number of qubits, or with the depth) would be very interesting.

Comment #203 October 25th, 2019 at 5:48 pm

Michael MacDonald #200: You’re absolutely right that “quantum supremacy” is a milestone that heavily depends on the current state of classical technology. Perhaps a better name for it would’ve been “classical non-sneerability.” I.e., it’s the point at which a classical skeptic who’s—ahem—minimally informed can no longer sneer that quantum computing must be a sham, since no QC has ever done anything that a classical computer couldn’t do faster.

Comment #204 October 25th, 2019 at 6:34 pm

[…] Some caution seems necessary in interpreting this claim, however. First, it does not refer to any of the practical tasks that people would like quantum computers to solve. And second, IBM claims that 2.5 days is a more realistic estimate for the time required on a classical supercomputer. It does seem that “quantum supremacy” is not quite here yet (see also what Scott Aaronson has to say). […]

Comment #205 October 25th, 2019 at 6:53 pm

‘Is it your view that no one should even try to build quantum computers, until we complexity theorists have succeeded in proving conjectures like P≠NP and P≠PSPACE, which we’ve tried and failed to prove for the past half-century?’.

No. However, we made a decision in the 60s and 70s to decide that certain problems are hard (P≠NP), then we decided the hierarchy should also be hard, then we decided randomness has power only in oracle models, then we decided we cannot even budge SAT (ETH), then we decided that not even quantumness can mildly budge SAT (SETH), around the same time we decided quantumness has some power (Shor), then we decided quantumness can do something not in PH, and then we decided not to evaluate any of our decisions. We are going after a problem without even the slightest importance to complexity theory and glorifying quantumness.

Comment #206 October 25th, 2019 at 7:03 pm

Scott #178

[do you seriously believe at this point that Nature is going to tell the experimenters, “building a QC with 53 qubits is totally fine—but 60? 70? no, that’s too many!”]

Possibly. It all depends on whether you support the following conjecture…

“Everything in Nature is finite and discrete. Nothing in Nature is infinite or continuous.”

Being old, I can remember optical image processing being a thing long before the laser or the high-speed digital computer was invented. Using just lenses, film transparencies, and a single spectral line from a mercury arc lamp at some distance from the optical bench, you could make holograms, filter spatial frequencies, calculate Fourier transforms and convolutions, and sharpen images that were out of focus.

All the equations describing how these things worked were continuous, and as long as you didn’t do anything to make it glaringly apparent that your film emulsion had a grain size, these equations would give you the right answer.

Similarly, the equations of hydrodynamics are continuous, and as long as you don’t have things moving close to the speed of sound in the medium, or look sufficiently close that you see thermal noise kicking individual particles around, those equations will also give you the correct answer.

In a discrete universe, reality also has a “grain size,” the Planck scale, and although most things we calculate in quantum mechanics will give the right answer if we use continuous wavefunction amplitudes, extracting exponentially increasing amounts of classical computation from a quantum system by mutually entangling larger and larger numbers of qubits is likely not one of those things.

Eventually, it will matter that wavefunction amplitudes are discretized, and come from a finite set, and at some point, the configuration size of your entangled system will exceed the number of quantum microstates, and you will be unable to assign a unique state to each distinct quantum logic function.

So in a finite universe, quantum computing breaks at some number of qubits. If we had a correct theory of quantum gravity, we could probably calculate exactly when this occurs. It’s quite possible, and even likely, that it occurs before you have enough qubits to do things like threaten cryptography and implement other proposed applications for general-purpose quantum computers.

It is extremely dangerous to assume that because something works with 40 qubits and you can check the answer, “Nature” makes it work the exact same way at 80 qubits when you can’t check the answer.

As a reviewer, you see nothing wrong with Google’s paper, because you have brought all the problems you saw to the authors’ attention, and they have responded to them. There is a name for this effect, which I can’t remember.

Comment #207 October 25th, 2019 at 7:31 pm

Scott #169: “A complexity collapse that ruled out sampling-based quantum supremacy would need to be an even stronger one than P=NP, like P=P^{#P} or P=PSPACE.” —

Are there lists of protocols that exploit these post-NP problems on a quantum computer? A reference would be wonderful. Thx

Comment #208 October 25th, 2019 at 7:32 pm

A #194 and Scott #195

What is interesting to me is the fact that a QC can be queried for a “grossly approximate” answer to a Combinatorial Optimization problem which can be used very effectively to speed up my classical algorithm. I can give a guarantee that the QC will not be queried “too many times”.

How “grossly approximate” the answer has to be is application dependent.

So, yes. All the power to a QC that can be used as a co-processor in a meta-algorithm which does something practical, say, solve a topology optimization problem in statics.

Comment #209 October 25th, 2019 at 7:53 pm

A #205: I’m glad to work in a field full of people who do their best to explore the universe of computational problems—welcoming anyone wiser than they are to come along and fix their understanding wherever it’s inadequate—rather than just carping from the sidelines like you are.

Comment #210 October 25th, 2019 at 8:11 pm

Eric Cordian #206: If the equations of quantum mechanics were to break down in going from 53 to 60 or 70 qubits, wouldn’t that itself be a profound new discovery about the laws of physics? Indeed, far more profound than a “mere success” in building the QC would be? And isn’t that a powerful argument for groups like Google’s to continue pushing ahead exactly as they are right now, in order to discover whatever is there? Furthermore, isn’t this point obvious? Was it not obvious to you as you were writing your comment?

Comment #211 October 25th, 2019 at 8:15 pm

MK #207: We don’t actually know that many interesting examples of problems that are easy for QCs but plausibly outside NP or the polynomial hierarchy. The main classes of examples are:

(1) Quantum simulation

(2) Oracular problems like “Forrelation” (which I defined in 2009)

(3) Sampling problems like BosonSampling and random circuit sampling

Comment #212 October 25th, 2019 at 8:25 pm

MatrimC #208: But the problem is that we don’t actually know that quantum algorithms would give speedups in the situations you describe, beyond the Grover speedup. Yes, even for finding “grossly approximate” solutions—believe it or not, we have classical algorithms that do that as well! That’s what I was trying to say.

Comment #213 October 25th, 2019 at 8:26 pm

Everyone: OK, I’m closing down the thread tonight—this time for real. Last call; get in any parting thoughts.

Comment #214 October 25th, 2019 at 8:35 pm

Scott #212

Yes, I understand that. That said, there is a lot of freedom when modeling engineering problems. Numerical solutions need not have more precision than what is really achievable within manufacturing tolerances.

Say one of my sub-problems is to compute a reasonable approximation to an optimal graph separator for a graph which is not necessarily planar but has low bounded genus—for example, a 40-layer VLSI circuit.

Comment #215 October 25th, 2019 at 8:38 pm

Scott, excellent work. I follow Gil Kalai as well. My advisor/mentor/senior partner is in the camp that holds that, while questions like P=?NP need to be resolved, we should solve problems with what we have now.

Comment #216 October 25th, 2019 at 8:39 pm

Scott #210

Physical theories have domains where they work and domains where they don’t. We generally don’t describe general relativity as having “broken down” every time we discover a situation where it doesn’t work well, for instance.

Google throwing billions at a 70-qubit QC and not being able to get it to work would not be a “profound new discovery” absent some sort of constructive theory of why the problem occurred. Just because there are situations in which the discretization of wavefunction amplitudes needs to be considered, such as in arguments about thermodynamics and entropy at small scales, doesn’t mean we should start referring to quantum mechanics as “wrong.”

So no, the possible breakage of QC at higher numbers of qubits is not a “powerful argument” for throwing unlimited amounts of money at it with no useful result forthcoming, especially in the absence of a testable theory of the new physics involved.

Most qubits in the universe are entangled pairwise anyway. Nothing in how the universe looks to us depends on being able to do the types of entanglement proposed for useful quantum computers.

Given that spacetime and gravity appear to be the hydrodynamics of quantum entanglement at large scale, it seems a far more interesting experiment would be to entangle trillions of qubit pairs between physically separate quantum computers, and see if they are connected by a wormhole, and whether laser interferometers can detect path-length distortions in their vicinity. That might, for the expenditure of reasonable amounts of money, give us gravity experiments we can do at low energy and laboratory scale. That would be far more interesting and useful than a practical quantum computer that is always ten years away.

Comment #217 October 25th, 2019 at 8:46 pm

MatrimC #214: For the approximation problems that you’re talking about, exactly what quantum algorithm would you propose to use? Why do you think it would be faster than available classical algorithms?

Comment #218 October 25th, 2019 at 9:07 pm

Eric Cordian #216: But in quantum mechanics as we’ve had it from 1926 to the present, there is no indication—none, zero—that the amplitudes are discretized in any way. So if the effort to build scalable QCs were to lead to the discovery that they are discretized, wouldn’t that be a once-per-century-level revolution in physics? And wouldn’t you be thrilled to have your worldview so dramatically vindicated? Why then don’t you want this effort to move forward as fast as possible? As the kids say these days, “smh”

Comment #219 October 25th, 2019 at 9:32 pm

Well, I guess I’m not going to get to post the speckle pattern that I’m currently generating using a simulator (the first one ever, I’m sure).

I’m wondering, will I be able to tell the difference (visually) between a 16-qubit version of the quantum supremacy experiment and a circuit that produces uniformly random samples?

Is that why no one has bothered to do this?

Comment #220 October 25th, 2019 at 9:44 pm

Job #219: If you were just eyeballing the samples that were output, you obviously couldn’t tell them apart from uniformly random. If, on the other hand, you were eyeballing the entire observed probability distribution, after the 99.8% noise had been filtered out, you certainly could tell it apart from the uniform distribution (and could also visually compare it against the predicted distribution).

But this seems like an absolutely terrible criterion for judging algorithms. It’s like, can you tell whether an algorithm to factor 300-digit integers has succeeded or not, just by eyeballing the output for a couple seconds? If so, then I may have something to sell you… 🙂

Comment #221 October 25th, 2019 at 9:57 pm

The simulator will have no noise – although it does have that capability, which is cool.

It’s why i’m asking, naturally i don’t expect to be able to distinguish 0.2% non-uniformly random values among uniformly random values.

The image will capture all of the possible outputs for a 16-qubit machine, such that the full noiseless probability distribution is visually represented in terms of pixel brightness.

Of course it will still be driven by a random number generator, since the experiment itself requires evaluating random circuits.

I’m asking whether the resulting image will stand out among images of uniformly random circuits. If not, then i’m wasting my time.

Yes i know that well. In fact i had a reply to JRStern above about how period finding is still just picking random numbers. This is an unfortunately timed follow up.

I’m trying to generate the speckle pattern for the supremacy experiment, and I’m wondering if it has any visually discernible features when completely noise-free.
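For what it’s worth, one can preview what such an image should look like without running a simulator at all, again using Porter-Thomas (exponential) statistics as a hypothetical stand-in for the noiseless 16-qubit output distribution (everything below is an assumption for illustration, not real circuit output):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2 ** 16  # all possible outputs of a 16-qubit circuit

# Hypothetical stand-in for a noiseless simulator's output distribution:
# ideal random-circuit probabilities are approximately Porter-Thomas
# (exponentially) distributed, which is what produces the "speckle".
probs = rng.exponential(1.0, N)
probs /= probs.sum()

# Map the 2^16 outcome probabilities onto a 256x256 image,
# with pixel brightness proportional to probability.
speckle = (probs / probs.max()).reshape(256, 256)

# For comparison: the uniform distribution gives a featureless image
# in which every pixel has exactly the same brightness.
flat = np.full((256, 256), 1.0 / N)

print(speckle.std() > 0, flat.std() == 0)  # -> True True
```

So the full noiseless distribution does stand out against the uniform distribution (grainy vs. constant gray); what it won’t visibly stand out against is the speckle image of a *different* random circuit, which has statistically identical graininess in a different pattern.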

Comment #222 October 25th, 2019 at 10:21 pm

“Except, this too turns out to be false: John Martinis explicitly confirmed for me that once the qubits are calibrated, you can run any circuit on them that you want.”

You can run any circuit you want, but how good will that run be?

Comment #223 October 25th, 2019 at 11:40 pm

What evidence do you have that the Google task cannot be done in classical polynomial time?

Comment #224 October 25th, 2019 at 11:56 pm

Eric Cordian #216:

How exactly do you expect anyone to come up with the proper replacement for QM that you claim exists, without experimental evidence as to which cases it fails in, and what happens instead in those cases?

Comment #225 October 26th, 2019 at 1:42 am

Comment #226 October 26th, 2019 at 5:10 am

Eric Cordian #216: Not only is there no evidence that amplitudes are discrete, there is not even a formulation of quantum mechanics where they are discrete. I illustrate some of the difficulties in writing down such a formulation here.

Also, you got this wrong: “Most qubits in the universe are entangled pairwise anyway.” It’s the other way around: producing a pair of particles that are entangled with each other and nothing else requires huge experimental effort. If you let Nature run its course, one of the entangled particles will scatter off a third particle, which will also get entangled with the first two. This third particle will scatter off a fourth one, and so on. This massive spread of entanglement into the environment is called decoherence, and it is the main source of noise in quantum experiments.

Comment #227 October 26th, 2019 at 5:14 am

Nike Dattani #222:

You can run any circuit you want, but how good will that run be?

From their published data, you can estimate exactly how good it will be as a function of circuit depth. For example, if the depth is 20 then you can expect a fidelity of about 0.2%, meaning that getting enough samples to extract the signal will take about 3 minutes.
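A back-of-envelope check of these numbers, using the gate counts and simultaneous error rates reported in the Google paper (53 qubits, roughly 1,113 single-qubit and 430 two-qubit gates at depth 20), and treating the ~5,000 samples/sec rate and the 2-sigma threshold as illustrative assumptions, roughly reproduces both the ~0.2% fidelity and the few-minute runtime:

```python
import math

# Back-of-envelope check, using figures reported in the Google paper:
# 53 qubits, depth-20 circuits with about 1,113 single-qubit and 430
# two-qubit gates, simultaneous error rates e1 ~ 0.16%, e2 ~ 0.62%,
# and readout error ~ 3.8%.  Total fidelity is roughly the product of
# the individual gate/readout fidelities:
e1, e2, e_read = 0.0016, 0.0062, 0.038
n_1q, n_2q, n_qubits = 1113, 430, 53
F = math.exp(-(e1 * n_1q + e2 * n_2q + e_read * n_qubits))
print(f"estimated fidelity ~ {100 * F:.2f}%")  # on the order of 0.2%

# Resolving an XEB signal of size F at ~2 sigma takes about 4 / F^2
# samples (each sample's N*p(x) statistic has variance ~1 under the
# uniform null).  At an assumed rate of ~5,000 samples/sec:
samples = 4 / F ** 2
print(f"samples ~ {samples:.1e}, time ~ {samples / 5000 / 60:.0f} min")
```

The exact constants are illustrative, but the scaling is the important part: fidelity falls off exponentially with total gate count, and the required number of samples grows like 1/F².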

Comment #228 October 26th, 2019 at 5:18 am

A #223:

What evidence do you have that the Google task cannot be done in classical polynomial time?

Section (3) of the post was all about that question. Did you read it?

Comment #229 October 26th, 2019 at 5:21 am

Ok, that’s probably enough intellectual whack-a-mole for now—thanks for playing everyone, and until next time! 🙂

Comment #230 October 26th, 2019 at 11:50 am

Comment #231 October 27th, 2019 at 11:13 am

[…] Aaronson not only has made two great posts on these and many other aspects of the claim, he independently proposed in 2015 the sampling task […]

Comment #232 October 31st, 2019 at 12:41 pm

[…] Aaronson detailed in his blog, a comparison in terms of elementary operations, i.e. using FLOPS (floating-point operations) as […]

Comment #233 November 1st, 2019 at 10:51 am

[…] pointing out that researchers John Preskill—who coined the term “quantum supremacy”—and Scott Aaronson accept Google’s understanding of quantum […]

Comment #235 November 2nd, 2019 at 1:07 pm

[…] the outcome of this race is an inevitable conclusion. Here is an example, in translation, from one of the leading experts in the field of quantum computers, Scott Aaronson: […]

Comment #236 November 5th, 2019 at 12:07 am

[…] The problem in question, as was the case with Deutsch’s Problem, is a problem created specifically to demonstrate this and, as such, was not aimed at practical applications. In this regard it is worth noting that Scott Aaronson, one of the leading figures in the field, has a randomness-certification protocol that can make use of Google’s problem. […]

Comment #237 November 27th, 2019 at 7:34 am

[…] now, the historical allusion that I find most intriguing is one UT’s Aaronson made in a blog post at his own site. Ruminating on Google’s research paper and IBM’s reaction to it, he concluded that IBM […]

Comment #238 December 18th, 2019 at 2:54 am

[…] an experimental confirmation of what quantum information theory predicted in the early 1990s: in Scott Aaronson‘s words, “an outcome that’s at once expected and mind-boggling, conservative and […]
