My “Quantum Supremacy: Skeptics Were Wrong” 2020 World Speaking Tour

(At a few people’s request, I’ve changed the title so that it no longer refers to a specific person. I try always to be accurate, amusing, and appropriate, but sometimes I only hit 1 or 2 of the 3.)

As part of my speaking tour, in the last month I’ve already given talks at the following fine places:

World Economic Forum at Davos
University of Waterloo
Perimeter Institute
UC Berkeley
University of Houston

And I’ll be giving talks at the following places over the next couple of months:

Louisiana State University
Pittsburgh Quantum Institute

For anyone who’s interested, I’ll add links and dates to this post later (if you want that to happen any faster, feel free to hunt them down for me!).

In the meantime, there are also interviews! See, for example, this 5-minute one on Texas Standard (an NPR affiliate), where I’m asked about the current state of quantum computing in the US, in light of the Trump administration’s recent proposal to give a big boost to quantum computing and AI research, even while slashing and burning basic science more broadly. I made some critical comments—for example, about the need to support the whole basic research ecosystem (I pointed out that “quantum computing can’t thrive in isolation”), and also about the urgent need to make it feasible for the best researchers from around the world to get US visas and green cards. Unfortunately, those parts seem to have been edited out, in favor of my explanations of basic points about quantum computing.

More Updates:

There was a discussion on Twitter of the ethics of the “Quantum Bullshit Detector” Twitter feed—which dishes out vigilante justice, like some dark and troubled comic-book hero, by rendering anonymous, unexplained, unaccountable, very often correct albeit not infallible verdicts of “Bullshit” or “Not Bullshit” on claimed quantum information advances. As part of that discussion, Christopher Savoie wrote:

[Criticizing] is what we do in science. [But not calling] “bullshit” anonymously and without any accountability. Look at Scott Aaronson’s blog. He takes strong positions. But as Scott. I respect that.

What do people think: should “He takes strong positions. But as Scott.” be added onto the Shtetl-Optimized header bar?

In other news, I was amused by the following headline, for a Vice story about the MIP*=RE breakthrough: Mathematicians Are Studying Planet-Sized Supercomputers With God-Like Powers. (If I’m going to quibble about accuracy: only planet-sized???)

59 Responses to “My “Quantum Supremacy: Skeptics Were Wrong” 2020 World Speaking Tour”

  1. Anon Says:

    C’mon Scott: it’s so dishonest to title a post on your (very popular) blog “Gil Kalai was wrong” and then claim “Oh, I don’t actually believe that.” You can’t have your cake and eat it too.

  2. Scott Says:

    Anon #1: I’ll say openly and plainly that my good friend Gil was almost certainly wrong about quantum supremacy, his oft-expressed confidence in its impossibility misplaced. Indeed, I’ve already said as much in many other contexts! I just wouldn’t have thought myself of making that the title of my speaking tour. 😀

  3. Anon Says:

    Fair enough, though you can see how this reads a bit contrary to “I wish to clarify that I by no means endorse my friend’s title”.

  4. Scott Says:

    Anon #3: Edited, thanks! I have many, many laugh lines that seem to work great when I give talks, even if the wording isn’t perfect. But on a blog, possibly because it lacks voice intonation, people will leap to something you didn’t mean if the wording is even slightly off.

  5. Raoul Ohio Says:

    I’m trying to keep score on “QS is a done deal”:

    Does Gil think he is or was wrong?

    What about other contrarians?

    As a pretty sharp kid with a half century plus of studying math, physics, and computer science, but basically a spectator on QC, I am kind of waiting for the dust to settle. The fact that Scott says “yes” to QS is certainly a strong data point. But I am slightly concerned that Scott might be wearing his enthusiast hat as opposed to his impartial commentator hat.

  6. Scott Says:

    Raoul #5: Scalable QC is not a done deal until people are actually using it to break RSA and so forth. Or at the very least, until useful error-correction is demonstrated.

    Quantum computational supremacy, on the other hand, does now look to be a done deal, modulo tying up a few loose ends with the Sycamore experiment. And crucially, quantum supremacy is already something that Gil long theorized would be impossible. And Gil has acknowledged for months that the experiment (if upheld) refutes his view, and he’s been concentrating on trying to find flaws in the experiment. He’s repeatedly seized on candidate flaws only to withdraw his complaint after the relevant issue was explained.

    You don’t need to look to my authority or anyone else’s—you’re perfectly capable of forming your own judgment about the situation above!

  7. jonas Says:

    I don’t like that title either, but I admit that that is one of the more innovative solutions to finding a term to replace “quantum supremacy”.

  8. Justin Says:

    I actually heard your Texas Standard interview tonight just by chance. Very cool to spread the gospel around Texas.

    Regarding @BullshitQuantum, I’ve been disappointed by every twitter thread I’ve seen discussing the ethics, and especially by any calling for the account to unmask itself.
    The account is good for some laughs and reminds us all of the hype problem – both good things. And in the event the account has a false positive, people’s work and their confidence in their work should be able to stand up to a single-word tweet.

  9. Gil Kalai Says:

    Let me just add that indeed I disagree with Scott’s assessment of Google’s supremacy demonstration, and my judgment is that Google’s supremacy claims are incorrect.

    Scott and I participated in an interesting debate about it last December – here is the video. The whole debate was interesting, and my main point was raised at minute 35 and extensively discussed afterwards.

    For my position on the Google supremacy “demo” see my post The Google Quantum Supremacy Demo and the Jerusalem HQCA debate, and in the larger context of the quantum computer debate see my post Gil’s Collegial Quantum Supremacy Skepticism FAQ. (There you can also find links to my papers and other resources.)

  10. maline Says:

    For a while, Gil Kalai was sticking to his guns, with an argument that Google’s claimed results assume an error rate that is a sum of the rates for the individual gates, and that this is implausible. Although Google claims experimental support for this assumption from experiments with fewer qubits, Kalai claims that those results are “too good to be true” and can be explained away.

    Is this issue settled? Would you be interested in posting an analysis of the merits of this argument?

  11. Strong position as anon Says:

    Calling bullshit anonymously is fine, and when it is accompanied by reasoning not rooted in the identity of the caller, it should be encouraged.

    If we wanna address accountability, it’s time to remember that we currently assign none whatsoever for promulgating inflated, misleading, and even downright false claims outside of some very narrowly interpreted legalistic contexts.

    A private company can spend millions of dollars on PR and marketing implying that it has built a universal NP-complete-solving quantum computer that fits in your pocket, and as long as they don’t explicitly misrepresent this to private investors, they’re golden; if you’re a public company, you just need other people to do the misrepresentation for you. With Trump essentially having put the relevant parts of the SEC on gardening leave, this is only going to get worse.

    In science very few researchers have both the security and motivation to openly stand up to their university PR people to de-hype press releases.

    So more of this please.

  12. Scott Says:

    maline #10: Firstly, I should confess that I don’t know how to understand Gil’s most recent complaint—namely, “but the noise data matches too well the theoretical expectations for what the noise data should’ve looked like”—if he’s not raising the possibility of somebody having tampered with the data. More specifically, Gil’s complaint is that the total circuit fidelity just looks like the product of the fidelities of the individual gates. If a good experiment was supposed to have produced something other than that (what exactly??), then I don’t know how a bad experiment would produce what they reported. As far as I can see, only a faked experiment would produce it in that case.

    But my real response is simple: this is exactly what a good experiment should have produced! It’s consistent with a “null hypothesis,” where the signal you’re trying to extract indeed gets exponentially attenuated with the depth of the quantum circuit—that’s why these experiments are difficult, and that’s also why quantum error-correction will eventually be needed—but it does so in a way that’s smooth and predictable if you understand the performance of the individual gates, with no unexpected “global error conspiracies.” (And, as Gil well knows, so long as the signal goes down exponentially in this smooth and predictable way, however frustrating that is for near-term QC it’s ultimately good news for the scalability of QC, since it means that quantum error-correction should ultimately work.)

    Indeed, one of the great ironies here is that, if the total circuit fidelity had not gone down like the product of the gate fidelities, one could easily have imagined Gil harping on that as suspicious and noteworthy and as grounds for QC skepticism! 😀
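    [Editorial aside: the smooth, predictable attenuation described in comment #12 can be sketched numerically. The per-gate error rate and gate count below are illustrative placeholders, not Sycamore’s actual figures.]

```python
import math

# Toy model of the "null hypothesis" in comment #12: if each gate fails
# independently with a small probability, the whole-circuit fidelity is
# the product of the per-gate fidelities, i.e. it decays smoothly and
# exponentially with the number of gates. All numbers are made up.
gate_error = 0.006   # hypothetical per-gate error rate
num_gates = 430      # hypothetical total gate count

product_fidelity = (1 - gate_error) ** num_gates
exponential_form = math.exp(-gate_error * num_gates)

print(f"product of gate fidelities:  {product_fidelity:.4f}")
print(f"exp(-total expected errors): {exponential_form:.4f}")
```

    The two numbers agree closely: an exponentially attenuated signal, falling off in a way that is fully predictable from the individual gate performance, with no “global error conspiracies.”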

  13. Scott Says:

    Strong position as anon #11: I never said that what Quantum Bullshit Detector does isn’t fine! (Nor, to be fair, did you say I said that.)

    Like a police chief regarding Batman, I simply decline to take a public position right now. One might feel like Quantum Bullshit Detector is fine so long as its judgments remain, let’s say, at least 97% correct.

    In the interests of truth, though, when you write:

      In science very few researchers have both the security and motivation to openly stand up to their university PR people to de-hype press releases.

    … maybe I should point out that, back when I had no job security—i.e., back when I was a postdoc and then an assistant prof on a tenure clock—I was a lot more aggressive in criticizing QC hype than I am now (still always under my own name), and I also spent a lot more time doing it. 😀

  14. maline Says:

    Scott #12:
    I’m afraid I don’t understand the issue very well, but Kalai seems to think that what you call “global error conspiracies” are more or less inevitable, and that by finding only a simple exponential, the Google group has failed to quantify the true scaling behavior of these global effects.

    The basic points that a quantum state exists globally, and that the enormous dimensionality leaves a lot of room for interesting new kinds of errors, seem intuitive to me. Is there an easy-to-understand reason why we should expect only the product of gate errors?

  15. Scott Says:

    maline #14: The issue is, the published results are consistent with the hypothesis that Gil’s “global effects” simply don’t meaningfully exist—which is exactly the hypothesis many of us have held since the beginning. Now, if you put a prior probability of 0 on that hypothesis (as Gil sometimes seems to), then obviously no evidence can possibly budge the prior. For me personally, though, it does increase my probability on that hypothesis.

  16. mr_squiggle Says:

    //[Criticizing] is what we do in science. [But not calling] “bullshit” anonymously and without any accountability.//

    It would be great if that were true. Unfortunately though, it isn’t.
    In practice anonymous reviewers can get papers rejected on the most spurious of grounds.

  17. Justin Says:

    Scott #13: Does it really matter how accurate Quantum Bullshit Detector is?
    Unlike in the comic books, this vigilante justice doesn’t involve any violence or superpowered property damage.
    At this point, nothing would be funnier than if we found out the account really was giving some responses randomly.

  18. Ajit R. Jadhav Says:

    Look elsewhere, Scott, QBD was doing just fine the last time I checked.

    [The “Bourbaki” of the QM BS didn’t even run anything on my Outline document, which was posted for the public more than a year ago. Despite my explicit request to them. Some *good* sense!]

    Guess they’re doing better than establishing QS in the Industry/Public using the RCS protocol, IMO. [Yes, I could easily defend my Opinion.]

    PS: Hi Gil!

  19. AdamT Says:

    Scott #12,

    “… if he’s not raising the possibility of somebody having tampered with the data.”

    Not to stir the pot too much, but I regard Gil’s call and emphasis on the need for *blind* tests on his blog as implicitly raising exactly that possibility. Perhaps he might afford the possibility that the tampering could have been done unintentionally or subconsciously, but the call for *blind* tests would seem to me exactly what you’d want to remedy possible tampering.

  20. Scott Says:

    Justin #17:

      Does it really matter how accurate Quantum Bullshit Detector is?

    Like, it matters if you’re in a situation that I’m constantly in, of talking to people who truly and earnestly want to know which specific quantum information claims are or aren’t bullshit, and who (rightly or wrongly) place zero trust in their own judgment of such matters, and who are nice enough not to want to bug me every single time (thanks btw!). What do you think: should I recommend QBD to those people or not? How well should I trust QBD to remain on (or very close to) the path of truth and rightness?

  21. Job Says:

    If the noise ratio for the QS experiment had significantly deviated from the product of the underlying components, wouldn’t the system have been further calibrated or tweaked to address stuff like crosstalk, until it matched the expectation?

    I don’t find the noise model to be particularly surprising or revealing, given how much calibration was involved in the experiment.

    Wouldn’t anything other than the published results have meant that there was still some way to go?

    It’s the anthropic principle applied to a paper. 🙂

  22. Vampyricon Says:

    I personally think QBSD is bad. Not because of the anonymity or the fact that they could be wrong, but the fact that all they do is type out “Bullshit” or “Not bullshit”, and that’s that. You have to let people know why it is bullshit, or else people won’t learn how to tell whether something is bullshit or not without QBSD, and knowledge by revelation is not knowledge at all.

  23. Scott Says:

    Vampyricon #22: To give the QBD its due, were it capable of responses beyond “Bullshit” and “Not Bullshit,” it might tell us that all we need to do is apply a neural net or other machine learning algorithm to its responses, and then we’ll be good to go even on examples where it never rendered a verdict. Alas, 15 years of writing this blog have left me dubious about whether much of the audience interested in quantum information would indeed be able to generalize in such a way.

  24. Philip Calcott Says:

    Hi Scott,

    Do you have any dates for your Yale visit yet?

  25. anon Says:

    Apart from Davos, this seems to be a North American tour. Any chance you’d like to come across the pond over to London and make it a proper world tour?

  26. Gil Kalai Says:

    Maline (#10, #14), Job (#21), Scott (#6, #12, #15) and others, a few points

    1. The remarkable predictive power of the formula for estimating the overall fidelity as the product of the individual fidelities (Formula (77)) is indeed a point of disagreement. In my view, this predictive power (the ability to predict the failure probability of hundreds of elements in a device to within 10-20 percent) is unbelievable nonsense. Scott disagrees. Complete statistical independence is one of the issues. But another issue is that one needs to believe that the deviations for the individual errors are unbiased, which is also very unreasonable.

    2. Of course, the predictive power of Formula (77) is an independent miracle, which is not required for building quantum computers but is just needed for the indirect, specific extrapolation argument of the Google paper. In the videotaped discussion that I referred to, Scott indeed dismissed this claim (similar to his comments here), but other participants did realize that this is an issue and even offered some ways around it.

    3. The idea that by repeatedly measuring the quality of a gate (say) you can reach an unbiased estimate is also incorrect. (But it is a common misconception.) So is Job’s bolder idea (#21) that with enough calibration you can get such excellent predictions for the entire device.

    4. There are a few other (related) places where the Google experimental outcomes seem “too good to be true”, and this deserves further checking, and most importantly, carefully documented replications, mainly by the Google team itself.

    5. (As for the last sentence of Scott #12.) Of course, if there are cases where the experiment deviates from the theoretical expectation in an unreasonable way, this is also a bad sign and may suggest that the data is not reliable.

    It is like an alleged proof that P is not NP. If it is too good and seems to apply to 2-SAT, then that is a bad sign; and if the proof of some lemma is wrong, then that is also a bad sign. Also, we are not talking here about QC skepticism: trying to find a mistake in an alleged P≠NP proof is not P≠NP skepticism.

    6. Scott #6 (about me): “He’s repeatedly seized on candidate flaws only to withdraw his complaint after the relevant issue was explained.” Raising candidate flaws is the way a scientific work should be reviewed. However, since the actual publication of the Google paper (and the lifting of the embargo), all my candidate flaws still stand.

    7. Scott #15: Err… you do not need a “global effect” to go well beyond the 10-20 percent prediction; you need miraculous, out-of-this-world independence and symmetry to be in this range.

  27. Greg Kuperberg Says:

    Scott #12 – Sometimes experimental results are unintentionally faked because of bad data analysis. For example, that was the resolution of the claim from an experiment called OPERA that seemed to find faster-than-light neutrinos. In fact, that claim would have upended so many apple carts that many people immediately said that the data analysis must be wrong somehow, before they had any idea how it could be wrong. Or a more subtle example is the BICEP2 experiment that seemed to claim dramatically more direct evidence for cosmological inflation. Even though most cosmologists believe inflation, they didn’t expect it to be that easy to see, and the BICEP2 experiment was resolved with the conclusion that they had not correctly modeled foreground dust (i.e. dust in our own galaxy that is merely thousands of light years away rather than 13 billion). Anyway, in both cases scientists running the experiments were not deliberately dishonest, but they fell victim to what I might call “drama bias”, i.e., believing that your experience is more spectacular than it is actually likely to be. (E.g., believing that you have coronavirus when you probably just have the flu, etc.)

    As best I understand Gil, I think that what he is saying is that error in complicated, real-life systems has many black swan events and simply does not follow simple patterns. E.g., it would be preposterous to assume a simple noise product formula to estimate the probability that space rockets blow up and crash. I think that he is saying that this complex systems principle applies to Google Sycamore, and that therefore the data analysis must in some way be wrong. (Again, let’s say not intentionally faked, but messed up in some way due to drama bias.) This argument strikes me as weak; in particular, much too weak to declare that the Google Sycamore paper must be wrong. I don’t see why Google Sycamore is complicated in any way that necessarily makes its errors irregular. They might well be somewhat irregular at some local parts of the circuit, but I don’t see any need to believe in dramatic irregularities. Meanwhile if you take many stages of the circuit, the mixing properties of the circuit itself should be homogenizing the error. (In fact making it less local and worse in that respect; but you cannot increase error as measured by total variation distance without adding new error.)

    Pervasive but honest scientific bias is a well-known problem, for instance, in medical research. I certainly think it’s an overreaction to demand steps like double-blind procedures and quantum circuits chosen by third parties before believing the Google Sycamore experiment. On the other hand, since this is a nascent computer technology, there is nothing wrong with that as part of future research. Or before that, I would be glad to see more conventional independent replication of the experiment.

  28. Haelfix Says:

    Anonymity is absolutely crucial (I feel like the case for this was well understood back in the 90s but somehow has lost its way in recent years). First off, b/c it allows truth to overcome entrenched institutional barriers. But it also has important sociological applications in some cases.

    For one, it allows you to say something controversial and to be wrong without the professional repercussions. That may sound crazy or undesirable, but when you are looking at a crazy result (say Heisenberg’s uncertainty principle, or more modernly something not altogether concrete like ER = EPR) there is a case to be made. Those examples weren’t made anonymously, b/c the physicists who discovered them were already established or at least had powerful backing. But suppose you were a grad student in some place where you didn’t have much institutional support. Would you really subject yourself to that level of scrutiny and critique, or would you send out feelers first to gauge the reaction and to check to see if there wasn’t something obvious that you were missing?

    Much more commonly, there is the case of answering a claim made by a colleague you might respect, but whose error is so egregious that you might feel embarrassed correcting them under your real name. That frequently happens. An anonymous comment here gets them to think about their mistake so that they can discreetly retract the paper without having to write a potentially soul-crushing half-page reply to xyz et al on the arxiv about a missed minus sign…

  29. Douglas Knight Says:

    Scott, you don’t take strong positions.
    Have you ever called anything bullshit?
    QBD gives conclusions without reasons, but Scott, you give reasons but refuse to draw conclusions, demanding that people read between the lines.

    Here you called D-Wave bullshit, but refused to do so in a short quotable way. Here you said that it’s a bad idea to call it bullshit. Maybe, but it’s striking how close that is to the current situation. If it’s good to save time by directing journalists to QBD, why wasn’t it good to save time by giving journalists short answers of the form that QBD gives?

    (Here are all the examples I found of you calling bullshit: sociology, S*dles on QC, and this. Pretty obvious pattern.)

    Doesn’t that Christopher Savoie quote sicken you? By “accountability” he means retaliation. At least, that’s the context of the tweet he’s responding to. Is this clarity better or worse than all the other people talking about “ethics”? In any event, that is another reason not to include it on your masthead.

  30. maline Says:

    Gil #26:
    Thank you for joining the discussion!
    Would you spell out in more detail what sort of errors you think are more probable than was estimated?

    Also, about statistical independence: if the gate error probabilities were positively correlated, that would increase the probability of a successful calculation (all gates error-free). So is it that you worry there might be negative correlations, i.e. that some gates might be more likely to fail specifically when others function properly? Is this plausible?

    Scott #15:
    How strongly do the experiments at Google support the “product of errors” model? That is, supposing that the true scaling of the fidelity is worse than exponential as Gil believes, what is the probability that the tests they did would fail to show that? By what factor do you think Gil should update?

    Also, setting a zero prior on almost anything is terribly irrational, and I don’t think it’s fair to accuse Gil of that. In fact, he has described fairly mild conditions for “what it would take to convince him”.

    Greg #27:
    I don’t think the OPERA “FTL neutrino” story is an example of bias. From the beginning, the OPERA team made clear that they were almost certain the result was somehow erroneous. The only question was whether to simply ignore it – avoiding the inevitable embarrassment when the error was found – or to publicize the issue in the hope of clearing things up. I think that choosing the latter shows great integrity on their part.

  31. ryan williams Says:

    I suggested calling it the “See, I Told You So” tour… 🙂
    Was great to have you at Music Night!

    I enjoy the Quantum BS detector. It’s utilizing the limits (and really, the entire vibe) of twitter nicely.

  32. Scott Says:

    Douglas Knight #29: It’s true that calling things “bullshit” is not really my style. I’m a professor, so why should I give you one word if I could give a whole devastating disquisition? 😀

    More seriously, some of the issues are subtle, and people constantly want to round them down to soundbites: “is this a real quantum computer or isn’t it?” “is the HHL algorithm an exponential speedup or not?” “does a QC try all the answers in parallel or does it not try them all in parallel?” But rounding down to soundbites is exactly how people are led astray. Enlightenment dawns only when the person finally understands, e.g., that a device can be a computer that exhibits quantum-mechanical behavior without clearly being able to harness that behavior to outperform a classical computer, and that whether or not to call such a device a “quantum computer” is a question of words rather than of reality.

  33. Greg Kuperberg Says:

    Gil #26:

    But another issue is that one needs to believe that the deviations for the individual errors are unbiased which is also very unreasonable.

    Given the mixing properties of the quantum circuit, I don’t understand why anyone needs to believe any such thing. As a toy model, suppose that you implement a randomly chosen classical reversible circuit to make a random-ish permutation on the 2^53 states of 53 bits. Suppose that a handful of errors sprinkled throughout are biased in favor of changing 0 to 1. Then because the circuit is mixing, so what?

    Besides, actually even if the circuit weren’t mixing, if you only assume that the errors are approximately independent — the approximation doesn’t even have to be all that high quality — then you would get an approximate product formula for the ratio of signal to noise for the final output.

  34. Greg Kuperberg Says:

      I think that choosing the latter shows great integrity on their part.

    Except that the OPERA team didn’t think that it showed great integrity. According to Wikipedia, “A vote of no confidence among the more than thirty group team leaders failed, but spokesperson Ereditato and physics coordinator Autiero resigned their leadership positions anyway on March 30, 2012.” I don’t think that anyone saw it as being nearly as bad as faked data, and you could be right that they never actually disbelieved special relativity. Nonetheless, I think that drama bias distracted them away from common sense. They could have redoubled their efforts to find mistakes that were ultimately their mistakes, rather than (at best) asking the rest of the world to find their mistakes for them.

  35. maline Says:

    Greg #34:
    The problem with keeping quiet about such a result is that, on the tiny chance that it actually is real, it is much too important to ignore. And there always is such a chance, simply because you are not allowed to set your priors to zero! It was a true double-bind situation.

    It’s a tragic fact about the world that even if everyone did everything right, raising an international furor about something that turns out to be a mistake must necessarily lead to someone losing their job. And again, that result was obvious from the beginning.

  36. David Shaw Says:

    I think QBD is an unfortunate development. The relatively good use that the academic community puts Twitter to is a welcome change from the ‘troll-like’ debates common elsewhere. Self-appointed, anonymous, undebated critique is not the way forward.

    I don’t think the accuracy of QBD is anywhere near ‘97%’. The issues are usually much more nuanced and any single call an oversimplification. IMHO the calls betray a hostility to anything with a corporate connection, or any political backing. The field needs a more mature approach than this.

    As a particular example, QBD routinely calls BS on anything claiming there will be useful quantum applications within 10 years. On that basis, all of Scott’s panels and presentations in the last couple of years are BS, because he argues for the usefulness of provably random numbers. Scott doesn’t use Twitter (as far as I know), but if I promote an article/video referencing Scott’s proposed protocol, it will be called BS. Does that seem right?

  37. Scott Says:

    David Shaw #36: I’d make the following distinction. People working toward commercial applications of NISQ QCs within the next decade, or hoping that some such efforts will succeed, is not bullshit. People confidently claiming that there will be such applications, or making plans based around that assumption, is bullshit. This applies to my certified randomness protocol as much as to quantum simulation or the other more speculative applications.

  38. Greg Kuperberg Says:

      The problem with keeping quiet about such a result is that, on the tiny chance that it actually is real, it is much too important to ignore.

    See, that is exactly drama bias. I don’t think that any reasonable person would have advised OPERA to outright ignore their neutrino speed anomaly. No, the right question is how they should not ignore it: Should they redouble their efforts to debug their equipment? Or should they make a public announcement that there is a small chance that they saw something very important? Using Hollywood’s laws of probability, the answer is #2. Using real-life probabilities, the answer is #1. Faced with overwhelming evidence that their equipment was miscalibrated, they ran to the press and said that they were in a quandary and who knows, it just might be revolutionary.

      raising an international furor about something that turns out to be a mistake must necessarily lead to someone losing their job

    But it wasn’t an international furor, it was an international humiliation. Note that it was the staff of OPERA itself, and not any higher authorities, who were upset. Those two people resigned because they had lost the confidence of the people under them. Probably some of the OPERA physicists and lab techs asked everyone to stay calm, and were ignored. For their part, these two PR men agreed that it had been sensationalized — only that it was the media’s fault and not their fault.

  39. Craig Gidney Says:

    AdamT #19

    > the call for *blind* tests would seem to me exactly what you’d want to remedy possible tampering.

    Part of the supremacy experiment actually was done blind, though not intentionally.

    The supremacy paper mentions “verification circuits”, which are circuits that are identical to the supremacy circuits except the verification circuits use a slightly different ordering of the two qubit gates. This ordering introduces a weakness that can be exploited to perform simulation much more efficiently.

    The verification circuits were not originally intended to be verification circuits. They were intended to be the actual supremacy circuits. We didn’t know the ordering was so important; someone just picked something that looked reasonable. It was only after we collected the experimental data for these circuits that the weakness was noticed.

    So of course we fixed the ordering. But we also updated our simulation software so it could compute the fidelity of the data that was collected before we knew how to simulate the associated circuits. And the fidelity results for the accidentally-blinded data landed on the expected trend line, as you can see in the paper.

    In fact, there is supremacy data that is *still* blinded. Along with the paper we published the data collected for the circuits that we still don’t know how to reasonably simulate. I fully expect that if people make more advances on simulating supremacy circuits, and are able to compute the fidelity of that data, then the result will again land on the trend line.

  40. Scott Says:

    Philip Calcott #24:

      Do you have any dates for your Yale visit yet?

    Haven’t bought tickets yet, but looks like Friday April 24.

  41. Scott Says:

    anon #25:

      Any chance you’d like to come across the pond over to London and make it a proper world tour?

    I’d love to visit London again. My travel schedule is already completely packed for the spring, but summer or fall might be possibilities.

  42. Justin Says:

    Scott #20: I don’t think you should trust QBSD for anything, nor recommend it for anything but a joke. Maybe this is where I diverge from people who think QBSD is wrong or unfortunate: anybody whose opinion of an article might be influenced by QBSD must not be serious enough for it to matter. As long as the account sticks to two-word answers, it’s hard to take it seriously, so I just take it for a laugh and an anti-hype reminder.

    And here are some updates:
    QBSD weighs in on this blog post:

    Clarifies its process:

    And the debate with Chris Savoie continues:

    And Scott, you as well as Chris Savoie are now quoted in QBSD’s bio:

  43. Karen Morenz Says:

    I’m intrigued by this idea of blind experiments: why are we only worried about that in the case of Google’s quantum supremacy demo? Shouldn’t it be just as much of an issue in any experiment where oodles of data analysis are required before a result pops out?

    Re: QBD – I don’t see a problem with it. A QBD reading is like a single qubit reading; it doesn’t quite mean nothing, but you also probably shouldn’t bet your life on it. It’s just Twitter; no one should take Twitter too seriously. As for accountability, this is the internet: anyone could be lying about who they are anyway. I mean, I only have to type my name and email into these boxes on this site, and anyone could have my email since it’s publicly available… I doubt Scott is running any text analysis to determine whether this is really me. I agree with what Douglas Knight #29 said: by accountability, Savoie seems to mean the possibility of retaliation.

  44. Jason Says:

    Scott, I’m curious if you’ve seen the recent preprint arXiv:2002.07730 showing that tensor network methods can find a low-entanglement approximation for random circuit wave functions which have a similar fidelity/cross-entropy to the results of the quantum supremacy experiments. I’m not sure if I’m missing some subtlety, but it seems to me that this is a much more serious threat to the supremacy claim than, say, the IBM response.

  45. Scott Says:

    Jason #44: Yes, of course I saw that; I also had an email correspondence with colleagues about it. A crucial point to understand (and it’s in the paper, but not clearly emphasized) is that the quoted simulation times, just a few minutes for quantum circuits with 54 qubits and depth 20, assume Controlled-Z gates rather than iSWAP-like gates. Using tensor network methods, the classical simulation cost with the former is roughly the square root of the simulation cost with the latter (~2^k versus ~4^k for some parameter k related to the depth). As it happens, Google switched its hardware from Controlled-Z to iSWAP-like gates a couple of years ago precisely because they realized this—I had a whole conversation about it with Sergio Boixo at the time. Once this issue is accounted for, the quoted simulation times in the new paper seem to be roughly in line with what was previously reported by, e.g., Johnnie Gray and Google itself. Thus, this does not seem to pose a threat to Google’s quantum supremacy claim as it currently stands.
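    To make the square-root relationship concrete, here is a toy sketch (the parameter k standing in for the depth-dependent exponent is an illustration, not the papers’ exact notation):

```python
# Illustrative only: if tensor-network contraction cost for a CZ-gate circuit
# scales like 2**k for some depth-related parameter k (an assumption for this
# sketch), then the cost for an iSWAP-like circuit scales like 4**k, i.e. the
# square of the former.
k = 20
cost_cz = 2 ** k       # ~1e6 contraction steps (illustrative units)
cost_iswap = 4 ** k    # ~1e12 steps: the square of cost_cz
assert cost_iswap == cost_cz ** 2

# So a "few minutes" CZ simulation can correspond to a vastly more expensive
# iSWAP simulation once the cost is squared.
speedup_ratio = cost_iswap / cost_cz   # equals 2**k, about a million for k = 20
```

    This is why quoting simulation times for CZ circuits says little about the iSWAP-like circuits Google actually ran.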

  46. age bronze Says:

    I’m a bit skeptical about the meaning of the theoretical hardness underlying this:
    I do not doubt that the problem of calculating probabilities is hard. But the same problem is hard even for a classical circuit receiving random bits: calculating the exact output probability of a polynomial-sized circuit on random input bits is #P-complete. Yet we run such circuits all the time.

    Sampling is an order of magnitude easier, and can’t be reduced to probability calculation without an exponential number of samples.
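    The distinction can be made concrete with a toy sketch (the hash-based “circuit” below is just a stand-in for an arbitrary polynomial-size classical circuit, not anything from the papers under discussion):

```python
import hashlib
import random

n = 16  # number of input/output bits, kept small so the brute-force part finishes

def circuit(x: int) -> int:
    # Stand-in for a fixed poly-size classical circuit: hash the input and
    # truncate to n output bits.
    digest = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big") % (1 << n)

# Sampling the circuit's output on uniform random input bits is trivial:
# draw random bits and evaluate the circuit once.
sample = circuit(random.getrandbits(n))

# Computing the EXACT probability of one particular output, by contrast,
# naively requires summing over all 2**n inputs (and in general the problem
# is #P-hard, so no shortcut is expected for arbitrary circuits).
z = circuit(0)  # an output value known to occur at least once
count = sum(1 for x in range(1 << n) if circuit(x) == z)
prob = count / (1 << n)
assert prob >= 1 / (1 << n)  # z occurs at least for x = 0
```

    Sampling costs one circuit evaluation; exact probability calculation costs 2^n of them. The quantum supremacy hardness arguments are precisely about closing this gap for quantum circuits, i.e. showing that even approximate sampling is classically hard.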

    I haven’t seen a good hardness result that applies to the random quantum circuit, that doesn’t also apply to a normal classical circuit with random bits.

    Yes, there’s the result that getting a good linear cross-entropy is equivalent to getting high-probability outputs. But this problem, getting high-probability outputs, phrased for random classical circuits instead, is also unsolved, and probably hard to solve, for a deterministic computer.

    Given that all the suggested algorithms for simulating the quantum circuits are deterministic, it’s not surprising they aren’t efficient.

    I’m just not convinced that there doesn’t exist some randomized algorithm with an even slightly better-than-uniform chance of generating a high-probability output for a random circuit.

    Yes, the burden of proof might be on the skeptics’ side. But I think it’s just a matter of time until someone notices that, for a subset of circuits, guessing some output better than chance is possible. These circuits aren’t cryptographic functions, yet the hardness assumptions are as strong as those of theoretical cryptography. I don’t think this will stand.

  47. Scott Says:

    age bronze #46: Have you read a single one of the papers we wrote about this? Like, getting evidence that it’s hard for a classical computer to approximately sample the QC’s output distribution—as opposed to merely calculate the probabilities—was the entire frigging point of all the theory that we did in this subject over the last decade. The case is still not airtight, but if you can’t even learn the basics then there’s nothing to respond to.

  48. age bronze Says:

    Scott #47: I did read it. The best reduction for sampling a random quantum circuit is far from airtight, as you said yourself. I mainly have a problem with the assumption that a random circuit is hard to sample, as opposed to a specific circuit, which I am convinced is hard to sample.

    There are no good reductions from a random circuit to a specific circuit. And the random-circuit reductions assume there’s no way a classical computer can guess even slightly better than random. I am aware of the current results; I just don’t think they will stand, because assuming there’s no sufficiently large subset of circuits for which you can guess slightly better than random sounds wrong to me.

  49. age bronze Says:

    Scott #47: I have good reason to think that some subsets of random circuits can be guessed better than random: if that weren’t the case, physics would have no chance of giving any predictions, because I bet there is a reduction from random configurations of particles to random circuits. Yet many physical theories exist that predict properties of such systems better than a random guess, using just classical computation.

  50. Scott Says:

    age bronze #48: OK then. If you read my paper with Sam Gunn, you presumably know that any fast classical algorithm to spoof Google’s benchmark, even by a little, in expectation over a random choice of circuit, would imply a fast classical algorithm to estimate specific amplitudes in such random circuits a little better than the trivial estimator. So presumably you’re committed to the belief that such a classical estimation algorithm exists, and there’s little I can suggest for you at this point except to go and look for it! 🙂

  51. age bronze Says:

    I was looking at the old QUATH / HOG paper, and wasn’t up to date with the new paper.

    About the new paper, I think you missed a little subtlety: XQUATH requires *distinct* samples, whereas the 30 million samples quoted from the Google paper were never said to be distinct. You should talk to them and find out how many were distinct. The appendix says “they needed 4 million samples, much less than the 30 million they did take,” while the original paper did not say those 30 million samples are distinct, and I’m quite sure they were not.

    XQUATH requiring distinct samples does tighten up the chances of spoofing it. But I’m not really sure how you count the cases where not enough distinct samples exist: the paper seems to neglect the chance of drawing a random circuit for which XHOG is unsolvable because k distinct samples don’t exist.

    In the paper, you say that with k random Feynman paths the error will decay exponentially with the number of gates. Is this the worst case, or the average over random circuits? What is the probability of drawing a circuit for which the error from k random Feynman paths does not decay exponentially with the number of gates? Any references on that would be great.

    In general, I think there’s a non-negligible chance of drawing a “weak circuit,” for which fast classical algorithms just work. I haven’t seen compelling proofs that such weak circuits don’t exist, or any other analysis that says something like “the probability (over random circuits) that algorithm A works is P.” A circuit could also be weak only in the sense of having a slight bias toward outputs with some property, in which case just sampling randomly with that bias would be enough. Such things could happen without contradicting any known result, so long as the intersection of the weak circuits with the circuits covered by those results is empty.

  52. Scott Says:

    age bronze #51: There are 2^53 possible samples. By the usual birthday statistics, we’d expect to see the first collisions after taking about 2^26.5 samples. 30 million samples is ~2^25, which is just barely under that bound—so, they might or might not have seen any collisions. More importantly, even if they did see collisions, it would’ve been a vanishingly small number, way too small to have any effect on the final result they reported.
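    The birthday-bound arithmetic can be checked in a few lines (a sketch, treating the samples as roughly uniform over the 2^53 possible 53-bit strings):

```python
import math

N = 2 ** 53           # possible 53-bit output strings
k = 30_000_000        # samples collected in the experiment

# 30 million samples sit just below the birthday threshold of ~sqrt(N) = 2**26.5.
assert 24 < math.log2(k) < 25.5
assert math.isclose(math.log2(N) / 2, 26.5)

# Expected number of colliding pairs among k roughly uniform samples:
expected_collisions = k * (k - 1) / (2 * N)
# ~0.05, so most likely no collisions at all, and at worst a handful.
assert expected_collisions < 0.1
```

    (For Porter–Thomas-distributed outputs the collision rate is somewhat enhanced relative to uniform, but still of the same negligible order here.)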

    Even if it has no effect on the current results, though, whether to require collision-freeness in the samples or not is an interesting technical issue, one that Sam and I (and before that, Lijie Chen and I) went back and forth on. If we care only about the expected performance on Linear XEB, then requiring collision-freeness is unnecessary and just feels like an ugly complication. If, on the other hand, we want to set a threshold Linear XEB score, and assert that a fast classical algorithm has only a negligible probability of exceeding that threshold … well then, it’s easy to see that collision-freeness does become important! In the end, the latter consideration won the day for us.

  53. age bronze Says:

    I just realized that while your paper models the fidelity as a constant (b in the paper, where one looks for heavy outputs with probability > b/2^n), the original Google paper models its fidelity as [1 − e_1/(1 − 1/D^2)]^m, where m is the circuit depth. So the error in Google’s QC is predicted to decay exponentially with the number of gates, while XQUATH assumes a constant (the 1.001 fidelity quoted from the Google paper actually comes from such an exponential in m, and isn’t a constant as your paper assumes).

    This raises the question of whether k-path Feynman sampling wouldn’t reach a similar fidelity. As you said, its error decays exponentially with the number of gates, but so does Sycamore’s. It looks like there should exist a k for which the k-path Feynman approximation reaches the same expected fidelity as Sycamore.

  54. Scott Says:

    age bronze #53: This is an issue that’s already been “rediscovered” and then re-addressed probably at least 20 times in the comments of this blog! Yes, of course, circuit fidelity decays exponentially with depth if you don’t error-correct. Indeed, in a simple model, it might even decay just as fast as the advantage over random guessing achieved by the “random Feynman path” algorithm—if we ignore the fact that the exponential governing the fidelity has a much more favorable base.

    This is exactly the reason why we don’t call Sycamore a “scalable quantum computer.” It’s also why all serious QC researchers agree that error-correction and fault-tolerance will ultimately be needed for scalability.

    In the meantime, what can be done now is to achieve a circuit fidelity of ~0.002 with n=53 qubits and a depth of d=20. And while ~0.002 looks small, the key point for our purposes is that it can be distinguished from 0 using a number of samples that was feasible for Google to take (indeed, in about 3 minutes). And once the fidelity has been distinguished from 0, my claim is that, in some sense, the burden shifts to those who claim that 2^53 ≈ 9 quadrillion amplitudes were not harnessed to do a computation. Those people now have the burden of explaining: then how was the measured Linear XEB score obtained? What’s your alternate theory of how it happened, which doesn’t invoke quantum computation but also doesn’t invoke ~2^53 classical computation steps? This is where (e.g.) my and Sam Gunn’s hardness result becomes relevant—by showing that, in some sense, there’s “nothing special” about the problem of spoofing Linear XEB. If there’s a fast classical algorithm to do that, then there’s also a fast classical algorithm to simply estimate final amplitudes in random quantum circuits, better than the trivial estimator.
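    As a rough sanity check on “distinguishable from 0 with a feasible number of samples,” here is a back-of-the-envelope sketch. The per-sample standard deviation of order 1 is an assumption of the sketch (roughly right for Porter–Thomas-distributed amplitudes), not a figure from the paper:

```python
# To distinguish a mean linear-XEB score of F from 0 at z standard deviations,
# with per-sample standard deviation sigma, you need about (z * sigma / F)**2
# samples.
F = 0.002        # fidelity reported for the full supremacy circuits
sigma = 1.0      # assumed per-sample std of the linear XEB score
z = 5            # demand a 5-sigma separation from zero

samples_needed = (z * sigma / F) ** 2
# About 6 million samples -- comfortably below the ~30 million Google took.
assert samples_needed < 30_000_000
```

    So a fidelity of ~0.002, small as it looks, is well within reach of a few minutes of sampling.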

    Crucially, nothing in the above argument depended on Sycamore being “scalable” (which, indeed, no one claims that it is, at least not until you can do error-correction).

  55. age bronze Says:

    Scott #54: Thanks for the response, and I apologize for making you repeat a previous explanation.

    Suppose we either ignore the exponent base, or find ways to improve the random Feynman path algorithm so that its exponent base matches Sycamore’s. Wouldn’t that prove that what Sycamore computed can be simulated on a classical computer, thereby refuting the quantum supremacy claim?

    This sounds like a big deal in itself.

    I can also see how one might explain what happened as k instances of the random Feynman path algorithm running in parallel, instead of resorting to the 2^53 amplitudes. It wouldn’t even sound like a conspiracy, since we already describe nature as running a sort of Feynman path integral. If each of those k instances is backed by something physical, then quantum computing is hopeless, and will only ever scale as well as a parallel computer running those same k instances.

  56. Scott Says:

    age bronze #55: Yes, it’s pretty much tautologically true that, if you can improve the existing classical simulation methods to where they can easily simulate the Sycamore chip (with the depth and fidelity that Google reported), then you refute Google’s current quantum supremacy claim. And yes, that would be a big deal—even if not necessarily for the fundamental questions of eternity, certainly for the PR battles of the present. 😀

    But provided you agree that n sufficiently clean qubits should take ~2^n time to simulate classically, I’d advise you to move quickly on this! Yes, both classical hardware and classical algorithms will surely improve, but we should expect the number of qubits and the circuit fidelity (for a given depth) to improve quickly as well from here on out.

  57. fred Says:

    Okay, so Quantum Supremacy is a thing…

    But have we considered that there might be a hidden cost to it?

    Has anyone else noticed that the total IQ of all the brains in the world seems to have gone down drastically these last few years?
    Just as the number of qubits we can entangle goes up!
    Coincidence, or is there really no free lunch?

  58. fred Says:

    Scott #56

    “But provided you agree that n sufficiently clean qubits should take ~2^n time to simulate classically, I’d advise you to move quickly on this!”

    Is it though?

    I’m trying to come up with a simpler analogy to express this:

    You have a magic computer that can add floating point numbers with a trillion bits of precision at no extra cost, but the final result is always rounded to two decimal places. (this ignores how those numbers are loaded into memory).

    Then you also have a “normal” computer that only adds up floats with 64 bit precision.

    And we’re trying to come up with situations where the second machine just can’t match the result of the first one by a long shot, for a similar run time.

    It really depends on the data profile, i.e. how many numbers there are and how their magnitudes are distributed.

    There are often strategies to limit errors sufficiently without requiring the second machine to emulate the “trillion-bit register” by brute force, like sorting the data and adding the smaller numbers first, etc.

    And of course there are scenarios where a trillion bits of precision just can’t be replaced.
    Then the question is whether those scenarios occur naturally/often.
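    One of the error-limiting strategies fred gestures at can be sketched concretely. Below is compensated (Kahan) summation, which recovers small addends that naive left-to-right 64-bit summation loses, with no “trillion-bit register” required:

```python
def kahan_sum(values):
    """Compensated summation: track the low-order bits lost at each step."""
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for x in values:
        y = x - comp
        t = total + y
        comp = (t - total) - y  # the part of y that didn't make it into t
        total = t
    return total

# One big number followed by many tiny ones: in naive summation each 1e16 + 1.0
# rounds back down to 1e16 (the float spacing near 1e16 is 2.0), so the tiny
# addends vanish; compensated summation keeps them.
data = [1e16] + [1.0] * 1000
naive = sum(data)
compensated = kahan_sum(data)

assert compensated == 1e16 + 1000
assert compensated > naive
```

    This is the kind of cheap classical trick that closes the gap in fred’s analogy for benign data profiles; the interesting cases are the ones where no such trick exists.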

  59. Romain Says:

    Scott #5

    “Quantum computational supremacy, on the other hand, does now look to be a done deal, modulo tying up a few loose ends with the Sycamore experiment.”

    I would like to raise a question that keeps puzzling me, regarding the conclusions that can be drawn from the (theoretically beautiful and technologically extremely impressive) « quantum computational supremacy » results published by the Google AI team.

    There are six different circuit variants, {patch, elided, full} × {simplifiable, non-simplifiable}, and the « supremacy circuits » (about which the central claim of quantum computational supremacy is made) correspond to {full, non-simplifiable}.

    Unfortunately, the paper does not provide any XEB data for such supremacy circuits.
    (If I understood it correctly, Boaz Barak also raised that concern during the debate that took place in Jerusalem).

    Of course, given current knowledge of how to simulate RCS on classical computers, it is not feasible to compute XEB data for a large (say n > 40) number of qubits: this is shown in Fig. S50c of the supplementary information. The same figure, however, also clearly indicates that XEB computations for the supremacy circuits are totally feasible with up to about 30 qubits, even for depth m = 20.

    This leads to at least two questions:
    * Are there compelling reasons to believe that XEB data for {full, simplifiable} and {full, non-simplifiable} circuits should just perfectly match?
    * Should we consider the absence of published XEB data for the supremacy circuits (even for n<30) as one of the loose ends that still needs to be tied up?
