## Scott’s Supreme Quantum Supremacy FAQ!

You’ve seen the stories—in the *Financial Times*, *Technology Review*, *CNET*, Facebook, Reddit, Twitter, or elsewhere—saying that a group at Google has now achieved quantum computational supremacy with a 53-qubit superconducting device. While these stories are easy to find, I’m not going to link to them here, for the simple reason that *none of them were supposed to exist yet*.

As the world now knows, Google is indeed preparing a big announcement about quantum supremacy, to coincide with the publication of its research paper in a high-profile journal (which journal? you can probably narrow it down to two). This will hopefully happen within a month.

Meanwhile, though, NASA, which has some contributors to the work, inadvertently posted an outdated version of the Google paper on a public website. It was there only briefly, but long enough to make it to the *Financial Times*, my inbox, and millions of other places. Fact-free pontificating about what it means has predictably proliferated.

The world, it seems, is going to be denied its clean “moon landing” moment, wherein the Extended Church-Turing Thesis gets experimentally obliterated within the space of a press conference. This is going to be more like the Wright Brothers’ flight—about which rumors and half-truths leaked out in dribs and drabs between 1903 and 1908, the year Wilbur and Orville finally agreed to do public demonstration flights. (This time around, though, it thankfully won’t take *that* long to clear everything up!)

I’ve known about what was in the works for a couple months now; it was excruciating not being able to blog about it. Though sworn to secrecy, I couldn’t resist dropping some hints here and there (did you catch any?)—for example, in my recent Bernays Lectures in Zürich, a lecture series whose entire structure built up to the brink of this moment.

This post is not an official announcement or confirmation of anything. Though the lightning may already be visible, the thunder belongs to the group at Google, at a time and place of its choosing.

Rather, because so much misinformation is swirling around, what I thought I’d do here, in my role as blogger and “public intellectual,” is offer **Scott’s Supreme Quantum Supremacy FAQ**. You know, just in case you were randomly curious about the topic of quantum supremacy, or wanted to know what the implications would be if some search engine company based in Mountain View or wherever were *hypothetically* to claim to have achieved quantum supremacy.

Without further ado, then:

**Q1. What is quantum computational supremacy?**

Often abbreviated to just “quantum supremacy,” the term refers to the use of a quantum computer to solve *some* well-defined set of problems that would take orders of magnitude longer to solve with any currently known algorithms running on existing classical computers—and not for incidental reasons, but for reasons of asymptotic quantum complexity. The emphasis here is on being as sure as possible that the problem *really was* solved quantumly and *really is* classically intractable, and ideally achieving the speedup *soon* (with the noisy, non-universal QCs of the present or very near future). If the problem is also *useful* for something, then so much the better, but that’s not at all necessary. The Wright Flyer and the Fermi pile weren’t useful in themselves.

**Q2. If Google has indeed achieved quantum supremacy, does that mean that now “no code is uncrackable”, as Democratic presidential candidate Andrew Yang recently tweeted?**

No, it doesn’t. (But I still like Yang’s candidacy.)

There are two issues here. First, the devices currently being built by Google, IBM, and others have 50-100 qubits and no error-correction. Running Shor’s algorithm to break the RSA cryptosystem would require several thousand logical qubits. With known error-correction methods, that could easily translate into *millions* of physical qubits, and those probably of a higher quality than any that exist today. I don’t think anyone is close to that, and we have no idea how long it will take.

But the second issue is that, even in a hypothetical future with scalable, error-corrected QCs, on our current understanding they’ll only be able to crack *some* codes, not all of them. By an unfortunate coincidence, the public-key codes that they can crack include *most* of what we currently use to secure the Internet: RSA, Diffie-Hellman, elliptic curve crypto, etc. But symmetric-key crypto should only be minimally affected. And there are even candidates for public-key cryptosystems (for example, based on lattices) that no one knows how to break quantumly after 20+ years of trying, and some efforts underway now to start migrating to those systems. For more, see for example my letter to Rebecca Goldstein.

**Q3. What calculation is Google planning to do, or has it already done, that’s believed to be classically hard?**

So, I can tell you, but I’ll feel slightly sheepish doing so. The calculation is: a “challenger” generates a random quantum circuit C (i.e., a random sequence of 1-qubit and nearest-neighbor 2-qubit gates, of depth perhaps 20, acting on a 2D grid of n = 50 to 60 qubits). The challenger then sends C to the quantum computer, and asks it to apply C to the all-0 initial state, measure the result in the {0,1} basis, send back whatever n-bit string was observed, and repeat some thousands or millions of times. Finally, using its knowledge of C, the classical challenger applies a statistical test to check whether the outputs are consistent with the QC having done this.

So, this is not a problem like factoring with a single right answer. The circuit C gives rise to some probability distribution, call it D_{C}, over n-bit strings, and the problem is to output samples from that distribution. In fact, there will typically be 2^{n} strings in the support of D_{C}—so many that, if the QC is working as expected, the same output will never be observed twice. A crucial point, though, is that the distribution D_{C} is *not* uniform. Some strings enjoy constructive interference of amplitudes and therefore have larger probabilities, while others suffer destructive interference and have smaller probabilities. And even though we’ll only see a number of samples that’s tiny compared to 2^{n}, we can *check* whether the samples preferentially cluster among the strings that are predicted to be likelier, and thereby build up our confidence that something classically intractable is being done.

So, tl;dr, the quantum computer is simply asked to apply a random (but known) sequence of quantum operations—not because we intrinsically care about the result, but because we’re trying to prove that it can beat a classical computer at *some* well-defined task.
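For concreteness, here’s a toy state-vector sketch of the protocol above in NumPy. This is an illustrative brute-force simulation for a handful of qubits, not the Google experiment’s actual gate set or architecture; the Haar-random gates are stand-ins for whatever the real hardware implements:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(dim):
    """Haar-random unitary, via QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))   # fix column phases so the distribution is Haar

def apply_gate(state, gate, targets, n):
    """Apply a gate to the listed target qubits of an n-qubit state vector."""
    k = len(targets)
    psi = np.moveaxis(state.reshape([2] * n), targets, list(range(k)))
    psi = (gate @ psi.reshape(2**k, -1)).reshape([2] * n)
    return np.moveaxis(psi, list(range(k)), targets).reshape(-1)

def random_circuit_samples(n=8, depth=20, shots=1000):
    """Build a random circuit C, apply it to |0...0>, and sample bit strings."""
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0
    for _ in range(depth):
        for q in range(n):                # layer of random 1-qubit gates
            state = apply_gate(state, random_unitary(2), [q], n)
        for q in range(0, n - 1, 2):      # layer of nearest-neighbor 2-qubit gates
            state = apply_gate(state, random_unitary(4), [q, q + 1], n)
    probs = np.abs(state) ** 2
    probs /= probs.sum()                  # guard against rounding drift
    return probs, rng.choice(2**n, size=shots, p=probs)
```

The catch, of course, is that this classical simulation stores all 2^{n} amplitudes explicitly—fine for n = 8, but at n = 53 the state vector alone would occupy over a hundred petabytes, which is exactly the point.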

**Q4. But if the quantum computer is just executing some random garbage circuit, whose only purpose is to be hard to simulate classically, then who cares? Isn’t this a big overhyped nothingburger?**

No. As I put it the other day, it’s not an everythingburger, but it’s certainly at least a somethingburger!

It’s like, have a little respect for the immensity of what we’re talking about here, and for the terrifying engineering that’s needed to make it a reality. Before quantum supremacy, by definition, the QC skeptics can all laugh to each other that, for all the billions of dollars spent over 20+ years, *still* no quantum computer has even once been used to solve any problem faster than your laptop could solve it, or at least not in any way that depended on its being a quantum computer. In a post-quantum-supremacy world, that’s no longer the case. A superposition involving 2^{50} or 2^{60} complex numbers has been computationally harnessed, using time and space resources that are minuscule compared to 2^{50} or 2^{60}.

I keep bringing up the Wright Flyer only because the chasm between what we’re talking about, and the dismissiveness I’m seeing in some corners of the Internet, is kind of breathtaking to me. It’s like, if you believed that useful air travel was fundamentally impossible, then seeing a dinky wooden propeller plane keep itself aloft wouldn’t refute your belief … *but it sure as hell shouldn’t reassure you either*.

Was I right to worry, years ago, that the constant drumbeat of hype about much less significant QC milestones would wear out people’s patience, so that they’d no longer care when something newsworthy finally did happen?

**Q5. Years ago, you scolded the masses for being super-excited about D-Wave, and its claims to get huge quantum speedups for optimization problems via quantum annealing. Today you scold the masses for not being super-excited about quantum supremacy. Why can’t you stay consistent?**

Because my goal is not to move the “excitement level” in some uniformly preferred direction, it’s to be right! With hindsight, would you say that I was mostly right about D-Wave, even when raining on that particular parade made me unpopular in some circles? Well, I’m trying to be right about quantum supremacy too.

**Q6. If quantum supremacy calculations just involve sampling from probability distributions, how do you check that they were done correctly?**

Glad you asked! This is the subject of a fair amount of theory that I and others developed over the last decade. I already gave you the short version in my answer to Q3: you check by doing statistics on the samples that the QC returned, to verify that they’re preferentially clustered in the “peaks” of the chaotic probability distribution D_{C}. One convenient way of doing this, which Google calls the “linear cross-entropy test,” is simply to sum up Pr[C outputs s_{i}] over all the samples s_{1},…,s_{k} that the QC returned, and then to declare the test a “success” if and only if the sum exceeds some threshold—say, bk/2^{n}, for some constant b strictly between 1 and 2.
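Given the full output distribution (brute-forced classically, as discussed below), the linear cross-entropy test itself is only a few lines. A minimal sketch, with the threshold parameter b chosen arbitrarily in (1, 2):

```python
import numpy as np

def linear_xeb_passes(probs, samples, b=1.5):
    """Linear cross-entropy test: sum Pr[C outputs s_i] over the returned
    samples s_1,...,s_k, and declare success iff the sum exceeds b*k/2^n,
    for some constant b strictly between 1 and 2."""
    n = int(np.log2(len(probs)))      # probs has one entry per n-bit string
    k = len(samples)
    score = probs[np.asarray(samples)].sum()
    return score >= b * k / 2**n
```

The reason any such b works: an ideal QC sampling from D_{C} scores about 2k/2^{n} in expectation (by the Porter-Thomas statistics of random circuits), while uniform classical guessing scores only about k/2^{n}, so for large enough k the two cases separate cleanly.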

Admittedly, in order to apply this test, you need to *calculate* the probabilities Pr[C outputs s_{i}] on your classical computer—and the only known ways to calculate them require brute force and take ~2^{n} time. Is that a showstopper? No, not if n is 50, and you’re Google and are able to handle numbers like 2^{50} (although not 2^{1000}, which exceeds a googol, har har). By running a huge cluster of classical cores for (say) a month, you can eventually verify the outputs that your QC produced in a few seconds—while also seeing that the QC was many orders of magnitude faster. However, this does mean that sampling-based quantum supremacy experiments are almost *specifically* designed for ~50-qubit devices like the ones being built right now. Even with 100 qubits, we wouldn’t know how to verify the results using all the classical computing power available on earth.

(Let me stress that this issue is specific to *sampling* experiments like the ones that are currently being done. If Shor’s algorithm factored a 2000-digit number, it would be easy to check the result by simply multiplying the claimed factors and running a primality test on them. Likewise, if a QC were used to simulate some complicated biomolecule, you could check its results by comparing them to experiment.)
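To underline the contrast: verifying a claimed factorization takes polynomial time with entirely standard tools. A sketch using a textbook Miller-Rabin primality test (my own illustration, not anything from the Google paper):

```python
import random

def is_probable_prime(m, rounds=40):
    """Miller-Rabin primality test (probabilistic; error prob. < 4^-rounds)."""
    if m < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if m % small == 0:
            return m == small
    d, r = m - 1, 0
    while d % 2 == 0:               # write m-1 = d * 2^r with d odd
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = random.randrange(2, m - 1)
        x = pow(a, d, m)
        if x in (1, m - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, m)
            if x == m - 1:
                break
        else:
            return False            # a witnesses that m is composite
    return True

def check_factoring_claim(N, p, q):
    """Verify a claimed RSA-style factorization N = p*q in polynomial time."""
    return p * q == N and is_probable_prime(p) and is_probable_prime(q)
```

Nothing analogous is known for the sampling task, which is why the verification there has to fall back on brute force.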

**Q7. Wait. If classical computers can only check the results of a quantum supremacy experiment, in a regime where the classical computers can still simulate the experiment (albeit extremely slowly), then how do you get to claim “quantum supremacy”?**

Come on. With a 53-qubit chip, it’s perfectly feasible to see a speedup by a factor of many millions, in a regime where you can still directly verify the outputs, and also to see that the speedup is growing exponentially with the number of qubits, exactly as asymptotic analysis would predict. This isn’t marginal.

**Q8. Is there a mathematical proof that no fast classical algorithm could possibly spoof the results of a sampling-based quantum supremacy experiment?**

Not at present. But that’s not quantum supremacy researchers’ fault! As long as theoretical computer scientists can’t even prove basic conjectures like P≠NP or P≠PSPACE, there’s no hope of ruling out a fast classical simulation unconditionally. The best we can hope for are conditional hardness results. And we have indeed managed to prove some such results—see for example the BosonSampling paper, or the Bouland et al. paper on average-case #P-hardness of calculating amplitudes in random circuits, or my paper with Lijie Chen (“Complexity-Theoretic Foundations of Quantum Supremacy Experiments”). The biggest theoretical open problem in this area, in my opinion, is to prove *better* conditional hardness results.

**Q9. Does sampling-based quantum supremacy have any applications in itself?**

When people were first thinking about this subject, it seemed pretty obvious that the answer was “no”! (I know because I was one of the people.) Recently, however, the situation has changed—for example, because of my certified randomness protocol, which shows how a sampling-based quantum supremacy experiment could almost immediately be repurposed to generate bits that can be *proven to be random* to a skeptical third party (under computational assumptions). This, in turn, has possible applications to proof-of-stake cryptocurrencies and other cryptographic protocols. I’m hopeful that more such applications will be discovered in the near future.

**Q10. If the quantum supremacy experiments are just generating random bits, isn’t that uninteresting? Isn’t it trivial to convert qubits into random bits, just by measuring them?**

The key is that a quantum supremacy experiment doesn’t generate *uniform* random bits. Instead, it samples from some complicated, correlated probability distribution over 50- or 60-bit strings. In my certified randomness protocol, the deviations from uniformity play a central role in how the QC convinces a classical skeptic that it really *was* sampling the bits randomly, rather than in some secretly deterministic way (e.g., using a pseudorandom generator).

**Q11. Haven’t decades of quantum-mechanical experiments (for example, the ones that violated the Bell inequality) already demonstrated quantum supremacy?**

This is purely a confusion over words. Those other experiments demonstrated other forms of “quantum supremacy”: for example, in the case of Bell inequality violations, what you could call “quantum correlational supremacy.” They did not demonstrate quantum *computational* supremacy, meaning doing something that’s infeasible to simulate using a classical computer (where the classical simulation has no restrictions of spatial locality or anything else of that kind). Today, when people use the phrase “quantum supremacy,” it’s generally short for quantum computational supremacy.

**Q12. Even so, there are countless examples of materials and chemical reactions that are hard to classically simulate, as well as special-purpose quantum simulators (like those of Lukin’s group at Harvard). Why don’t these already count as quantum computational supremacy?**

Under some people’s definitions of “quantum computational supremacy,” they do! The key difference with Google’s effort is that they have a *fully programmable* device—one that you can program with an arbitrary sequence of nearest-neighbor 2-qubit gates, just by sending the appropriate signals from your classical computer.

In other words, it’s no longer open to the QC skeptics to sneer that, sure, there are quantum systems that are hard to simulate classically, but that’s just because *nature* is hard to simulate, and you don’t get to arbitrarily redefine whatever random chemical you find in the wild to be a “computer for simulating itself.” Under any sane definition, the superconducting devices that Google, IBM, and others are now building are indeed “computers.”

**Q13. Did you (Scott Aaronson) invent the concept of quantum supremacy?**

No. I did play some role in developing it, which led to Sabine Hossenfelder among others generously overcrediting me for the whole idea. The term “quantum supremacy” was coined by John Preskill in 2012, though in some sense the core concept goes back to the beginnings of quantum computing itself in the early 1980s. In 1993, Bernstein and Vazirani explicitly pointed out the severe apparent tension between quantum mechanics and the Extended Church-Turing Thesis of classical computer science. Then, in 1994, the use of Shor’s algorithm to factor a huge number became the quantum supremacy experiment *par excellence*—albeit, one that’s still (in 2019) much too hard to perform.

The key idea of instead demonstrating quantum supremacy using a *sampling problem* was, as far as I know, first suggested by Barbara Terhal and David DiVincenzo, in a farsighted paper from 2002. The “modern” push for sampling-based supremacy experiments started around 2011, when Alex Arkhipov and I published our paper on BosonSampling, and (independently of us) Bremner, Jozsa, and Shepherd published their paper on the commuting Hamiltonians model. These papers showed, not only that “simple,” non-universal quantum systems can solve apparently-hard sampling problems, but also that an efficient classical algorithm for the same sampling problems would imply a collapse of the polynomial hierarchy. Arkhipov and I also made a start toward arguing that even the *approximate* versions of quantum sampling problems can be classically hard.

As far as I know, the idea of “Random Circuit Sampling”—that is, generating your hard sampling problem by just picking a random sequence of 2-qubit gates in (say) a superconducting architecture—originated in an email thread that I started in December 2015, which also included John Martinis, Hartmut Neven, Sergio Boixo, Ashley Montanaro, Michael Bremner, Richard Jozsa, Aram Harrow, Greg Kuperberg, and others. The thread was entitled “Hard sampling problems with 40 qubits,” and my email began “Sorry for the spam.” I then discussed some advantages and disadvantages of three options for demonstrating sampling-based quantum supremacy: (1) random circuits, (2) commuting Hamiltonians, and (3) BosonSampling. After Greg Kuperberg chimed in to support option (1), a consensus quickly formed among the participants that (1) was indeed the best option from an engineering standpoint—and that, if the theoretical analysis wasn’t yet satisfactory for (1), then that was something we could remedy.

[**Update:** Sergio Boixo tells me that, internally, the Google group had been considering the idea of random circuit sampling since February 2015, even before my email thread. This doesn’t surprise me: while there are lots of details that had to be worked out, the idea itself is an extremely natural one.]

After that, the Google group did a huge amount of analysis of random circuit sampling, both theoretical and numerical, while Lijie Chen and I and Bouland et al. supplied different forms of complexity-theoretic evidence for the problem’s classical hardness.

**Q14. If quantum supremacy was achieved, what would it mean for the QC skeptics?**

I wouldn’t want to be them right now! They could retreat to the position that *of course* quantum supremacy is possible (who ever claimed that it wasn’t? surely not them!), that the real issue has always been quantum error-correction. And indeed, some of them have consistently maintained that position all along. But others, including my good friend Gil Kalai, are **on record, right here on this blog** predicting that even quantum supremacy can never be achieved for fundamental reasons. I won’t let them wiggle out of it now.

[**Update:** As many of you will have seen, Gil Kalai has taken the position that the Google result won’t stand and will need to be retracted. He asked for more data: specifically, a complete histogram of the output probabilities for a smaller number of qubits. This turns out to be already available, in a *Science* paper from 2018.]

**Q15. What’s next?**

*If* it’s achieved quantum supremacy, then I think the Google group already has the requisite hardware to demonstrate my protocol for generating certified random bits. And that’s indeed one of the very next things they’re planning to do.

[**Addendum:** Also, of course, the evidence for quantum supremacy itself can be made stronger and various loopholes closed—for example, by improving the fidelity so that fewer samples need to be taken (something that Umesh Vazirani tells me he’d like to see), by having the circuit C be generated and the outputs verified by a skeptic external to Google, and simply by letting more time pass, so outsiders can have a crack at simulating the results classically. My personal guess is that the basic picture is going to stand, but just like with the first experiments that claimed to violate the Bell inequality, there’s still plenty of room to force the skeptics into a tinier corner.]

Beyond that, one obvious next milestone would be to use a programmable QC, with (say) 50-100 qubits, to do some *useful quantum simulation* (say, of a condensed-matter system) much faster than any known classical method could do it. A second obvious milestone would be to demonstrate the use of quantum error-correction, to keep an encoded qubit alive for longer than the underlying physical qubits remain alive. There’s no doubt that Google, IBM, and the other players will now be racing toward both of these milestones.

[**Update:** Steve Girvin reminds me that the Yale group has already achieved quantum error-correction “beyond the break-even point,” albeit in a bosonic system rather than superconducting qubits. So perhaps a better way to phrase the next milestone would be: achieve quantum computational supremacy *and* useful quantum error-correction in the same system.]

**Another update:** I thought this *IEEE Spectrum* piece gave a really nice overview of the issues.

**Last update:** John Preskill’s *Quanta* column about quantum supremacy is predictably excellent (and possibly a bit more accessible than this FAQ).

Comment #1 September 23rd, 2019 at 4:17 pm

Time to re-read The Quantum Spy by David Ignatius!

Not sure that NASA *inadvertently* posted an outdated version of the Google paper on a public website…


Comment #3 September 23rd, 2019 at 4:50 pm

Thanks for the write up Scott. I think this clarifies mostly everything, and I for one am excited and cautiously optimistic about the future of quantum computing!

Comment #4 September 23rd, 2019 at 4:59 pm

If I understand correctly, known results show that it is hard (assuming standard complexity theoretic conjectures) for a classical machine to sample from the output distribution of RCS exactly.

Throwing in additional (non-standard) conjectures, we also get that it is hard for a classical machine to sample from the output distribution of RCS approximately in L_1 or variation distance.

Do we also know that it is hard (assuming the same non-standard conjectures above) to sample from a distribution that has high “linear cross-entropy score”? If not, is this true if we throw in even more non-standard conjectures?

Comment #5 September 23rd, 2019 at 5:14 pm

Q16. Is there evidence in favour of a quantum Moore’s law, i.e., that the number of qubits in manufacturable quantum circuits will multiply by x each year? Or maybe what we can actually achieve is only to _add_ x qubits each year?

Comment #6 September 23rd, 2019 at 5:19 pm

No, no, no.

After all the hype that will follow, when that dust settles we’ll need to see a 50-100 qubit QC solve Simon’s problem for a 25-50 qubit circuit – i was non-explicitly promised this!

That’s a more targeted test for “large scale interference in the context of universal computation” with far more concrete results.

If we just stop at random sampling and move on to “practical applications”, i’m going to be really disappointed.

I mean, i will run the test myself as soon as a QC is available that has sufficient gate depth to implement more than one Toffoli gate (i already have the code), but who knows how long that will be.

Let’s stay focused?

Comment #7 September 23rd, 2019 at 5:21 pm

This post reminds me of the night Donald Trump was elected. Everyone was so excited because they knew Hillary was going to win, since all the polls said so. But when the final votes came out, it was a disaster for Hillary’s team. I knew and predicted Trump was going to win the whole time, because history shows that the American people have always chosen the more charismatic candidate for president since Kennedy and Nixon first debated on TV and Trump is more charismatic than Hillary (whether you like him or not).

If Google has quantum supremacy, they would have announced it already. What’s holding them back? I don’t think they really have quantum supremacy. I’m glad they are cautious enough not to say anything.

Comment #8 September 23rd, 2019 at 5:21 pm

Thank you Scott, for giving me enough background understanding so that, my heart would keep beating, on the day that Google would announce Quantum Supremacy.

Indeed, sampling is the low hanging fruit in this game.

Let me be kind of ignorant, and say the following:

It is entirely possible, that this sampling is not just the low hanging fruit, but the only fruit.

Time will tell, if the gap between this solution and what we had before, is exponentially smaller than the gap between some reasonably useful Quantum Computer and a quantum sampling machine.

I am ready to be optimistic.

Comment #9 September 23rd, 2019 at 5:56 pm

Since circuits are generated randomly, is it possible that this problem is not actually hard classically with high probability over those circuits? I know you answered this partly in Q8, but what about a probabilistic statement? For example, what is the difference between this quantum supremacy experiment and an example considered in this paper, in which a classically hard problem is solved polynomially with a high probability: https://arxiv.org/pdf/1812.10897.pdf ?

Comment #10 September 23rd, 2019 at 6:00 pm

Thanks for the great questions and illuminating answers Scott.

The experimental demonstration that 20 gates could be applied to 53 qubits, while still keeping some control on the output state (non-zero fidelity) is extremely impressive and certainly constitutes a landmark technological result for quantum computing.

Whether computational supremacy can or cannot be claimed from the obtained results is also an important question (in particular on this blog).

As far as I understand, it seems mostly related to the computational complexity of simulating classically the “practical” RCS experiment (as opposed to a model of the experiment).

Is it possible that some “fine prints” could have a decisive impact on this simulation complexity and hence on the validity of the claimed supremacy ?

I am thinking in particular about the way noise is modeled: noise is assumed to be uniform in the article.

Would there be an impact on the complexity of classically simulating RCS if the noise was not uniform? And how big could this impact be?

Another question that really puzzles me is how to reconcile the apparent contradiction between the possibility to achieve supremacy with depth-20 RCS on 53 qubits and the recent claims from Alibaba to be able to simulate (compute amplitudes) for depth-32 RCS on 70 qubits.

https://arxiv.org/pdf/1907.11217.pdf

Is it related to the fact that simulating the Google experiment requires computing ALL the 2^53 amplitudes, and is it actually necessary to do so?

Comment #11 September 23rd, 2019 at 6:36 pm

@Craig, The thing holding them back is probably the journal’s embargo policy. For example, here are the rules for *Science*, a likely destination for this paper: after the paper is accepted, there’s a few-week delay while it’s proofed, typeset, and slotted into an issue, during which this policy applies. My guess is that someone heard about the acceptance, but jumped the gun on posting the full text.

Comment #12 September 23rd, 2019 at 6:39 pm

My main opinion is that, of course, this is an amazing achievement and Google deserves all the praise.

However, I also get a bit annoyed at the way these things start getting framed in discussions and media after a while. Mostly, this general discussion over whether a specific fundamental boundary has been crossed or not. As if quantum supremacy was an objective in itself, which will suddenly unlock new capabilities that were before out of reach. As if Google having achieved this particular (semi-arbitrarily defined) boundary puts them on top, while Rigetti and IBM and whoever else have lost the race, the end.

I think that quantum supremacy is much more like an intermediate milestone towards some place really useful than a goal in itself. The point is not this individual paper and individual experiment – whether Google has x qubits with fidelity epsilon or not – but rather about all the papers that Google is probably going to put out over the next decade, giving us a better idea how hard it is to add extra qubits, and whether they can keep their fidelity at that level as the number of qubits increases and so on. Whether they already “have quantum supremacy”, as a previous commenter put it, to me feels secondary.

But I don’t mean at all to sound negative – again, this is an amazing achievement! But, like putting a man on the moon or finding the Higgs boson, it is still an achievement that feels academic, in the sense that it’s a boundary that was crossed just to show that it’s possible. It’s only because quantum computers have a *promise* of eventual practical applications that so many people tend to focus so much (and so negatively) on what has *not* been achieved.

Comment #13 September 23rd, 2019 at 6:54 pm

Hi Scott, I have some questions maybe you (or other readers of the blog can answer). Since I admit they have a bit of skeptical tone, let me preface them by saying that: (i) surely Martinis et al. have accomplished a marvelous engineering achievement; (ii) I’m all in favor of trying to achieve this Random Circuit approach to quantum supremacy.

Questions:

1. Why do you (and they) refer to the task as a ‘sampling’ problem? In all the variant guises of the problem (cf. the [BoulandFNVN] paper), it’s always ultimately a ‘search’ problem. (Okay, maybe a promise-search problem, because one focuses on the Porter-Thomas distribution. But you get my meaning.) Like:

Original Google paper:

Output a sequence of k strings x_1, …, x_k such that GeometricMean_i{Pr_{D_C}[x_i]} >= .57/N.

[A perfect quantum computer could not only do this wvhp, it would get >= 1.52/N wvhp.]

Yours and Lijie’s paper:

Output a sequence of k strings x_1, …, x_k such that noticeably more than half of them have Pr_{D_C}[x_i] >= .69/N.

[A perfect quantum computer could not only do this wvhp, it would have >= 84% exceeding the threshold wvhp.]

Current Google paper:

Output a sequence of k strings x_1, …, x_k such that ArithmeticMean_i{Pr_{D_C}[x_i]} > 1.01/N.

[A perfect quantum computer could not only do this wvhp, it would get >= 1.99/N wvhp.]

All three of these are (sort-of-artificially-chosen?) search tasks. And a search task was ultimately chosen to be the ‘final verdict’ precisely because it’s much much easier to verify that the quantum computer is solving a search task, rather than solving the sampling task. So… why the talk about the sampling?

2a) Speaking of these sort-of-artificially-chosen search tasks, any idea why Google switched to the arithmetic mean, after excitedly advocating the geometric mean in the early paper? Given that the statistics side of the Google papers is… a bit fuzzy… it strikes me as kind of weird that they talked about the geometric mean version in their short paper, but then switched to the arithmetic mean when they got down to details…

2b) Also, this is a sociological question, but… is the repeated mention of “cross entropy” and the associated “XEB” acronym basically marketing? I don’t see that it has any natural statistical significance. (The original ‘cross entropy’ sort of resembles KL divergence, except it’s backwards.) As I said, I could be missing something.

3. Speaking of the sampling problem… I have basically the same question that Gil posted on his blog. I haven’t properly seen the paper, but Gil seems to suggest that they don’t publish the easy, obvious, thing that would go the furthest towards convincing a reader that their quantum computer is solving the sampling problem well. Namely, doing say 16 qubits, explicitly giving a table of 65536 D_C(x) values [a laptop could get these no problem], and then doing some million samples from their quantum computer and reporting *its* distribution. Then, if indeed the KL-divergence or Hellinger-distance (or TV-distance) between the two distributions was small, it would be very convincing. It should probably be possible to do this with 20+ or 30+ qubits, right? Is it true that they don’t present such results in their paper?
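The three test statistics in question 1 can be made concrete with a minimal sketch (my own illustration, not code from any of the papers), computing all three from a batch of samples given a lookup table of the ideal probabilities:

```python
import numpy as np

def supremacy_scores(samples, ideal_probs, n_qubits):
    """Compute the three statistics quoted above from a batch of samples.

    samples: iterable of sampled bitstrings, encoded as ints in [0, 2^n).
    ideal_probs: array of length 2^n holding the ideal Pr_{D_C}[x].
    """
    N = 2 ** n_qubits
    p = np.asarray([ideal_probs[x] for x in samples])
    return {
        # original Google paper: compare against .57/N
        "geometric_mean": float(np.exp(np.mean(np.log(p)))),
        # Aaronson-Chen: fraction of samples with Pr_{D_C}[x] >= .69/N
        "heavy_fraction": float(np.mean(p >= 0.69 / N)),
        # current Google paper: compare against 1.01/N
        "arithmetic_mean": float(np.mean(p)),
    }

# Sanity check on the uniform distribution, where both means must equal
# exactly 1/N, the classical baseline the quoted thresholds beat.
n = 4
uniform = np.full(2 ** n, 1.0 / 2 ** n)
scores = supremacy_scores(range(2 ** n), uniform, n)
```

In an actual experiment `ideal_probs` would come from a classical simulation of the circuit, which is exactly the part that becomes infeasible beyond roughly 40–50 qubits.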

Comment #14 September 23rd, 2019 at 6:55 pm

Scott is correct that inability to achieve quantum supremacy is quite central to my argument (since 2014), so naturally I don’t expect that the recent claims by the Google team will stand. Of course, if these claims (or any other quantum supremacy claim) are correct then this would defeat my theory. It goes without saying that the claims are so fantastic that even responsible believers in quantum computers should examine these specific claims (like an alleged NP≠P proof) carefully and skeptically. Also, it would be nice to hear some details about the precise claims and methodology of the Google team.

Comment #15 September 23rd, 2019 at 7:07 pm

Anon #4: Yes, that was *exactly* the upshot of (the relevant part of) my 2017 paper with Lijie Chen. You just need to design the right nonstandard conjecture, and then it implies what you need! 🙂

Comment #16 September 23rd, 2019 at 7:10 pm

Adam #5: While it’s *possible* that we’ll see exponential scaling in the number of qubits, I personally feel like there’s too little data to say. I know Hartmut Neven (the head of Google’s Quantum AI Lab) is much more bullish about this.

Comment #17 September 23rd, 2019 at 7:18 pm

Job #6: The trouble with Simon’s algorithm is that it requires an oracle. And 25 years after that algorithm’s discovery, we still don’t know of any convincing way to instantiate the Simon oracle in such a way that the exponential speedup persists. (I’ve looked for such an instantiation without success.)

Of course there’s Shor’s algorithm, whose discovery was inspired by Simon’s algorithm. But as I explained in the FAQ, running Shor to factor a classically intractable number will set you back thousands of logical qubits, which after error-correction could translate into millions of physical qubits. That’s why no one can do it yet.

If you had just landed on the moon, how interested would you be in the views of various Monday-morning quarterbacks that “y’know, what would *really* be interesting would be to land on Mars…”? 🙂

Comment #18 September 23rd, 2019 at 7:20 pm

Thanks for the write-up, Scott. Curious: could a version of this experiment be performed on an adiabatic quantum computing device, and if so, would you expect similar results?

Comment #19 September 23rd, 2019 at 7:24 pm

Craig #7: Have you considered the possibility that others might know things about this that you don’t? What’s holding Google back is simply that their paper is embargoed by the journal that accepted it. Once it’s published (hopefully within a month), they’ll hold a press conference, etc. etc., as if the leak never happened. And will you be back here to acknowledge your wrongness?

Comment #20 September 23rd, 2019 at 7:26 pm

A.E. #8: If sampling is the “only fruit,” then that itself would be fascinating and would demand to be understood. Is quantum fault-tolerance impossible? Or is there some other new principle of physics that rules out quantum speedups for decision problems?

Comment #21 September 23rd, 2019 at 7:35 pm

Bill #9: The relevant conjecture is precisely that there’s no polynomial-time classical randomized algorithm that takes as input a description of the circuit C, and that produces samples (exactly or approximately) from C—or indeed, that produces *any* samples with a high linear cross-entropy score—with more than negligible probability over the choice of C.

If you take out the part about linear cross-entropy, then you can strengthen the conjecture to say: for the overwhelming majority of circuits C, there’s no polynomial-time classical algorithm (*even one adapted to C*) to sample exactly or approximately from C’s output distribution.

I’d be happy to discuss the status of the evidence for any of these variations that might interest you. (tl;dr: we’re about as confident as can be that exact sampling is hard for random C’s, a little less confident that approximate sampling is hard, and a little less confident still that getting a large linear cross-entropy score is hard.)

Comment #22 September 23rd, 2019 at 8:47 pm

Hey, you’re the one talking about reusing the lander for other applications. I would like to stay on the moon and look around for a while before moving on.

Why is the exponential speedup important, though? We already expect that a functioning QC is faster than a classical machine. We just want to know if we have a functioning QC.

You’d be testing the device’s ability to carry out “large-scale interference in the context of universal computation”, which is key.

Even a quantum adder would be impressive if it uses quantum interference in a meaningful and verifiable way.

I think Simon’s problem would be a natural follow-up to random sampling.

You’d start by picking a random secret value and assembling a basic Simon circuit with that secret – doesn’t have to be a classically hard instance, and it’s easy to do.

Then, you’d generate a random classical circuit (using universal gates) and append that to the Simon circuit. It will still be a valid Simon problem instance since the classical circuit will only permute the output (it’s still two-to-one).

Then, you sample the output of Simon’s algorithm on that circuit using a QC (see, it’s still just sampling) and check how many samples are orthogonal to the secret value.

If the QC is noise free, it will only produce orthogonal values. If it’s completely noisy, it will do so only 50% of the time.

It’s basically random sampling within Simon’s problem, which provides the perfect setting for testing large-scale interference.

With the advantage that Simon’s problem has almost no setup compared to Shor’s algorithm and period finding.

You might need a gate depth of a few hundred or thousand operations, but not that many qubits.
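Job’s orthogonality check can be sanity-checked exactly for a tiny instance. Below is a brute-force sketch of mine (using an arbitrary toy two-to-one oracle f(x) = min(x, x ^ s), not the randomized circuit construction described above) that computes the exact output distribution of Simon’s algorithm’s first register and confirms that a noise-free run only ever yields strings orthogonal to the secret:

```python
import numpy as np

def simon_output_dist(n, s):
    """Exact distribution of Simon's algorithm's first register,
    for the toy two-to-one oracle f(x) = min(x, x ^ s)."""
    N = 2 ** n
    f = [min(x, x ^ s) for x in range(N)]
    dist = np.zeros(N)
    for y in range(N):
        for z in set(f):
            # amplitude of |y>|z> after H, oracle, H on the first register
            amp = sum((-1) ** bin(x & y).count("1")
                      for x in range(N) if f[x] == z) / N
            dist[y] += amp ** 2
    return dist

n, s = 3, 0b101
dist = simon_output_dist(n, s)
# A noise-free device should *only* output y with y . s = 0 (mod 2);
# a maximally noisy one would satisfy this just half the time.
orthogonal = all(bin(y & s).count("1") % 2 == 0
                 for y in range(2 ** n) if dist[y] > 1e-12)
```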

Comment #23 September 23rd, 2019 at 9:22 pm

Not sworn to secrecy, but from context they are working the messaging about a reality of efficient quantum factoring and how much $$$ NSA will have to pay through the nose to delay public utility factorization as a service. I’m guessing NSA may go as high as $100 billion per year.

Comment #24 September 23rd, 2019 at 9:27 pm

Scott 19,

If Google demonstrates to all of the experts’ satisfaction that they can get a quantum computer to do a computation in a few microseconds that it takes a few weeks for the best classical computer in the world to do, I already acknowledge that I would be wrong. That is by definition quantum supremacy, something which I predicted from the beginning would fail.

I would not be ashamed that I would be wrong, but I would be very confused. This has never been to me about being right; this has been to me about how our universe works, something I care deeply about.

Comment #25 September 23rd, 2019 at 9:38 pm

Scott #21: What if we assume Google is adversarial? If they were to publish the data, could we rule out any secret trickery in the choice of C or faking the results in some other way?

Comment #26 September 23rd, 2019 at 10:08 pm

Scott #21 You write “Bill #9: The relevant conjecture is precisely that there’s no polynomial-time classical randomized algorithm that takes as input a description of the circuit C, and that produces samples (exactly or approximately) from C—or indeed, that produces any samples with a high linear cross-entropy score—with more than negligible probability over the choice of C.”

Am I missing something? Surely your result was not “We show that task X is hard assuming the conjecture that task X is hard”. Is there any relation between hardness of sampling from a random circuit (to small L_1 distance or some such) and some more standard complexity conjecture?

Comment #27 September 23rd, 2019 at 10:21 pm

So Google can efficiently generate some random bits, in theory. But do the bits pass the Diehard tests? If so, I guess I should be excited. More seriously, I never did believe generating random bits was all that difficult until I understood it to be an impossible task. Today I just feel old and confused, again.

Comment #28 September 23rd, 2019 at 10:34 pm

Scott, would this result prove that we don’t live in a (classical) simulation?

Comment #29 September 23rd, 2019 at 10:44 pm

I think that’s conflating two different goals.

The first one is whether a BQP machine can be realized.

The second one is whether BQP != BPP.

I’m interested in both, but it’s the first one that we have any real shot at answering right now, and it’s what a QC is designed to do.

Let’s face it, even a working QC running Shor’s algorithm on 2000-bit integers is not going to settle BQP vs BPP. I’m surprised you’d accept quantum supremacy that easily! 🙂

In a way, no experiment can definitively establish quantum supremacy; that’s a theoretical question. I will be here arguing BQP vs BPP long after we’re using QCs to break RSA (not that I think BQP = BPP, but it’s an open question and it’s there to be debated).

Asking for a speedup in this context is like asking a person to leap just to show that they can fly.

If they have wings they can fly; let’s just check that they have wings.

Comment #30 September 23rd, 2019 at 10:52 pm

Jay #28:

Scott, would this result prove that we don’t live in a (classical) simulation?

No. But it’s evidence in favor of the proposition that if we do, then the classical simulation needs to work exponentially hard. (Something that I was already pretty confident about anyway, but YMMV.)

Comment #31 September 24th, 2019 at 1:05 am

[…] Finally, the quantum algorithms researcher Scott Aaronson has published an FAQ on Google’s feat that doesn’t contradict anything above (I’m safe…): Scott’s Supreme Quantum Supremacy FAQ!. […]

Comment #32 September 24th, 2019 at 1:06 am

Dear Scott,

0. First of all, thank you very much for a very neat piece of writing in Q3, and also for venturing to write this post in the first place.

My opinion (for whatever it is worth):

1. They might have (must have?) something, but the protocol in the leaked document, taken by itself, is too weak to support the claim of Quantum Supremacy (QS). The protocol highlights the difficulty of proving QS via this route of Random Circuit Sampling (RCS) more than it proves their having actually achieved QS.

If following RCS at all, they should first have demonstrated the existence of high-quality (say, long-enough-lasting) entanglement despite applying a large number of gates, in a series of QCs with progressively increasing computational power, but all built using the same technology.

The number of qubits in such a series of QCs should have been systematically increased from a trivial 2–4 qubits up to a number just below, but within, the capabilities of SUMMIT or a similar classical computer of today.

*Then* a claim to QS could have got the required strength—even if, as you pointed out, integer factoring would still be the ideally right grounds from which to stake the claim.

2. Another point: To my (novice’s) understanding, applying gates is the same as applying QM operators. Randomly applied gates don’t have to mean “purely” random gates. A sequence of applications of gates could together sample through a definite operator evolution in time, too. Time-ordered operator evolution is physically meaningful in the Heisenberg “picture,” I guess (within my limited knowledge of QM; I haven’t really studied this “picture”). I don’t know if sampling at different instants in a specific evolution, but in a random order, can be said to have a definite physical meaning; I tend to think that in an ensemble sense it does.

So, I guess, a definite course of evolution could qualify to be called a “something-burger,” provided that the *entire* space in the support of the ideal $D_C$ was accessible to verification by a classical computer of today. In contrast, what you have here today is “tasting” samples at random instants of time from all possible recipes of all possible different kinds of burgers. Not exactly a “something”-burger, to my mind.

Of course, personally, I do appreciate both the magnitude of the effort and the tantalizing nature of the reported—err, *available*—results. However, unfortunately, my appreciation of the effort cannot add to the strength of the very protocol followed (or at least as made available).

My two cents.

Best,

–Ajit

Comment #33 September 24th, 2019 at 2:43 am

O.S.Dawg #27, I think the idea is that the random bits are certifiably random. I.e. I advertise a coin-flipping service, you order a million random bits and send payment, and I duly send you back a million 1’s and 0’s with the claim that I generated them all by flipping coins (or some other random process).

The bits pass all your tests, but how do you know I didn’t really just run yesterday’s Trump tweets through a hash function and send you the output, instead of doing something actually random?

The solution is that you can send a challenge (a random quantum circuit) that generates a certain non-uniform probability distribution, and the only way I can efficiently sample that distribution is through a quantum algorithm. If I answer reasonably quickly, I can’t have done a massive classical computation that would take too long, so I must have used the quantum machine and generated actual random numbers. Of course for you to classically check that the answer actually has the right distribution requires YOU to do a massive computation, unless I’m missing something.

Maybe I don’t really understand this after all.

Comment #34 September 24th, 2019 at 2:58 am

[…] Aaronson’s enthusiastic blog post gives his take on the new development and also mentions several key researchers behind the […]

Comment #35 September 24th, 2019 at 3:25 am

[…] Scott’s Supreme Quantum Supremacy FAQ! […]

Comment #36 September 24th, 2019 at 3:28 am

scott:

you write “in my role as blogger and “public intellectual,”” how did you become a public intellectual? was there an election and if so, who voted? did a ‘select’ group designate you as one? or did you simply declare yourself to be one? i’d like to know because it seems like it would be pretty cool to be a public intellectual, even if it doesn’t come with a salary increase (what exactly are the perks you have as a result of being a public intellectual?) and i’d like to apply to be one.

Comment #37 September 24th, 2019 at 4:35 am

Could you explain why a quantum computer is necessarily faster than a classical computer at simulating a quantum system, like the aforementioned condensed matter system? This might seem obvious to everyone here, but it isn’t obvious to me, given how subtle and specific the class of algorithms that receive a speedup (or claimed speedup) is.

After all, I can simulate the very same condensed matter system using my very classical pen and paper, in some cases quite exactly.

Comment #38 September 24th, 2019 at 4:58 am

My background is in machine learning and stats, and I would like to better understand exactly what type of distribution over n-bit strings is parameterized by the quantum circuit.

Does anyone have advice about where I can start reading about this?

Comment #39 September 24th, 2019 at 5:00 am

Scott, you’re using “private key” incorrectly. What you mean by “private key” is called symmetric-key cryptography. Private keys are a part of asymmetric-key cryptography, which is the older (and more descriptive) name for public-key cryptography.

Comment #40 September 24th, 2019 at 5:22 am

Am I correct in understanding that this quantum supremacy experiment aims to:

1) Produce a quantum device returning a set X of strings in a few minutes

2) Develop a classical “test” T – a program whose input is a set of strings and whose output is Yes or No

3) On input X, the test T returns Yes with high probability (in other words, X passes the test T)

4) The claim is that it is classically intractable to generate a set X which passes the test T

If you start to claim that X is (close to being) from some distribution D and that T tests this, the sceptics can argue with this: it is impossible to sample from D exactly, how large is the error, etc. However, items 1–4 above do not mention any D, and the only way for the sceptics to argue with this is to falsify item 4) by producing an efficient classical algorithm returning a random set X which passes the test T with high probability.

Comment #41 September 24th, 2019 at 5:29 am

Jay #28: You could stack as many simulations as you want; as long as there is a source of uncertainty and free will, I’d argue each of those is perfectly real.

We don’t know if that’s the case with our world, but the fact that we can imagine and be scared of the possibility that we are a simulation is enough freedom for now 🙂

Comment #42 September 24th, 2019 at 5:36 am

More generally, am I correct in understanding that any (not necessarily this) quantum supremacy experiment can be written as

1) A (programmable) quantum device which gets input I from some set S1 and efficiently generates random output O(I) from some set S2

2) A classical program T with input (I’ \in S1, O’ \in S2) and output Yes or No

3) On input (I, O(I)), T returns Yes with High probability

4) A conjecture that for any efficient classical program with input I and output O'(I), T returns No on input (I, O'(I)) with high probability.

Comment #43 September 24th, 2019 at 5:44 am

A reformulation as a game:

A classical program T is given to Alice and Bob.

Alice chooses input I and sends it to Bob

Bob does a computation for a limited time and then returns O = O(I) to Alice

Alice and Bob run T on input (I,O)

Alice wins if T outputs False, Bob wins if T outputs True.

The claim is that Bob wins if and only if he has a quantum computer.
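As a schematic, the reformulated game can be written as a small referee loop. This is only an illustration of the game’s shape (the function names and the toy test are mine; in the real game T would be a statistical test like the acceptability check discussed elsewhere in this thread):

```python
import time

def run_game(alice_input, bob, test, time_limit_s):
    """Referee for the Alice/Bob game: Alice picks I, Bob must answer
    within the time limit, then T(I, O) decides who wins."""
    I = alice_input()
    start = time.monotonic()
    O = bob(I)
    if time.monotonic() - start > time_limit_s:
        return "Alice"  # Bob exceeded his computation budget
    return "Bob" if test(I, O) else "Alice"

# Toy instantiation: I is a number, T demands its double, and a lazy
# Bob who just echoes I back loses.
winner = run_game(lambda: 21, lambda i: i, lambda i, o: o == 2 * i, 1.0)
```

The time limit is essential: without it, a classical Bob could always win by brute-force simulation.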

Comment #44 September 24th, 2019 at 6:51 am

@Uri #38: It’s widely believed that the distribution generated by a random n-qubit circuit is well-modeled by the distribution generated by the following code:

for x in {0,1}^n

p[x] := draw of an Exponential(1) random variable

Where the 2^n exponential random variables are independent.

(This is just because the quantum amplitudes should be well-modeled by independent complex Gaussians, and the squared magnitude of a complex Gaussian is an Exponential.)

Note the two levels of randomness: this is a *randomly-generated* probability distribution.

A key phrase to google is “Porter Thomas”.
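Ryan’s pseudocode becomes runnable with only minor changes; the tail check at the end is my own addition, verifying the Porter-Thomas property that roughly an exp(-t) fraction of strings x satisfy N*p[x] > t:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n = 12
N = 2 ** n

# The randomly-generated distribution from the pseudocode above:
# one independent Exponential(1) draw per n-bit string, normalized.
e = rng.exponential(1.0, size=N)
p = e / e.sum()

# Porter-Thomas tail: the fraction of strings with N*p[x] > t
# should be close to exp(-t) for moderate t.
tails = {t: float(np.mean(N * p > t)) for t in (0.5, 1.0, 2.0)}
```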

Comment #45 September 24th, 2019 at 7:07 am

Are there applications for this certified randomness protocol? Is there a paper?

We like verifiable random functions (VRFs), (publicly) verifiable secret sharing (VSS/PVSS), and verifiable delay functions (VDFs) because they give us randomness that is canonical given some constraints.

VRF: You generate exactly as much suitable randomness as the protocol says.

VDF: Randomness is unbiased assuming both one honest fast participant and that no participant can evaluate the VDF faster than the specified adversarial advantage.

PVSS: Randomness is unbiased assuming that k-out-of-n participants are honest.

Your certified randomness proves that the correct process generated the randomness, but presumably says nothing about how many times they ran the full multi-sample process, making it extremely biased, right?

p.s. VRF examples include many signature schemes like RSA-FDH, BLS, and https://eprint.iacr.org/2017/099.pdf We have several VDF designs, but the most useful is https://eprint.iacr.org/2019/166.pdf

Comment #46 September 24th, 2019 at 7:33 am

Perhaps Scott or Gil or someone else can comment on this argument against QC:

Quantum computations require highly reversible steps to minimise the release of decoherence-producing entropy (heat). But all reversible processes have to run slowly (the slower they run, the more reversible they can be, by thermodynamics). So all QCs seem caught in a cleft stick: to run quickly, they generate so much entropy that the decoherence washes out the result. I tried applying this to Grover’s search algorithm and found that, although Grover does indeed complete the search in root N steps – beating the classical competition, which requires order N steps, of course – the individual steps run so slowly that the whole computation still takes time order N to complete. So, to be clear, Grover completes in fewer steps, but still takes as long, timewise, to complete as a classical computer.

Shor’s algorithm is not Grover’s, of course, but even so, is this a fundamental problem with QCs? A lot of the time when a paper announces a QC speedup over the classical what they mean is the number of steps is fewer – which is not the same thing.

Comment #47 September 24th, 2019 at 7:41 am

Maybe this answers @Bogdan’s question.

Fact: The blueprint B of a 53-qubit circuit induces a probability distribution p_B on the N = 2^53 bitstrings x in {0,1}^53.

Definition: Given B, say a string x in {0,1}^53 is “acceptable” if p_B(x) > 1/N.

—

Now here is an “average-case search problem”:

Input: a _randomly chosen_ blueprint of a 53-qubit circuit B.

Task: output an acceptable string.

—

Widely believed statement: A perfectly built quantum computer can efficiently solve the task with probability >= 70%. [In fact, >= 2/e.]

Plausible conjecture made by Google and others: Every efficient classical algorithm solves the task with probability <= 40%. [In fact, <= 1/e.]

—

Fact: Using known classical algorithms, we can *check* whether a string x is acceptable for blueprint B provided the number of qubits is less than maybe 40. (Doing it for 53 is super-hard though.)

So a nice experiment would be: Pick a random B with 40 qubits. Try to build a physical quantum computer that at least solves the ‘acceptability task’ with probability >= 60%. (I.e., only 10% worse than the ideal quantum computer.) Now run the quantum computer 1 million times and see if it outputs at least 500,000 acceptable strings. (A classical computer can check this.)

If the quantum computer really generates acceptable strings with probability >= 60%, it will pass the overall test of getting at least 500,000 acceptable strings with > 99.99999% probability. On the other hand, if the plausible conjecture is true, the probability an efficient classical algorithm can solve the overall task by getting 500,000 acceptable strings is < .00001%.

It would be great if Google's paper describes the results of this kind of nice experiment (even for 15 or 20 or 36 qubits in place of 40).

Comment #48 September 24th, 2019 at 7:42 am

This is a little like celebrating the technological advance of Sputnik during the Cold War. Objectively a great scientific achievement once it’s verified, but impossible not to have mixed feelings given the monopolistic, antidemocratic, and censorious nature of the organization and its employees.

Comment #49 September 24th, 2019 at 7:43 am

Sorry, the above comment seems to have gotten garbled (maybe due to less-than and greater-than signs?). Here it is re-posted:

——

Maybe this answers @Bogdan’s question.

Fact: The blueprint B of a 53-qubit circuit induces a probability distribution p_B on the N = 2^53 bitstrings x in {0,1}^53.

Definition: Given B, say a string x in {0,1}^53 is “acceptable” if p_B(x) exceeds 1/N.

—

Now here is an “average-case search problem”:

Input: a _randomly chosen_ blueprint of a 53-qubit circuit B.

Task: output an acceptable string.

—

Widely believed statement: A perfectly built quantum computer can efficiently solve the task with probability at least 70%. [In fact, at least 2/e.]

Plausible conjecture made by Google and others: Every efficient classical algorithm solves the task with probability at most 40%. [In fact, at most 1/e.]

—

Fact: Using known classical algorithms, we can *check* whether a string x is acceptable for blueprint B provided the number of qubits is less than maybe 40. (Doing it for 53 is super-hard though.)

So a nice experiment would be: Pick a random B with 40 qubits. Try to build a physical quantum computer that at least solves the ‘acceptability task’ with probability at least 60%. (I.e., only 10% worse than the ideal quantum computer.) Now run the quantum computer 1 million times and see if it outputs at least 500,000 acceptable strings. (A classical computer can check this.)

If the quantum computer really generates acceptable strings with probability at least 60%, it will pass the overall test of getting at least 500,000 acceptable strings with 99.99999% probability. On the other hand, if the plausible conjecture is true, the probability an efficient classical algorithm can solve the overall task by getting 500,000 acceptable strings is less than .00001%.

It would be great if Google’s paper describes the results of this kind of nice experiment (even for 15 or 20 or 36 qubits in place of 40).
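Under the Porter-Thomas idealization of p_B (a modeling assumption of mine, standing in for an actual circuit simulation), the 2/e-vs-1/e gap in this experiment is easy to check numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
N = 2 ** n

# Porter-Thomas stand-in for the circuit's distribution p_B.
p = rng.exponential(1.0, size=N)
p /= p.sum()
acceptable = p > 1.0 / N  # the definition of "acceptable" above

shots = 100_000
ideal_qc = rng.choice(N, size=shots, p=p)   # samples from p_B itself
classical = rng.integers(0, N, size=shots)  # a trivial uniform sampler

rate_qc = float(np.mean(acceptable[ideal_qc]))   # near 2/e ~ 0.74
rate_cl = float(np.mean(acceptable[classical]))  # near 1/e ~ 0.37
```

Of course the trivial uniform sampler is only a baseline, not the best classical attack; the conjecture is that *no* efficient classical algorithm beats 1/e by much.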

Comment #50 September 24th, 2019 at 7:57 am

Ryan O’Donnell #13:

1) The original hardness results were for sampling problems, and the *intuition* is always that you’re trying to force the QC to sample from some particular distribution; the statistical tests like linear cross-entropy are all designed around that. But you’re right, ultimately the sampling problem always ends up being converted to a search problem, since you can’t directly verify sampling. Lijie and I acknowledged as much in our CCC’2017 paper, where we advocated shifting the focus to the hardness of the search problem. Incidentally, do you know my Equivalence of Sampling and Searching paper? (Admittedly, it only bears indirectly on this discussion, since search problems involving time-bounded Kolmogorov complexity are impractical to verify.)

2a) No, I don’t know why they switched to the arithmetic mean. Having said that, I actually like the arithmetic mean much better for quantum supremacy testing—for example, it’s more convenient for the analysis of my certified randomness protocol—so I wasn’t inclined to scrutinize their choice too harshly. 🙂

2b) I agree that, once you’ve switched to the arithmetic mean, it’s no longer accurate to call the quantity you’re computing a “cross-entropy.” My guess is that they were just still wedded to that word from the years when they were using the geometric mean. In the certified randomness paper that I’m writing, I’ve been calling the task of producing samples with high arithmetic mean HOG (Heavy Output Generation). But there’s still time to change it! Do you have a suggestion for a better term?

3) I have no doubt whatsoever that they did the thing that you and Gil are asking for (i.e., checking the whole distribution with small numbers of qubits). I agree that it would’ve been good to discuss that in the paper. It might be in one of their earlier papers, but if not, you might be able to get the data you want by emailing them—or let me know if you’d like me to do it.

One clarification, though: if the distribution you were trying to sample from is D_{C}, then the actual distribution that you observe is not D_{C} itself, but more like

(1 – exp(-d)) U + exp(-d) D_{C},

where U is the uniform distribution and d is the circuit depth.
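This clarification can be illustrated numerically: samples drawn from such a mixture give a linear cross-entropy-style score N*mean(p[x]) - 1 that recovers the mixture weight on D_C. (The code below is my own illustration, again using the Porter-Thomas ensemble as a stand-in for D_C, with an arbitrary fidelity parameter f playing the role of exp(-d).)

```python
import numpy as np

rng = np.random.default_rng(2)
n = 12
N = 2 ** n
f = 0.3  # effective fidelity, playing the role of exp(-d) above

# Porter-Thomas stand-in for D_C.
p = rng.exponential(1.0, size=N)
p /= p.sum()

# Draw each sample from D_C with probability f, else uniformly:
# the mixture (1 - f) U + f D_C.
shots = 200_000
from_dc = rng.random(shots) < f
samples = np.where(from_dc,
                   rng.choice(N, size=shots, p=p),
                   rng.integers(0, N, size=shots))

# Under Porter-Thomas, N*mean(p[x]) is ~2 for perfect sampling and
# exactly 1 for uniform, so subtracting 1 recovers f.
xeb = float(N * p[samples].mean() - 1.0)
```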

Comment #51 September 24th, 2019 at 8:16 am

@ Scott #21: If they work with a search problem instead of sampling, does it mean that the average case hardness results you mention in your Q8 do not apply? In the paper I mentioned before, I bet that if one initializes the algorithm at random, the search will produce a non-uniform sample. And in that problem it is very easy to conjecture that no polynomial time search algorithm exists, but it actually does.

Comment #52 September 24th, 2019 at 8:25 am

Gil Kalai #14: I’ll try to respond later at more length if I have time. For now, though, three points:

(1) It’s unfortunate that the paper leaked early. But when the final paper is released *with all the supplementary materials* (and links to more stuff online), hopefully it will provide much of the additional data that you want. Did you also try looking at their earlier papers?

(2) If, after (1) is exhausted, there’s additional data that would be relatively easy to collect and that would satisfy you, I’ll be happy to ask them for it on your behalf.

(3) As I see it, the fact that you’ve gone down the path of defiantly predicting that the Google result won’t stand, and will have to be retracted, provides the ultimate rejoinder to all the people who are now saying that it’s obvious, no big deal, and a nothingburger. So thanks! 😀

Comment #53 September 24th, 2019 at 8:33 am

“The bits pass all your tests, but how do you know I didn’t really just run yesterday’s Trump tweets through a hash function and send you the output, instead of doing something actually random?”

You are making the assumption that Trump’s tweets aren’t actually random. Not sure I’d go that far.

Comment #54 September 24th, 2019 at 8:47 am

Nicholas Teague #18:

Curious, could a version of this experiment be performed on an adiabatic quantum computing device and if so would you expect similar results?

With a *non-stoquastic* adiabatic QC, yes, in principle it could be done, by the famous theorem of Aharonov et al. that *any* quantum computation can be done adiabatically.

By contrast, with a *stoquastic* adiabatic QC—which includes, for example, everything D-Wave was building until extremely recently—it’s not obvious that anything similar could be done, even in principle. We know that the ground states of stoquastic adiabatic processes are accessible in the complexity class AM. This tells us that the style of hardness arguments that we know for quantum sampling problems, based (for example) on #P-completeness and BPP=P^{#P} collapsing the polynomial hierarchy, can’t possibly go through for stoquastic adiabatic QC. So if the conclusions are true at all, then a new type of argument would be needed.

Two other remarks:

(1) Insofar as we’re using adiabatic QC solely to solve *optimization problems* (as D-Wave was for most of its history)—in other words, as long as we care only about minimizing some easily-computable cost function—the hardness arguments for sampling-based quantum supremacy will of course be irrelevant to us. The only question then—one that’s still not really settled—will be what sorts of speedups are available via adiabatic optimization.

(2) OK, forgetting all the history, suppose we now decided that we wanted to build a non-stoquastic adiabatic QC for the specific purpose of solving a hard sampling problem. Would that be a good engineering idea? As far as I understand, it all depends on whether the temperature can be made sufficiently small compared to the spectral gap. And the Google group, which knows a lot about adiabatic QC as well (they did, after all, buy a D-Wave machine), obviously made a decision at some point that adiabatic would *not* be the fastest route to quantum supremacy.

Comment #55 September 24th, 2019 at 8:53 am

dualmindblade #25:

What if we assume Google is adversarial? If they were to publish the data, could we rule out any secret trickery in the choice of C or faking the results in some other way?

I don’t know how to rule out secret trickery in the generation of the circuit (even though, thinking it over just now, I *also* don’t know how such trickery would be accomplished).

If we’re at *this* level of paranoia, then presumably the only real solution will be for some trusted third party to generate the circuit C, send it to Google’s QC, and request an immediate response. That sort of functionality will eventually be available—indeed, it’s *exactly* what my certified randomness protocol requires, and Google is working right now to demonstrate that protocol. Give it a few more months?

Comment #56 September 24th, 2019 at 9:07 am

matt #26:

Am I missing something? Surely your result was not “We show that task X is hard assuming the conjecture that task X is hard”. Is there any relation between hardness of sampling from a random circuit (to small L_1 distance or some such) and some more standard complexity conjecture?

Yeah, I didn’t go into detail about the conditional hardness results that we have, but they’re not that tautological. 🙂 A summary:

(1) If you want hardness of exact sampling, we know how to get that based on the sole assumption that the polynomial hierarchy is infinite (or indeed that BPP^{NP} ≠ P^{#P}). That’s as good as you could possibly ask for in the present state of complexity theory.

(2) If you want hardness of approximate sampling in L_1-norm, then Arkhipov and I showed how to get that from the belief that the polynomial hierarchy is infinite, together with the belief that certain natural problems are #P-hard. In the case of BosonSampling, that problem was to approximate the permanent of a matrix of independent N(0,1) Gaussian entries, with high probability over the matrix. For other quantum supremacy proposals, the problem will be different but similar in spirit. Unfortunately, while we know how to prove that many natural problems are #P-hard to approximate, and also that they’re #P-hard on average, we don’t yet know how to prove their #P-hardness when approximation and average-case are combined.

(3) Finally, if you want hardness of doing anything whatsoever to spoof the linear cross-entropy test (possibly deviating arbitrarily from correct sampling)—well, we can get that from the assumption that, given a random quantum circuit with n qubits and m >> n gates, guessing a specific output amplitude can’t be done in polynomial time with even 2^{-n} bias over chance. So, this is not quite tautological, but also not very satisfying. For more, see my paper with Lijie.

Comment #57 September 24th, 2019 at 9:10 am

O. S. Dawg #27:

More seriously, I never did believe generating random bits was all that difficult until I understood it to be an impossible task. Today I just feel old and confused, again.

I tried to anticipate and head off exactly this confusion in my Q10, but it looks like I didn’t succeed. asdf #33 gives a very clear explanation; see if it helps.

Comment #58 September 24th, 2019 at 9:17 am

Ajit Jadhav #32:

Regarding your point 1: as I wrote to Gil Kalai, they already did what you’re asking for. They’re not right now slapping themselves on their foreheads, saying “why didn’t we first check for entanglement with smaller numbers of qubits?? why did it take a blog comment to remind us of that???” Read their earlier papers, and wait a month for the supplementary material for the new paper, and if there’s still relevant data that’s missing then I’ll ask them for it on the skeptics’ behalf.

I didn’t understand your point 2.

Comment #59 September 24th, 2019 at 9:29 am

If – as Ryan O’Donnell #13 described in point (3) – the Google team demonstrated the appropriate match in KL-divergence for the 16 qubit (classically checkable) version, then would any skeptic have grounds to doubt their claim of RCS-based supremacy (for the moment, set aside the “challenge” version, where computational time would also need to be checked)?

Comment #60 September 24th, 2019 at 9:31 am

Richard Gaylord #36:

you write “in my role as blogger and “public intellectual,”” how did you become a public intellectual? was there an election and if so, who voted? did a ‘select’ group designate you as one? or did you simply declare yourself to be one? i’d like to know because it seems like it would be pretty cool to be a public intellectual, even if it doesn’t come with a salary increase (what exactly are the perks you have as a result of being a public intellectual?) and i’d like to apply to be one.

LOL! Here’s my secret recipe:

(1) Start a blog. Keep at it for 15 years.

(2) Attract readers by writing what you actually think. Many of those readers might hate you, but at least they’ll be reading.

(3) Be one of a couple hundred people who understand something that journalists constantly want to write articles about.

(4) Among those couple hundred, be one of the only ones who will actually talk to the journalists / return their emails and calls.

(5) Bask in the glory of being called a “shill,” “clueless techbro,” etc. every week on Twitter and SneerClub. 😀

Comment #61 September 24th, 2019 at 9:36 am

Haelfix #37:

Could you explain why a quantum computer is necessarily faster than a classical computer at simulating a quantum system like the aforementioned condensed matter system.

It’s not necessarily faster. Indeed, as you correctly point out, for some condensed-matter systems it definitely isn’t. But for others it might be! It all depends on finding a system for which the known classical methods, like DMRG and tensor networks, all break down, but there’s still some observable whose expectation value you really care about.

Comment #62 September 24th, 2019 at 9:42 am

Uri #38: Yes, exactly as Ryan O’Donnell #44 said, look up any of the papers by the Google group that mention the “Porter-Thomas” distribution.

The basic picture is that your final amplitudes just look like a bunch of uncorrelated complex Gaussian variables of mean 0. Taking the squared absolute values of those amplitudes then gives you exponentially distributed probabilities. So, in your distribution over 2^n strings, the number of strings whose probabilities exceed b/2^n will be approximately 2^n/exp(b). And there’s not really any structure in the distribution besides that. It’s just that preferentially sampling the “heavier-weight” strings is expected to be a classically intractable problem.

Comment #63 September 24th, 2019 at 9:52 am

Candide III #39:

Scott, you’re using “private key” incorrectly. What you mean by “private key” is called symmetric-key cryptography. Private keys are a part of asymmetric-key cryptography, which is the older (and more descriptive) name for public-key cryptography.

In the quantum and complexity communities, I hear “private-key” much more often than “symmetric-key.”

According to Wikipedia: “Other terms for symmetric-key encryption are secret-key, single-key, shared-key, one-key, and private-key encryption. Use of the last and first terms can create ambiguity with similar terminology used in public-key cryptography.”

My verdict: calling it “private-key” is acceptable/correct, but “symmetric-key” is arguably preferable. So thanks—I’ll try calling it the latter more often in the future, if I can remember to.

Comment #64 September 24th, 2019 at 10:00 am

Jeff Burdges #45:

Are there applications for this certified randomness protocol? Is there a paper?

Alas, I’m still working on the paper! (I find that, alas, my already-high base rate of procrastination is amplified by orders of magnitude when there are two rambunctious kids to take care of.) In the meantime, see for example these slides.

To answer your question (unless I misunderstood it), what my protocol guarantees is not unbiased samples, but samples with a large amount of smoothed min-entropy. However, once you have that, together with a tiny random seed, it’s well known that you can use a randomness extractor (like, say, that of Guruswami-Umans-Vadhan) to produce a large number of nearly-unbiased random bits. Those could then be used for all sorts of applications that require certified random bits (especially in the form of a public randomness beacon)—e.g., proof-of-stake cryptocurrencies.
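For intuition about that last step, here is a toy sketch (entirely illustrative, not Scott’s protocol: the biased source, the multiply-shift hash family, and all parameters are invented for the example) of turning a high-min-entropy source plus a small seed into nearly unbiased bits, using a universal hash as the extractor, per the leftover hash lemma, instead of the Guruswami-Umans-Vadhan construction:

```python
import random

random.seed(1)

N_BITS = 64   # bits per block drawn from the weak source
OUT_BITS = 8  # bits extracted per block (well below the min-entropy)

# Toy weak source: each bit is 1 with probability 0.8, so a 64-bit block
# carries roughly 64 * (-log2(0.8)) ~ 20 bits of min-entropy.
def weak_block():
    x = 0
    for _ in range(N_BITS):
        x = (x << 1) | (1 if random.random() < 0.8 else 0)
    return x

# The small random seed: one odd 64-bit multiplier, chosen once.
# h(x) = top OUT_BITS bits of (hash_seed * x mod 2^64), a universal hash.
hash_seed = random.getrandbits(N_BITS) | 1

def extract(x):
    return ((hash_seed * x) & ((1 << N_BITS) - 1)) >> (N_BITS - OUT_BITS)

# Extract 8 bits from each of 5000 independent blocks; the empirical bias
# of the output should sit near 1/2 even though the source sits at 0.8.
bits = []
for _ in range(5000):
    y = extract(weak_block())
    bits.extend((y >> i) & 1 for i in range(OUT_BITS))

bias = sum(bits) / len(bits)
```

The leftover hash lemma guarantees the output is close to uniform as long as the number of extracted bits stays well below the source’s min-entropy; constructions like Guruswami-Umans-Vadhan achieve the same guarantee with much shorter seeds.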

Comment #65 September 24th, 2019 at 10:13 am

Michael Price #46: Your argument is wrong because in real life, a QC is not reversible, but only very approximately so. Ultimately, the theory of quantum fault-tolerance is what convinced most of us that in principle, it ought to be possible to build scalable QCs despite that irreversibility. Understanding quantum fault-tolerance is a prerequisite to seriously discussing these questions.

Even today, though, before fault-tolerance has been achieved … if you care about specific numbers, Google’s superconducting device executes each layer of gates in just a few nanoseconds. If your argument were sound, then shouldn’t the speedup they’ve apparently already obtained have been impossible?

Comment #66 September 24th, 2019 at 10:16 am

Eli S. #48:

This is a little like celebrating the technological advance of Sputnik during the Cold War. Objectively a great scientific achievement once it’s verified, but impossible not to have mixed feelings given the monopolistic, antidemocratic, and censorious nature of the organization and its employees.

Even granting that those are your feelings about Google as a company, I’d encourage you in the strongest terms to separate them from your feelings about the team of roughly 75 people, centered at a lab in Santa Barbara, that produced this accomplishment.

Comment #67 September 24th, 2019 at 10:21 am

I don’t think that being a QC skeptic is the same thing as denying quantum supremacy.

My skepticism is about building a practical QC with millions of error corrected qubits.

Comment #68 September 24th, 2019 at 10:34 am

So basically this thing is like a more complex double slit experiment that’s reconfigurable?

Comment #69 September 24th, 2019 at 10:42 am

Bill #51:

If they work with a search problem instead of sampling, does it mean that the average case hardness results you mention in your Q8 do not apply?

As I said before, for the search problem (i.e., producing any samples with high linear cross-entropy), the hardness result in my paper with Lijie would be the most directly relevant.

But even setting aside complexity-theoretic evidence, people who dislike or distrust the Google result now have an immense motivation to try to find better classical algorithms for handling random quantum circuits. So let’s see what they come up with!

Comment #70 September 24th, 2019 at 10:44 am

fred #68:

So basically this thing is like a more complex double slit experiment that’s reconfigurable?

Yes. And a QC with millions of error-corrected qubits would be like an even more complex, even more reconfigurable double-slit experiment.

Comment #71 September 24th, 2019 at 10:54 am

Scott #70

you added in your answer “error-corrected”, which is the crux of the matter imo.

Isn’t the problem that, unlike classical computers which are scalable (i.e. built from smaller independent modules), all the “bits” in a QC are potentially linked/correlated with one another?

Comment #72 September 24th, 2019 at 11:03 am

Ryan O’Donnell #49

Thank you for the answer. Yes, this is exactly what I would like to see.

Comment #73 September 24th, 2019 at 11:07 am

Scott #63: yes, according to the ~~dictionary~~ footnote in Wikipedia, “private-key” is an acceptable synonym for “symmetric-key”, but since private keys are also a part of public-key cryptography, where i.a. you have to manage your private keys, which are NOT symmetric keys, I’m afraid using the former synonym will only lead to unnecessary confusion when you’re writing for the general public and not just for the small community of QC researchers. Speaking as a software professional, I can’t remember ever seeing “private-key cryptography” used for “symmetric-key cryptography” in any discussion, code or documentation. “Private key” always implies asymmetric (public-key) cryptography.

Comment #74 September 24th, 2019 at 11:09 am

fred #71: No. If our current understanding of physics is right, then that’s not the problem at all. If you can error-correct one qubit, then the linearity of QM implies automatically that you can error-correct n entangled qubits with n times as much effort. (Of course, there’s also the challenge of computing on the error-corrected qubits—but you only need to be able to handle pairs of them, to get 2-qubit gates!) The problem, rather, is that protecting even one qubit for arbitrary amounts of time is an incredibly hard engineering problem.

But as the recent news reminds us, incredibly hard engineering problems sometimes do get solved—as long as there are no deep principles of physics or math that explain why they can’t be! And in the case of quantum error-correction, after 25 years, no one has managed to articulate such principles in a way that’s made the slightest bit of sense to me.

Comment #75 September 24th, 2019 at 11:37 am

It will be a lot easier to find holes in the way the experiment was carried out than to try to improve classical simulators.

What’s the chance that the random number generator used in the experiment will ultimately be brought into question, thus prompting the use of verifiable randomness?

Maybe they should do that anyway and then repeat the experiment, thus demonstrating a valid use of the protocol.

Comment #76 September 24th, 2019 at 12:12 pm

Regarding the claim of quantum supremacy, it sort of depends on the philosophical question of what counts as a ‘computer’. E.g. here’s one way of achieving quantum supremacy: generate a sequence of amino acids, synthesize the protein and observe properties that depend on its folding. Modeling folding classically is known to require massive amounts of computing power, whereas the protein molecule doesn’t use much power at all. I am fairly certain this experiment could easily have been done years ago with table-top hardware. So in a sense quantum supremacy has been achieved: we can set up repeatable experiments whose outcomes are prohibitively expensive to predict using classical computers. Looked at from this angle, all that the Google result achieves is to improve the experimental setup (no need for messy liquid chemistry).

This episode reminds me of Classical/Hellenistic Greek mathematicians and the cube-doubling problem. They had not yet developed the methods to prove that a compass-and-straightedge construction of ∛2 was impossible, but they did know that they didn’t know any C&SX construction that could be proved to yield ∛2 (that’s the classical computation part). Also, they knew (and applied practically) a number of more or less complicated mechanical devices – essentially experimental set-ups – which could be used to extract cube roots, for example Eratosthenes’ mesolabe which he reportedly had engraved on his tombstone, and they knew how to check the result to any desired accuracy with C&SX and geometric algebra. But I don’t think they talked about “mechanical supremacy”. In fact they didn’t regard mechanical “solutions” of the cube-doubling problem as true solutions at all: they called them “sophistical solutions” [Russo, 2004].

Comment #77 September 24th, 2019 at 12:21 pm

@Scott: Thanks for the thoughtful replies.

So yeah, just to heighten the excitement a little… You said ‘I have no doubt whatsoever that they did the thing that you and Gil are asking for (i.e., checking the whole distribution with small numbers of qubits)’. Great! — this would literally be the thing that Gil would accept as disproof, I think 🙂

So yeah, do you want to contact them? Like, if we could get:

For one (or, better, two or three) random circuits on, say, ~20 qubits:

a) the complete table of ‘ideal’ probabilities (p_x)_{x in {0,1}^20}

b) the complete table of their estimate (q_x) of the output of their quantum circuit (the empirical distribution formed by, say… a billion draws? If that’s too many, we could try 16 qubits…)

c) and then d_TV(p,q) is less than .1, or KL(p||q) less than .1 [remark: you’d get 1/e, 1-gamma if q were just uniform]

… then these two tables of 2 million numbers should suffice to make Gil recant? Can you email them?

(PS to Gil: Not being adversarial here, you know I love your work 🙂 We can all be excited about the outcome one way or another 🙂)
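For concreteness, the quantities in Ryan’s proposed check, and the bracketed remark about 1/e and 1 − gamma, can be verified numerically. Below is a toy sketch (entirely illustrative: a synthetic Porter-Thomas distribution over 2^15 strings stands in for the real ideal probabilities, which only Google has) computing d_TV(p, uniform) and KL(p||uniform):

```python
import math
import random

random.seed(0)
N = 1 << 15  # 2^15 "output strings" (a small stand-in for 2^20)

# Synthetic ideal distribution with Porter-Thomas statistics:
# normalized i.i.d. exponential weights.
w = [random.expovariate(1.0) for _ in range(N)]
total = sum(w)
p = [x / total for x in w]
q = [1.0 / N] * N  # the uniform distribution, as in the bracketed remark

def d_tv(p, q):
    # Total variation distance: half the L1 distance.
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def d_kl(p, q):
    # Kullback-Leibler divergence KL(p||q), in nats.
    return sum(a * math.log(a / b) for a, b in zip(p, q) if a > 0)

tv = d_tv(p, q)  # close to 1/e ≈ 0.368
kl = d_kl(p, q)  # close to 1 - gamma ≈ 0.423 (gamma = Euler-Mascheroni)
```

The two limiting values 1/e and 1 − gamma are exactly what one computes analytically for exponentially distributed probabilities versus uniform, matching the remark in brackets.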

Comment #78 September 24th, 2019 at 12:31 pm

Scott #65: it would take the speedup Google claims, combined with effective fault correction, to convince me the reversibility argument is incorrect.

Comment #79 September 24th, 2019 at 1:08 pm

What about IBM’s claim that they managed to simulate a 56-qubit version of the same problem Google describes in the paper (see https://pastebin.com/RfUMXJZE) on a classical supercomputer:

– https://www.ibm.com/blogs/research/2017/10/quantum-computing-barrier/

– https://www.newscientist.com/article/2151032-googles-quantum-computing-plans-threatened-by-ibm-curveball/

Doesn’t that mean Google cannot claim to reach quantum supremacy with only 54 qubits (as described in Google’s paper)?

Comment #80 September 24th, 2019 at 2:08 pm

Andreas #79: No, it does not mean that. Did you read the FAQ? (Specifically Q7?)

With the current generation of devices, the goal is not to solve sampling problems that are totally out of the question to solve classically—indeed, even if you could do that, you then couldn’t verify the results. Rather, it’s to solve the problems several billion times faster than the best known algorithms running on a classical computer.

Comment #81 September 24th, 2019 at 2:13 pm

Scott, how does this affect your research? what’s the next focus for you?

Comment #82 September 24th, 2019 at 2:37 pm

[…] scientist Scott Aaronson, who says he’s familiar with the research, kindly takes us through the meaning and implications of the milestone in a very readable post on his personal […]

Comment #83 September 24th, 2019 at 3:15 pm

Your paper about HOG and QUATH (Complexity-Theoretic Foundations of Quantum Supremacy Experiments) seems to suggest the following test:

1. Compute the expected probabilities of each output of their QC

2. Count the number of samples whose probability exceeds the median, and verify that they make up more than 2/3 of the samples.

However, their paper evaluates their results in an entirely different manner, involving linear XEB, which is just summing up the expected probabilities instead (times 2 minus 1).

While your condition has some theoretical background and proofs backing it, theirs does not.

It could be interesting if they could also compute the data for your condition. They even mention your condition in the paper right next to XEB:

“Another closely related choice is the heavy output generation test, for which f is a step function”

It shouldn’t be difficult at all for them to compute it; they should have all the data.

If you can contact them and ask them to compute it, that would be a very interesting result! It would make it much easier to see the direct line between their experimental results and your hardness conjectures.
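The two scores being contrasted here can be computed side by side on synthetic data. The sketch below is a toy illustration (my own construction, not Google’s code: ideal probabilities are drawn from Porter-Thomas statistics and the “device” is a perfect sampler), showing the linear XEB score, which an ideal sampler drives toward 1, next to the HOG-style heavy-output fraction, which an ideal sampler drives toward (1 + ln 2)/2 ≈ 0.85, comfortably above the 2/3 threshold:

```python
import random

random.seed(0)
N = 1 << 14  # 2^14 outputs of a toy "circuit"
K = 20000    # number of samples drawn from the (here, noiseless) device

# Synthetic ideal probabilities with Porter-Thomas statistics:
# normalized i.i.d. exponential weights.
w = [random.expovariate(1.0) for _ in range(N)]
total = sum(w)
p = [x / total for x in w]

# A noiseless sampler draws strings with exactly these probabilities.
samples = random.choices(range(N), weights=p, k=K)

# Linear XEB score: N * (mean ideal probability of the samples) - 1.
# An ideal sampler scores about 1; a uniform sampler scores about 0.
xeb = N * sum(p[i] for i in samples) / K - 1

# HOG-style score: fraction of samples heavier than the median probability.
# An ideal sampler gets about (1 + ln 2)/2 ~ 0.85; a uniform one gets 1/2.
median_p = sorted(p)[N // 2]
hog = sum(1 for i in samples if p[i] > median_p) / K
```

Both statistics come from the same list of samples and the same table of ideal probabilities, which is why computing one given the other should indeed be easy.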

Comment #84 September 24th, 2019 at 3:37 pm

fred #81: To those of us in the field, the achievement of quantum supremacy is a wonderful milestone, but not at all a surprise. It’s something that we knew perfectly well was coming in months or years (and anyone reading my blog or listening to my talks could’ve known it too 🙂 ).

So I expect that my research interests will be mostly unchanged, except maybe with a little more emphasis on the applications of quantum supremacy experiments and near-term QCs more broadly.

That is, in the 5% of my time that I still have for research at all, as opposed to teaching, travel, administration, dealing with the kids….

Comment #85 September 24th, 2019 at 3:40 pm

Idan #83: As I already said in a different comment (on the post prior to this one), it’s trivial to modify the QUATH=>HOG reduction so that it talks about linear cross-entropy, instead of the test that Lijie and I considered. But it sounds like I should write that up for the sake of completeness.

Comment #86 September 24th, 2019 at 3:57 pm

[…] scientist Scott Aaronson, who says he’s familiar with the research, kindly takes us through the meaning and implications of the milestone in a very readable post on his personal […]

Comment #87 September 24th, 2019 at 4:25 pm

@Ryan #77: I think you misunderstand the output of Google’s noisy quantum computer. Even if we could run the machine a gazillion times and extract the machine’s true output distribution (q_x) and compare it with the ideal distribution (p_x), they won’t be close in total variation (TV) distance at all. In fact, from the numbers in the paper it looks like their TV distance will be greater than 0.99.

The claim is that q_x is a convex combination of p_x and the uniform distribution, with the weight in front of p_x being small (between 0.01 and 0.001), but it’s not zero. This distribution q_x scores slightly higher than the uniform distribution on the “linear cross entropy” score.
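The mixture model described here is easy to simulate. In this toy sketch (synthetic Porter-Thomas p, and a fidelity f = 0.1 rather than the paper’s far smaller value, purely so the effect is visible with few samples), the linear cross-entropy score recovers f even though the device’s output distribution stays far from p:

```python
import random

random.seed(0)
N = 1 << 13   # 2^13 outputs
K = 100_000   # samples from the simulated noisy device
f = 0.1       # toy "fidelity": weight on the ideal distribution

# Synthetic Porter-Thomas ideal distribution p
# (normalized i.i.d. exponential weights).
w = [random.expovariate(1.0) for _ in range(N)]
total = sum(w)
p = [x / total for x in w]

# Noisy device model: with probability f emit an ideal sample,
# otherwise emit a uniformly random string.
ideal = random.choices(range(N), weights=p, k=K)
samples = [ideal[t] if random.random() < f else random.randrange(N)
           for t in range(K)]

# Linear cross-entropy score: converges to f, even though in total
# variation the device's distribution remains far from p.
xeb = N * sum(p[i] for i in samples) / K - 1
```

This is the sense in which the experiment certifies a small but nonzero fidelity rather than closeness in TV distance.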

Comment #88 September 24th, 2019 at 4:42 pm

[…] scientist Scott Aaronson, who says he’s familiar with the research, kindly takes us through the meaning and implications of the milestone in a very readable post on his personal […]

Comment #89 September 24th, 2019 at 5:54 pm

“Those other experiments demonstrated other forms of “quantum supremacy”: for example, in the case of Bell inequality violations, what you could call “quantum correlational supremacy.”

Small correction: while people often make vague statements like “quantum correlations are stronger than classical ones”, Bell’s theorem is not a bound on correlations, it’s a bound on a particular function of correlations. You can have a local hidden variable model that is maximally correlated for every measurement setting (most trivial case would be one where you always get the same outcome locally), but it wouldn’t violate Bell’s inequality and funnily enough also isn’t allowed in QM.
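To make the distinction concrete, the “particular function of correlations” is the CHSH combination. A quick check (standard textbook values, nothing specific to this thread) that deterministic local strategies cap it at 2, while singlet-state correlations E(x, y) = −cos(x − y) reach 2√2 at the usual angles:

```python
import math
from itertools import product

# CHSH combination: S = E(a,b) + E(a',b) + E(a,b') - E(a',b').

# 1) Local hidden variables: every such model is a mixture of the 16
#    deterministic strategies (fixed outcomes A, A', B, B' in {+1,-1}),
#    so |S| is bounded by the deterministic maximum, which is 2.
classical_bound = max(
    abs(A * B + A2 * B + A * B2 - A2 * B2)
    for A, A2, B, B2 in product((1, -1), repeat=4)
)

# 2) Quantum singlet state: E(x, y) = -cos(x - y). At the standard
#    measurement angles the combination reaches 2*sqrt(2).
def E(x, y):
    return -math.cos(x - y)

a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, -math.pi / 4
quantum_S = abs(E(a, b) + E(a2, b) + E(a, b2) - E(a2, b2))
```

Note that the perfectly correlated local model (always output +1 on both sides) gives S = 1 + 1 + 1 − 1 = 2: maximal correlation for every setting, yet no violation, which is exactly fulis’s point.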

Comment #90 September 24th, 2019 at 6:07 pm

fulis #89: Correct! I just didn’t have a short phrase for that. And I figured that anyone who knew enough to make such a distinction at all, would also know what “correlational supremacy” means in this context.

On further thought, maybe “nonlocal games supremacy”?

Comment #91 September 24th, 2019 at 6:55 pm

@Anon #87: Aha, I did not realize that, thank you.

So it’s more like:

Pr[classical alg outputs a string w.p. more than 1/N] should be 1/e,

Pr[their quantum computer outputs a string w.p. more than 1/N] is more than 1/e + .001.

I guess this is indeed testable with ~1MM samples.

By the way, I focus on this version (do you output a string w.p. more than 1/N?) rather than their “average the probability of the strings output”, because this is the statistical test that is the best distinguisher for the classical alg’s (presumed uniform) output and the quantum alg’s presumed output (.99 uniform + .01 true Porter-Thomas distribution).

—

Would still be great to see actual tables of “ideal distribution” and “actually output distribution” for some 15 or 20 or 30 qubits.
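The two thresholds above can be checked numerically. In this toy sketch (synthetic Porter-Thomas probabilities, not the real data), a uniform sampler lands on a string with p_x > 1/N about 1/e ≈ 0.37 of the time, while a perfect sampler does so about 2/e ≈ 0.74 of the time, so a low-fidelity mixture sits just above 1/e, as described:

```python
import random

random.seed(0)
N = 1 << 14  # 2^14 strings
K = 50000    # samples per sampler

# Synthetic Porter-Thomas ideal probabilities
# (normalized i.i.d. exponential weights).
w = [random.expovariate(1.0) for _ in range(N)]
total = sum(w)
p = [x / total for x in w]

# Uniform sampler: how often does it hit a string with p_x > 1/N?  -> ~1/e
uniform_frac = sum(1 for _ in range(K)
                   if p[random.randrange(N)] > 1 / N) / K

# Perfect sampler: same question.  -> ~2/e
# (A device with fidelity f interpolates: about (1 + f)/e.)
ideal_draws = random.choices(range(N), weights=p, k=K)
ideal_frac = sum(1 for i in ideal_draws if p[i] > 1 / N) / K
```

So the gap the statistical test has to resolve is between 1/e and 1/e + f/e, which is why so many samples are needed when f is tiny.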

Comment #92 September 24th, 2019 at 10:23 pm

Re: Q9, it has already been placed in Bender’s head, for his free will, in Futurama.

Comment #93 September 25th, 2019 at 2:08 am

I gotta say this method of randomness certification reminds me of Merkle puzzles, which were a precursor of public-key cryptography. It would be great to have something more like Diffie-Hellman, even if it takes a quantum verifier. Durned if I have any idea what that DH analog would be, though.

Comment #94 September 25th, 2019 at 3:11 am

[…] (For an excellent discussion of quantum supremacy see Scott Aaronson’s (University of Texas) blog yesterday, Scott’s Supreme Quantum Supremacy […]

Comment #95 September 25th, 2019 at 4:44 am

Scott, please tell Google that until they make a quantum Fortnite version that runs at home, they shouldn’t bother us with quantum technicalities.

Comment #96 September 25th, 2019 at 4:45 am

Hi Scott

Thank you so much for the no-nonsense and (relatively) easy to understand FAQ. I understood 1/n of it, rather than 1/(2^n) that would be typical for this sort of stuff (lol).

While I’m super excited about this, being a total layman I *really* want to see a QC that can do a calculation that I can actually understand. The most obvious example would be Shor’s algorithm (no I don’t begin to understand the algorithm, but I do get that factoring big numbers is classically difficult).

You say that to do this would require millions of qubits, but is that only to factor a number that would take a long time with today’s fast computers? Could the current Google QC run Shor’s algorithm at all, like even on a “small” number? If so, how big a number could it potentially handle? Would it do that faster than a classical computer of the same size (50 bits of memory I guess[?]) could? Or is there something stopping it from running Shor’s algorithm at all, even for very small numbers? How much larger would a QC need to be to factor a number that would take me a long time using a desktop calculator, pen and paper, and some guesswork?

Comment #97 September 25th, 2019 at 4:46 am

Oh, and by the way Shana Tova – I hope it’s a good one for you, your family and all of humanity.

Comment #98 September 25th, 2019 at 5:34 am

Concerning integer factorization (i.e., the security of RSA), there’s now another front open. See the Sep 18 paper “Integer factorization using stochastic magnetic tunnel junctions” in Nature:

https://www.nature.com/articles/s41586-019-1557-9

and “Stochastic magnetic circuits rival quantum computing”:

https://www.nature.com/articles/d41586-019-02742-x

Comment #99 September 25th, 2019 at 6:31 am

I’m wondering about this paragraph in the paper:

Is this done in order to account for two-qubit gate error?

It sounds like they generate a random two-qubit circuit, run that for each qubit pair, estimate the (noisy) unitary and then let that be the two-qubit gate for that pair, averaging to an iSWAP with 1/6 of CZ.

It’s basically tuning the two-qubit unitaries to what the system does in practice?

Comment #100 September 25th, 2019 at 6:43 am

Just want to echo Scott’s sincere thank-you to Gil. Thoughtful, good-faith skepticism is indispensable to the scientific enterprise! If the results do stand, and Gil is able to use his logic and rationality to overcome the hurdle of admitting to being wrong, then there is a very good chance something new could be learned.

Comment #101 September 25th, 2019 at 8:01 am

Scott #74

I may be totally off on this.

But to me, the main issue with understanding why QCs are so complicated to build comes from the analogies made with classical computing, like the notion of a “gate”.

In a classical computer, any gate only operates on a 0/1 signal, which is totally local, i.e. how that signal came to be is totally irrelevant (this is done by carefully choosing the input/output impedances of each block of transistors in a gate). It doesn’t matter whether that binary signal was generated by a single gate or tens of billions of gates. Which is why Moore’s law came into effect: a big computer can always be built by assembling smaller independent modules (whether we’re talking about a CPU or the internet as a whole). Dealing with noise correction is also mostly local. The only “global” difficulty in scaling is dealing with synchronicity/timing because of the light-speed limit (which puts a limit on how fast all distant parts can talk with one another).

But in a QC, the signal processed by each gate is an object (a wave function) whose complexity depends (exponentially) on how many quantum gates came before it, right?

A QC is basically doing wave function origami:

a big sheet of paper comes in, the first gate folds it in 2, creating two surfaces, the next gate folds it in 2 again, creating 4 surfaces,… each new gate folds it along a new crease, creating a geometric increase in the number of facets (dimensions of the wave function).

A very slight error in the crease at the first gate can have dramatic effects on the shape of the object handled by the last gate.

Which is also why error-correction seems very difficult to pull off: you have to add more qubits to create redundancy, but at the same time those new qubits have to be such that they don’t contribute to making the wave function exponentially more complex.

Comment #102 September 25th, 2019 at 8:47 am

fred #101: Yes, you’re totally off here.

In a quantum computer, exactly like with a classical computer, to operate on two qubits you don’t need to know where they came from, or which gates were applied to them previously, or how many other qubits they’re entangled with. If your gate works on two completely isolated qubits in a pure state, then as long as we can neglect the gate’s “side effects” on qubits besides those two, it will automatically also work on two qubits that are arbitrarily entangled with a billion other qubits. Quantum-mechanical linearity guarantees this. There is no wiggle room here.

It’s true that, if you were simulating an n-qubit quantum computer classically, then just to apply a single 2-qubit gate, you’d generally have to update a vector of 2^n amplitudes (thereby acting, so to speak, not merely on the 2 qubits themselves, but on the entire n-qubit system of which they’re a part). But with a quantum computer you don’t have to do that—which, indeed, is the entire point of building a quantum computer in the first place.

Comment #103 September 25th, 2019 at 9:00 am

“my email began “Sorry for the spam.”” is an excellent anecdote of how real progress in science is made! 🙂

As always I very much appreciate your efforts here. You can tell that people at least care somewhat about this topic by the sheer number of contentious comments (and that you care immensely by your responses!) Your dedication to the field is inspiring, even for those of us who work in quite different areas.

Comment #104 September 25th, 2019 at 9:22 am

Wonderful FAQ and discussion!

Surely the next steps are something like this:

– paper is published, which includes addressing reviewers’ comments

– paper is intensely scrutinized, which includes identifying flaws, weaknesses, subtleties, loopholes, etc

– results are replicated (or reproduced), by at least one totally independent group

– those independent results are published, and critiqued etc

– something similar is done for a quite distinct “problem”, or using a very different QC (“hardware”-wise).

Comment #105 September 25th, 2019 at 9:24 am

Scott #102

Ok, I didn’t mean that a gate needs to “explicitly” know how many gates came before it, but that the complexity of what it’s acting upon depends on how many gates came before it. Sure, if everything is perfect, nature magically takes care of that for us (even though it’s exponentially harder for a classical computer).

I guess there are two (or more) types of errors: one due to potential decoherence along the way. And one due to finite precision of the transform details at each gate?

I’m not sure there’s something similar to the latter on a classical computer, because the result of a classical gate is 100% precise. A QC gate in that sense is more similar to a “floating-point unit” in a classical computer, where the object isn’t a simple 0/1 but floating-point numbers with limited precision, which can degrade depending on how many operations you make on them, in what order, etc. (the details of specific algorithms).

Anyway, I’m clearly out of my league here! But there has to be something specific to a QC that makes it so much harder to scale up than a classical computer (in actual practice, not in terms of theoretical QM).

Comment #106 September 25th, 2019 at 10:58 am

#76 Is the point Candide is making a little different than what is being discussed by Scott at 34:00-40:00?

https://video.ethz.ch/speakers/bernays/2019/5564a353-d71b-46d4-bfc0-fa907de5de25.html

Is there a clear reason why a quantum sampling demonstration is different than protein folding (sampling) demonstration? Is protein folding not solving any hard problem? Is it because we can check the correct distribution by verifying the results of the 50-qubit QC classically (after a considerable amount of calculation), whereas there isn’t a formal mathematical theory of protein folding distribution (assuming an exponential statistical theory of protein folding exists which specifies the likelihood of all the local minimums of energy/foldings)? Is it because there isn’t a quantum algorithm for making the protein folding prediction, but if there was, that would be another way to show supremacy? Is it because it is easier to reconfigure this quantum computer system compared to an amino acid system? Is it because protein folding can end up in a local minimum, where local minimums are not relevant (although decoherence can be) for a quantum computers? Or…?

(Sorry for the amateur questions if the answers have been clearly stated before.)

Comment #107 September 25th, 2019 at 11:02 am

Hi Scott,

Why would you want to generate encryption keys if they have higher probabilities of being derived from certain bit strings according to the PT distribution?

Comment #108 September 25th, 2019 at 11:05 am

Ryan O’Donnell, Gil Kalai, and others:

I wrote to John Martinis, Hartmut Neven, and Sergio Boixo, passing along your request for the full histogram of probabilities for random circuit sampling with a smaller value of n.

John Martinis wrote back, pointing out that they already did this for 9 qubits in their 2018 Science paper. John asks whether this is satisfactory or whether you’d like more data.

Comment #109 September 25th, 2019 at 12:11 pm

In practice it’s easy to scale all kinds of non-universal quantum computers: you just implement them as classical computers instead, because they are easy to simulate.

If there is a barrier to scaling a QC it would have to somehow exclude non-universal computation.

Otherwise, are we saying that it’s impossible to compute even trivial (if large) functions coherently at the quantum level? Quantum Computing would go from “not quite too good to be true” to “too bad to even bother with”, despite the technical challenges involved.

Or is nature just running on Clifford gates? But then how is universal computing possible at all?

It’s like, not only would light need to know that there is a second slit, it would also have to know whether it’s part of a universal computer. If so, don’t interfere!

And I’m not sure that would be less powerful than a Quantum Computer. Actually, it sounds like a step up.

Comment #110 September 25th, 2019 at 12:22 pm

[…] What’s up with the new quantum supremacy breakthrough? And a useful Twitter […]

Comment #111 September 25th, 2019 at 12:42 pm

[…] a blog post responding to frequently-asked questions about the Google news, Scott Aaronson, a quantum professor at The University of Texas (Austin), […]

Comment #112 September 25th, 2019 at 12:44 pm

re #76 and #106

I have the same naive questions. On the one hand, a ribosome, plus an extensible set of chaperonin proteins is a programmable machine for generating correctly folded proteins of arbitrary length. Ribosomes are programmed by RNA sequences. This technology has been widely available in various forms including cell-free extracts (in vitro translation systems) from plants and animals for several decades. One objection might be that most random protein sequences adopt unstructured folds; i.e. do not produce a unique solution. However, that does not seem too different from a random computer program.

Comment #113 September 25th, 2019 at 1:21 pm

[…] a blog post responding to frequently-asked questions about the Google news, Scott Aaronson, a quantum professor at The University of Texas (Austin), […]

Comment #114 September 25th, 2019 at 1:38 pm

Candide III #76, Jon K #106, dm #112:

This is already addressed in the original post; see Q12.

Comment #115 September 25th, 2019 at 1:58 pm

fred #105:

I guess there are two (or more) types of errors: one due to potential decoherence along the way. And one due to finite precision of the transform details at each gate?

Yes, that’s correct. Ultimately, it’s the theory of fault-tolerance that tells you that, even though these errors seem “continuous” in nature (they affect the vector of amplitudes), by making measurements you can “discretize” the errors—i.e., force nature to make up its mind as to whether an error definitely happened or definitely didn’t happen—and then correct them if they happened. That’s what makes a QC different from an analog computer with respect to errors, and is crucial to understand.
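The "discretization" point can be made concrete with a toy sketch (my own illustration using the 3-qubit repetition code, not anything from the Google experiment): a continuous coherent error becomes, after a syndrome measurement, a definite flip-or-no-flip event, and either branch can then be corrected exactly.

```python
import math

def discretize_x_error(eps):
    """Toy illustration: encode |0>_L as |000> in the 3-qubit repetition code,
    hit qubit 0 with a *continuous* coherent error exp(-i*eps*X), then measure
    the Z0*Z1 parity syndrome.  The pre-measurement state is
    cos(eps)|000> - i*sin(eps)|001>, a superposition of "no flip" and "flip".
    Measurement forces nature to make up its mind: with probability cos(eps)^2
    no flip happened; with probability sin(eps)^2 a definite flip happened,
    which the syndrome pinpoints so an X gate undoes it.  Either branch ends
    in exactly the codeword |000>."""
    p_no_flip = math.cos(eps) ** 2   # syndrome reads "even parity": do nothing
    p_flip = math.sin(eps) ** 2      # syndrome reads "odd parity": apply X
    fidelity_after_correction = 1.0  # both branches return exactly to |000>
    return p_no_flip, p_flip, fidelity_after_correction
```

Note that eps can be any small real number, yet the syndrome outcome is binary: the continuum of possible errors collapses onto a discrete set that classical logic can then undo.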

but there has to be something specific to a QC that makes it so much harder to scale up than a classical computer (in actual practice, not in terms of theoretical QM)

No, that’s where you’re wrong. I mean, of course there are huge new challenges, or else we’d have scalable QCs already. But you’re forgetting that to get from Charles Babbage to truly scalable classical computers took more than a hundred years. The fact that humans ultimately succeeded doesn’t mean that there was anything easy about it.

Comment #116 September 25th, 2019 at 1:59 pm

I have some issues with XEB when you look at it from an adversarial point of view. If I had to cheat to get the highest XEB instead of playing fair, what would I do?

Even if I knew a perfect classical simulation of the QC, the way to get the highest XEB “score” wouldn’t be to run it and report its outputs: it would be to run it until I get the same sample twice, and then output that same sample over and over again. If for whatever reason you can tell, for a fraction of the circuits, what the most common output is, and just output that same output over and over again, you’re going to ‘cheese’ it and maybe even beat the real circuit in XEB.

I had imagined it as checking whether the distribution is epsilon-close to the real distribution. Instead, XEB feels more like checking that we’re epsilon far away from the uniform distribution, but in the direction of the real distribution.

It feels like in order for XEB to hold as a benchmark, finding even one common output for a fraction of the circuits should be impossible – and we know some very common outputs do exist. The distribution of circuits must also be special, so that the chance of a ‘weak’ circuit (one for which there is an easy way to predict one common sample) is very low.

With their current data, they have only a 0.001 advantage over a completely random guess.

Their null results compare only to algorithms which compute the exact amplitudes, which are guaranteed to give correct results but take a long time to execute.

I can think of several algorithms which run quickly, and I won’t be surprised if they capture just enough information about the circuit outputs to also score high on XEB (again, only a 0.001 advantage over the uniform distribution is needed).

Several things off the top of my head:

1. Just treating qubits as bits, picking a random choice at each gate. If the random circuit distribution has even a little chance of leaving some qubits unentangled, this will probably already capture a lot of information which would pass the XEB test.

2. Running a simulation while just measuring everything every m gate applications (m small) and picking the highest-probability sample every time. I wouldn’t even be surprised if this outscores running the real circuit, because picking a high-probability output outscores being honest in XEB, i.e. sampling like the real distribution.

3. Dividing the qubits into small subsets and simulating everything, while doing a measurement every time there’s a gate between two qubits from different subsets. (Again picking the highest amplitudes.)

Because random circuits lack meaningful structure, suppose that after several gate applications you reach some string with high amplitude, measure everything, pick the highest-probability string, and continue. The chance that the subsequent gates would have interfered destructively (interference you have just lost) with what you picked, in a way that makes the highest-probability string no longer a common one, should be small: only very specific circuits have a chance to ‘disentangle’ the state at that point, and the chance that a random circuit is one of them is really small.

The physics intuition I have is that macroscopic objects are basically random quantum circuits, and quantum mechanics isn’t needed to describe the statistics of macroscopic objects – normal statistics already describe their probability pretty well.

If there was even a slight chance of random quantum circuits displaying quantum behaviour, you would see Schrödinger’s cats all over the place. But you don’t.

Maybe I’m wrong. It would be interesting to compute the XEB value of those algorithms.
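For anyone who wants to try this, here is a minimal pure-Python sketch of the linear XEB score on tiny random circuits (my own toy gate set and conventions, not the circuits from the paper), comparing an ideal sampler against a uniform-random spoofer:

```python
import cmath, math, random

def apply_1q(state, n, q, u):
    """Apply a 2x2 unitary u to qubit q of an n-qubit statevector."""
    out = state[:]
    for i in range(2 ** n):
        if not (i >> q) & 1:
            j = i | (1 << q)
            a0, a1 = state[i], state[j]
            out[i] = u[0][0] * a0 + u[0][1] * a1
            out[j] = u[1][0] * a0 + u[1][1] * a1
    return out

def apply_cz(state, n, q1, q2):
    """Controlled-Z: flip the sign of amplitudes where both qubits are 1."""
    return [-a if (i >> q1) & 1 and (i >> q2) & 1 else a
            for i, a in enumerate(state)]

def random_1q(rng):
    """A Haar-ish random single-qubit unitary."""
    a, b = rng.uniform(0, 2 * math.pi), rng.uniform(0, 2 * math.pi)
    t = math.asin(math.sqrt(rng.random()))
    return [[cmath.exp(1j * a) * math.cos(t), cmath.exp(1j * b) * math.sin(t)],
            [-cmath.exp(-1j * b) * math.sin(t), cmath.exp(-1j * a) * math.cos(t)]]

def random_circuit_probs(n, depth, rng):
    """Output probabilities of a random circuit: 1q layers + one CZ per layer."""
    state = [0j] * (2 ** n)
    state[0] = 1 + 0j
    for _ in range(depth):
        for q in range(n):
            state = apply_1q(state, n, q, random_1q(rng))
        q = rng.randrange(n - 1)
        state = apply_cz(state, n, q, q + 1)
    return [abs(a) ** 2 for a in state]

def linear_xeb(probs, samples, n):
    """Linear cross-entropy benchmark: 2^n * E[p(x)] - 1."""
    return 2 ** n * sum(probs[s] for s in samples) / len(samples) - 1

rng = random.Random(42)
n, depth, shots, trials = 3, 10, 5000, 5
f_ideal, f_spoof = 0.0, 0.0
for _ in range(trials):
    probs = random_circuit_probs(n, depth, rng)
    ideal = rng.choices(range(2 ** n), weights=probs, k=shots)
    spoof = [rng.randrange(2 ** n) for _ in range(shots)]
    f_ideal += linear_xeb(probs, ideal, n) / trials
    f_spoof += linear_xeb(probs, spoof, n) / trials
```

On this toy scale the ideal sampler scores well above zero while the uniform spoofer hovers near zero; the proposed partial-simulation spoofers would land somewhere in between, which is exactly what would be interesting to measure.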

Comment #117 September 25th, 2019 at 3:07 pm

Jon K. #106:

Is there a clear reason why a quantum sampling demonstration is different than protein folding (sampling) demonstration?

You list a bunch of possibilities yourself, some of which (like programmability) are indeed relevant differences. But I’d say that the most important difference by far is one you didn’t list: namely, with protein folding, there’s no theoretical reason to believe that the difficulty of simulating it on a digital computer is asymptotic in nature. (Except, ironically, insofar as quantum effects are important!) Rather, this seems like “merely” an issue of the huge constant factors incurred in simulating chemical deformations that occur over incredibly short timescales. With QC, by contrast, we have every reason to expect the speedup over classical to increase at an exponential rate as you integrate more qubits, and that’s indeed exactly what Google observes in its current data.

Comment #118 September 25th, 2019 at 3:10 pm

SimonN #107:

Why would you want to generate encryption keys if they have higher probabilities of being derived from certain bit strings according to the PT distribution ?

No, you wouldn’t directly use the QC’s outputs for cryptographic randomness. As I said in earlier comments, what you’d do instead is feed them into a seeded randomness extractor. That’s a well-known type of classical algorithm, already widely used in practice, for “purifying” nonuniform random bits into nearly-uniform random bits with almost the same amount of entropy.

Comment #119 September 25th, 2019 at 4:12 pm

A. Karhumaa #98, that appears to use classical (real-valued) probability, and as such does not have the interference phenomena required for Shor’s algorithm to work.

Comment #120 September 25th, 2019 at 4:24 pm

I was thinking about this as well, but isn’t the experiment only establishing a “weak” supremacy claim anyway, with a strong supremacy claim requiring BQP != BPP?

I wonder how far it’s even worth taking this up on the classical side, since we won’t just casually show that a classical algorithm couldn’t possibly match the QC, nor that it can without cheesing it. They’re both tall orders and large breakthroughs in their own right. IMO it’s down to poking holes in the way the experiment was formalized and carried out. An adversarial approach is not a serious challenge to the QC’s massive speedup.

It’s the flip side of not using D-Wave’s “optimization problem” approach – where it was unclear how far either side could go, and they could always be made to go farther. In that case it made sense to go back and forth.

This time, for the classical algorithm it’s not enough to just go further, it would need to go all the way, while the QC can sit back and cheese it.

If confirmed, I take the experiment to be what it is: a soft quantum supremacy result that asserts the quantumness of the device.

I don’t see how it could do more than that.

Comment #121 September 25th, 2019 at 6:03 pm

[…] a blog post responding to frequently-asked questions about the Google news, Scott Aaronson, a quantum professor at The University of Texas (Austin), […]

Comment #122 September 25th, 2019 at 6:41 pm

I’m confused. For a problem like finding Satoshi’s private key, the distribution for the useful parts of the 2**n key space is a single spike. So sampling is unlikely to see anything interesting.

If we define a qubit as the thing Google isn’t announcing today and a Cqubit as an error corrected qubit that we might eventually have, then does this sampling stuff say that if the sampling results for qubits are ok, then the eventual Cqubits results should work for the Satoshi problem?

Comment #123 September 25th, 2019 at 7:14 pm

Stu #122: There are multiple layers of confusion to unpack in that comment.

No one with any knowledge ever claimed that a sampling-based quantum supremacy experiment would help in the slightest for recovering Satoshi’s key. Where are you getting that from?

Also, I’m pretty sure that even a full, scalable quantum computer would be of limited help for recovering Satoshi’s key. According to this page:

Bitcoin already has some built-in quantum resistance. If you only use Bitcoin addresses one time, which has always been the recommended practice, then your ECDSA public key is only ever revealed at the one time that you spend bitcoins sent to each address. A quantum computer would need to be able to break your key in the short time between when your transaction is first sent and when it gets into a block. It will likely be decades after a quantum computer first breaks a Bitcoin key before quantum computers become this fast.

So, as long as Satoshi followed the recommended practice—which hopefully he did? 😀 —presumably the best quantum speedup you could get would be the square-root speedup from Grover’s algorithm? (Anyone with more knowledge of bitcoin is welcome to correct me on this.)

Yes, the achievement of quantum supremacy is some evidence in favor of the proposition that the basic principles of quantum computation are sound, and that building a full fault-tolerant device, able for example to break RSA, indeed ought to be “merely” an engineering problem (which of course says little about how long it might take). It’s not the slightest bit surprising—at least not to those of us in this field—but it’s certainly reassuring. It’s not a sufficient condition, but it’s a necessary one.
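For intuition about what a Grover-type square-root speedup looks like, here is a toy statevector simulation (a minimal textbook sketch, not tied to any real hardware), showing the amplitude concentrating on a marked item after about (π/4)·√N iterations:

```python
import math

def grover_success_prob(n, marked, iters):
    """Simulate Grover search over N = 2^n items in the computational basis.
    Returns the probability of measuring the marked item after `iters`
    oracle+diffusion rounds."""
    N = 2 ** n
    amp = [1 / math.sqrt(N)] * N           # uniform superposition
    for _ in range(iters):
        amp[marked] *= -1                   # oracle: phase-flip the marked item
        mean = sum(amp) / N
        amp = [2 * mean - a for a in amp]   # diffusion: inversion about the mean
    return amp[marked] ** 2

n = 8                                       # N = 256 "keys"
best = round(math.pi / 4 * math.sqrt(2 ** n))  # ~ (pi/4)*sqrt(N) rounds
p = grover_success_prob(n, marked=123, iters=best)
```

With zero iterations the success probability is just 1/N; after roughly √N rounds it rises close to 1, which is where the quadratic (rather than exponential) speedup comes from.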

Comment #124 September 25th, 2019 at 7:20 pm

Yoni #96 and #97: Shana Tova to you too!

How large a number Google could factor, by running Shor’s algorithm on its current device, is a semi-interesting question to which I don’t know the answer. My guess would be that they could at least get up to the hundreds, depending on how much precompilation and other classical trickery was allowed. The Google group has expressed no interest in doing this, regarding it (with some justice) as a circus stunt that doesn’t showcase the real abilities of the hardware. But once theirs or similar hardware becomes available for outside use, I’m sure someone will give it a go.

Comment #125 September 25th, 2019 at 7:24 pm

A. Karhumaa #98: Unless I’m misunderstanding something, I’m astonished that that paper (and especially the credulous commentary on it) made it into Nature. It seems clearly, obviously not scalable to large numbers, and destined for a sprawling graveyard of similar “classical physics” approaches to factoring that failed to scale for the same completely obvious reasons.

Quantum mechanics appears to violate the Extended Church-Turing Thesis. Quantum mechanics is the only thing we know that appears to violate the Extended Church-Turing Thesis. Remember those two facts, and you’ll be shielded against like 90% of all the forehead-banging wrongness at the interface of physics and computation.

Comment #126 September 25th, 2019 at 8:38 pm

What p value did Google get for their quantum supremacy experiment? p=0.05?

Comment #127 September 25th, 2019 at 8:59 pm

RE Stu #122

Yes confusion for new stuff seems a state I just have to be comfortable with.

I picked S’s key because it seemed an interesting, non-controversial, well-defined, and hard problem.

From reading the bitcoin paper, solving it seems straightforward if you can find the unique solution to a complex random set of logic equations. (Grover’s oracle.) If O(root(N)) is what QC offers, then yes, BC is safe; I was hoping for O(1). (Although I’m not quite sure what O() notation means for a QC. For classical it’s proportional to operators × cycles. These map to wall-clock time and the size of the check you write to buy the machine. For QC, there are operators, but I’m not sure if there are cycles.)

My confusion is preventing an answer to my first question. With classical logic, the distribution of the solution for the above oracle is a single spike. If you get close to the solution, you get no help getting closer from statistics. If you do, then that is a problem in the code design, otherwise known as a backdoor. Definitely O(N).

But the quantum supremacy test seems to be all about statistics to verify that the qubits are working mostly.

In a problem where close gives you no help for getting closer, what makes working mostly useful?

Comment #128 September 25th, 2019 at 9:37 pm

Craig #126: They mentioned 5σ in their paper, which corresponds to a p-value of 3×10^{-7}. This is physics, not psychology. 🙂

Speaking of which, didn’t you get the memo about the recent moves to abandon the “p≤0.05” criterion, whose incredible laxity played a major role in the replication crisis?
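For reference, converting a σ level to a one-sided Gaussian tail probability takes only a couple of lines (standard statistics, nothing specific to Google's analysis):

```python
import math

def sigma_to_p(sigma):
    """One-sided Gaussian tail: P(Z > sigma) for a standard normal Z."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

p5 = sigma_to_p(5)      # five sigma, roughly 2.9e-7
p_lax = sigma_to_p(1.6449)  # roughly the one-sided p = 0.05 threshold
```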

Comment #129 September 25th, 2019 at 9:40 pm

Stu #127: I’m still having trouble extracting a clear question. But maybe it would help to study Grover’s algorithm? (Here are my lecture notes, for example.) It does concentrate most or all of the amplitude onto a single basis state, by doing something completely different from random circuit sampling.

Comment #130 September 25th, 2019 at 10:59 pm

Scott 128,

Impressive p value. With respect to Q8, I just thought of a possible (perhaps naive?) way to spoof the results using a fast classical algorithm: say you have two zero-one state vectors of size n=53 which differ in only one entry (one has a zero, the other has a one). It takes on the order of 2^n computations to figure out the two probabilities of reaching these two states after all of the unitary transformations of the circuit are applied. However, I would think it should only take polynomial time to figure out which of these two probabilities is larger, since most of the unitary transformations act in the same way on both of these states. If one always outputs the state having the larger probability when randomly selecting lots of these pairs of vectors, wouldn’t this spoof the results in deterministic polynomial time, probably reaching a b significantly larger than 1, as defined in Q6?

Comment #131 September 26th, 2019 at 12:51 am

Craig #130: No, I don’t think anything like that can work. Keep in mind that both probabilities are the absolute squares of amplitudes, which in turn are sums of exponentially many complex numbers. Given two such sums, how on earth do you decide in polynomial time which one has a larger magnitude?

Comment #132 September 26th, 2019 at 7:47 am

Scott 131,

Yes, you are correct. Thank you.

Comment #133 September 26th, 2019 at 11:00 am

Re #117

I am not remotely qualified to comment on the theoretical hardness of any class of problems much less prediction of protein folding so apologies if this is off base.

If by “constant factors” you are implying that difficulty is more or less proportional to protein length, I think that substantially understates the physical problem. It’s not just many short-timescale processes, but the range of relevant timescales (femtoseconds to seconds). Each scale involves a different set of issues. There are various intra-chain cooperative effects and long-distance interactions, etc. Because many large proteins are organized in domains that often fold independently, there may be an upper limit to the difficulty with respect to length. However, many – probably most – protein sequences do not have a unique, classical 3D fold, but exhibit a distribution of unstructured conformations in solution. These distributions are also programmable, i.e. determined by the protein sequence. Of course, Google already collaborates (very successfully) with people who know vastly more than I do about the subject.

Comment #134 September 26th, 2019 at 11:22 am

Dear Scott and others who are happy about the quantum supremacy claim,

I can accept the results of the standard experiments of quantum mechanics, for instance the double slit experiment, since in my mind this is explainable if one imagines that the universe is a giant digital computer (Zuse and Fredkin proposed this theory about 50 years ago) – then the double slit experiment is easy to simulate on the digital computer. It is similar to a magician using camera tricks to fly around on a magic carpet – the double slit experiment is essentially done by camera tricks.

However, no such explanation is available for quantum supremacy. If it is true, no digital computer can even simulate it. So it means that our universe works by magic, if one loosely defines magic as “impossible to fathom”. It might be predictable magic, but it is magic nevertheless. So the only fundamental difference between witches casting spells to put hexes on people and quantum supremacy is that quantum supremacy has now apparently been demonstrated in a scientific experiment (and perhaps witches casting spells doesn’t involve any higher level mathematics).

So my question is does it not disturb you that the quantum supremacy experiment has apparently shown (assuming it will not be debunked) that nature works by magic? Would you be happy also if it were scientifically proven that witches casting spells to put hexes on people can damage people’s souls? And how can this not disturb you, if you are not disturbed?

Besides not believing that QC is possible, this is another reason why I have been a QC skeptic. I cannot accept that the laws of the universe are magical.

Note that I’m not making a scientific argument here; if the experiment was done correctly and nobody can find a flaw in it, then I cannot argue with the result of the quantum supremacy experiment and must accept quantum supremacy as a fact. But I would not be happy about such a fact.

Comment #135 September 26th, 2019 at 12:13 pm

Craig #134: What is “impossible to fathom” about quantum supremacy? There are plenty of people fathoming it. Some people apparently even fathomed it well enough to be able to achieve it.

Comment #136 September 26th, 2019 at 12:38 pm

Craig #134: For me, a good working definition of “magic” would be “the fundamental laws of the universe caring about human goals and intentions. The leptons and quarks moving one way and not other because someone cast a spell, rather than for any reason understandable in reductionist terms.”

By that definition, quantum mechanics—including its apparent prediction of computational supremacy—is obviously and emphatically not magic. It’s “merely” a huge and profound aspect of the laws of physics.

Regarding being happy or unhappy: well, it’s better than nothing to say, “my gut tells me that evolution and relativity and quantum mechanics are all totally wrong, but I’ll bravely face up to whatever the data says, while forever being unhappy about it.” But to whatever extent you can manage, it’s better to adjust your gut! How would things feel to you if you’d imbibed those explanations of the world along with your ABCs in preschool? Feel that way.

Comment #137 September 26th, 2019 at 12:47 pm

dm #133: You’ve given an extremely interesting “inside view,” from someone who understands vastly more about protein folding than I do. But there’s also the more abstract “outside view,” which says: to whatever extent this is classical rather than quantum physics, why not just build a gigantic finite-element model (or whatever) that simulates what’s going on at every point in space? Sure, it might be ridiculously inefficient and uncompetitive in practice. But it would prove the point that the complexity of protein folding would not scale exponentially with the length of the amino acid chain and with the total time involved, but at most polynomially.

Comment #138 September 26th, 2019 at 1:02 pm

Have you ever plagiarized research?

Comment #139 September 26th, 2019 at 1:04 pm

HaveYouEverPlagiarized #138: I haven’t lived a perfect life, but no, of that I’m innocent.

Comment #140 September 26th, 2019 at 1:27 pm

This is a really nice FAQ Scott, thanks for putting in the effort to write it up. 🙂

I’m a little uneasy about your answer to question 5, however. Whilst I agree that Google’s accomplishment sure seems like a somethingburger, and that the burgers D-Wave has been serving up to this point have been nothing but empty calories, I think it’d be worth briefly mentioning why (i.e. no evidence that the problems being solved by the D-Wave machines are in any way hard, scant evidence that entanglement is contributing to the machines’ computational power, etc.), and/or contrasting the approaches of the two groups. There are probably people reading this FAQ who aren’t familiar with your previous writings on D-Wave.

—

Also it was great meeting you during the talks you and Dana gave at UTS in Sydney back in January. I hope I didn’t come across as too incoherent, I hadn’t slept in ~30 hours before showing up.

The post about Z-chains I mentioned having made mostly in jest was this one, by the way.

Comment #141 September 26th, 2019 at 1:37 pm

[…] has a great post with preliminary descriptions and evaluations, and we confess to adapting the following telegraphic […]

Comment #142 September 26th, 2019 at 1:53 pm

About qubits – what’s the primary reason as to why it’s so hard to keep them entangled and isolated from the environment on really large time scales?

Because we need a way to first prepare them, and the method used to achieve that means they’re always a bit coupled (mechanically/electromagnetically) to the rest of the apparatus?

Because of the ways we propagate output from one gate to the inputs of the next gates?

External effects? Like EM fields, photons (IR heat), or even more subtle things like cosmic rays (I read that those already significantly affect classical computers, like, many times a day), and maybe even more exotic interactions (neutrinos, gravitational waves, dark matter)?

Comment #143 September 26th, 2019 at 1:56 pm

These numbers from the experiment:

That’s large enough for the “Simon sampling” test in terms of gate depth. 🙂 I didn’t even use circuits that large when testing on a simulator.

The total fidelity of 0.2% means that the QC would have a success rate of about 50.1%. Though I assume that the 2-qubit error rate would be larger, since it would use standard rather than “cheesed” 2-qubit gates.

It would probably be closer to 50.0001% or less for circuits this large, only very slightly better than a random guess. Not exactly the kind of test you can do over a cloud API.

Hopefully things don’t stagnate in terms of fidelity, in favor of “NISQ applications”.

Comment #144 September 26th, 2019 at 1:59 pm

#137

“But it would prove the point that the complexity of protein folding would not scale exponentially with the length of the amino acid chain and with the total time involved, but at most polynomially.”

Well, a simple analog might be counting the number of distinct ways a string of n beads can be packed in a minimum volume. Does that scale as at most a polynomial in n?

Comment #145 September 26th, 2019 at 2:04 pm

Jon K. #106 and dm #112: for what it’s worth, Scott briefly provided a different take on protein folding in his 2005 paper, NP Complete Problems and Physical Reality. (Ctrl+f “protein”, fourth hit.)

I was going to make a similar point to this one, but this paper turned up while googling for NP-hardness results for protein folding.

Comment #146 September 26th, 2019 at 2:17 pm

David 135,

Quantum supremacy essentially means that one can be given a vector x[i] of probabilities, where i=1,…,2^million, which add to one, and then through measurement instantly output i with probability x[i]. This is impossible to fathom, even though it is easy to understand the basic idea. First of all where is such a vector x[i] stored in our universe? Second of all, how does the qc figure out which i to output in just an instant, given the size of x[i]? (Note that according to Google’s claim, the million is replaced by 53, but if quantum supremacy is true, then what is stopping the 53 from becoming a million or a billion?)

Anyone who treats quantum supremacy like it is a claim of no big deal cannot possibly understand it. For instance, one of Google’s competitors from another company downplayed this announcement saying the result of the experiment has no practical value. How can he say that? Doesn’t he understand the significance of such a claim, if it is true?

I still am skeptical about Google’s claim of quantum supremacy as everyone should be, but how anybody doesn’t see the cosmic significance of this claim and thinks that what is going on in Washington, D.C. now is more important simply doesn’t get it, in my opinion.
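To put the "where is such a vector stored" question in numbers, a back-of-the-envelope sketch (assuming the naive representation of one complex amplitude as two 8-byte floats):

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory needed to explicitly store all 2^n complex amplitudes."""
    return 2 ** n_qubits * bytes_per_amplitude

pb_53 = statevector_bytes(53) / 1e15  # petabytes for 53 qubits: ~144 PB
# For a million qubits, the byte count has over 300,000 decimal digits,
# dwarfing the ~10^80 atoms in the observable universe.
```

This is exactly why classically simulating 53 qubits already strains the world's largest machines, and why nobody proposes storing the vector explicitly at a million qubits.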

Comment #147 September 26th, 2019 at 2:21 pm

Apologies for my naive question!

From hanging out in the world of Markov chain Monte Carlo (MCMC), my impression is that local-search-based schemes for approximate sampling are often way more effective at sampling complicated distributions than they have any right to be. I am curious if MCMC is a useful classical baseline to consider here, and if not, why not.

For the classical simulations in the Google paper, I think they use a variant of rejection sampling. For circuits of this size, how many bitstrings do you have to propose and evaluate before you accept one this way, on average? If this number is very large, have people run classical MCMC algorithms on random-circuit-induced target distributions and checked their mixing times?

You note in Q6 that computing the probability of observing a particular bitstring x is already exponentially hard in the length of x. I guess my first big point of ignorance here is: how much of the classical difficulty of this problem is due to the hardness of computing the (unnormalized) probability p_C(x) *of one bitstring x* given random circuit blueprint C, and how much of it is due to having to compute p_C *lots of times on different x’s* to find a “good enough” x. (And is it fair to break it up into these two chunks in the first place? E.g. could you compute p_C(x’) more cheaply by storing some intermediate results you accumulated while computing p_C(x), assuming x and x’ are neighboring in some way…)

Thank you for running this blog and in advance for any pointers — this is all very exciting to read about!
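For concreteness, the kind of MCMC baseline being asked about might look like the following toy sketch (single-bit-flip Metropolis over a given probability table; whether something like this mixes fast enough, or scores well on XEB, for real random-circuit distributions is precisely the open question):

```python
import random

def metropolis_bitflip(probs, n, steps, rng):
    """Metropolis sampler over n-bit strings with single-bit-flip proposals,
    targeting the distribution given by the table probs[x].  In a real use
    probs[x] would be (proportional to) the circuit probability p_C(x)."""
    x = rng.randrange(2 ** n)
    chain = []
    for _ in range(steps):
        y = x ^ (1 << rng.randrange(n))   # propose: flip one random bit
        # symmetric proposal, so accept with min(1, p(y)/p(x))
        if probs[x] == 0 or rng.random() < min(1.0, probs[y] / probs[x]):
            x = y
        chain.append(x)
    return chain

rng = random.Random(0)
# toy target over 2-bit strings; state 0 carries most of the mass
chain = metropolis_bitflip([0.7, 0.1, 0.1, 0.1], 2, 50000, rng)
```

Note that this only needs p_C up to a normalizing constant, and only at the current and proposed strings, which connects to the question of whether p_C(x') can be computed cheaply for a neighbor x' of an already-evaluated x.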

Comment #148 September 26th, 2019 at 2:25 pm

Greg #140: Sorry, I didn’t want to stray too far from the subject, and I’ve harped enough about D-Wave. And even D-Wave’s former proponents seem to have mostly thrown in the towel (ironically, as the company itself became more scientifically responsible and scaled back its supremacy claims).

Comment #149 September 26th, 2019 at 2:28 pm

dm #144: No, you misunderstood my point. In protein folding, nature itself does not search through an exponential space of possibilities and magically pick the best one. It simply undergoes a dynamical evolution process—one that, like any other physical process, can get trapped in local optima (of which prions, the agent of mad cow disease, seem to be an example). And as long as that process is classical, it can in principle be simulated on a digital computer with only polynomial overhead.

Comment #150 September 26th, 2019 at 2:30 pm

fred #142: The fundamental reason why it’s hard to keep qubits isolated and entangled is that, if they can entangle with each other, then they’d also like to entangle with their whole external environment (including us). But we perceive that as the qubits collapsing.

Comment #151 September 26th, 2019 at 2:30 pm

Scott #136

the thing that really bothers me about QM is that recently a few threads came up in this blog about questions/articles on quantum systems within quantum systems… and as far as I could tell, no one had really clear answers on how to properly interpret QM in that context (some ppl claimed it was “obvious”, but I seriously doubt it, since this sort of question was the basis for Everett’s work, which got him in so much trouble with the establishment).

Some researcher tried to show some inconsistency in QM regarding the above, and he was hopeful that QCs could be used to probe this. That’s pretty exciting imo.

Comment #152 September 26th, 2019 at 2:37 pm

qubit computation:quantum annealing::expert systems:neural nets, arguendo. There will be a place for both types of QC, and the relationship between quantum annealing and biology will be found to be as analogous and informative as that between ANNs and the nervous system. Biological qubit computation is highly unlikely, but biological quantum annealing has been demonstrated, I’d say, and is the path of least resistance for evolution to exploit quantum effects. I wouldn’t be surprised if we won’t fully understand e.g. prion diseases without a deeper understanding of quantum annealing.

Comment #153 September 26th, 2019 at 3:02 pm

Perhaps if they had made the quantum computer out of Lego blocks, as they did their first servers, it would have performed better.

Comment #154 September 26th, 2019 at 3:12 pm

Got it. Thanks!

Comment #155 September 26th, 2019 at 5:21 pm

Issue I haven’t seen asked here yet: there is this big accomplishment of building a 53-qubit computer that demonstrates supremacy. Great engineering challenges had to be overcome to get those 53 qubits working. My question: what additional challenges would have been faced if they wanted to get it to 55 qubits instead of 53? What about 56 qubits, or 57, 60, 70, 100, 200, or 1000?

I skipped 54 because I read somewhere (maybe in jest) that the present machine was intended to be 54 qubits but something went wrong with one of them, so they are running it with just 53. Is there any truth to that?

Thanks.

Comment #156 September 26th, 2019 at 5:30 pm

Scott, I’m confused as to why you keep talking about the Extended Church-Turing Thesis when a basic requirement for invalidating it would seem to be a proof that the problem is intractable on a classical computer. We don’t have such a proof, so the thesis doesn’t seem relevant.

Comment #157 September 26th, 2019 at 5:45 pm

This is a response to Steve Girvin’s comment that Scott conveyed through an update. In arXiv:1602.0478, the Yale group implemented a type of error correction with gain, but (in my understanding) not full-fledged error correction with gain that meets the benchmark standard. They store their quantum data in a QED resonator system, and they identify the dominant error in their system, which they then correct. This is excellent, but again falls short of the benchmark goal. That goal is to correct all types of error on an equal footing, or equivalently to correct depolarization error, so that the encoded logical qubit is better protected against all error (not just some error) than the physical qubits that store it. One next step to imagine after that is iterated error correction, i.e., another layer of error correction to protect an even better second-level logical qubit that is encoded in the first-level logical qubits.

What the Yale group does seems to be at about the same level as what the Martinis group did previously with 9 qubits, namely, they corrected just one type of error (depending on your basis, just dephasing or just bit-flip error), and achieved this with gain.
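For readers unfamiliar with the “gain” criterion in this and the surrounding comments: an encoded qubit counts as a win only if its logical error rate beats the physical one. A toy Monte Carlo (my own, protecting against bit flips only, like the 9-qubit experiment mentioned above) shows how a 3-qubit repetition code achieves that against its single error type:

```python
import random

def logical_failure_rate(p, trials=200_000, seed=0):
    """Fraction of trials in which majority vote over 3 copies fails,
    given independent bit-flip probability p per physical qubit."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(3))
        failures += flips >= 2       # 2+ flips defeat the majority vote
    return failures / trials

p = 0.05
logical = logical_failure_rate(p)
print(logical)        # ≈ 3p²(1−p) + p³ ≈ 0.0073, well below p = 0.05
```

Of course, correcting bit flips alone falls short of the depolarization benchmark described above: a code protecting against phase errors too, still with gain, is what the benchmark demands.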

Comment #158 September 26th, 2019 at 5:52 pm

asdf #155: Yes, one of the 54 qubits malfunctioned.

They originally built a 72-qubit chip (called “Bristlecone”), but they found that the performance wasn’t good enough, so they went back and built the 54-qubit chip with some different design ideas. It indeed turned out to work much better.

An obvious next step will be to scale up the new design to 70 or 100 or whatever, while also (just as importantly, or even more so) further improving the qubits’ quality. My understanding—keep in mind that I’m not an experimentalist!—is that beyond a few hundred superconducting qubits, the engineering starts to look really scary, and creative new ideas will be needed to go further.

Comment #159 September 26th, 2019 at 5:59 pm

Rahul #156:

Scott, I’m confused as to why you keep talking about the Extended Church-Turing Thesis when a basic requirement for invalidating it would seem to be a proof that the problem is intractable on a classical computer. We don’t have such a proof, so the thesis doesn’t seem relevant.

I was perfectly explicit in the FAQ that there’s no mathematical proof at present that quantum systems can’t be efficiently simulated classically.

So, the careful statement is that, once you convincingly demonstrate quantum supremacy, you’ve falsified the ECT, unless something happens in complexity theory that would itself be astounding, like BPP=BQP. For the experimental physicists actually building these devices, who aren’t bound by mathematicians’ standards of rigor, “exotic complexity collapses don’t happen” is usually just taken as an unstated background assumption in discussions of quantum supremacy.

Comment #160 September 26th, 2019 at 6:44 pm

Everyone: I’ll probably close this comment section in a day or so, simply because I’m traveling and responding to everything has become too much. Go ahead and get in any final comments.

Comment #161 September 26th, 2019 at 8:58 pm

Hello, Dr Aaronson

Do we have any useful analogue quantum computers? I mean ones without any qubits: ones simulating a quantum system (presumably one requiring a lot of resources to simulate) by analogy, rather than expressing it in numbers.

https://en.m.wikipedia.org/wiki/Analog_computer

Those would effectively prove the quantum supremacy conjecture by their mere existence.

Comment #162 September 26th, 2019 at 9:28 pm

lockywolf #161: See my Q12–is that the sort of thing you had in mind?

Comment #163 September 26th, 2019 at 10:22 pm

Thanks for your FAQ and answers. Congrats to the team. Interesting times!

Comment #164 September 26th, 2019 at 11:06 pm

[…] Te recomiendo leer a Scott Aaronson, “Scott’s Supreme Quantum Supremacy FAQ!,” Shtetl-Optimized, 23 Sep 2019. El término “supremacía cuántica” fue acuñado por John Preskill en 2012, pero ya estaba en […]

Comment #165 September 27th, 2019 at 12:39 am

Craig #146: You’ve been commenting on this blog for years, and that might be the first comment of yours that I just about fully agree with. 🙂 No, quantum supremacy is not magic, but yes, it is cosmic.

Comment #166 September 27th, 2019 at 4:26 am

I don’t think that’s the impressive part.

For example, classically, suppose you have a graph of n nodes. Consider the vector x of all possible sub-graphs of that graph.

That’s a set of 2^n items, which is also a set of 2^n probabilities – each subgraph has a probability of being picked at random.

Even worse, if you wanted to compute some of those probabilities – like what’s the probability of picking a subgraph that’s a clique of size n/2 – you would need a very large expression.

And yet, you can pick a subgraph at random with probability x[i] trivially just by picking a random number between 0 and 2^n and extracting the respective subgraph.

The power of a QC is not in the size of x, it’s in how x can be manipulated.

And it’s pretty subtle because classically you can also transform x in interesting ways.
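fred’s sampling trick is literally a few lines (a sketch of my own; here “subgraph” means a subset of the n nodes, indexed by the bits of a random integer):

```python
import random

def random_subgraph(n, rng=random):
    """Sample uniformly from all 2^n node subsets without ever
    materializing the exponentially long probability vector x."""
    i = rng.randrange(2**n)                       # uniform index in [0, 2^n)
    return [v for v in range(n) if (i >> v) & 1]  # decode bits to nodes

print(random_subgraph(50))   # a uniformly random subset of {0, ..., 49}
```

The exponential size of x never appears in the running time; what is hard classically is not holding a huge distribution, but applying certain transformations to it.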

Comment #167 September 27th, 2019 at 6:14 am

[…] The best assessment I’ve seen is in a blog post by Scott Aaronson, a well-respected theoretical computer scientist and Professor of Computer Science at the University of Texas at Austin. Aaronson wrote: […]

Comment #168 September 27th, 2019 at 7:12 am

Scott #125:

I think you are misunderstanding something. I mean, sure, it’s pretty silly to claim that the thing can be scaled to factor cryptographically-sized numbers, and Nature should ideally have people who get just how silly that is, but that’s not really the point of the article. The point, as I understand it, is that the article is a demonstration of a completely new type of computational hardware that is not limited to integer factorization. One fairly obvious and natural potential use for p-bits seems to be random number generation.

Comment #169 September 27th, 2019 at 7:31 am

@ Craig #146: 2^1000000 ≈ 12^278943. I am sure a factory can produce 278,943 numbered dodecahedral dice in 12 different colours. If you dump them from a truck, you immediately produce one output out of more than 2^1000000 possible outcomes. Every time a child is born, nature outputs something with probability much, much smaller than 1/2^1000000 (although not instantly, but over the course of 9 months). So I don’t understand what is so cosmic about 2^53? It is another amazing human achievement to harness the laws of nature, in this case for the purpose of solving a hard optimization problem.

Comment #170 September 27th, 2019 at 7:43 am

Scott #115: There may be an additional subtlety with errors due to gate precision. Likely they would be systematic, i.e., biased errors, and those may need a different treatment than the usual error-correcting codes for unbiased errors.

Comment #171 September 27th, 2019 at 7:47 am

In very simple terms, what is the corresponding number of classical operations that it is claimed can’t be achieved? To claim quantum supremacy, have they (a) shown that they have sampled approximately from a 53-qubit random circuit, (b) argued that a corresponding classical simulation would require some number of classical operations, and (c) then argued that this number is not feasible to perform? Is this number, for example, more than the number of classical operations needed to crack DES?

Or have they simply argued that the best code they could write themselves couldn’t perform the simulation in a feasible amount of time? If we take the DES analogy, this latter approach would not seem satisfactory for establishing quantum supremacy.

Comment #172 September 27th, 2019 at 10:29 am

“My understanding—keep in mind that I’m not an experimentalist!—is that beyond a few hundred superconducting qubits, the engineering starts to look really scary, and creative new ideas will be needed to go further.”

QC is the type of problem where any idea that looks promising in the lab at the small scale (experimentalism) quickly requires a full army of engineers in order to scale and validate.

I just don’t see a small team of hands-on physicists going from a few dozen qubits to a few thousand on their own (assembling thousands of qubits “by hand”). So this puts a huge damper on the creative aspect: how often you can revise and start from scratch.

Even for classical computers, scaling up only became viable after the invention of the transistor and then continuous incremental refinements in lithography. But even though improvements were somewhat obvious (hence Moore’s law), it still required very expensive hardware and large-scale production facilities, sustained by an actual market that existed almost from the beginning (even the first very clunky computers, like the ENIAC, were highly practical).

Building a large scale QC could be more similar to ITER, the International Thermonuclear Experimental Reactor. While still considered an experiment, the planning started in 1985, and involves all major nations. Full readiness is estimated to be in 2035.

Comment #173 September 27th, 2019 at 11:39 am

Alex V #170: The fault-tolerance theorem can handle biased/systematic errors as well, as long as they’re sufficiently small.

Comment #174 September 27th, 2019 at 11:49 am

dorothy #171: Yes, it’s important for skeptics (like the IBM group? 🙂 ) to try as hard as they can to shoot this down by designing faster classical simulations. My own guess is that the simulations might improve somewhat, but they’ll still take time exponential in the number of qubits (we have some complexity-theoretic evidence to support that). And meanwhile, the QC hardware will continue to improve. So at least for these sampling tasks, I feel fairly confident in predicting quantum supremacy from here on out.

The largest speedup they claim is by a factor of about 2 billion, compared to a classical supercomputer with 1 million cores (though since the classical verification was too difficult, that depends on some extrapolation; the largest directly verified speedup is “merely” by a factor of a million or so).

A couple of days ago I did a back-of-the-envelope calculation to see how large a speedup that would correspond to in terms of “elementary instructions.” That’s a little naive, since a 2-qubit gate in the QC is not directly comparable to (say) a single CPU instruction in a classical core. But if I forget about that and just plug the numbers in, I get that, with the best currently-known classical simulation algorithms (optimized tensor network methods), simulating 53-qubit random circuit sampling would take about 10^26 elementary instructions (1 million cores running for 10,000 years), while the number of 2-qubit gates used by the QC was only ~10^11 (3 minutes’ worth of gates that take about 10 ns each). So, that’s a speedup by a factor of about 10^15. That’s not an especially small number, and I personally wouldn’t know how to explain it without saying something about the dimensionality of the Hilbert space.

Comment #175 September 27th, 2019 at 12:07 pm

Scott #165,

I hope you understand that I am (still) a QC skeptic not just to be contrarian. It is because I do understand the cosmic implications of QC and don’t like them.

Comment #176 September 27th, 2019 at 12:09 pm

Am I the only one worried that once we reach a sufficiently large number of entangled qubits, producing a ridiculously large Hilbert space, the overlord hackers responsible for starting the simulation of our universe will notice that their computational resources are being sucked into a very tiny branch of the simulation, dangerously starving the rest of the system, and then decide to shut the whole damn thing down to investigate the logs and stack trace?

Comment #177 September 27th, 2019 at 12:11 pm

From the paper:

It’s only supremacy over all known classical algorithms.

But to be fair, that would still be the case if the experiment involved breaking RSA, solving large instances of the travelling salesman problem or checking whether a program halts in k steps.

In each case we’d only have more and more confidence that no classical algorithm could match the QC.

And even (or especially) in the halting problem case we’d want to check that the input was sufficiently random, universal, and difficult. 🙂

Comment #178 September 27th, 2019 at 12:43 pm

[…] * Quantum Supremacy FAQ. […]

Comment #179 September 27th, 2019 at 1:17 pm

Scott #174. Thanks, that’s interesting.

My concern is about the heavy reliance on the word “exponential.” Let’s assume the classical complexity is indeed exponential. Let’s assume further that this really means 2^n time (and I see no particular reason to believe that); then 2^53 computations is *perfectly feasible*. DES cracking, for example, requires 2^56 computations, and that is regarded as easy to do these days. Sure, it’s hard to do using standard Intel CPUs, but that’s beside the point. From what you say, it sounds like the state-of-the-art practical classical implementation for 53-qubit random circuit sampling is much worse than 2^53 computations. I can’t claim I fully understand why, but it’s not clear to me that we can rely on this extra cost as being intrinsic. That is a much stronger assumption than that the true classical complexity is 2^n.

In essence, the whole exercise sits awkwardly between two worlds. There is the nice clean world of computational complexity (about which we can mostly only make conjectures) and the messy world of real hardware and software one can actually build. We want to make grand mathematical claims, for example about the Extended Church-Turing Thesis, and are tempted to use the latter kind of messy evidence to make them.

To make a claim of quantum supremacy at all convincing, even allowing for assumptions of not only exponential but 2^n complexity for the classical counterpart, it seems to me that n needs to be at least of a size where 2^n computations is not feasible. Relying on the extra costs of practical classical implementations on existing general-purpose machines seems unconvincing. But really, one also needs to make a conjecture about the base of the exponential classical complexity, and I, at least, have not read any argument for why this should be 2.
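dorothy’s feasibility claim is easy to check numerically (my own back-of-the-envelope; the 10^9 ops/sec/core rate and 1000-core cluster are assumed round numbers):

```python
# How long would 2^53 elementary operations take on a modest cluster?
ops = 2**53                       # ≈ 9.0e15
rate_per_core = 1e9               # assumed: 1 billion ops/sec per core
cores = 1000

seconds = ops / (rate_per_core * cores)
print(seconds / 3600)             # ≈ 2.5 hours: indeed "perfectly feasible"
```

So the substance of the supremacy claim must rest on the observed classical cost carrying a large multiplier on top of 2^53, and on whether that multiplier is intrinsic.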

Comment #180 September 27th, 2019 at 2:20 pm

So in the 19th century I believe it was theorized that the Sun was something like a gigantic lump of burning coal. Its mass and diameter were known, but that led to a calculation that it would burn itself up in a few hundred years or maybe less. Since it was known to be much older, that left a big mystery. It had to be powered by some unexplained phenomenon, tapping into a seemingly infinite source of energy. Later, relativistic mass-energy equivalence and nuclear fusion were discovered, and the source of energy was identified. It was very very large, but it wasn’t infinite after all.

Could QM be something like that, postulating an infinite dimensional Hilbert space without really explaining it? Could there be something large but finite dimensional instead? That would put a bound on the size of a possible quantum computation, keeping the extended CT thesis alive.

It seems weird to me that complex probability amplitudes don’t show up in other areas of math, the way continuous real-valued functions approximating discrete ones happen everywhere, the way the number pi shows up everywhere. I wonder if something like that remains to be found.

Comment #181 September 27th, 2019 at 3:16 pm

[…] Additional resources and information can be found from Quanta Magazine, the Financial Times, Scott Aaronson, and this 2017 […]

Comment #182 September 27th, 2019 at 6:28 pm

fred #176:

Am I the only one worried that once we reach a sufficiently large number of entangled qubits, producing a ridiculously large Hilbert space, the overlord hackers responsible for starting the simulation of our universe will notice that their computational resources are being sucked into a very tiny branch of the simulation, dangerously starving the rest of the system, and then decide to shut the whole damn thing down to investigate the logs and stack trace?

Funny—when I once gave a quantum computing talk at SciFoo camp at Google, Larry Page asked me about exactly the same scenario (with a grin on his face). My response was something like, “wouldn’t you know more than I would about the computational overlords running our universe?” 🙂

Comment #183 September 27th, 2019 at 6:39 pm

dorothy #179:

1. You’re exactly right that the subtlety of these discussions comes from how they mix the clean world of asymptotic complexity with the dirty world of actual implementations and experiments. You see the same issue in the field of applied cryptography, for the same reasons.

2. If you assume ~2^n hardness for an underlying estimation problem, then the type of hardness reduction I know how to prove will give you ~2^n hardness for spoofing the linear cross-entropy test. On the other hand, with a weaker assumption you’ll get a weaker conclusion. In any case, certainly no one currently has any idea for simulation algorithms that beat ~2^n, for quantum circuits with this level of generality (as long as the depth is large enough).

3. Indeed, when you actually write the simulation code there’s a large multiplier in front of the 2^53, for a variety of reasons (one of which is memory limitations). It would be better if we could somehow “equalize” all the various constant factors when comparing classical to quantum. On the other hand, if you have both a large predicted asymptotic speedup AND a large observed speedup in practice, and the two are clearly related to one another via a causal story, how much more can you reasonably ask? 🙂

Comment #184 September 27th, 2019 at 6:48 pm

asdf #180:

Could QM be something like that, postulating an infinite dimensional Hilbert space without really explaining it? Could there be something large but finite dimensional instead? That would put a bound on the size of a possible quantum computation, keeping the extended CT thesis alive.

I think you mean exponential vs polynomial rather than infinite vs finite, but yes, that’s the obvious scientific question being tested by quantum supremacy experiments. And right now it’s looking like, sorry dude, but there turns out to be no last-minute save of this kind for the Extended Church-Turing Thesis.

Everyone: thanks for a great thread! Please, no new questions at this point, only responses and parting thoughts.

Comment #185 September 27th, 2019 at 10:23 pm

Scott #57. Yes, your Q10 reread in light of asdf #33 helps. Certifiably random bits used to demonstrate quantum supremacy is a very cool idea. Part of my confusion was an expectation that claims of quantum supremacy would come with a proof. Proofs, presumably, can be checked. Demonstrations take a bit longer to be accepted. Thanks.

Comment #186 September 28th, 2019 at 6:42 am

Get a hundred quantum computers at different locations to perform the same task ten thousand times, then compare the results. That’s how you harness high mutation rates: You make a lot of blurred snapshots overlap, repetitive results are the most likely to be repetitive – which is what mathematics is all about, relying on patterns to repeat. Seems to be a somewhat functional reflection of reality.

As I see it, the Planck constant doesn’t limit the world, just the size-related frequency of the observer: If you were made of stars instead of atoms, all your technology was made of stars, and any measurable change in your world would require several stars closing circuits at the speed of light, the rare snapshots you could take of Earth would sum up to an image just like the one we get by taking snapshots of the quantum world.

Einstein’s physics describes how physics appears if you look pretty closely at objects roughly your size. It fails both when watching the quantum level (where frequency is higher, aka time runs faster, so the speed of whatever they have as light appears to us to be higher) and at cosmic distances (where time appears to run faster, because spacetime is a hypersphere that only appears flat at small distances, and when its curve comes into play you get the illusion of a higher speed of light; if you don’t take this into account, you make up a universe consisting almost entirely of dark fudge).

Which means you’re basically trying to get a bunch of unruly children to welcome you with clean clothes and tidied-up rooms when you return in ten thousand years. Seeing it that way doesn’t solve your problems, but it gives you access to a lot of solutions that already exist. Might spark some good ideas.

Comment #187 September 28th, 2019 at 1:01 pm

[…] einen gewaltigen Heimvorteil hatte. Trotzdem ist das eine bravouröse Pionierleistung. Experten wie der Informatiker Scott Aaronson von der University of Texas in Austin vergleichen sie mit dem ersten Flug der Brüder Wright im Jahr 1903 – deren Flieger war auch kein […]

Comment #188 September 29th, 2019 at 8:10 am

[…] “The devices currently being built by Google, IBM, and others have 50-100 qubits and no error correction,” Aaronson wrote on his blog last week. […]

Comment #189 October 2nd, 2019 at 10:59 pm

[…] The Blog of Scott AaronsonIf you take just one piece of information from this blog:Quantum computers would not solve hard search problemsinstantaneously by simply trying all the possible solutions at once. « Scott’s Supreme Quantum Supremacy FAQ! […]

Comment #190 October 4th, 2019 at 6:58 am

[…] I was really curious to learn about quantum cryptography and its implications for the future, after Thomas revealed in the Skype chat that Google had achieved Quantum Supremacy. So I read up about it, and a computer scientist and blogger I trust—Scott Aaronson—who’s been aware of the experiments before its publication says it’s real and says it’s a big deal. […]

Comment #191 October 6th, 2019 at 9:09 pm

[…] Tom Chivers—and was planning to review it here, before it got preempted by the news of quantum supremacy (and subsequent news of classical non-supremacy). Now I can get back to […]

Comment #192 October 7th, 2019 at 6:45 am

[…] some experts are claiming that something important has just happened in the field of Quantum […]

Comment #193 October 7th, 2019 at 11:24 am

[…] Recently a paper was leaked in which a team of GOOG researchers used a 53-qubit quantum computer to demonstrate quantum supremacy — that is, their device successfully solved a problem that would have taken a classical computer many orders of magnitude longer to solve. (In this case, the quantum machine took 200 seconds to solve a problem that would have taken a classical computer 10,000 years to solve.) The problem itself was a rather artificial and arbitrary construct — not something that was intrinsically useful or interesting. The point was to demonstrate a principle — not to solve a problem. (Readers interested in the details of the problem, and how the solution could even be checked for accuracy, can read this excellent article by computer scientist Scott Aronson.) […]

Comment #194 October 8th, 2019 at 2:13 pm

[…] An amazing blog by Scott Aaronson on Quantum Supremacy: https://www.scottaaronson.com/blog/?p=4317 […]

Comment #195 October 10th, 2019 at 2:51 am

[…] Scott’s Supreme Quantum Supremacy FAQ! […]

Comment #196 October 10th, 2019 at 12:22 pm

[…] Quantum Supremacy to Classical Fallacy y Scott’s Supreme Quantum Supremacy FAQ que es un estupendo documento de «preguntas frecuentes» sobre el tema, por Scott Aaronson, de los […]

Comment #197 October 10th, 2019 at 12:52 pm

[…] Quantum Supremacy to Classical Fallacy y Scott’s Supreme Quantum Supremacy FAQ que es un estupendo documento de «preguntas frecuentes» sobre el tema, por Scott Aaronson, de los […]

Comment #198 October 10th, 2019 at 1:01 pm

[…] quantum computing did not come of age with Google’s Sycamore, a 53-qubit computer solving in 200 seconds a problem that would take even a supercomputer 10,000 […]