John Preskill, laziness enabler

The purpose of this post is just to call everyone’s attention to a beautiful and accessible new article by John Preskill: Quantum Computing in the NISQ era and beyond.  The article is based on John’s keynote address at the recent “Q2B” (Quantum Computing for Business) conference, which I was unfortunately unable to attend.  Here’s the abstract:

Noisy Intermediate-Scale Quantum (NISQ) technology will be available in the near future. Quantum computers with 50-100 qubits may be able to perform tasks which surpass the capabilities of today’s classical digital computers, but noise in quantum gates will limit the size of quantum circuits that can be executed reliably. NISQ devices will be useful tools for exploring many-body quantum physics, and may have other useful applications, but the 100-qubit quantum computer will not change the world right away — we should regard it as a significant step toward the more powerful quantum technologies of the future. Quantum technologists should continue to strive for more accurate quantum gates and, eventually, fully fault-tolerant quantum computing.

Did you ever wish you had something even better than a clone: namely, someone who writes exactly what you would’ve wanted to write, on a topic people keep asking you to write about, but ten times better than you would’ve written it?  To all journalists and others who ask me over the coming year about the application potential for near-term quantum computers, I can now simply respond with a link.

78 Responses to “John Preskill, laziness enabler”

1. William Hird Says:

Scott, I have that “gee, how come I didn’t say that” feeling every time I watch a Noam Chomsky YouTube video, and there are lots of them. Can you say High Anxiety? 🙂

2. Sean Etesham Says:

Congratulations on your PR issues being solved ha! I see how frustrating it has been for you to deal with journalists.

3. Ian Says:

Will NISQ be enough to disprove Gil? 🙂

4. Scott Says:

Ian #3: If it works as well as hoped, I think so, yes.

5. Charles R Greathouse IV Says:

Another good resource for “what could we do with quantum computers?” is NIST’s Quantum Algorithm Zoo:
http://math.nist.gov/quantum/zoo/

Unfortunately it’s not at a level accessible to (most) journalists.

I am currently collaborating with researcher 1 on a particular new idea that I came up with in a subfield of a field (researcher 1 is well established and tenured and I am a student). I think the same idea will be applicable in a different subfield of the field for which researcher 2 (also tenured) has expertise. The subfields are not that far off and researchers routinely collaborate. Is it ok to initiate communication with researcher 2 without informing researcher 1? What if researcher 2’s collaboration turns out more fruitful and quick compared to researcher 1? What should I do in this position? What implications could there be if I initiate communication with researcher 2 without informing researcher 1?

7. Ajit Says:

Dear Scott,

If I were to be a journalist, I would raise the following sort of a question to you:

“Does your moratorium also mean that you will not be covering the following question?:

Assuming a QC is built, would you be able to use it to make encryption so powerful that even algorithms like Shor’s would find it very hard to crack? Do such algorithms exist? Is there any possibility that such algorithms might get invented soon enough (if they have not been, already)?”

Best,

–Ajit

Why don’t you want to tell researcher 1 about your desire to collaborate with researcher 2? If there is no good reason, you should be open about it. If there is a good reason, I suspect that you should question the wisdom of collaborating with researcher 1, with researcher 2, or possibly with either of them.

9. Scott Says:

Ajit #7: Well, there’s quantum key distribution, which is the overwhelmingly most obvious way that quantum mechanics can help encryption rather than hurting it. But that’s different from quantum computing: it requires a quantum communications network, which is neither necessary nor sufficient for scalable QC.
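To make the QKD point concrete, here’s a toy Python sketch of BB84’s basis-sifting step (purely illustrative: no quantum channel or eavesdropper is modeled, and all names and parameters are invented for this example):

```python
import random

def bb84_sift(n_bits=2000, seed=0):
    """Toy BB84 sketch (no eavesdropper, no real quantum channel).

    Alice encodes random bits in randomly chosen bases; Bob measures in
    randomly chosen bases.  Measuring in the wrong basis yields a random
    bit, so afterwards they publicly compare bases and keep only the
    ~half of the positions where the bases matched."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.choice("+x") for _ in range(n_bits)]   # rectilinear/diagonal
    bob_bases   = [rng.choice("+x") for _ in range(n_bits)]

    bob_bits = [bit if ab == bb else rng.randint(0, 1)        # wrong basis: noise
                for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

    key_a = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    key_b = [b for b, ab, bb in zip(bob_bits,  alice_bases, bob_bases) if ab == bb]
    return key_a, key_b

key_a, key_b = bb84_sift()
assert key_a == key_b      # the sifted keys agree when no one eavesdropped
```

The security argument, of course, lives entirely in the part this sketch omits: an eavesdropper who measures in the wrong basis disturbs the states and shows up as errors in the sifted key.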

To my knowledge—someone can correct me if I’m wrong—we don’t at present have any good example of a public-key cryptosystem with conjectured exponential security, where you seem to get much benefit from the encryption and/or decryption being done on a QC. What we do have is a quantum version of “Merkle puzzles” (i.e., public-key crypto with only polynomial security), where to get the best polynomial, you currently seem to need quantum encryption and decryption. There’s also a whole field of “post-quantum cryptography”, which studies classical public-key cryptosystems—for example, code- and lattice-based systems—that unlike RSA, Diffie-Hellman, and elliptic curve systems, seem to have the property of resisting quantum attacks. These post-quantum cryptosystems would be the most obvious things to switch over to in a world with scalable QCs.
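To illustrate the Merkle-puzzle idea in its classical form (a sketch with invented names and toy parameters; the quantum version improves the exponent, as noted above), the point is the merely quadratic gap between the honest parties’ work and the eavesdropper’s:

```python
import hashlib
import random

def seal(payload: bytes, secret: int) -> bytes:
    """XOR-'encrypt' payload under a hash of a small secret (toy cipher)."""
    pad = hashlib.sha256(str(secret).encode()).digest()
    return bytes(a ^ b for a, b in zip(payload, pad))

def merkle_puzzle_exchange(n=64, hardness=64, seed=1):
    """Toy Merkle puzzles.  Alice posts n puzzles, each openable with
    ~hardness brute-force work.  Bob solves ONE (cost ~hardness), learns
    a (puzzle_id, key) pair, and announces puzzle_id in the clear; Alice
    looks the key up in her private table.  An eavesdropper expects to
    solve ~n/2 puzzles, so security is only quadratic in the honest work."""
    rng = random.Random(seed)
    puzzles, table = [], {}          # table is Alice's private id -> key map
    for _ in range(n):
        pid, key = rng.getrandbits(32), rng.getrandbits(32)
        payload = b"PUZL" + pid.to_bytes(4, "big") + key.to_bytes(4, "big")
        puzzles.append(seal(payload, rng.randrange(hardness)))
        table[pid] = key

    blob = rng.choice(puzzles)       # Bob picks one puzzle at random...
    for guess in range(hardness):    # ...and brute-forces its small secret.
        attempt = seal(blob, guess)  # XOR sealing is its own inverse
        if attempt.startswith(b"PUZL"):
            found_pid = int.from_bytes(attempt[4:8], "big")
            found_key = int.from_bytes(attempt[8:12], "big")
            break
    return found_pid, found_key, table

pid, key, table = merkle_puzzle_exchange()
assert table[pid] == key             # Alice and Bob now share `key`
```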

10. Jopus Says:

I find it fascinating that no-one has ruled out the possibility that integer factorization will be solved algorithmically on a classical computer, rendering much of QC hype rather moot. Does it strike anyone that the resemblance would be something akin to bragging about how you could summate the natural numbers via some elaborate and contrived computing machine, not by using the shortcut derived from induction, no, but ACTUALLY adding the individual numbers up to some limit in each case? If that analogy isn’t stark enough for you, then perhaps it would be a bit like comparing Babbage’s ‘computer’ of the Victorian day, with any given smart phone of today’s generation. I honestly feel that QC machines will suffer the same fate as Babbage’s machines. For the simple reason that they cost the earth to build, and tell you nothing about the underlying structure of the mathematical problem you are attempting to solve (admittedly the analogy fails here, because Babbage’s machine was entirely deterministic and logical, simply employing mechanical analogues of the modern electronic counterparts), but rather are a brute force ‘cop out’ summoning spurious quantum effects for their justification. QC is always an ‘if or a but’ away from becoming reality. It will never happen, because someone will get the jump on solving the salient problems classically first.

11. Scott Says:

Jopus #10: Ignorance and confidence is usually a bad combination.

While factoring makes the headlines, it’s extremely far from being a “workhorse” application of QC, for anyone but the intelligence agencies. If a fast classical factoring algorithm is discovered after 40+ years of not finding one, it will change the world for reasons having nothing to do with QC, but it still won’t undercut the main economic or scientific rationale for QCs, which is that they give you a fast way to simulate quantum mechanics itself. Now, no one has ruled out a fast classical algorithm for the latter either—but we have shown that if the simulation was exact, it would imply a collapse of the polynomial hierarchy, which complexity theorists consider “almost as bad” as P=NP. So the theoretical stakes are pretty high.

Moreover, if Google and IBM and IonQ and so forth do what they’re expecting to do with the 50-qubit devices they’re building now, then in some sense, classical theorists have at most one more year to beat them to the punch by discovering a fast classical algorithm to simulate QM—the clock is ticking! If the rug does get pulled out from under QC in such a dramatic way, I’ll certainly be exploring that amazing development in detail here on this blog. If the opposite happens, will you be back here to admit you were wrong?

12. asdf Says:

> we *have* shown that if the simulation was exact, it would imply a collapse of the polynomial hierarchy, which complexity theorists consider “almost as bad” as P=NP.

Wait, what? Is it proved that PH collapses if BPP=BQP? I know you worked on that but I thought it was still a conjecture. Also, if PH collapses, isn’t that *worse* than P=NP since it implies P=NP (both are in PH)?

13. Job Says:

We’ll need a BNISQP class.

BTW, i’ve noticed that Preskill tends to focus more on entanglement than interference in these talks.

I guess, ultimately it’s hard to separate the two, they’re part of the same framework.

But why the focus on entanglement? Isn’t quantum interference so much more to the point when explaining the power of a quantum computer?

What am i missing?

14. Scott Says:

asdf #12: Firstly, P=NP means that PH collapses all the way down to the zeroth level, i.e. to P itself. But PH could also collapse at some higher level even if P≠NP. This would be “not quite as bad” as P=NP.

Secondly, the result by me and Arkhipov, and independently Bremner-Jozsa-Shepherd, says that if ExactSampBPP = ExactSampBQP then PH collapses. Here ExactSampBPP and ExactSampBQP are the classes of exact sampling problems solvable in classical and quantum polynomial time respectively. The collapse is to the third level if we interpret “exact” to mean “up to exponentially small error in variation distance,” or to the second level if we interpret it to mean “really exact.”

We don’t know of any similar consequence assuming only BPP=BQP or PromiseBPP = PromiseBQP, where those refer to classes of languages or promise problems. But we do know it if you’re willing to talk about sampling problems.

The central open problem that we left in 2011—a problem that remains open!—is whether ApproxSampBPP = ApproxSampBQP would collapse PH. Here ApproxSampBPP and ApproxSampBQP are the classes of classically and quantumly easy approximate sampling problems, and “approximate” means “up to 1/poly(n) error in variation distance.” By another result of mine from 2011, this is equivalent to FBPP=FBQP collapsing PH, where FBPP and FBQP are the classes of classically and quantumly easy relational problems (that is, problems with many possible valid outputs).
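In compressed notation, the situation described above (informally; the class names are as defined in this comment):

```latex
\begin{align*}
&\mathrm{ExactSampBPP} = \mathrm{ExactSampBQP}
  \;\Longrightarrow\; \mathrm{PH} \text{ collapses (to the 2nd or 3rd level)} \\
&\mathrm{ApproxSampBPP} = \mathrm{ApproxSampBQP}
  \;\Longleftrightarrow\; \mathrm{FBPP} = \mathrm{FBQP} \\
&\mathrm{ApproxSampBPP} = \mathrm{ApproxSampBQP}
  \;\overset{?}{\Longrightarrow}\; \mathrm{PH} \text{ collapses} \quad \text{(open since 2011)}
\end{align*}
```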

15. Scott Says:

Job #13: Yes, I also prefer to focus on interference when explaining QC in public lectures. But, I don’t know, maybe John considered it too technical for a business audience. Of course, the hope for a speedup also relies on the fact that the interference is taking place with exponentially many amplitudes. And while John uses the word “entanglement,” his subsequent discussion makes it clear that he’s really talking about the exponential size of the wavefunction, something that follows immediately from the possibility of arbitrary entanglement (although one could also just talk about the exponentiality directly, without mentioning entanglement). And obviously he doesn’t make any mistake that involves conflating entanglement with classical correlation (or encouraging his readers to conflate them), which is the usual trap that popularizers fall into here. So I think his choice actually works fine for what he’s trying to do.
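For readers who want the minimal possible demonstration of interference, here is a toy numpy sketch (nothing specific to John’s article is assumed): applying a Hadamard twice sends the two computational paths to |1⟩ into cancellation.

```python
import numpy as np

# Hadamard gate: sends |0> to an equal superposition of |0> and |1>.
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

state = np.array([1.0, 0.0])   # start in |0>
state = H @ state              # amplitudes (1/sqrt(2), 1/sqrt(2))
state = H @ state              # the two paths to |1> carry opposite signs...

probs = np.abs(state) ** 2     # ...and cancel: all probability returns to |0>
print(probs)                   # -> [1. 0.] (up to rounding)
```

A classical coin flipped twice gives 50/50; amplitudes with signs give certainty. That sign structure, multiplied across exponentially many amplitudes, is the whole game.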

16. Jopus Says:

Fine. I’ll be back here in a year, whatever the outcome. But don’t bet against me. There’s always a snowball’s chance in hell.

17. John Preskill Says:

Job #13, Scott #15: I like to emphasize entanglement because scientists usually think of entanglement as the obstacle to doing an efficient classical simulation of an evolving quantum system. When the state is only “slightly entangled” we can describe it succinctly using a tensor-network description like a matrix-product state. As the entanglement grows, the classical simulation becomes more and more costly. One could also say that the classical simulation is becoming costly because we need to consider a superposition of many basis states in a physically natural basis, but that’s really just another way of expressing the same idea.

Also, I’m a little bit wary of reinforcing the impression that a quantum computer gets its power because many classical simulations are being done in superposition. That, too, is true in a way, but often leads to misunderstanding.
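To make the “slightly entangled implies succinctly describable” point concrete, here is a toy numpy calculation (an illustrative sketch only, not a tensor-network code): the entanglement entropy across a cut, read off from the Schmidt (singular-value) decomposition, which is the quantity a matrix-product state’s bond dimension has to track.

```python
import numpy as np

def schmidt_entropy(state, cut):
    """Entanglement entropy of an n-qubit pure state across a bipartition.

    Reshape the 2^n amplitude vector into a 2^cut x 2^(n-cut) matrix;
    its singular values are the Schmidt coefficients.  Few significant
    singular values (low entropy) is exactly what lets a matrix-product
    state describe the state with small bond dimension."""
    n = int(np.log2(state.size))
    singular = np.linalg.svd(state.reshape(2**cut, 2**(n - cut)),
                             compute_uv=False)
    p = singular ** 2
    p = p[p > 1e-12]                     # drop numerically-zero weights
    ent = -(p * np.log2(p)).sum()
    return float(ent + 0.0)              # +0.0 normalizes a possible -0.0

product = np.kron([1.0, 0.0], [1.0, 0.0])            # |00>: unentangled
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00>+|11>)/sqrt(2)

print(schmidt_entropy(product, 1))   # -> approximately 0.0
print(schmidt_entropy(bell, 1))      # -> approximately 1.0 (one ebit)
```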

18. Job Says:

That makes sense.

Whenever i don’t see interference mentioned, i get a sense that my understanding of how QC’s are used to solve problems (other than simulating a quantum system) is really incomplete.

Like i’m missing a key technique that’s really significant.

Also, interference is much cooler than entanglement. I don’t hang out with entanglements.

19. Scott Says:

Job #18: And you hang out with interferences?

20. Ashley Says:

“Ignorance and confidence is usually a bad combination”

I just could not help making this correction: IN SCIENCE, ignorance and confidence is usually a bad combination.

21. Scott Says:

Ashley #20: I think it’s a bad combination everywhere. It’s just that in some domains (marketing, seduction, Continental philosophy, running for president…), it can be a profitable one for the perpetrator.

22. Jopus Says:

http://math.mit.edu/~cohn/Thoughts/factoring.html

23. Scott Says:

Jopus #22: That’s not in any sense a “refutation of my assertions.” Firstly, because it’s just Henry’s opinion, clearly and honestly labeled as such (“Of course, I have no real evidence for my views; if I had any good ideas for factoring, I’d be working on them instead of writing this”). But secondly, because nowhere in my response to you did I dispute the point that factoring might turn out to be classically easy! Did you even read what I wrote?

24. Ajit Says:

Hey Scott,

Thanks for caring to give a real reply (re. #9).

Bye for now…

–Ajit

25. Jopus Says:

Sorry Scott, wasn’t attempting to be churlish. Was meant to be a tongue-in-cheek comment. Should have added an exclamation mark to indicate this, but humor doesn’t always come off well in the online format, I find.

I simply thought you were suggesting ignorance of Quantum Computing on my part, which would be fair enough, because it’s true. I know next to nothing about quantum computing. But confidence in the fallibility of quantum factoring methods seems well-placed given my initial observation, which I stand by, that if quantum computing worked and solved integer factorization or any problem, it would be of value – but not useful until we fully understood the mechanisms involved, so we could exploit them in further applications, allowing other science and technology developments.

But when you said that, ‘…we have shown that if the simulation was exact, it would imply a collapse of the polynomial hierarchy, which complexity theorists consider “almost as bad” as P=NP’, I fail to see why, if P=NP, this would be a bad thing. In the same reply, you conceded that a classical P-time factorization algorithm ‘still won’t undercut the main economic or scientific rationale for QCs’. However, I disagree somewhat on this. Barring exceptions such as protein folding for biologists and reaction modelling for chemists, where I can comprehend the benefits of leveraging QC power, I struggle to see the economic rationale being anything other than ‘break all encryption’.

You may state that elliptic curve, discrete logarithm etc are still safe with a classical P-time algorithm. However, I disagree, as it’s very likely that a solution to factorization could rapidly be adapted to these other encryption classes as well.

For my two cents worth though, I entirely believe that the Ancient Greeks knew a P-time factorization algorithm. When the library at Alexandria was burned by the Romans, a LOT of learning was lost. Some historians have speculated it set the world back 1,000 years or more. For example, Pythagoras had invented a rudimentary form of calculus, something which was not revived again until the 15th century. So 40+ years doesn’t really do justice to the two Millennia that preceded the modern era.

The difference was that in those days, any useful algorithm had limited utility, even if it was a great discovery, because there were no computers of any kind to implement it on, so the classical versus quantum distinction was rather moot. Also, I have heard it mentioned that even classical computers use quantum effects at the holes in transistor gates. So in a sense, classical computers ARE already quantum, it’s just that perhaps they are a very diluted version of what researchers are striving for with true QC machines. So good luck with your research. I honestly hope QC computers start to work properly or usefully, although I don’t hold out much hope that they will, certainly not for another decade or more.

26. Jake the Snake Says:

I’ve never been fascinated by quantum computers, because quantum mechanics has already proven its worth for making intractable computations become tractable. Quantum mechanics is needed to build electronic computers, semiconductors, and transistors, which in turn are already used for “simulating quantum mechanics” (e.g., lattice QCD). Sure, advancing from Babbage machines to supercomputers has only changed running time constants rather than complexity classes, but they’re awfully large constants.

27. Scott Says:

Jake the Snake #26: You can be fascinated or not fascinated by whatever you want, but it’s worth pointing out that there are fundamental physical limits to the continuation of Moore’s Law. I.e., you can’t just keep repeating the advance from Babbage machines to supercomputers over and over, without your computers storing so many bits in a given region, or cycling through so many states per second, that they necessarily collapse to black holes. In more practical terms, we have no clue how you’d make transistors smaller than atomic scale, and are already nearing that limit. After that, if you want computers that can do even more, you need to harness nature for computation in fundamentally new ways, and quantum computing is currently the only serious proposal for how to do that based on known physics.

28. fred Says:

Scott #27

“we have no clue how you’d make transistors smaller than atomic scale, and are already nearing that limit. ”

Given that the Planck length is about 10^20 times smaller than the radius of an electron, is it possible to use particle position to store information? Or does the uncertainty principle make this fundamentally impossible?
One analogy I’m thinking about (which may be wrong) is the constant flow of new techniques to get past the fundamental resolution limit of light/electron wavelength to see finer details in microscopy.

29. Scott Says:

fred #28: Yes, one could imagine that billions of years in the future, our descendants will build their computers out of pure quantum ripples in the spacetime metric. Even then, of course, they still won’t be able to exceed the Bekenstein limits of ~10^69 qubits per square meter, and ~10^43 operations per second. In any case, I think it’s fair to say that engineering ideas for how you get below atomic scale, let alone to Planck scale, remain iffy at present. 🙂

30. fred Says:

Scott #29

Right, and density of data is one thing, but there’s also the added difficulty that signals (reads and writes of the data) need to be propagated around (e.g. from an ALU unit at the center to a “sphere” of memory around it), and you can’t beat the speed of light for this.
This is already what makes CPU/GPU design so difficult today.

31. Job Says:

> I’ve never been fascinated by quantum computers, because quantum mechanics has already proven its worth for making intractable computations become tractable.

No it hasn’t.

That’s like saying László Babai’s car has already proven its worth in reducing the time complexity of graph isomorphism.

32. Raoul Ohio Says:

1948: “As soon as controlled fusion is perfected, electricity will be too inexpensive to meter”.

2018: “Moreover, if Google and IBM and IonQ and so forth do what they’re expecting to do with the 50-qubit devices they’re building now, ….”

33. Scott Says:

Raoul #32: Oooh, is this a game of “spot the flaws in the comparison”? Can I play?

1. I used the word “if.”
2. I wasn’t talking here about a 20-year prospect, but about something that’s either going to happen or not happen in 1 year. (The first 50-qubit chips already exist, but remain to be calibrated and tested.)
3. No claims of any useful applications (yet), let alone any that are “too cheap to meter.”
4. Actually, we could already have fusion energy with 1950s technology, by exploding hydrogen bombs underground and using them to drive turbines. There are some obvious reasons why we don’t do it, but they’re more social and political than scientific. No analogous issue with QC that I’m aware of.

34. fred Says:

The photovoltaic effect was first observed by Becquerel in 1839 and solar power is anticipated to become the world’s largest source of electricity by 2050! 😛

35. Raoul Ohio Says:

That comparison was intended to be pithy but not particularly serious. My official position on QC is: “Who knows?”.

36. John Sidles Says:

Scott opines (circa #33): “No analogous issue with QC that I’m aware of [in controlled thermonuclear fusion]”

Todd Harrison Rider’s MIT PhD thesis “Fundamental Limitations on Plasma Fusion Systems Not in Thermodynamic Equilibrium” (1995) and its followup article “A general critique of inertial-electrostatic confinement systems” (Physics of Plasmas, v2(6), 1995) provide an in-depth discussion of quantum dynamical obstructions to scalable energy-gain fusion that are directly relevant to quantum dynamical obstructions to scalable Quantum Supremacy.

In a nutshell, all demonstrated plasma technologies to date (save for fission-pumped implosion devices, i.e., H-bombs) radiate photonic energy from the device much faster than fusion reactions can replenish it.

Similarly, all quantum-coherent computational technologies to date radiate photonic entanglement from the device much faster than error-correction processes can regenerate it.

The physical obstruction is that the fine-structure constant α=1/137 is sufficiently large as to make energy-gain fusion and Quantum Supremacy comparably difficult of scalable demonstration (it was this parallel that first led me to appreciate the deep physics of Gil Kalai’s skepticism).

On the other hand, large-α photonic engineering is both scientifically fruitful and economically viable because (for example) the same lossy photonic processes that make controlled fusion hard make laser technologies easy.

Similarly, the same lossy photonic processes that make Quantum Supremacy hard have made (to date) the Extended Church-Turing Thesis effectively true … for all technologies whose key hardware elements are made of photonically-bound condensed matter.

For the above-stated reasons, it is well to combine support for controlled fusion research with a sober appreciation of the photonic obstructions to it. And it is well to combine support for Quantum Supremacy research with a sober appreciation of those same photonic obstructions.

37. Serge Says:

Quantum computing seems to be a clever, systematic method for converting mathematical problems into technical ones…

38. Scott Says:

Raoul #35: You never know for certain that something will work until it does work. But if you still treat 50-100 qubit QC as a random hypothetical about which no one can say anything (“who knows?”), then you’re obviously not paying attention to what’s happening on the ground.

39. William Hird Says:

Scott, you are always saying that one of the main attributes of a scalable quantum computer, should one actually be built, is that it will enable us to simulate quantum physics, but isn’t that what nature is doing all the time anyway? Can you elaborate a little bit on what you mean by the insights into quantum phenomena that a quantum computer would give us, beyond what we already know about quantum mechanics (or think we know)?

40. Scott Says:

William #39: It’s the difference between having to build a model and put it in a wind tunnel every time you want to simulate an airplane, versus having a general-purpose computer that just does the simulation for you. It’s not just that the programmable device is enormously faster; it’s that it lets you answer questions that you couldn’t answer in the physical experiment, because you don’t have the right knobs and levers (e.g., to measure or manipulate an electron density all the way in the interior of your molecule, without needing to destroy the whole molecule to get to it). As a final advantage, a simulator lets you test your theory about, e.g., which effective Hamiltonian governs some system, comparing its predictions against reality. Without the simulator you’d only be comparing reality against itself.

It’s not clear that any of this will change the way we understand QM itself—but it should give a new window into many-particle quantum behavior, with applications to chemistry, materials science, high-energy physics, and other fields. For more see (e.g.) Sec. 6.9 in John’s survey, and references therein.

41. Serge Says:

We live on the macroscopic scale and want to prove the feasibility of quantum computing on the quantum scale. The technical switch from macroscopic to quantum level is part of the proof, in addition to finding a quantum algorithm. By Church’s thesis there must exist a mathematical proof of the whole process. I conjecture that all that can be done quantumly in practice could be also done by classical methods with comparable efficiency (Generalized Church Thesis). But there is no reason for not trying the quantum way. Maybe our brain is better at building machines than at inventing algorithms…

42. John Sidles Says:

Long-sustained exponential advances in the capabilities of large-scale quantum simulation with classical computational resources are inspiring in many people (definitely including me) diverse new questions with respect to “the way we understand QM itself” (in the phrase of Scott’s comment #40).

As a concrete example, what dynamical state-spaces — other than Hilbert space — support the quantum thermodynamical “trifecta” of Hamiltonian dynamical processes, Lindbladian measurement processes, and complex spectral processes?

Here the word “support” is equally interesting when we embrace its engineering sense, namely “good enough for practical thermodynamical computations”, and when we embrace its mathematical sense, namely “rigorously supporting thermodynamical laws”.

This particular class of questions can be regarded as a subset of a more general class of computational questions that Avi Wigderson’s new book Mathematics and Computation (2017) (draft PDF here) introduces as follows:

Without describing them [tensor network states] formally, they may be viewed as computational devices (like circuits). …

In both cases the obvious description of these objects has length exponential in n, but for some ground states, as for some functions, the computational description may be much more succinct, e.g. polynomial in n. …

[Tensor network states] are an extremely useful heuristic for many naturally occurring and studied physical many body systems, but no theoretical explanation or provable bounds on their performance exist.

This viewpoint, in which (as a concrete example) trajectories on tensor network states are regarded both as (quantum) computational processes and as (quantum) thermodynamical processes, in recent years has accumulated a mathematical and scientific literature that is so stupendously large and diverse — a literature whose domain and range are extending with such astounding rapidity — that this comment can no more attempt to adequately survey that literature, than could Avi Wigderson’s monograph.

Yes, this ongoing research is radically extending (in Scott’s phrase) “the way that we understand QM itself”, for example with regard to the identification of thermodynamical trajectories with computational trajectories.

For young researchers especially, three pertinent resources are Mike Nielsen’s on-line free-as-in-freedom book Neural Networks and Deep Learning (2015), quantum economist Steve Landsberg’s inspiring and mathematically erudite weblog essay “The Generalist” (2014), and the “available positions” web page of the prospering thermodynamical simulation company Schrödinger LLC … which as of today lists forty open positions for high-level engineers and scientists.

In summary, “computational devices that answer thermodynamical questions” and “thermodynamical devices that answer computational questions” can naturally be regarded as two sides of the same quantum coin, and this synthetic view is just one of many ongoing advances that are radically extending both the fundamental ways we understand QM itself, and the ways that we apply QM in addressing deep problems, both practical and fundamental.

43. Craig Says:

Scott,

What will you say if IBM, Google, Intel, and other companies in the QC game get together at the end of 2018 and say something like, “We have all built machines on the order of 50 qubits, but no matter what we did to calibrate them, they completely failed the harder of the tests outlined in this paper: https://arxiv.org/abs/1712.05642”?

44. Scott Says:

Craig #43: Obviously I’ll say, “then what can we figure out from the data about the reasons for the failures? Was there unexpected decoherence? If so, are there ideas for how to control it? Or was it something more surprising? Let’s try to learn something new.”

45. Craig Says:

Scott 44

It is not obvious to me what you would say. If the tests fail, then this would most likely mean the output data is garbage. What could you possibly learn from garbage? Especially when a good analysis of the output would require on the order of 2^50 runs of the algorithm.

This is why I do not think much will even be learned from a failure, other than “it doesn’t work.”

46. Job Says:

It depends on the tests.

For example, i think Simon’s algorithm would be a perfect test, especially with 40-50 qubits, because it will tell you how good the QC is at large scale interference – which is the only thing i care about.

I’ve worked this out before, roughly, and with a max circuit depth of 100 gates (assuming CNOT counts as a single gate) you get just enough gates left on a 40-50 qubit system to implement Simon functions with a wide range of secret values.

It would not be a perfect test in the sense that these input functions would not be using a universal gate set (since AND gates are really expensive) and that does matter a lot, but it gives us “programmable interference” in a testable context.

If you get failures, then you narrow down the inputs that cause them and try to identify a pattern. It could be really revealing.

That’s basically the kind of test i’m waiting for, more than the quantum supremacy stuff.
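For what it’s worth, the idea can be sketched as a pure statevector simulation (toy code: the function f and all parameters here are invented for illustration, and no real device is involved):

```python
import numpy as np

def simon_measurements(s, f, n, shots=200, seed=0):
    """Statevector simulation of one round of Simon's algorithm.

    The state lives on 2n qubits (n input, n output).  After H on the
    inputs, the oracle |x>|0> -> |x>|f(x)>, and H on the inputs again,
    measuring the input register can only ever yield strings y with
    y.s = 0 (mod 2): every other outcome is killed by destructive
    interference, which is exactly the effect this test probes."""
    dim = 2 ** n
    # Amplitudes after H^n and the oracle: uniform over |x>|f(x)>.
    amp = np.zeros((dim, dim))
    for x in range(dim):
        amp[x, f(x)] += 1.0 / np.sqrt(dim)
    # Second Hadamard layer on the inputs: H^n[y,x] = (-1)^(y.x)/sqrt(dim)
    Hn = np.array([[(-1) ** bin(y & x).count("1") for x in range(dim)]
                   for y in range(dim)]) / np.sqrt(dim)
    amp = Hn @ amp
    probs = (amp ** 2).sum(axis=1)          # marginal over the output register
    rng = np.random.default_rng(seed)
    return rng.choice(dim, size=shots, p=probs / probs.sum())

# A made-up 3-qubit Simon function with secret s = 0b101: f(x) = f(x ^ s).
s = 0b101
table = {}
def f(x):
    rep = min(x, x ^ s)                      # collapse each {x, x^s} pair
    return table.setdefault(rep, len(table))

samples = simon_measurements(s, f, n=3)
assert all(bin(y & s).count("1") % 2 == 0 for y in samples)   # y.s = 0 always
```

On a real noisy device the interesting failures would show up as samples violating the y·s = 0 constraint, and their pattern across different f’s is what you’d try to learn from.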

47. Serge Says:

Job #13: Actually there are now four relevant classes:

P (classical Polynomial time)
GQP (classical Galactic Quasi-Polynomial time)
BNISQP (Bounded error Noisy Intermediate-Scale Quantum Polynomial time)
BQP (Bounded error Quantum Polynomial time)

I conjecture that P ⊊ GQP = BNISQP ⊊ BQP.

48. Scott Says:

Craig #45: But Google and IBM have already done experiments with fully programmable superconducting chips with ~20 qubits (and IonQ is at similar numbers with trapped ions). And the results are in, and it’s not garbage—subject to the advertised decoherence rate, gate time, etc. etc. you get the results you want, and can (for example) fully entangle all 20 qubits in a complicated way and then prove you did it.

So, if you got nothing but “garbage” with 50 qubits, then there’s at least one question for anyone with the tiniest smidgen of curiosity: namely, where exactly does the transition to “garbage” happen? At 30 qubits? 40? And how integrated do the qubits have to be before you see the garbage? (We already know of simpler things you can do with thousands or even millions of qubits.) And how does Nature even “know” to start returning garbage—what are the laws governing the garbage/non-garbage boundary?
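As a toy illustration of what “fully entangle n qubits and then prove you did it” can mean in the simplest (GHZ) case (a sketch only; the real experiments use more elaborate certification protocols):

```python
import numpy as np
from functools import reduce

n = 5                     # a small stand-in for the ~20 qubits of the experiments
dim = 2 ** n

# GHZ state (|0...0> + |1...1>)/sqrt(2), the target of an H + CNOT-chain circuit.
ghz = np.zeros(dim)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

# In the Z basis you only ever see all-zeros or all-ones, 50/50...
z_probs = ghz ** 2
assert np.isclose(z_probs[0], 0.5) and np.isclose(z_probs[-1], 0.5)

# ...but that alone could be a classical coin flip.  The n-qubit X-parity
# distinguishes them: it is +1 with certainty for the GHZ state, while a
# classical 50/50 mixture of |0...0> and |1...1> gives expectation 0.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Xn = reduce(np.kron, [X] * n)
print(ghz @ Xn @ ghz)     # -> 1.0 (up to float rounding)
```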

49. John Sidles Says:

Scott points out (#48) that “Google and IBM  … can entangle 20 qubits in a complicated way and then prove it.”

This statement is of course entirely correct, and indeed the scientific community can look forward, with considerable confidence, to technical demonstrations with hundreds and even thousands of qubits.

On the other hand, it is also entirely correct that until phrases like “complicated way” are replaced (someday soon?) by computationally specific phrases — for example “quantum states having no polynomial-rank tensor decomposition” — confidence in the extended Church-Turing thesis will remain well-grounded.

Fortunately we haven’t long to wait for more experimental evidence and new theoretical ideas! 🙂

50. Serge Says:

Serge #41 & #47: A strong version of Church’s thesis could be that P = BQP. But is the feasibility threshold the same in both classes? That’s what the quantum debate is all about, in my view.

51. Craig Says:

Scott #48,

Yes, these would be good questions, with a simple answer:

Do the experiments and you will find that, at some point between 20 and 50 (or perhaps 100, to be safe), fully integrated qubits will be too much for the great computer in the sky, Nature’s classical computer, to handle, and it will return garbage as an overflow error. It shouldn’t be difficult for Nature’s classical computer to know when it is being tricked into entangling too many qubits.

52. Joshua Zelinsky Says:

Craig #51,

We have large entangled states. So how do you expect it to tell the difference between those which are and are not acceptable for the classical computer running things?

53. Craig Says:

Scott 48 and John 49,

As far as what are the laws governing the garbage/nongarbage boundary, that will require lots of experiments to answer. It would involve measurements of the degree of entanglement of qubits. Are there currently ways of measuring the degree of entanglement?

54. John Sidles Says:

Craig wonders (circa #53)  “Are there currently ways of measuring the degree of entanglement (of Supremacy-attempt quantum states)?”

Here is one answer (among many possible) that draws upon Avi Wigderson’s remarks (quoted in comment #42) to illuminate a marvelous feature of the quest to demonstrate Quantum Supremacy.

To set the stage, here is Paul Dirac reflecting on the glory decades of quantum mechanics:

“It was a good description to say that it [quantum mechanics] was a game, a very interesting game one could play. Whenever one solved one of the little problems [in the 1920s and 1930s], one could write a paper about it. It was very easy in those days for any second-rate physicist to do first-rate work. There has not been such a glorious time since. It is very difficult now for a first-rate physicist to do second-rate work.”  — Paul A. M. Dirac (1975)

In a similar vein to Dirac’s meditation, here is a fable that illuminates the “glorious times” that have come again for quantum researchers … supremacists and skeptics alike … thanks to research programs like DWave’s and Google’s and IBM’s.

Quantum Hilarity 2020

Bob  What a fine Solstice Celebration! How was your 2020, Alice?

Alice  It’s been amazing, Bob … I just learned that my undergraduate capstone project is among the finalists in the DWave/Google/IBM “Quantum Supremacy Challenge 2020”.

Bob  Is that the challenge where each company provides a list of 50-bit strings, with half of the strings random, and the other half produced by quantum hardware, and the challenge is to separate the random strings from the quantum-hardware strings?

Alice  Yes … in the summer of 2020 each company published a challenge-list of 2×10^6 50-bit strings, with 10^6 strings being random strings, and 10^6 strings originating in repeated measurements of the same quantum state, as produced by that company’s Supremacy Challenge hardware. The challenge is to separate the random strings from the quantum strings … my capstone project’s algorithm achieved a separation-purity of better than 0.99.

Bob  Congratulations! However did you do it, Alice?

Alice  (modestly) It wasn’t that hard, Bob … books like Mike Nielsen’s Neural Networks and Deep Learning provided the starting idea of training a cognitive network to distinguish the random-strings from the quantum-strings … books like Karen Smith’s An Invitation to Algebraic Geometry point to Segre varieties as the natural algebraic “neurons” of the network … books like Joe Landsberg’s Tensors: Geometry and Applications flesh out the details of the secant-joins that entangle the Segre-neurons.

Bob  (impressed) Somehow I don’t think it was that easy, Alice.

Alice  (a bit smugly) Definitely there is some “secret sauce” in the way my algorithm algebraically connects-up the Segre-neurons with secant joins — and Google’s TensorFlow language made it easy to code-up this secret sauce.

Bob  So what comes next?

Alice  We finalists now face a tougher challenge-round: given a fresh list of never-before-seen strings, promised to be produced by the same protocol as before, assign a probability to each string; the winning algorithm is the one that yields the maximal entropy-difference between the high entropy of the random strings and the low entropy of the quantum strings. And there’s a thermodynamic constraint too: the entropy-cost of running my entropy-estimating algorithm on classical hardware, cannot exceed the entropy-cost of running the quantum hardware that produced the data-set; this turns out to be about 10^8 J/K.

Bob  (wonderingly) But how can your algorithm work so well, Alice? You haven’t encoded any detailed information regarding the quantum hardware.

Alice  The companies provide a complete description of their hardware and experimental protocol, and some contestants do base their entropy-estimates on detailed dynamical simulations of the apparatus. If the quantum strings were generated by a device as simple as a classical binary circuit driven by a classical noise source, then I would have to model that circuit. Fortunately for my capstone project, classical binary circuits are inherently error-correcting, whereas state-of-the-art quantum circuits are not … as of 2020 at least. Absent error correction (whether classical or quantum) it turns out that it’s surprisingly hard for detailed dynamical simulations to beat deep-learning entropy-estimating algorithms like mine.

Bob  (excited) So if your capstone algorithm can reliably estimate the entropy of quantum-generated strings, at lower entropy cost than required for physically generating those same quantum strings, then the Extended Church-Turing thesis is affirmed yet again … otherwise the Quantum Supremacists can claim a victory.

Alice  (smiles) Yes. And what’s nice is, with each passing year quantum hardware becomes more coherent, and each year too, tensor-descriptions of quantum states become more efficient. So each new year brings wonderful advances for quantum understanding generally … truly we are all of us living in the most glorious quantum times ever.

And it’s very nice that I’m receiving some attractive job offers too … these techniques turn out to have broad applicability beyond “merely” demonstrating quantum supremacy!

References:
P. A. M. Dirac, “Lecture 1: The Development of Quantum Mechanics,” in Directions in Physics, ed. H. Hora and J. R. Shepanski (Wiley-Interscience, New York, 1978), p. 6. (Directions in Physics comprises edited transcripts of five lectures given by Dirac when he was visiting Australia and New Zealand in 1975.)
Karen E. Smith, Lauri Kahanpaa, Pekka Kekalainen, and William Traves, An Invitation to Algebraic Geometry (Springer, New York, 2000).
Joseph M. Landsberg, Tensors: Geometry and Applications (American Mathematical Society, 2012).
Michael A. Nielsen, Neural Networks and Deep Learning (Determination Press, 2015).
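Appendix: the fable’s string-separation game can be played classically at toy sizes, where the ideal output distribution still fits in memory. The sketch below is my own construction, not the actual challenge protocol: batches are scored by their mean log-probability under a Haar-random state’s Porter-Thomas-style output distribution. At 50 qubits, the table of 2^50 probabilities used here is precisely what is unavailable, which is the whole point of the challenge.

```python
import numpy as np

n = 10                       # toy size; at 50 qubits the table has 2^50 entries
rng = np.random.default_rng(42)

# Output distribution of a Haar-random n-qubit state (Porter-Thomas-like).
amps = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
p = np.abs(amps) ** 2
p /= p.sum()

quantum = rng.choice(2 ** n, size=2000, p=p)   # "hardware" samples
uniform = rng.integers(0, 2 ** n, size=2000)   # truly random strings

# Score each batch by its mean log-probability under the ideal distribution;
# quantum samples are biased toward heavy outcomes and so score higher.
score_q = np.log(p[quantum]).mean()
score_u = np.log(p[uniform]).mean()
print(score_q > score_u)  # True
```

For an ideal Porter-Thomas distribution the expected gap between the two scores is about one nat per sample, which is why even modest batch sizes separate cleanly in this toy setting.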

55. Scott Says:

Craig #51, #53: I’ve lost interest in debating here. All I want from you at this point is a precommitment that, if 50-qubit programmable QCs work as intended, then you will admit you were almost certainly wrong. I.e., that you won’t try to weasel out of it by demanding a 100-qubit supremacy experiment—and then when that too is done, saying it doesn’t count because we can’t fully verify the result classically.

(I don’t need to make an analogous commitment, because the situation is not symmetrical. From dozens of failed airplane trials in the late 1800s, one could conclude only that the problem was harder than many people thought—and also, that maybe it was worth looking for fundamental physical principles that would explain why heavier-than-air flight was impossible for any machine, notwithstanding the birds. By contrast, a single verified successful trial, like the Wright brothers’, is enough to conclude that the problem is solvable after all.)

56. Craig Says:

Scott,

I will make a commitment that if one can make a 60 qubit machine, then I am almost certainly wrong. 50 qubits to me is not enough.


57. John Sidles Says:

Shtetl Optimized readers who use laptops that perform billions of gate-operations per second, with (typically) zero errors, are invited to read the (dry-as-dust) Wikipedia page “Eye Pattern”, or alternatively, view the (hilarious musical) YouTube video “An Eye Is Born.”

No existing article or textbook (that is known to me anyway) analyzes the eye-pattern dynamics of gate-error correction via any engineering-analysis framework that naturally encompasses the state-spaces of quantum-supremacy devices … and yet, the day surely is coming (or already is here?) in which students of computational engineering will require such textbooks.

58. Nick Says:

@John Sidles:

In post #49 you mention “quantum states having no polynomial-rank tensor decomposition”. Given that, as far as I know, even determining the rank of a tensor is an NP-hard problem, how would this property be verified? Do the tensors that arise from quantum entanglement have qualities that make calculating their rank easier, or are the tensors small enough at present that we can use a brute-force method to determine their rank?

59. John Sidles Says:

A short answer to your question, Nick, is by analogy to linear programming. There exist linear programming problems for which George Dantzig’s celebrated simplex algorithm requires exponential time … but experience indicates that these special problems seldom or never are encountered in practice.

Similarly, there exist quantum states for which tensor network representations require exponential rank … but experience indicates that these special quantum states seldom or never are encountered in practice … albeit we haven’t much experience as yet. 🙂
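Concretely, for a fixed bipartition this generic/special dichotomy is cheap to see: the Schmidt rank of a state across a cut is just the matrix rank of the reshaped state vector, computable by SVD. A small NumPy sketch (the function name is mine): a product state has rank 1, while a Haar-random state is generically full rank.

```python
import numpy as np

def schmidt_rank(state, n_qubits, tol=1e-10):
    """Schmidt rank of an n-qubit state across the middle bipartition:
    the matrix rank of the reshaped state vector, via SVD."""
    half = n_qubits // 2
    mat = state.reshape(2 ** half, 2 ** (n_qubits - half))
    return int(np.sum(np.linalg.svd(mat, compute_uv=False) > tol))

n = 8
plus = np.full(2, 2 ** -0.5)
product = plus
for _ in range(n - 1):                     # |+>^8, an unentangled state
    product = np.kron(product, plus)

rng = np.random.default_rng(0)
rand = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
rand /= np.linalg.norm(rand)               # Haar-ish random state

print(schmidt_rank(product, n))  # 1
print(schmidt_rank(rand, n))     # 16  (full rank, generically)
```

Of course this only probes one cut at a time, and full tensor rank across many cuts is the NP-hard object; but Schmidt-rank-per-cut is exactly the quantity that tensor-network (e.g. matrix-product-state) fitting algorithms exploit.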

A major challenge for “Quantum Supremacy” experiments, therefore, is to scalably realize, physically and reproducibly, large classes of quantum states for which straightforward extensions of existing tensor-fitting algorithms (like Alice’s algorithm of comment #54) fail generically. No formal proof of failure is required … “hey, the usual tensor-techniques perform badly” counts as evidence in favor of Quantum Supremacy.

Supposing that this first Quantum Supremacy challenge is met, then a far tougher Quantum Supremacy challenge is to scalably realize, physically and reproducibly, large classes of quantum states for which all known tensor-fitting algorithms fail generically. Of course, this second, tougher challenge will motivate Alice and her colleagues to test ever-more-ingenious tensor decomposition algorithms against ever-more-challenging experimental datasets, in a math-versus-physics horse race that is sure to be plenty of fun for all involved.

Indications of the mathematical breadth and depth of supremacist-vs-skeptic horse race(s) can be discerned in a lecture series that arch-skeptic Gil Kalai’s weblog Combinatorics and More pointed out earlier this month, in an essay titled “Interesting Times in Mathematics: Enumeration Without Numbers, Group Theory Without Groups”. Gil’s essay in turn linked to an IIAS/HUJ 21st Midrasha in Mathematicae school, “Lie Theory without Groups: Enumerative Geometry and Quantization of Symplectic Resolutions”.

To borrow a rural idiom, as I watch these IIAS/HUJ lectures, “I feel like a horse looking at a wristwatch”. There are plenty of individual words that I appreciate are relevant to Alice’s tensor network algorithms (of #54), yet somehow the word “practical” never comes along to help us suffering mathematically challenged engineers. 🙂

Fortunately for young researchers, there are plenty of paths to GREAT careers in entropy-reduction that do not require training in “algebraic symplectic geometry” (whatever that discipline may turn out to be).

One career-path begins with the error-correction technology that is analyzed in the video “An Eye Is Born” (of #57).

Note for example that gate-jitter is measured to be 10.26 picoseconds … so short a time that light travels only 3.07 millimeters. In such circuits, noise mechanisms like intersymbol interference (Wikipedia here) are error-corrected so aggressively, by techniques like Viterbi algorithms (Wikipedia here), that any respectable bit-mining operation (for example) radiates photonic entropy intense enough to be visible from orbit. That’s a lot of error-correcting photonic entropy-generation!

Evidently classical and quantum error-correction technologies alike are pushing hard, already, against relativistic, thermodynamic, quantum dynamical, and informatic limits … and coming generations of computational hardware surely will push ever-harder.

This leads us to contemplate the hardware that generated “An Eye Is Born”; namely the Keysight Corporation’s (history here) Advanced Design System (ADS, specifications here), which is part of an ongoing technology development effort that presently is advertising 296 jobs for “engineers” and 85 jobs for scientists. Good!

The evident synergy of the mathematical vigor of the IIAS/HUJ Midrasha in Mathematicae, with the entrepreneurial vigor of Keysight, is convincing evidence — amply convincing to at least one SO reader at any rate! — that we already have entered into a Dirac-quality “glorious time” for error-correcting quantum researchers and entropy-minimizing enterprises.

60. Craig Says:

Scott,

I would like to take back what I said before about 60 qubits being the point where I would say I am almost certainly wrong. Instead I will say that if one were to build a working 65 qubit machine, then I will say I am almost certainly wrong.

I say this because I think the universe is a giant digital supercomputer, so it is only logical that the maximum number of qubits possible for a quantum computer is a power of two. So I believe it to be either 32 or 64, much more likely 32. 128 is too large in my opinion.

61. Ian Says:

Sorry, I have to ask. No real expectation of an answer, but I am very curious about your thoughts on the proof of the “2-2 Unique Games Conjecture:”

https://gilkalai.wordpress.com/2018/01/10/subhash-khot-dor-minzer-and-muli-safra-proved-the-2-to-2-games-conjecture/

62. Scott Says:

Ian #61: It’s a great result! But I have a “conflict of interest,” since my wife Dana was involved in some of the discussions that led to this paper, and has been working on UGC-related problems for a decade, and never more so than right now. All I’ll say for now is that I expect even more hardness results to come, using an overlapping set of ideas to those in the 2-to-2 paper, and will hopefully do a blog post about all of them after the dust has settled.

63. Ian Says:

Great, thanks so much for responding!

64. Mirko Says:

Sorry for the interruption.
“First they came for the Iranians” has now entered its second stage: “Then they deported people who had been allowed to live in the USA for decades, and who had even served in the military.”
This is like what happened around 1934: first came the assaults justified by appeals to safety. (In general, comparisons to the Third Reich should be avoided, but I wasn’t the first to bring them up; see this blog.)
The Dreamers are treated by Republicans as if it were immoral to care about their rights.
I don’t read anything about massive demonstrations. Why don’t the people of the USA put pressure on the Republicans and on Trump?

65. Charles R Greathouse IV Says:

Mirko #64: They’ve been marching for two days now; see for example https://www.nytimes.com/2018/01/20/us/womens-march.html. Some 200,000 people marched in one city yesterday; totals for today and for other cities I haven’t yet seen reported.

66. Raoul Ohio Says:

Mirko #64:

Here are some aspects:
1. The influence of the media (who historically paid attention to what politicians did) has greatly eroded worldwide, so everyone is getting away with more stuff.
2. Trump and his crew are constantly outrageous, so any one event gets lost in the din.
3. Many Trump supporters are “proud to be dumb” and don’t care.

Hopefully there will be a “Make America Sane Again” landslide in the next election, and the fiasco will start winding down.

67. Hugo Tetrode Says:

Hmmm … for me, the “Leave a Reply” box just now came pre-filled with the email address of at least one previous SO poster … this behavior is wrong … caution is advised … (macOS 10.13.2, Firefox 57.0.4).

68. Joshua Zelinsky Says:

As long as we’re bugging Scott about papers, can we maybe get a comment on the improvement of Ryan Williams’s result that replaces NEXP with NQP? This seems like sort of a big deal.

69. Craig Says:

Scott,

Because you are no longer talking to the press, the press is now turning against QC.

https://www.quantamagazine.org/the-era-of-quantum-computing-is-here-outlook-cloudy-20180124/

70. Scott Says:

Craig #69: If you actually read that article past the headline, it seems balanced and reasonable to me, accurately conveying both the excitement and the caveats the same way I try to do when I give talks.

71. Sniffnoy Says:

Josh: Yes I had been meaning to ask about that too. 🙂 Like, NQP seems scarily close to NP! For obvious reasons we probably shouldn’t expect to see a proof of ACC not contained in NP anytime soon, but man NQP sure at least seems close.

Question: Does NQP have any relations to more commonly-used complexity classes? I’m not seeing any obvious ones. (The complexity zoo doesn’t even list NQP! Or rather, it lists something called “NQP”, but it’s something different, where the Q stands for “quantum”.)

72. Craig Says:

Scott,

I read the article. And the way I see it, compared with almost every other article about QC in popular magazines, it is against QC, precisely because it is balanced and reasonable.

73. Sniffnoy Says:

Er, any more commonly-used complexity classes smaller than NEXP, is what I meant…

74. Scott Says:

Craig #72: Are you aware that, for more than a decade, this blog has been Fighting-QC-Hype-In-The-Popular-Press-And-Trying-to-Replace-It-With-Nuanced-Understanding Central?

I’m always open to ideas for how we could do a better job of that, which don’t involve blithely asserting that, basically, God will descend from heaven to tell us that 2^60 is too much processing power for the classical computer simulating our quantum universe, so we can’t have 60-qubit QCs, and it won’t even be interesting or surprising when that happens.

In a strange way, your position mirrors that of the popular-magazine hypers. For you both accept the idea that QCs would be preposterous miracle machines; you differ only in the detail of whether the miracle will be realized or not. Neither extreme really wants to engage with the current frontier of scientific understanding about what QCs would and, just as importantly, wouldn’t be able to do, though the picture that emerges from that understanding is much more, shall we say, complex (though not quaternionic, and without loss of generality real).

75. Sniffnoy Says:

…and I got my containment backwards, obviously meaning “we probably shouldn’t expect to see a proof of NP not contained in ACC anytime soon”… oy. So many flubs today…

76. Craig Says:

Scott #74,

I have learned a lot from this blog, and you have done a great public service in explaining to people like myself the nuances of QC. Before reading your explanations and the explanations of other experts, I had believed that QC is impossible in basically the same way that Gil Kalai believes that it is impossible, because of the noise problem. But my understanding of QC has evolved since then, in part because of your writings.

My first comment “Because you are no longer talking to the press, the press is now turning against QC.” was a joke. It was in no way a critique of your blog or you.

As for my position that it won’t even be interesting or surprising when it happens that scientists discover that they can go no farther than building a Q qubit machine (where probably Q=32 or 64), I take that back. Who knows where this knowledge will lead us?

But I doubt that God is going to descend from heaven and tell us that 2^{Q+1} is too much processing power for the computer to handle. This knowledge is going to come from lots of failed attempts by human beings to build QCs with Q+1 qubits, just as it took lots of failed attempts to build perpetual motion machines before people finally accepted that they are impossible.
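(For scale, the brute-force cost that Craig’s picture attributes to Nature’s classical computer is easy to tabulate: a Q-qubit state vector holds 2^Q complex amplitudes. The back-of-envelope sketch below assumes 16-byte complex doubles; it says nothing about whether Nature actually pays this cost, only what a naive classical simulation would.)

```python
# Memory for a brute-force statevector simulation of Q qubits:
# 2^Q complex amplitudes at 16 bytes each (complex128).
for q in (20, 32, 50, 64):
    gib = (2 ** q) * 16 / 2 ** 30
    print(f"{q} qubits: {gib:.6g} GiB")
# 32 qubits fits on a big workstation (64 GiB); 50 qubits needs ~16 PiB;
# 64 qubits needs ~256 EiB, far beyond all storage ever manufactured.
```

The doubling per qubit is the whole point: each step from Q to Q+1 qubits doubles the naive classical cost, which is why the 50-60 qubit range keeps coming up in these supremacy discussions.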

77. Craig Says:

It is interesting that according to https://en.wikipedia.org/wiki/Perpetual_motion

Leonardo da Vinci said in 1494, “Oh ye seekers after perpetual motion, how many vain chimeras have you pursued? Go and take your place with the alchemists.”

This was way before the laws of thermodynamics were discovered. They didn’t need a theory to know it was impossible, although it is still nice to have a theory.

Is it similar for QC? The answer is blowing in the wind.

78. Scott Says:

Craig #77: Right, and Lord Kelvin said that heavier-than-air flying machines are impossible. In the absence of a theory, eminent people will declare all sorts of technologies that don’t yet exist to be either possible or impossible, and you’ll have no way to tell who’s right and who’s wrong. That’s why we’ve learned to rely on our best current physical theories, such as thermodynamics, relativity, and quantum mechanics.