## In which I answer more quantum computing questions

Yesterday, I had fun doing an open-ended Q&A at the Astral Codex Ten weekly online meetup. See here for the YouTube video. The questions were mainly about quantum computing, but ranged over various other topics as well, including education policy, the Great Stagnation, and what my biggest disagreements with Scott Alexander are.

In other news, last week I gave my talk about quantum supremacy at the (virtual, of course) Quantum Science Seminar, organized by Sebastian Blatt and others. See here for the YouTube video. Since I (alas) needed to leave after an hour sharp, the organizers asked if they could send me additional questions in writing and have me answer them. I said sure, as long as I could also post the Q&A on this blog! So without further ado…

Q: As you said, in computer science it’s difficult to prove that things are hard. From a computer science perspective, to what extent is “quantum supremacy” something absolute, and to what extent is it a shifting bar that depends on how good our classical hardware and algorithms are?

SCOTT: It’s kind of like the question “can computers beat the best humans at chess?” The latter question also involves a non-absolute shifting bar – maybe humans are getting better! Or maybe they simply haven’t figured out how to beat the computers yet! So how can we say “computer supremacy” has been decisively achieved? Nevertheless, one can clearly see a phase transition, with Deep Blue in 1996-1997, where the burden of proof shifted massively from one side to the other, and has stayed there ever since. Even if humans are getting better, the computers have also gotten better, and at such an insane rate that humans seem to have no chance of ever again catching up.

From a theoretical standpoint, that’s exactly what we might expect to happen with quantum supremacy experiments – simply because they involve tasks that have polynomial scaling for quantum computers, but (as far as we know) exponential scaling for classical computers, where each additional qubit roughly doubles the classical resources required. In analyzing concrete quantum supremacy experiments, I’d say that the goal is not to pin down the exact factor by which they’re beating this or that classical simulation (the answers will change rapidly anyway, and depend on all sorts of low-level details), but simply to figure out whether or not we’ve entered that phase transition.
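To make the doubling concrete, here's a back-of-the-envelope sketch (not tied to any particular simulator) of the memory a brute-force classical simulation needs just to store an n-qubit state vector, assuming 16 bytes per complex amplitude:

```python
# Brute-force classical simulation must store 2**n complex amplitudes.
# At 16 bytes per amplitude (double-precision complex), the memory
# doubles with every added qubit.
def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 40, 50, 53):
    print(f"{n} qubits: {statevector_bytes(n) / 2**30:,.0f} GiB")
# 30 qubits fit on a laptop (16 GiB); 53 qubits need on the order of 128 PiB.
```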

Q: What do you generally think about improvements in tensor network methods as a challenge to quantum supremacy and the recent simulation of the supremacy result in Beijing?

SCOTT: I blogged about this here. It was a clever paper, which showed that if you focus narrowly on spoofing the linear cross-entropy benchmark used by Google, then there’s a classical algorithm to do that much faster than had been previously pointed out, by generating many samples that all share most of their bits in common (e.g., that all agree on the first 30 of the 53 bits). But many people remain unaware that, if you just changed the benchmark – for example, if you insisted that the returned samples not only pass the linear cross-entropy benchmark, but also be sufficiently different – then this classical spoofing strategy wouldn’t work and we’d be back to the previous status quo.
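For readers who haven't seen it, the linear cross-entropy benchmark is simple to state: take the ideal probabilities p(x_i) of the samples the device actually returned, and compute 2^n times their mean, minus 1. Here's a toy sketch, with a synthetic Porter-Thomas-like (exponentially distributed) set of probabilities standing in for a real circuit's output distribution:

```python
import numpy as np

def linear_xeb(ideal_probs, samples, n_qubits):
    """Linear cross-entropy benchmark: 2^n times the mean ideal
    probability of the returned samples, minus 1.  Uniform random
    guessing scores ~0; sampling from the ideal (Porter-Thomas-shaped)
    distribution scores ~1."""
    return (2 ** n_qubits) * ideal_probs[samples].mean() - 1

rng = np.random.default_rng(0)
n = 12
probs = rng.exponential(size=2 ** n)   # Porter-Thomas-like stand-in
probs /= probs.sum()

uniform_samples = rng.integers(0, 2 ** n, size=20000)
ideal_samples = rng.choice(2 ** n, size=20000, p=probs)
print(linear_xeb(probs, uniform_samples, n))   # ≈ 0
print(linear_xeb(probs, ideal_samples, n))     # ≈ 1
```

The point of the spoofing paper was that a classical algorithm can score well on this statistic without sampling honestly, which is why adding a "samples must be sufficiently different" condition changes the picture.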

Q: Why do you need a random circuit to test the device, and also why is this more interesting/useful than doing something very specific many times to test the device?

SCOTT: The reasons to use random quantum circuits are simply that
(1) they generate complicated entangled states on all the qubits nearly as rapidly as it’s possible to do so – indeed, we now have theoretical results that give a detailed understanding of this process, and
(2) random circuits seem to have about as little “usable structure” (which a classical simulation might exploit) as it’s possible to have.
Eventually, of course, we’d like to run actually useful circuits (say, those that arise in Shor’s factoring algorithm), which will typically have regular patterns and be extremely far from random! But then more qubits will be needed to get an advantage over classical computers. It’s not terribly surprising for the first advantage over classical to be via random quantum circuits, which in some sense “maximally exploit” the hardware resources available.
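As a rough illustration of point (1), here's a minimal, deliberately naive statevector simulation of a random circuit: Haar-random single-qubit gates interleaved with a brick pattern of CZ gates. This is not Google's actual gate set; it's only meant to show how quickly a random circuit spreads amplitude over all 2^n basis states:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_su2():
    """Haar-random 2x2 unitary via QR decomposition of a Gaussian matrix."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))   # fix column phases for Haar measure

def apply_1q(state, u, q, n):
    """Apply a single-qubit unitary u to qubit q of an n-qubit state."""
    state = state.reshape([2] * n)
    state = np.tensordot(u, state, axes=([1], [q]))
    return np.moveaxis(state, 0, q).reshape(-1)

def apply_cz(state, q1, q2, n):
    """Apply a controlled-Z: flip the sign where both qubits are 1."""
    state = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    state[tuple(idx)] *= -1
    return state.reshape(-1)

n, depth = 10, 20
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0
for layer in range(depth):
    for q in range(n):
        state = apply_1q(state, random_su2(), q, n)
    for q in range(layer % 2, n - 1, 2):   # alternating "brick" pattern
        state = apply_cz(state, q, q + 1, n)

probs = np.abs(state) ** 2
# After a few layers, no single outcome dominates: the largest probability
# is only a small multiple of the uniform value 1/2^n (Porter-Thomas-like).
print(probs.max(), 1 / 2 ** n)
```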

Q: Have you heard of/what do you think about the recent NP-verification experiment using a quantum optical setup from Paris?

SCOTT: That experiment was actually based on a protocol that Beigi, Fefferman, Drucker, Shor, and I proposed back in 2008. It’s crucial for people to understand that there’s no claimed speedup here for solving any NP-complete problem. We’re talking about a more arcane task: namely, proving to someone that an NP-complete problem has a solution by sending them a small number of qubits. Even there, the protocol depends on the ability to send two quantum states that are guaranteed not to be entangled with each other; also, the communication savings is “only” polynomial rather than exponential (in our original protocol, roughly √n qubits where n classical bits would have been needed). Nevertheless, it’s always fun to see a real experiment implementing some version of something that you worked out on paper, even if you already knew it would work!

Q: If the difficulty for classical simulation is related to the Hilbert space dimension, has it been formally proven that an ideal analog classical computer cannot outperform a quantum computer?

SCOTT: This is not a question of “formal proof” but of physics. In my view, we already know deep reasons why, in our universe, analog classical computers are unlikely to be able to do anything that can’t be efficiently simulated using a standard digital computer. (In other words, why analog classical computers don’t violate the “Extended Church-Turing Thesis.”) Those reasons have to do with nonlinear dynamics chaotically amplifying even the tiniest errors in an analog device. Or, if you really want to push this discussion to the bitter end, they have to do with the breakdown in our picture of a smooth spacetime that’s expected to occur at the Planck scale, of ~10^-33 centimeters and ~10^-43 seconds, for reasons of black hole thermodynamics and quantum gravity. Crucially, neither of these issues applies to quantum computation. The former doesn’t apply because of quantum error correction and fault-tolerance, which have the effect of “discretizing” continuous errors; while the latter doesn’t apply because, as far as anyone knows today, quantum mechanics (unlike theories that assume a smooth spacetime) is exactly true.
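The chaotic amplification of analog errors can be seen in even the simplest nonlinear system. Here's a generic toy example (not any specific analog-computing proposal): iterate the logistic map from two starting points that differ by one part in 10^12, and watch the trajectories completely decorrelate within a few dozen steps.

```python
# Two trajectories of the chaotic logistic map x -> 4x(1-x), starting
# a mere 10^-12 apart.  The gap roughly doubles each step, so within
# a few dozen iterations the trajectories bear no relation to each other.
x, y = 0.3, 0.3 + 1e-12
max_gap = 0.0
for _ in range(100):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))
print(max_gap)   # an order-1 separation, from a 10^-12 initial error
```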

Q: [In the comparison between quantum and classical computation], what do we mean by “classical resources” here? Do we mean something parochial like “resources obeying non-quantum laws that are available to us humans on Earth?” Or is any classical resource fair game, no matter its size and classical equations of motion? In that case, doesn’t demonstrating quantum supremacy require demonstrating that the quantum computer exceeds the capabilities of, say, a classical computer the size of the solar system that exploits some CTC (closed timelike curve)?

SCOTT: If CTCs were possible, that would be a revolution in physics even greater than quantum mechanics! Leaving CTCs aside, though, cosmology and quantum gravity seem to impose a limit of roughly 10^122 on the number of bits (and the number of operations on the bits) that any computer that fit inside the observable universe could possibly have. And if you envision that cosmological computer as a classical one, then it shouldn’t take impossibly long for us to build quantum computers that can outperform it on some tasks: indeed, a device with ~400 qubits should already be enough! But of course we’re not there yet: with 50-60 qubits, QCs right now are “merely” challenging the largest classical computers currently available on Earth (again, on contrived sampling tasks), rather than the largest that could fit in the observable universe.
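A quick sanity check on these numbers (pure arithmetic): 2^400 is about 10^120, i.e., right in the ballpark of the 10^122 bound, and a few additional qubits carry the amplitude count past it.

```python
from math import log10

# 2^n amplitudes vs. the ~10^122 cosmological budget of bits/operations:
print(400 * log10(2))          # ≈ 120.4, so 2^400 ≈ 10^120: the same ballpark
assert 2 ** 400 < 10 ** 122    # not quite over the line...
assert 2 ** 410 > 10 ** 122    # ...but a handful more qubits pushes past it
```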

Q: Why is a quantum state (superposition or entangled state) inherently more fragile than a classical state?

SCOTT: Because when information from the state (say, whether a qubit is 0 or 1, or which path a photon takes through a beamsplitter network) “leaks out” into the environment, the information effectively becomes entangled with the environment, which damages the state in a measurable way. Indeed, it now appears to an observer as a mixed state rather than a pure state, so that interference between the different components can no longer happen. This is a fundamental, justly-famous feature of quantum information that’s not shared by classical information.
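A two-line density-matrix calculation makes this concrete (a standard textbook example, sketched in numpy): full decoherence wipes out the off-diagonal terms of |+⟩⟨+|, and with them the interference that distinguishes |+⟩ from a fair classical coin.

```python
import numpy as np

# A qubit in the pure superposition |+> = (|0> + |1>)/sqrt(2):
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(plus, plus.conj())

# Full decoherence: the environment has learned the qubit's value,
# killing the off-diagonal "coherence" terms of the density matrix.
rho_mixed = np.diag(np.diag(rho_pure))

# Probability of finding |+> when measuring in the |+>/|-> basis:
p_pure = float(np.real(plus.conj() @ rho_pure @ plus))
p_mixed = float(np.real(plus.conj() @ rho_mixed @ plus))
print(round(p_pure, 6), round(p_mixed, 6))   # 1.0 0.5
```

The pure state yields |+⟩ with certainty; the decohered state behaves like a classical coin flip, which is exactly the measurable damage described above.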

Q: Given that we have noisy intermediate-scale quantum devices, what do you see as the fundamental role of noise in the discussion of quantum advantage or quantum supremacy? Is it simply a question of “less noise is better,” or are there things that cannot be done if there is noise?

SCOTT: There will always be some noise. The big question, about any given platform, is whether the noise is low enough that you can start usefully error-correcting it away, or whether the noise is so high that there’s no point (i.e., whether you’re above or below the “fault-tolerance threshold”). Until you start doing error-correction, most experts believe there’s a severe limit to how far you can scale: probably to quantum computations involving a few hundred qubits at most. Whereas once you have error-correction, at least in principle the sky’s the limit.
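A common back-of-the-envelope model (a heuristic with made-up constants, not the analysis of any specific code) captures the dichotomy: below threshold, increasing the code distance suppresses logical errors exponentially; above threshold, it actively makes them worse.

```python
# Heuristic surface-code-style scaling: logical error rate
# ~ A * (p / p_th)^((d+1)/2) for physical error rate p, threshold
# p_th, and code distance d.  (A and p_th are illustrative values.)
def logical_error(p, d, p_th=0.01, A=0.1):
    return A * (p / p_th) ** ((d + 1) // 2)

for d in (3, 11, 25):
    print(d, logical_error(0.001, d))   # p below threshold: shrinks fast

# p above threshold: adding qubits only amplifies the (nominal) error.
print(logical_error(0.03, 3), logical_error(0.03, 25))
```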

Q: You used the Wright brothers as an example where an airplane that was not practically useful itself pioneered the path to useful flight. In what sense do the sampling experiments of Google and USTC also pioneer that path for what we need for the future of quantum computing, or to what extent do you see them as niche examples of quantum supremacy?

SCOTT: I think the majority of what Google did, in integrating 53 superconducting qubits, making them programmable, etc. – and especially in characterizing the noise in a system at that scale – will be directly useful going forward. Indeed, that’s a large part of why they did it! Likewise, a lot of what USTC did, in integrating hundreds of beamsplitters, photon sources, and photodetectors, could be directly relevant to building a universal optical quantum computer. On the other hand, it’s true that both groups took shortcuts with the immediate goal of quantum supremacy in mind. As an example, Google’s chip uses a particular 2-qubit gate that was chosen, not because it shows up naturally in any application, but simply because it’s extra-hard to simulate using tensor network contraction algorithms, so it let them get to quantum supremacy faster than if they’d used a more conventional 2-qubit gate like the CNOT.

Q: To what extent does the measured circuit fidelity of 0.2% in the Google experiment limit the usability of this system for other computations?

SCOTT: Oh, we don’t know of anything particularly useful to do with Google’s Sycamore chip – that is, anything that you couldn’t do much more easily without it – other than
(1) quantum supremacy demonstrations,
(2) possibly the generation of cryptographically certified random bits, and
(3) of course, calibration experiments that tell you about the behavior of integrated superconducting qubits and thereby help you iterate to the next device.
But things are developing rapidly – the best circuit fidelity that was achievable in 2019 is not necessarily the best now, or the best that will be achieved in another year or two.

Update (May 25): Please, no new questions; just discussion of the existing questions and answers! I had hoped to get some work done today; I hadn’t planned on another ask-me-anything session. Thanks!

### 83 Responses to “In which I answer more quantum computing questions”

1. Ted Says:

How significant do you think is the recently published announcement of a 62-superconducting-qubit computer that demonstrates quantum walks (https://science.sciencemag.org/content/early/2021/05/05/science.abg7812)? This computer has more superconducting qubits than Sycamore, but is it really any more capable?

They claim that (unlike boson sampling) quantum random walks are capable of implementing universal quantum computing, which makes this announcement seem quite significant. But does the fact that they only demonstrated two-particle random walks mean that there were effectively really only two “active computational units”, in terms of the scaling of the computational power? (Please excuse the uninformed question; I know very little about quantum random walks.)

2. dankane Says:

What do you mean when you say that as far as we know quantum mechanics is exactly true? Are you claiming that we have a plausibly correct theory of quantum gravity? If not, how can any theory that does not predict gravity be exactly true?

3. Scott Says:

Ted #1: It looks cool! Anyone who knows more about this experiment than I do is welcome to chime in. In any case, there doesn’t seem to be any claim here to do anything that’s hard to simulate with a classical computer.

4. Scott Says:

dankane #2: Quantum mechanics is an abstract framework that (as far as we know) the rest of physics fits into. Roughly, this framework says: “the state of the world is a unit vector in a complex Hilbert space; it changes by unitary transformations; measurement probabilities are given by the Born rule; fill in the rest of the details yourself.”

QM doesn’t predict gravity—it doesn’t predict any of the specific forces or particles, for that matter—because that’s not part of its job description.

The relevant question is not whether QM predicts gravity, but simply whether it’s consistent with gravity, and allows for a quantum theory of gravity that describes our world. The overwhelming majority of physicists think “yes”—and certainly the progress on AdS/CFT over the past few decades is strongly consistent with that hope—while a few holdouts, like Roger Penrose, think “no.” I.e., the holdouts think that combining GR with QM will actually change QM itself—rewrite the ground rules of physics—rather than just revealing GR to be a low-energy approximation to an exactly quantum theory. But even the holdouts are well aware of how revolutionary such a discovery would be.

5. Peter Morgan Says:

Re: your answer to the fifth question, “If the difficulty for classical simulation is related to the Hilbert space dimension, …”.
I suppose that all the recorded output from a physical apparatus is classical, insofar as that record is just a collection of classical Gigabits on hard drives. From that record we construct, by various post-selections, a system of statistics that does not admit a model of joint probability distributions, so that only a noncommutative statistical model is possible. Some would call such models “quantum probability”. If we extend classical mechanics to include Gibbs statistical states and noncommutativity, however, using the transformations of phase space that are generated by the Poisson bracket, we can obtain a construction that is isomorphic to quantum mechanics insofar as it can be the algebra of bounded operators acting on a Hilbert space. [Not all is simple, because in this “CM+noncommutativity” the generator of timelike evolution, the Liouvillian operator, is not bounded below, so the whole structure contrasts with QM, for which the generator of timelike evolution, the Hamiltonian operator, *is* bounded below, but we can loosely think of the mathematics of QM giving us an analytic formalism for CM+noncommutativity.]
With the noncommutativity just described, together with details that would make this comment an article (or three: in Physica Scripta 2019, in Annals of Physics 2020, and a third currently only on arXiv), it seems that all the essential resources are available for us to be able to construct a classical model for the recorded output and to model all the algorithms we apply to that recorded output to construct incompatible statistics and a generalized probability model.
With that setup, sadly both overshort and overlong and toiling against the wisdom of many, I hope I can ask a question that will be cogent enough to merit a reply: “Granted that ‘analog classical computers’ are not the same as quantum computers, must we also dismiss ‘*noisy+noncommutative* analog classical computation’ as a way to model the apparatus and the algorithms applied to the output that is recorded by the apparatus?”

6. Mg Says:

noob question ahead: Isn’t it a little bit uncomfortable to include complex numbers in the state of the universe? Since they are uncountable, it would mean that if “God’s [classical?] computer” was to be able to store any logically possible state of the universe it would have to be a very heavenly kind of computer, storing an infinite amount of information. Would it cause any issues (except perhaps being silly and pointless from all practical points of view) to use, say, the algebraic numbers (based on glancing at some QC texts, the complex rationals won’t do)?

7. Eric Neyman Says:

Hi Scott, I was the one who asked about Aumann’s agreement theorem on Sunday. We were only allowed 300 characters for questions, so I asked a different question from the one I wanted to ask. If I may, here’s what I wanted to ask:

Wei Dai writes: “There are some papers that describe ways to achieve agreement in other ways, such as iterative exchange of posterior probabilities. But in such methods, the agents aren’t just moving closer to each other’s beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information.”

I think this is true if no assumptions are made about the information structure, and is perhaps captured by the computational complexity of the agreement protocol (even with the oracles you give in your paper, the computation required is exponential in 1/epsilon and 1/delta). On the other hand, in real-life situations where two people have different information about the world, there’s usually some sort of nice structure. Like probably it’s in general the case that there’s a nice “summary” of each person’s information, where exchanging summaries is sufficient for agreement. That’s just an intuition.

So when I asked whether Aumannian conversations are applicable to real life, what I meant was: is there an agreement protocol that is efficient “in the average case over real-life information structures”? (So, “real life” was intended to apply to information structures, not the people involved.)

A formal version of this question would be something like “Is there a natural, not overly restrictive assumption that can be made about the information structure, such that there is a polynomial-time protocol for coming to an agreement?”. But I imagine you don’t have an answer to that offhand, so I was more hoping to get your intuition about the informal question.

Thanks!

8. Tamás V Says:

Scott #4: In QM, time (and the amplitudes) are continuous (e.g. in the Schrödinger equation). If it turns out that time is discrete, will that mean that QM is not exactly true?

9. Raoul Ohio Says:

Mg #06:

It is not entirely clear how real numbers, or any uncountable set, fits into the “real” universe. There is a hierarchy of number systems, each more esoteric than the last, and further from the real world:

N — the naturals (counting numbers, positive whole numbers): OK, this seems easy.

Z — the integers. OK, nice to be able to solve 2 + x = 1.

Q — the rationals. OK, nice to be able to solve 2 * x = 1.

A — algebraic. OK, nice to be able to solve x * x = 2.

Any element of Z, Q, or A can be represented with finitely many elements of N.

R — the reals. nice to be able to solve 2 ** x = 3 (sorry for Fortran notation). But what really is the solution x = lg(3)? Is it something in the real world, or just the solution of an equation? Does it matter? I dunno.

There are two usual definitions for R (Dedekind cuts, Cauchy sequences), both of which involve uncountable sets. These definitions are so you have somewhere to start if you want to prove theorems in calculus. They both seem reasonable, and turn out to be equivalent, so there’s that. Would alien civilizations do it the same? My guess is yes, but I hope to ask. I did ask at the UFO museum in Roswell NM, but all they knew about was crop circles.

One of the few things you can say about R is that, whatever it is, it is uncountable.

10. Shmi Says:

Scott, not quite quantum computing, but… what do you think of this 1981 Don Page paper https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.47.979 (https://sci-hub.se/10.1103/PhysRevLett.47.979) that purports to show that quantum mechanics is incompatible with semiclassical gravity (by demonstrating that gravitational force does not point “between” two positions of the same mass in two different Everett branches)?

11. LK2 Says:

Scott #4,

what do you mean with “progress in AdS/CFT”?
As far as measurements are concerned, we are not living in an AdS universe (maybe dS… maybe… we’ll see as dark energy and other cosmological measurements improve in the coming years).

12. Chris Says:

Re your answer to the penultimate question: Doesn’t the 2-qubit gate used in the Sycamore chip arise fairly naturally in chemistry applications?

13. Scott Says:

Peter Morgan #5: Alas, your question remains as incomprehensible to me as all your other questions here over the years—presupposing, as it does, detailed knowledge about some specific formalism (which I don’t know) to replace QM (which I do know).

14. Scott Says:

Mg #6:

noob question ahead: Isn’t it a little bit uncomfortable to include complex numbers in the state of the universe?

It seems no more uncomfortable than including real numbers, which we’ve done since … I dunno, Newton? Archimedes? But, OK, real numbers seem pretty uncomfortable too, if (for example) the universe needed infinitely many bits to “store” those reals.

But it’s crucial to remember that in QM, the complex numbers don’t show up as observable quantities (like positions or momenta), but as amplitudes, which are not directly observable, and which are used only to calculate probabilities. So, are you kept up at night by the fact that probabilities could be arbitrary real numbers (so that, for example, it makes sense to talk about a needle that lands a certain way with probability 2/π)? If you’re not worried, then it turns out that basically any mathematical or technical reason you could give for why you’re not worried, generalizes to a reason why you shouldn’t worry about amplitudes being complex numbers either.
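(In case you're curious where a probability of exactly 2/π can come from: one classic source is Buffon's needle. Here's a quick Monte Carlo sketch, dropping a unit-length needle on a floor ruled with lines spaced one unit apart.)

```python
import math
import random

# Buffon's needle: a needle of length 1, dropped at random on parallel
# lines spaced 1 apart, crosses a line with probability exactly 2/pi.
random.seed(1)
trials, crossings = 200_000, 0
for _ in range(trials):
    y = random.uniform(0, 0.5)               # needle center's distance to the nearest line
    theta = random.uniform(0, math.pi / 2)   # needle's angle to the lines
    if y <= 0.5 * math.sin(theta):           # does the half-needle reach the line?
        crossings += 1
print(crossings / trials, 2 / math.pi)
```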

Would it cause any issues (except perhaps being silly and pointless from all practical points of view) to use, say, the algebraic numbers (based on glancing at some QC texts, the complex rationals won’t do)?

Algebraic numbers are not a convenient choice, because we often want to talk about states like cos(θ)|0⟩+sin(θ)|1⟩, for various rotation angles θ—but of course, even if θ is a “nice number,” cos(θ) and sin(θ) could be non-algebraic.

But, OK, it wouldn’t be hard to find some countable subset of complex numbers that suffices for any conceivable application to QM—if nothing else, then the computable ones (one could then even do computable analysis on them). The reason people don’t do this is the one you pointed out: you pay a big price in simplicity for no benefit other than perhaps a metaphysical one, which is basically never a trade that working scientists will make.

15. Scott Says:

Eric Neyman #7: Thanks for clarifying—that’s a wonderful question! But the answer isn’t a blog comment; the answer would be a new research paper. 😀

As a first observation, there’s nothing in my existing protocol that requires resources that scale exponentially with n, the number of bits known by Alice and Bob (which is the usual parameter that we’d care about in CS theory). At least assuming the oracles that sample from the common prior and so forth (admittedly, those are probably unrealistic), the only exponential dependence is on 1/ε and 1/δ. But, OK, are there practically-important special cases, or other natural computational models, where that exponential dependence can be replaced by a polynomial dependence? Probably! That’s a great research project for some student reading this.

(In general, it’s disappointed me that in 17 years there’s been almost zero followup to my agreement paper … I hope that will change!)

16. Scott Says:

Tamás V #8:

In QM, time (and the amplitudes) are continuous (e.g. in the Schrödinger equation). If it turns out that time is discrete, will that mean that QM is not exactly true?

No. QM is totally, 100% fine with discrete time—that just means we talk about unitary transformations (e.g. “gates,” in quantum computing) rather than Hamiltonians.

By contrast, if the amplitudes were restricted to some discrete subset, that would mean QM was not exactly true.

17. Scott Says:

Shmi #10:

what do you think of this 1981 Don Page paper … that purports to show that quantum mechanics is incompatible with semiclassical gravity …?

Oh, of course QM is incompatible with semiclassical gravity! After all, the word “semiclassical” just means “not fully quantum.” 🙂 There may have been people 40 years ago who naïvely thought you could paper over the differences, but I doubt anyone today still disputes the incompatibility. The modern answer, of course, would be that “semiclassical gravity” is only ever an approximation to a truly quantum theory of gravity, something that (as you might have heard…) is still being worked on.

18. Scott Says:

LK2 #11:

what do you mean with “progress in AdS/CFT”?

I mean the progress that’s been made in understanding more and more about quantum gravity in AdS universes (evaporating black holes, wormholes, etc. etc.), in terms of the CFT on the boundary. The physicists who work on this topic are very well aware that we don’t live in AdS. 🙂 Their argument is that we now have an “existence proof”—as we didn’t a few decades ago—that a quantum theory of gravity for universes that seem just as complicated as our universe (albeit different in detail) is actually possible, and that it entails no change to the basic rules of QM.

19. Scott Says:

Chris #12:

Doesn’t the 2-qubit gate used in the Sycamore chip arise fairly naturally in chemistry applications?

Does it? Do you have a reference for that?

20. Scott Says:

Everyone: Please, no new questions; just discussion of the existing questions and answers! I had hoped to get some work done today; I hadn’t planned on another ask-me-anything session. Thanks! 🙂

21. Chris Says:

Scott #19:

There’s some discussion of this in the supplementary material of the supremacy paper. See e.g. the discussion on page 15 or on page 24. These gates are even referred to as ‘fSim’ gates, short for fermionic simulation gates (more on this here). Of course they are parameterised, the parameters chosen/fixed for the supremacy experiment aren’t necessarily directly useful, and one still needs to apply some single-qubit gates. Maybe I misunderstood and this is what you were saying?

22. Scott Says:

Chris #21: Thanks, I’d had no idea! (Probably because neither Sergio Boixo nor John Martinis ever mentioned it to me in conversation.) In any case, while the gate might have some uses in fermionic simulations, I think it’s safe to say that that’s not why they chose it: they chose it because they wanted to get to quantum supremacy.

23. bertgoz Says:

Hi Scott, so sorry I missed the AMA… I have a burning question that I have been meaning to ask for a long time, I’ll drop it here in case you feel generous. In complexity theory, why do you only consider time and space as resources? Is there a complexity class based on usage of entanglement as a resource? Or energy?

24. Craig Gidney Says:

Chris #12:

> Doesn’t the 2-qubit gate used in the Sycamore chip arise fairly naturally in chemistry applications?

You might be mixing up the Sycamore gate with the family of gates it comes from (the FSIM gates; gates covering ISWAP^a * CZ^b). It’s been shown that the beyond-classical hardware can do the full FSIM set ( https://arxiv.org/abs/2001.08343 ) but in the beyond-classical experiment only gates near the Sycamore gate were actually used.

FSIM gates are excitation-preserving interactions. This type of gate is natural for chemistry simulations that associate qubit excitations with electrons, since chemistry preserves electrons. But if you only have one or two members from the family, you will still need to decompose general excitation-preserving interactions into multiple gates. Not accounting for differing error rates, I think decomposing into Sycamore is slightly worse than decomposing into CNOT because with Sycamore you might need 4 whereas with CNOT you only ever need 3.

25. Scott Says:

bertgoz #23: In some sense, BQP is “the complexity class that captures the power of entanglement” (as exploited by quantum computation), just like BPP is “the class that captures the power of randomness.” There have been sporadic attempts to construct a complexity theory of energy; the trouble is that any model will be tied to a particular technology, since the celebrated work of Bennett, Landauer, et al tells us that any computation can in principle be done with arbitrarily small energy expenditure.

In short, complexity theory can and does consider dozens of resources besides just time and space, including both of the ones you mentioned. On the other hand, complexity theory has a very hard time studying a resource (say, “energy” or “entanglement”) in isolation from everything else. What it can do is to compare the power of models of computation, where defining such a model might involve making choices about multiple resources simultaneously.

26. Scott P. Says:

Nevertheless, one can clearly see a phase transition, with Deep Blue in 1996-1997, where the burden of proof shifted massively from one side to the other, and has stayed there ever since.

I’ll just point out that Deep Blue blatantly cheated in its contest with Kasparov. They gave it a record of all of Kasparov’s matches, but they denied Kasparov access to any of its code or even representative games.

27. Amazing: Feng Pan and Pan Zhang Announced a Way to “Spoof” (Classically Simulate) the Google’s Quantum Supremacy Circuit! | Combinatorics and more Says:

[…] (May, 25, 2021): Scott referred again to Pan and Zhang’s paper in a recent post largely repeating his preliminary thoughts on the […]

28. fred Says:

Scott #4
Roughly, this framework says: “the state of the world is a unit vector in a complex Hilbert space; it changes by unitary transformations; measurement probabilities are given by the Born rule; fill in the rest of the details yourself.”

I think that “measurement” is still pretty undefined.
In the context of a QC, does measuring the state of one qubit at one end of the QC instantly collapse the wave function throughout the entire QC at every qubit? Or does that collapse itself propagate like a wave?
I get that you would probably only measure a few output qubits once the state of the circuit has “stabilized,” but since noise itself is decoherence (entanglement with the non-quantum part of the machine), does noise at one end of the machine instantly affect all other parts of the circuit, or is there a finite-speed propagation? In other words, is the transition from pure state to mixed state instantaneous throughout the entire machine?

29. bertgoz Says:

Scott #25: thank you so much for your answer, really a highlight of my day!
What is so special about space and time that allows us to consider them in isolation from other resources? And could that special characteristic also be present in any other resource (even if it is not currently known)?

30. Eric Neyman Says:

Scott #15: thanks for the thoughts!

“(In general, it’s disappointed me that in 17 years there’s been almost zero followup to my agreement paper … I hope that will change!)”

No promises, but that might change soon 🙂

31. Scott Says:

bertgoz #29: We don’t consider space and time in isolation. P and PSPACE, for example, are conventionally defined as what you can do with polynomial time and polynomial space respectively, on a digital deterministic Turing machine, with 1 parallel processor, 0 random bits, 0 qubits, 0 nonuniform advice bits, and other details that would become salient only if we compared P and PSPACE to other more obscure complexity classes. It’s just that we don’t normally spell all this out, due to the historical accident that deterministic digital Turing machines came first (so all the other stuff gets defined with reference to them), and also P and especially PSPACE happen to be robust to many common modeling choices (for example, PSPACE = NPSPACE = BPPSPACE = BQPSPACE).

32. Scott Says:

Discussion with my wife Dana yielded the following additional insight. If we wanted to pinpoint the “most basic” computational resource—more basic than space or even time—it would simply be the number of elementary steps (for some given notion of “elementary step”—say, classical or quantum gates). This notion, of course, coincides with time for serial computation, but parallel computation breaks the equivalence. And the way we currently build our computers, it’s an excellent proxy for energy use (what’s most relevant to Bitcoin mining, for example).

33. Scott Says:

Scott P. #26:

I’ll just point out that Deep Blue blatantly cheated in its contest with Kasparov. They gave it a record of all of Kasparov’s matches, but they denied Kasparov access to any of its code or even representative games.

I think that actually makes my analogy better rather than worse! Yes, you can argue the details of Kasparov vs Deep Blue, just as people today argue the details of Google’s supremacy experiment vs classical simulations. Clearly IBM then, like Google now, was motivated by the desire to win and be first, not just by fair-minded scientific curiosity. And yet for all that, we can now say that a phase transition in human vs machine chess supremacy really did happen around that time—if it wasn’t Kasparov vs Deep Blue, it would’ve been something else a couple years later—and I expect the same to be true with quantum supremacy.

34. James Gallagher Says:

Scott #16

QM is totally, 100% fine with discrete time—that just means we talk about unitary transformations (e.g. “gates,” in quantum computing) rather than Hamiltonians.

Well, I wouldn’t say 100%. For example, the path integral approach is equivalent to the Schrödinger equation, but the proof requires continuous time:

Relation between Schrödinger’s equation and the path integral formulation

35. Scott Says:

James Gallagher #34: No, once you’ve reinterpreted “the Schrödinger equation” to mean “states evolve by a sequence U1,U2,… of unitary transformations,” and you’ve reinterpreted “the path integral” to mean “you calculate the final amplitude of each basis state |x⟩ by summing over all the paths that terminate at |x⟩”—so that they both make sense at all in the discrete-time setting—the two views are then easily shown to be equivalent in that setting as well.

36. James Gallagher Says:

Scott #35

I don’t think that’s true: you have to assume continuous time between each U1, U2, U3, … for the path integral even to get from Un to Un+1.

That’s how Feynman originally came up with the idea, from Dirac’s rough Lagrangian formulation.

Read Feynman’s Nobel lecture; he explains it pretty well:

https://www.nobelprize.org/prizes/physics/1965/feynman/lecture/

37. ppnl Says:

Fred #28,

“In the context of a QC, does measuring the state of one qubit at one end of the QC instantly collapse the wave function throughout the entire QC at every qubit?”

Yes, the collapse is instantaneous. This is just Bell’s inequality in action; the apparent non-locality was demonstrated experimentally. And this is where quantum computers get their power.

But you have to be careful here. Those quantum states of entanglement and superposition are not directly observable. They are not “be-able” states, in that you cannot observe things being in those states. You can only tell something strange happened by comparing all the observed be-able states after the collapse.

In the most important sense, QM is not non-local, because no information can be transmitted from end to end. As a metaphor, consider how a shadow can travel faster than light but cannot be used to transmit information faster than light.
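The two halves of this answer, instantaneous collapse of the joint state but no usable signal, can be illustrated with a toy Bell-pair simulation (a sketch in plain Python; the state is just a map from basis strings to amplitudes, not any real QC API):

```python
import math, random

random.seed(0)  # deterministic toy run

# Bell state (|00> + |11>)/sqrt(2), as a map from basis strings to amplitudes.
bell = {"00": 1 / math.sqrt(2), "11": 1 / math.sqrt(2)}

def measure_qubit(state, i):
    """Born rule: sample an outcome for qubit i, then renormalize the
    surviving branch -- the 'collapse' applies to the whole joint state."""
    p1 = sum(abs(a) ** 2 for s, a in state.items() if s[i] == "1")
    outcome = "1" if random.random() < p1 else "0"
    branch = {s: a for s, a in state.items() if s[i] == outcome}
    norm = math.sqrt(sum(abs(a) ** 2 for a in branch.values()))
    return outcome, {s: a / norm for s, a in branch.items()}

# Perfect correlation: measuring qubit 0 fixes qubit 1's outcome instantly...
outcomes = []
for _ in range(1000):
    o0, post = measure_qubit(bell, 0)
    o1, _ = measure_qubit(post, 1)
    assert o0 == o1                      # the two outcomes always agree
    outcomes.append(o0)

# ...yet qubit 1 on its own still looks like a fair coin (no signaling).
frac_ones = outcomes.count("1") / len(outcomes)
assert 0.4 < frac_ones < 0.6
```

The correlations only show up when the two measurement records are compared, which is exactly why no information travels faster than light.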

38. Scott Says:

James Gallagher #36: Thanks; your link to Feynman’s Nobel lecture led to an enjoyable two hours of reading!

But to say it one more time: there is a discrete-time analog of Schrödinger evolution, namely quantum circuits made up of unitary gates U1,…,UT. There is also a discrete-time analog of the Feynman path integral, namely the calculation of an amplitude ⟨x|U1…UT|y⟩ by a sum over exp(T) different paths that start at |x⟩ and end at |y⟩. And I can prove directly that those two formulations of QM are equivalent—indeed, the second one is just what you get by multiplying together the unitary matrices U1,…,UT and then expanding out the product as a multilinear polynomial in the entries. This is, for example, how Bernstein and Vazirani proved in 1993 that BQP is contained in P#P. I don’t claim that this captures all of Feynman’s achievement with the path integral, but it does capture one little piece of his achievement. It’s not hard to do, and it relies neither on a continuity assumption nor on anything that I need to take on faith. That’s how I know I’m right.
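The equivalence is easy to check numerically. A minimal sketch in plain Python (the three gates, Hadamard–S–Hadamard, are an arbitrary illustrative choice): the (x, y) entry of the matrix product equals the sum over all paths of products of one gate entry per step:

```python
import itertools, math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Three single-qubit gates: Hadamard, phase gate S, Hadamard (all unitary).
s2 = 1 / math.sqrt(2)
H = [[s2, s2], [s2, -s2]]
S = [[1, 0], [0, 1j]]
gates = [H, S, H]

# View 1: multiply the unitaries, read off the (x, y) entry.
M = gates[0]
for U in gates[1:]:
    M = matmul(M, U)

# View 2: Feynman-style sum over all paths x -> i1 -> i2 -> y,
# multiplying one gate entry per step along each path.
def path_sum(x, y):
    total = 0
    for mid in itertools.product(range(2), repeat=len(gates) - 1):
        ks = (x, *mid, y)
        amp = 1
        for U, (a, b) in zip(gates, zip(ks, ks[1:])):
            amp *= U[a][b]
        total += amp
    return total

# The two views agree entry by entry.
for x in range(2):
    for y in range(2):
        assert abs(M[x][y] - path_sum(x, y)) < 1e-12
```

With T gates on n qubits the path sum ranges over exponentially many paths, which is exactly the exp(T)-term expansion mentioned above.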

39. fred Says:

ppnl #37

So if the internal state is un-observable, it’s like in the double slit experiment: we don’t worry about the details of how the electrons propagate between the emitter gun and the screen (we only observe the final diffraction pattern on the screen)?

But quantum circuits are typically illustrated very similarly to idealized classical electronic circuits, i.e. wires connecting gates. Don’t quantum circuit designers still have to worry about how “signals” propagate between the gates? Don’t they have to make sure the inputs to each gate are somewhat synchronous (because they’re bound by the speed of light, etc.)?
In the case of classical digital circuits, the state isn’t steady, and there’s a clock that synchronizes the progression of signals from gate to gate, because at a high level it’s implementing a Turing machine. Maybe that’s just not the case with a QC?

40. fred Says:

Scott #32

“And the way we currently build our computers, it’s an excellent proxy for energy use (what’s most relevant to Bitcoin mining, for example).”

I was wondering about the problem of “wasted” energy in bitcoin mining.
In theory, the energy cost of computation can be arbitrarily low, although there’s a limit: at the very bottom you have Landauer’s limit, which, expressed in terms of switching 1 billion bits per second, comes to only about 2.8 trillionths of a watt.
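That figure is easy to verify: a back-of-the-envelope sketch, assuming room temperature of 300 K (at a slightly lower temperature the result matches the quoted 2.8 trillionths of a watt):

```python
import math

# Landauer's bound: erasing one bit costs at least k*T*ln(2) joules.
k_B = 1.380649e-23   # Boltzmann constant in J/K (exact in SI)
T = 300.0            # assumed room temperature in kelvin

energy_per_bit = k_B * T * math.log(2)   # about 2.87e-21 J
power = energy_per_bit * 1e9             # switching 1e9 bits per second

print(f"{power:.2e} W")                  # prints "2.87e-12 W"
assert 2.5e-12 < power < 3.0e-12         # a few trillionths of a watt
```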

Of course if we were to reach such low power, the only thing limiting bitcoin mining would be the cost of producing the processors.
So eventually the cost of processors will be tied closely to cryptocurrency mining?
Otherwise companies like NVidia would stop selling processors and keep them to just do bitcoin mining? 😛
Maybe the argument breaks because of the inherent volatility of bitcoin value.

41. P Says:

I’d like to expand the chess analogy, to try to get at a concern I have about all the talk on supremacy. (I apologize for the long lead-up.)

In chess there is a single goal—win the game. There is also a standardized measure of chess proficiency, the Elo rating system. As a historical note, the Elo ratings of computer programs made steady progress over the years, with a few big jumps. A nice video is here: https://www.youtube.com/watch?v=wljgxS7tZVE

Now, there are certainly subgoals in chess, also with standardized measures. For instance, we can ask for endgame proficiency, and this too is easy to measure. Moreover, while grandmasters are near the top of the charts at endgames, there are certain individuals who surpass their skills. That all said, there now exists a 7-man endgame database, which (better than any human or chess program without access to the database) wins (speedwise) at this super-specialized task (of winning 7-man endgames), simply by looking up the best move.

Similarly, with computers there is also a single major goal—do arbitrary computable processes quickly. But that is not (yet) what quantum supremacy is about. Rather, as I understand it, companies want to show that quantum computers “win” when performing at least one important subtask.

The previous paragraphs are meant to set up my question, and they are not meant to minimize the importance of subtask supremacy. It is a huge deal if quantum computers start winning endgames (to use the analogy) even if they can’t yet win entire games.

One problem with task supremacy is that there are infinitely many tasks to choose from. Some tasks are obviously important, some are contrived (like breaking pots), some are in between, and most of them nobody has ever thought about. It goes without saying that if a quantum computer starts factoring numbers faster than a classical computer, or can solve the travelling salesman problem more quickly, or any number of other standard problems, the classicists must admit defeat. Other problems make this less clear.

The Google team chose a task that had not received much previous thought. In other words, the time-lapse video of progress on the task would be very sparse. (To use the chess analogy, it would be like saying “My computer program beats humans at the neo-Pirc-Schmolinsky defense” which is a [contrived, for sake of analogy] highly tactical line, that very few humans had seriously considered before.) This fact makes talk of supremacy premature, but not unreasonable.

Afterwards, the classicists went to work, and they beat the quantum computer at the stated task. Doesn’t the reply “if we change the task slightly, then the quantum computer starts winning again” miss the mark (just like changing the “neo-Pirc-Schmolinsky defense” to another opening misses the mark)?

To put it in terms of betting: Do I really believe that my new task is so different that I would be willing to spell it out precisely, and put down a significant amount of money and reputation claiming it cannot be similarly spoofed? If not, the supremacy is ephemeral and meaningless, and Pan and Zhang should be considered the undisputed winners of the previous contest.

We can easily create tasks that quantum computers currently beat classical computers at; not because of any meaningful supremacy, but because humans have not given serious thought about creating classical processes to do those tasks quickly. Any serious attempt to formalize supremacy must take this fact into account, and definitely not exploit it.

42. Scott Says:

P #41: Your analogy doesn’t make sense to me on its own terms. Sure, random circuit sampling is just one specific problem on which you can compare quantum computers against classical ones. But in just the same way, chess is just one specific game on which you can compare humans against computers; there are hundreds of others (Go, Jeopardy, poker, soccer…) that could be used instead.

Also, the whole point of much of the theoretical work that I and others did over the last decade was to study the classical difficulty of problems like random circuit sampling. (See e.g. here, here, here.) While the best current hardness results aren’t quite as strong as we’d like, it is (to put it mildly) not an issue that’s escaped our attention. 🙂

43. Job Says:

It goes without saying that if a quantum computer starts factoring numbers faster than a classical computer, or can solve the travelling salesman problem more quickly, or any number of other standard problems, the classicists must admit defeat.

I don’t think the situation is any different for random circuit sampling.

Actually, classical computers might be in an even better position to spoof the results of a QC on random TSP instances.

It depends on the average-case complexity?
Plus, the QC (using Grover’s algorithm) would be running an exponentially long circuit, which gives classical heuristics a lot to work with.

I expect that we would still see classical spoofing for either TSP or factoring, and plenty of debate.
It’s not like factoring is more firmly outside of P than sampling from random quantum circuits?

IMO it would just be more difficult to cheese the experiment, since you’d have a tougher time getting things to work with “organic” QC gates.

But that’s the whole reason why the supremacy experiment was even possible with the current hardware.

44. P Says:

Scott, if we use your modification of the analogy, my point still remains. If the chosen game is one where humans have not tried to optimize their play, claims of supremacy are premature and possibly ephemeral.

Next, I want to apologize if you thought I was questioning your expertise in this area. I absolutely accept you are an expert at computational issues. To put it in terms of the analogy, I absolutely accept that your three linked papers describe a way to create tasks/games that are natural and interesting, for which an ideal quantum computer would beat an ideal classical computer [subject to the usual provisos, etc…]. (Others online have questioned the utility of these tasks/games; but I’m not talking about that issue. [The discussion around “teapots” was fascinating.]) I don’t know if such tasks are regularly played/spoofed by classical computers, and that is more on topic with my point.

If a friend had to bet their house on a game between the current best chess program, under standard rules and time controls, against any human, I’d say bet on the computer. But of course supremacy is now obvious. If I had suggested that bet for Deep Blue, or any program shortly thereafter, I would have lost them money depending on the human competitor. [Some humans were just good at beating computers.]

45. Shmi Says:

Scott #17

> Oh, of course QM is incompatible with semiclassical gravity! After all, the word “semiclassical” just means “not fully quantum.” 🙂 There may have been people 40 years ago who naïvely thought you could paper over the differences, but I doubt anyone today still disputes the incompatibility.

I’m confused then. Hawking’s most celebrated result, black hole evaporation, that no one seems to seriously question these days, relies on QFT on a curved spacetime, which is as semiclassical as it gets. And if QFT and semiclassical GR are incompatible in the low-energy limit, where the Hawking calculation is done, not just in the Planck scale limit, where there is no doubt that something’s gotta give, then why do people ignore the incompatibility?

I guess one point is that near-thermal radiation from horizons does not rely on entanglement, while Page’s anti-semiclassical argument does. And we know that reconciling evaporation with entanglement leads to severe problems at, well, the Page time (of half-evaporation). I’m not sure if the recent results on the entropy of Hawking radiation decreasing after the Page time are altogether believable.

Anyway, wonder if you have any views on the matter, even if it’s not directly relevant to QC.

46. James Gallagher Says:

Scott #38

Well, I was only saying that “100% fine” with discrete time should maybe be 99% or so!

Thanks for those references to the discrete-time formulations, that does sound interesting.

(Sorry I did not point out which part of that long Nobel lecture I was referring to; it is a long read covering a few different ideas.)

47. Scott Says:

P #44: If your point is just that quantum supremacy claims depend for their credibility on people trying hard to refute them and failing, then I vehemently agree! So you and I should both be glad that this is exactly what’s happening right now.

Regarding your question: no, I would not bet that Google’s Sycamore chip can’t be spoofed by a classical computer in 10 years—or right now, for that matter! That’s because the chip has 53 qubits, and 2^53 is “only” 9 quadrillion, and that’s a number where the chip can already be spoofed by powerful enough classical computers! And even if that weren’t the case, classical computing is still on a Moore’s-Law trajectory.

On the other hand, up the number of qubits from 53 to 100 while keeping the gate fidelities the same, as should hopefully be possible in a couple years’ time (if not now), and then sure, I’ll make that bet. Our theoretical results aren’t the only reason why I would, but they’re one of them.

(One irony, though, is that with the current random circuit sampling experiments, we’d only be able to settle the bet after classical computers had improved enough to simulate the QC, and thereby verify its results retrospectively!)

Incidentally, to say it one more time, stuff like the Pan & Zhang paper is relevant only because we’re right near the crossover point, where the state-of-the-art quantum and classical devices do comparably on sampling benchmarks. (If QCs will ever leave classical computers in the dust on these benchmarks, as theory suggests they will, then by the Intermediate Value Theorem there must be some such crossover point, and it happens to be right about now! 🙂 ) As we get to a safer distance beyond the crossover point, the exact details of the benchmark test will matter less and less.

48. Scott Says:

Shmi #45: You asked the wrong person … but since you did, I’ll answer you. 😀

Yes, Hawking’s famous calculation is “semiclassical,” in the sense that it studies the behavior of quantum fields on a classical spacetime background (namely, one containing a black hole). But crucially, it does that in a situation where modeling spacetime classically is a valid approximation … just like it is here on earth! The semiclassical approximation had better break down a few Planck lengths from the singularity … but if the Equivalence Principle is valid, then it had better not break down near the event horizon of a large black hole, which is where Hawking’s calculation takes place. So that’s why the result is universally believed.

In “full quantum gravity,” by contrast, one is explicitly interested in the singularities themselves (black hole, cosmological, and whatever other kinds might exist), as well as in quantum superpositions of different spacetime geometries and even topologies. These are all situations where the semiclassical approximation (i.e., treating spacetime itself as classical) manifestly won’t work. So that’s the difference.

49. ppnl Says:

fred #39

“So if the internal state is un-observable, it’s like in the double slit experiment: we don’t worry about the details of how the electrons propagate between the emitter gun and the screen (we only observe the final diffraction pattern on the screen)?”

Exactly so. If you knew the details of the propagating electron, then the diffraction pattern would disappear. In fact, you don’t even have to consciously know the details of the electron. If the electron interacts with the environment in some unknown way, so that quantum information is leaked to the environment in some unknown place, then the diffraction pattern will disappear.

“But quantum circuits are typically illustrated very similarly to idealized classical electronic circuits, i.e. wires connecting gates. Don’t quantum circuit designers still have to worry about how “signals” propagate between the gates?”

They have to worry about keeping the signal isolated like the electron in the double slit experiment. A typical electrical signal in a typical wire is going to interact with the environment and so you really can’t use that at all. Solving this problem is very very hard.

“Don’t they have to make sure inputs on each gate are somewhat synchronous (because bound by speed of light, etc)?”

They have to worry about isolating the signals so that they do not interact with the environment. Digital synchronization is a different, unrelated problem.

Digital circuits do not inherently need to be synchronized. I can hook up any number of AND, OR, NAND or whatever gates without any synchronization, and the output will reach the proper value in a tiny fraction of a second. The problem is that complex digital circuits may have feedback paths and latch circuits that create race conditions. This is solved by forcing a periodic timeout to let the circuit settle and resolve any race conditions. This is what the clock signal does. Many ICs can work in synchronous or asynchronous mode.

A quantum circuit may or may not need a clock signal depending on what you are trying to do. Either way I don’t think it is a problem.

50. Mg Says:

It’s not clear to me what’s allowed in this thread now (after the May 25 update); I wrote basically a strengthening of my last question. If you don’t want that, then stop reading and reject; I saved it for a better occasion.

Scott #14, #16

I should’ve thought of this when writing the first comment, but of course just like there is a huge difference from countable to uncountable there is one from finite to countable. A countably infinite set of numbers requires an unbounded number of bits instead of infinite, but that still doesn’t sound great. Finite would be for me the first intuition on the “numbers of the universe”, I think yours too based on something you said about a cellular automaton on a huge 3d grid or something like that. Since finite implies discrete, combined with comment #16 we get the revolutionary proposition that quantum mechanics is not consistent with the first intuition on how the world works ;). So did you find any metaphysical reassurance for this particular unintuitive aspect? I don’t see how the inability to directly observe amplitudes helps (it is intuitive for me that it should be hugely important for philosophy of quantumness).

One thought I had that I hinted at a little in my first comment is that the “God’s computer” model was only obvious because we (I) thought of the universe as classical. In fact it sort of is the point that we might as well define God’s computer as equal to the universe. Since we live in a quantum world, then redoing the thought experiment we land at the conclusion that God’s computer is a quantum computer, and that shouldn’t be worrying, because as you explain in your book, classical isn’t inherently more natural than quantum. So it’s the oldest (and maybe best) cope in the book, “it is what it is”.

With your probability analogy with the needle, real (ℝ) probabilities aren’t a problem as long as they’re just a thing we write on paper, in this case they might as well be a proper class cause it’s just a mind trick anyway. The problem with amplitudes is that they are part of the world state. In a finite classical probabilistic universe probabilities would be from a finite set.

51. Tamás V Says:

Scott #16:

By contrast, if the amplitudes were restricted to some discrete subset, that would mean QM was not exactly true.

I wouldn’t worry about that. In this case, someone may just come up with a discrete-amplitude formulation of QM. And when new discoveries happen, we can tweak QM further; it will be like grandfather’s axe 🙂

52. Scott Says:

Tamas V #51: The establishment of the formal rules of QM was arguably the biggest deal in all of 20th-century science, and the rules haven’t been tweaked a bit for 90 years. Any tweak could plausibly be the biggest deal of 21st-century science. This is not your grandfather’s axe. 😀

53. fred Says:

It seems to me that this blog keeps hammering the same assumptions over and over:

1) QM works

2) if you have a system with N qubits, how hard can it be to have N+1 qubits?

3) Then it’s clearly just a matter of time before we have arbitrarily complex QC.

If one dares to question 3), it’s always taken as an attack on 1), which is a clear sign of being a crackpot.
And actual engineering challenges (which make 2) an open question) are always conveniently swept under the rug with some hand-waving about error correction.

Classical computers work because they scale easily (with Moore’s Law as evidence of this); this is the reason why it’s so easy to abstract away the machines and just do theoretical work at the level of the bits.

But quantum systems are inherently very hard to scale, just adding a “qu” in front of “bit” doesn’t make this reality disappear.

54. Tamás V Says:

Scott #52: I agree, that would be a big deal if those rules changed. What I meant was that based on the textbooks I’ve read so far, I always had the impression that the Schrödinger equation with continuous time was part of those celebrated rules, an essential one. If it turns out that time is discrete, it will not be essential any longer, but only an (excellent) approximation. And it will be a big deal, indeed. The question is whether they would still call the adjusted (discretized) theory QM when this (ever) happens; if they did, that would be grandfather’s axe in my opinion.

55. fred Says:

It’s all like visiting the blog of a guy whose entire career is focused on writing papers about how to optimize a network of space elevators (*), or the blog of a guy whose entire career is focused on writing papers on how to optimize a grid of massive fusion reactors (**), then asking them about the actual real-world feasibility of either technology, and being told that you’re really confused, because none of those violates known physical laws, but if they did, it would be even more AMAZING!

56. Scott Says:

Tamas #54: Yes, as long as states are unit vectors in complex Hilbert space that evolve unitarily and are measured according to the Born rule, it’s QM, regardless of whether space and/or time are continuous or discrete. Not even the physicists who work with continuous-time QM would disagree with that statement.

57. Scott Says:

fred #53: “hand-waving about error-correction” would’ve been a valid critique circa 1993. Today there’s a massive theory of quantum error-correction, explaining precisely what needs to be done — and just in terms of numbers, gate fidelities and so forth, it’s still beyond where the hardware is but no longer impossibly far. How much do you know about that theory?

58. Scott Says:

fred #55: I guess I don’t get any credit in your book for all the work I did to develop the theory of sampling-based quantum supremacy, which led to two major experiments successfully carried out over the last two years? “Completely detached from the real world,” you complain, as others gawk at the newly-built elevators now carrying little test payloads 5 miles into the sky.

I wish you luck in finding a blog that’s a better fit for your interests.

59. Tamás V Says:

Scott #56: Sure, thanks, I wasn’t aware that quantum physicists are so lenient towards time 🙂 Was that similar also in Schrödinger’s time? (Rhetorical question.)

60. Scott Says:

Tamas #59: The idea that QM is just a generalized probability calculus, not bolted down to any specific commitments about spacetime or particles, took a long time to become widely understood, reaching its fullest development with modern quantum computing and information. But (e.g.) von Neumann and Dirac seem to have thought about it this way almost from the very beginning.

61. Tamás V Says:

Scott #60: Wow, I didn’t know that about von Neumann and Dirac, now I respect them even more. And Schrödinger’s equation was nothing more than a transient disturbance on that long path to understanding… just kidding 🙂

62. fred Says:

Scott #57 #58:

“Today there’s a massive theory of quantum error-correction, explaining precisely what needs to be done — and just in terms of numbers, gate fidelities and so forth, it’s still beyond where the hardware is but no longer impossibly far. How much do you know about that theory?”

I’m not claiming that the field of QC error correction is bogus. I’m simply saying it’s something that’s hardly ever covered here, as far as I can tell.
I do get that it’s way more complicated to convey to a general audience than just the basic laws of QM applied to an idealized QC.
So if scalability is solved, maybe what’s still allowed to ask is whether we will see, in our lifetime, an actual QC that can run Shor’s algorithm on numbers outside the range of classical computers?

“Completely detached from the real world,” you complain

I’m not complaining or feeling entitled to anything.
I’m not attacking your legacy or achievements either, we’re all aging (I just turned 50).
It’s simply my honest feedback/observation as someone who has absolutely no beef in any of that QC supremacy stuff, and as someone who’s been reading this blog for a very long time now.
In academia, research progresses forward, at its own pace, but at the same time things are very cyclic (in a Groundhog Day kind of way): each year brings an entirely new batch of students, blank slates that need to be filled with the program. And, as the years fly by, I guess it’s inevitable that one’s arguments move toward the most economical set of ideas, which is good in many ways but probably makes it harder to challenge oneself in uncomfortable new ways.
In the private sector, it’s swim or die, things will move forward with or without you whether you like it or not.

63. fred Says:

My point about space elevators and fusion reactors is that one should never underestimate the engineering challenges; it’s not all about whether the basic physics works or not.

64. fred Says:

ppnl #49

Right, I do know that classical digital electronics can implement static Boolean functions, e.g. some combination of NAND gates implementing an arbitrary Boolean function: we just set the inputs to a combination of 1s and 0s, and the circuit will naturally stabilize to the right set of outputs.
And I mentioned that when we’re trying to implement a Turing machine, each step of the state machine will require all the gates to work in lockstep with some clock, and you’re limited by the signal propagation time and the stabilization time of each gate.

I can see how the same distinction applies for QC.
In this case it’s not like we’re dealing with flows of electrons in wires and semiconductors, but some manipulation that allows us to take the state of some qubits as inputs, operate on them (as a gate), then take their new state as the input to the next gate, and repeat. I guess the details are very dependent on the type of qubits (spins of fixed particles vs. photons moving around, etc.).
I think that QC gates are always made reversible, right? I forgot why that’s the case, but I can see how it would help make the “computation” a unitary evolution, and be a prerequisite for keeping long-term coherence?
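On the reversibility question: quantum gates are unitary, and a unitary U is automatically reversible, because its conjugate transpose U† is its inverse. A quick plain-Python check with two standard gates, CNOT and the phase gate S (an illustrative sketch, not any particular hardware’s gate set):

```python
# A unitary gate U is undone by its conjugate transpose U-dagger:
# U * U-dagger = identity. Checked here for CNOT and the phase gate S.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def dagger(U):
    n = len(U)
    return [[U[j][i].conjugate() for j in range(n)] for i in range(n)]

def is_identity(M, tol=1e-12):
    n = len(M)
    return all(abs(M[i][j] - (1 if i == j else 0)) < tol
               for i in range(n) for j in range(n))

CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]
S = [[1, 0], [0, 1j]]     # unitary, but not its own inverse

assert is_identity(matmul(CNOT, dagger(CNOT)))   # CNOT undone by CNOT†
assert is_identity(matmul(S, dagger(S)))         # S undone by S†
```

So reversibility isn’t an extra design choice layered on top of QM; it falls straight out of the requirement that the evolution be unitary.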

65. fred Says:

James #36

Feynman was often boasting and goofing around, but it’s always refreshing to see how humble and careful he was about the validity of his own successes (maybe something more characteristic of physicists, because they are dealing with nature instead of the world of applied math):

“That is, I believe there is really no satisfactory quantum electrodynamics, but I’m not sure. And, I believe, that one of the reasons for the slowness of present-day progress in understanding the strong interactions is that there isn’t any relativistic theoretical model, from which you can really calculate everything. Although, it is usually said, that the difficulty lies in the fact that strong interactions are too hard to calculate, I believe, it is really because strong interactions in field theory have no solution, have no sense they’re either infinite, or, if you try to modify them, the modification destroys the unitarity. I don’t think we have a completely satisfactory relativistic quantum-mechanical model, even one that doesn’t agree with nature, but, at least, agrees with the logic that the sum of probability of all alternatives has to be 100%. Therefore, I think that the renormalization theory is simply a way to sweep the difficulties of the divergences of electrodynamics under the rug. I am, of course, not sure of that.”

It’s also interesting to compare his lecture to the one from Tomonaga (one of his fellow Nobel prize winners):
https://www.nobelprize.org/prizes/physics/1965/tomonaga/lecture/

66. Scott Says:

fred #62: You seem to be under the mistaken impression that this is the official blog of quantum computing. This is my personal blog, where I talk about some combination of what I’m interested in, what I understand, and what I might have something new to say about. Please find a different blog where the range of topics is more to your liking. If it doesn’t exist, start one.

67. fred Says:

Scott #66

“You seem to be under the mistaken impression that this is the official blog of quantum computing.”

True, when it comes down to it, it seems to be mainly about some weird “I’m right and Gil Kalai is wrong!” pissing contest.
https://www.scottaaronson.com/blog/?p=4684#comment-1833253

“Please find a different blog where the range of topics is more to your liking.”

Okay.

68. Job Says:

fred #39,

But quantum circuits are typically illustrated very similarly to idealized classical electronic circuits, i.e. wires connecting gates. Don’t quantum circuit designers still have to worry about how “signals” propagate between the gates? Don’t they have to make sure the inputs on each gate are somewhat synchronous (because they’re bound by the speed of light, etc.)?

I would compare a QC with a computer where the bits are coins inside a black box.

You flip the coins by interacting with the box, but you don’t pass the coins around via a signal.

Most of the time you don’t even know what the state of the coins is.
You usually start out by shaking the box such that every possible state of the coins is equally likely.

Then each operation is shaking the box in a precise way, and all you know is that you’re manipulating the probability distribution of the state of the coins.

But in practice there is a clock, because QCs have a classical component (the control system).

For example, if the QC is using surface codes then it periodically measures a subset of the qubits to detect errors.
The whole thing is carefully orchestrated, interleaving classical and quantum steps.

In that sense, the speed of light is still a factor because it limits the rate of operations that can be performed on logical qubits.
And that can be significant because a slower QC needs to maintain coherence for longer, so it might require even more redundant qubits and additional control circuitry, which can further limit the rate of operations.

But it only makes things harder, not impossible?
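Job’s picture (gates acting on the whole state of the qubits, rather than signals propagating down wires) can be sketched with a toy statevector simulation. This is a minimal illustration of my own in plain Python, not anyone’s actual QC code: two qubits, a Hadamard on the first, then a CNOT, which yields a Bell state.

```python
# Toy statevector simulation: each gate is a matrix applied to the whole
# state vector, not a signal sent between components.
from math import sqrt

def apply(gate, state):
    """Multiply a 4x4 gate matrix by a length-4 state vector."""
    return [sum(gate[i][j] * state[j] for j in range(4)) for i in range(4)]

# Two qubits, basis order |00>, |01>, |10>, |11>, starting in |00>.
state = [1.0, 0.0, 0.0, 0.0]

h = 1 / sqrt(2)
# Hadamard on the first qubit (tensored with the identity on the second).
H1 = [[h, 0,  h,  0],
      [0, h,  0,  h],
      [h, 0, -h,  0],
      [0, h,  0, -h]]

# CNOT: first qubit is the control, second is the target.
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

state = apply(CNOT, apply(H1, state))
print(state)  # ~[0.707, 0, 0, 0.707]: the Bell state (|00> + |11>)/sqrt(2)
```

Nothing here ever “moves” a qubit from gate to gate; each step just rotates the global state vector, which is the sense in which there is no signal to synchronize inside the quantum part of the computation.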

69. Kerem Says:

Scott #33

I think your description and positioning of what happened between Kasparov and Deep Blue is really profound, and it’s also historically verifiable. Irrespective of the details of that match, around that time a fuzzy concept of chess supremacy appeared, and a few years later it was completely clear that playing competitively against computers had stopped being fun or interesting. A rather embarrassing match, for the interested, is Adams vs. Hydra (2005), which historians could call the settling point.

There was, of course, a clear exponential progress that was driving all this: What was Deep Blue then now fits on an iPhone thanks to better chess playing algorithms and scaled hardware.

The question is, do you clearly see a similar force driving the state of QC today both from an algorithm and hardware point of view?

70. ppnl Says:

fred #64,

Yes, quantum gates must be reversible. A normal AND gate takes two inputs and produces one output, but you cannot run that in reverse: from the output you cannot always figure out what the two inputs were. As a result you have lost information, and the way a gate loses information is by radiating heat into the environment. If a quantum logic gate radiates information into the environment then it is interacting with the environment; that causes wave collapse and your QC crashes. Thus a quantum gate must preserve information about the inputs, so that no information is lost and, in principle, it does not have to radiate heat into the environment.

If you could put a cat into a box that isolated it from the external environment, then you would basically have a quantum computer running a “cat” program. As a result the cat could be put into an alive/dead superposition.

The problem of building a QC is the problem of putting all the logic circuits in a box so that no information is lost and every operation is reversible. This is hard with a few logic circuits, never mind a cat.
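ppnl’s point, that AND discards information while reversible gates preserve it, can be made concrete in a few lines of Python (my own sketch, not from the thread): classical AND maps three distinct input pairs to the same output, whereas CNOT is a permutation of basis states and is its own inverse.

```python
# Classical AND as a truth table: three distinct inputs collide on output 0,
# so the inputs cannot be recovered from the output; information is lost.
AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
preimages_of_zero = [ins for ins, out in AND.items() if out == 0]
print(preimages_of_zero)  # [(0, 0), (0, 1), (1, 0)]

def cnot(a, b):
    """CNOT keeps both bits: flip b exactly when a is 1."""
    return (a, b ^ a)

# Applying CNOT twice returns every input unchanged, so it is its own
# inverse: no information is ever discarded.
for a in (0, 1):
    for b in (0, 1):
        assert cnot(*cnot(a, b)) == (a, b)
```

The collision in the AND table is exactly the lost information that, per Landauer’s principle, must show up as heat somewhere; the permutation structure of CNOT is what lets a quantum gate avoid that.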

71. James Gallagher Says:

fred #65

Feynman’s (and Dirac’s) complaints that renormalization was just a bad mathematical technique with no physical justification were mostly resolved by the (Nobel-prize-winning) scale-invariance and renormalization-group methods of Ken Wilson.

However, in the spirit of Feynman’s “peculiar and unusual point of view” – I should point out that this is just a much more sophisticated mathematical technique which may be hiding the more obvious solution:

The original idea that infinities had to be subtracted was the simple correct physical solution!

But by “infinity” we might just say, subtract the entire universe in the evolution equation – so we modify the Schrödinger Equation to something like U(t+dt) = exp(hL).U(t) – U(t).

Of course this is batshit crazy, but it does lead to a natural explanation of global 3D space (as a global dynamic attractor in the state space, left as an exercise) – and I wonder if there is anything worthwhile about such an idea?

72. Scott Says:

Kerem #69:

There was, of course, a clear exponential progress that was driving all this: What was Deep Blue then now fits on an iPhone thanks to better chess playing algorithms and scaled hardware.
The question is, do you clearly see a similar force driving the state of QC today both from an algorithm and hardware point of view?

Of course! Well, at least with hardware. We don’t know how much quantum algorithms will improve.

73. gentzen Says:

James Gallagher #71: “However, in the spirit of Feynman’s “peculiar and unusual point of view” – I should point out that this is just a much more sophisticated mathematical technique which may be hiding the more obvious solution:”

My own approach would be to look up currently taught solutions (or opinions) in recent QFT1 lecture notes. (Paul Teller’s “An Interpretive Introduction to Quantum Field Theory” still provides most of my philosophical basis for understanding QFT, but reviewers say that “His presentation of QFT could loosely be described as the “older” quantum field theory, since he does not address gauge theories and makes no use of modern mathematical formalism. By his own admission, all of the ideas in the book were known by 1950.”) Here is what I got:

There is a famous quote due to Dirac about QED (1975): “I must say that I am very dissatisfied with the situation, because this so-called ‘good theory’ does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it is small – not neglecting it just because it is infinitely great and you do not want it!”

This is almost true, but QFT is neither neglecting infinities, nor doing so in an arbitrary way.

Infinities are one reason why QFT is claimed to be mathematically ill-defined or even inconsistent. Yet QFT is a well-defined and consistent calculational framework to predict certain particle observables to extremely high precision.

There are many points of view; one is that it is our own fault: QFT is somewhat idealised; it assumes infinitely extended fields (IR) with infinite spatial resolution (UV) (footnote: the UV and the IR are the two main sources of infinities); it is no wonder that the theory produces infinities. Still, it is better to stick to idealised assumptions and live with infinities than to use some enormous discrete system (an actual solid-state system). There is also a physics reason why these infinities can be absorbed somehow: our observable everyday physics should depend neither on how large the universe actually is (IR) nor on how smooth or coarse-grained space is (UV).

We can in fact use infinities to learn about allowable particle interactions. This leads to curious effects: running coupling and quantum anomalies.

74. gentzen Says:

Mg #50: There was a recent discussion on “Infinities in QFT (and physics in general)”. Overall, the “actual infinities don’t exist in the real world” camp had a hard time. This is a consistent pattern, and I can live with it. My main current opinions concern interpretations of probability, and date back at least to April 2017. I don’t mind such discussions, and tried to clarify my own position: “But what I had in mind was more related to a paradox in interpretation of probability than to an attack on using real numbers to describe reality. The paradox is how mathematics forces us to give precise values for probabilities, even for events which cannot be repeated arbitrarily often (not even in principle).”

Another clarification was: “I want to add that I do appreciate Gisin’s later work to make the connection to intuitionism, but even so I had contact to people working on dependent type theory, category theory, and all that higher order stuff, it never crossed my mind that there might be a connection to the riddle of how to avoid accidental infinite information content.”

Later I also tried to explain with a concrete example how a central paradigm of intuitionism (the concrete representation of information is important) is related to that discussion: “What I try to show with this example is that adding more words later can reduce the information content of what has been said previously. But how much information can be removed later depends on the representation of the information. So representations are important in intuitionisitic mathematics, and classical mathematics is seen as a truncation where equivalence has been replaced by equality.”

Maybe it was not the most natural example. A canonical example is a computation that can fail: all failing computations are put into the same equivalence class, independent of how much output the computation produced before failing. But this means that in this representation, failing allows one to remove all information given earlier. So with one last word, everything that has been said before can be invalidated!
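gentzen’s failure example can be made concrete with a toy model (my own sketch; the representation chosen here is a hypothetical illustration, not gentzen’s formalism): if all failing computations are identified with each other, then a failure discards however much output came before it.

```python
# Model a computation's observable behavior as either ("ok", output) or
# ("fail",); any partial output is thrown away on failure, so all failing
# runs land in the same equivalence class.
def observe(events):
    """Fold a stream of output chunks; a 'FAIL' event invalidates everything."""
    out = []
    for e in events:
        if e == "FAIL":
            return ("fail",)  # one last word removes all earlier information
        out.append(e)
    return ("ok", "".join(out))

# Two runs that produced very different output before failing are
# indistinguishable in this representation:
print(observe(["hello ", "world", "FAIL"]))  # ('fail',)
print(observe(["FAIL"]))                     # ('fail',)
assert observe(["a" * 1000, "FAIL"]) == observe(["FAIL"])
```

The point is representation-dependence: had we instead recorded the partial output alongside the failure, the earlier information could not have been removed.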

75. fred Says:

Job #68
ppnl #70
James #71

Thank you all! Very interesting!

76. Gil Kalai Says:

Hi Scott, regarding the analogy between Google’s Sycamore and IBM’s Deep Blue, here are three differences:
1. In the Sycamore case, the researchers largely invented a new game to play
2. In the Sycamore case, the researchers themselves also played the part of Kasparov
3. In the Sycamore case, a huge leap compared to previous efforts is claimed

77. Tamás V Says:

My analogy for the Sycamore case is this: there was no contest, but

1. Google said Kasparov would lose 9-1
2. IBM said Kasparov would lose only 5.5-4.5

In 2023 IBM’s device will have 1K qubits, and nobody will give a damn anymore.

78. ANonymous Says:

ppnl #70, fred #64:

No, quantum gates do not need to be reversible, as shown, for example, by this proposal for efficient universal quantum computation based on dissipation: https://arxiv.org/abs/0803.1447,
and by measurement-based quantum computation: PRL 86, 5188 (2001), reviewed in https://arxiv.org/abs/0910.1116.

79. ppnl Says:

ANonymous,

Yeah, I have heard of the dissipative approach but cannot wrap my head around what is being claimed, let alone how it works. This is over my head.

80. Peter Gerdes Says:

Hey Scott (or anyone else here who might know) do you have any recommendations for a book to learn research level quantum computation work (or as close as possible until you have to read the most recent papers) for a classical computability theorist?

I don’t mind skipping the chapters covering Turing machines, and I wouldn’t mind a brush-up on complexity classes etc., but I’m looking for something where I can skip the basics of non-quantum computation without finding myself flipping back for notation every 5 minutes, and that doesn’t avoid covering the hard new stuff. I’ve seen a few books that claim to be aimed at mathematical audiences, but they seem to think a mathematical audience means analysts (I mean, I enjoyed linear algebra in ug/grad and I’m not bad at it, but I don’t yet have intuitions about diagonals and eigenvectors).

Anyone have suggestions?

81. YD Says:

What do you think of Baba Brinkman’s new music video about quantum supremacy?

82. Scott Says:

YD #81: Thanks, I hadn’t seen that!! See today’s blog post for an answer.

83. Dennis Says:

Scott #0 and #4:

“quantum mechanics is exactly true”; the framework says “the state of the world is a unit vector in a complex Hilbert space”

Nit-picky but important remark (which you are probably aware of):
We only model the state of the world as a unit vector in a complex Hilbert space. We make our models using the framework of quantum mechanics. It is the best framework that we have. The framework uses complex numbers.

But complex numbers themselves do not exist. Even real numbers do not exist [1]. So our models cannot be “exactly true”. They only work devastatingly well (and probably well enough). Until they don’t.

[1]: e.g. this paper by Gisin (if a real number were physically real, you could store more information in it than there are atoms in the universe)
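Gisin’s point can be illustrated with a toy encoding (my own sketch, using exact rationals to stand in for the idealized infinitely precise real, since an actual floating-point number could not hold it): any finite bit string can be packed into the binary expansion of a single number in [0, 1), so a physically real real number would carry unbounded information.

```python
# Pack an arbitrary bit string into the binary digits of one number in [0, 1).
# Fraction gives exact arithmetic; a float would lose digits past ~53 bits.
from fractions import Fraction

def encode(bits):
    """Place bit i at binary position i of a number in [0, 1)."""
    x = Fraction(0)
    for i, b in enumerate(bits, start=1):
        x += Fraction(b, 2 ** i)
    return x

def decode(x, n):
    """Read back the first n binary digits."""
    bits = []
    for _ in range(n):
        x *= 2
        bit = int(x)  # the integer part is the next digit
        bits.append(bit)
        x -= bit
    return bits

msg = [1, 0, 1, 1, 0, 0, 1]
assert decode(encode(msg), len(msg)) == msg
```

Nothing stops `msg` here from being a gigabyte long; that unboundedness is exactly what makes an infinitely precise physical real number suspect.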
