“Can Quantum Computing Reveal the True Meaning of Quantum Mechanics?”

I now have a 3500-word post on that question up at NOVA’s “Nature of Reality” blog.  If you’ve been reading Shtetl-Optimized religiously for the past decade (why?), there won’t be much new to you there, but if not, well, I hope you like it!  Comments are welcome, either here or there.  Thanks so much to Kate Becker at NOVA for commissioning this piece, and for her help editing it.

106 Responses to ““Can Quantum Computing Reveal the True Meaning of Quantum Mechanics?””

  1. wolfgang Says:

    Scott,

    you mention “the debate about whether the mind is a machine”.
    But this was settled centuries ago by Descartes:

    I cannot doubt that I exist, however I can doubt that my brain (or any other computational device) exists.
    It follows that I cannot be my brain.

    ps: I happen to think that this is also an argument in favor of Copenhagen. 😎

  2. Phylliida Says:

    Are there any other interpretations of quantum mechanics besides those three that are reasonable enough that they have quite a few followers? And how do these experiments impact those?

  3. Scott Says:

    wolfgang #1: Alright then, the debate about whether other people’s minds are machines. 😉

  4. Michael Higgins Says:

    (Neophyte warning… forgive stupid or naive questions!)

    Under the many-worlds interpretation, is it fair to say that the “collapse” of the wave function is just “my” way of perceiving the wave function … from a certain angle, so to speak? And the “other mes” are perceiving the other possible collapses, simply from different “angles”? The wave function didn’t “really” collapse any more than a cube becomes a square when you look at it side-on.

    If that’s a fair statement, is it also fair to say that the philosophical problem is “why do I see *this* collapse and not any of the other possible collapses?” If so, I wonder if that’s any more profound a question than asking “why do I perceive the present as the present right now, instead of perceiving yesterday or tomorrow as the present” or even “why is my consciousness mine and not yours?” (And presumably you can ask the symmetric questions with respect to me.)

    In short, under many-worlds, all of the possible measurement outcomes are “out there”, and the mystery is why our consciousness is localized in such a way that we can’t “see” the rest of them?

  5. David Says:

    “I cannot doubt that I exist, however I can doubt that my brain (or any other computational device) exists.
    It follows that I cannot be my brain.”

    This only follows if “doubts that” introduces an extensional context, but it’s pretty much the archetypical example of an intensional verb.

    Hence the ability to doubt that Superman has glasses, while knowing that Clark Kent does, does not entail that Superman is not Clark Kent.

    http://plato.stanford.edu/entries/logic-intensional/

  6. Scott Says:

    Phylliida #2: Basically, there are as many “different” interpretations as there are different emotional attitudes that a person can have about quantum mechanics.

    E.g., there’s something called the “transactional interpretation”, which I’ve never understood. Fortunately I’m not alone: see here for debate about whether TI even makes sense, and whether it can account for Shor’s factoring algorithm—or even, for that matter, multiparticle states or the difference between fermions and bosons. (It seems that Cramer, who invented TI, simply didn’t bother to state the rules with anything like the clarity that would be needed. Other people can guess the rules, but then it’s unclear whether the result is still “official” TI or a new heterodox sect.)

    Anyway, there are also many sects within Many-Worlds (relative-state, many-minds, decoherent histories, …), which differ in language and emphasis and on how explicit they’re willing to be about the “parallel universes.” Likewise, there are many sects within Copenhagenism (e.g., Chris Fuchs’s QBism, whatever David Mermin currently believes…), which again differ on language and on how explicit they are in their instrumentalism. Then there are interpretations that try to syncretize Many-Worlds with Copenhagen, saying Bohr was right about this while Everett was right about that.

    There’s Bousso and Susskind’s “cosmological interpretation”, for which I personally have a great fondness.

    There are things that are called interpretations—e.g., the GRW “interpretation” and the Penrose “interpretation”—but which are actually just rival physical theories (or attempts at theories), which make predictions for certain not-yet-done experiments that flat-out differ from those of QM. These, incidentally, are the only “interpretations” for which experiments could be directly relevant in supporting them or ruling them out (rather than relevant in the indirect sense I discussed in the article, of shifting what we consider natural or important). The other interpretations all agree about the predicted outcomes of any experiment that anyone can imagine doing, and whose results have to be published in a journal rather than just silently experienced by some conscious subject. 🙂

    Finally, one ought to mention the Shut-Up-and-Calculate Interpretation, the Quantum-Mechanics-Needs-No-Interpretation Interpretation, and the Quantum-Mechanics-Can-Supply-Its-Own-Interpretation Interpretation.

  7. Phill Somerville Says:

    The best general explanation of coherence-decoherence & quantum computation, ever.

  8. Amir Says:

    Regarding Bohmian mechanics,

    1. Do you know of a good article discussing this approach to QM and Leibniz’s idea of pre-established harmony, or parallelism, in the philosophy of the mind? These ideas ring somewhat similar to me.

    2. I just read on Wikipedia that it is possible to formulate Bohmian QFT in two ways: one conserving the number of particles, the other creating/annihilating them at random. Highly neat.

  9. William Hoza Says:

    Hi Scott,

    We can have one qubit “measure” another qubit with a CNOT gate. Certainly, that “measurement” doesn’t cause a wavefunction collapse, because e.g. we can use the resulting two qubits to run an EPR experiment, which obviously relies on the wavefunction having not collapsed.
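
    In state-vector terms, here’s a minimal numpy sketch of that kind of unitary “measurement” (purely illustrative; the CNOT is just the textbook 4×4 unitary):

      import numpy as np

      # One qubit in (|0> + |1>)/sqrt(2) is "measured" by a second qubit via CNOT.
      ket0 = np.array([1, 0], dtype=complex)
      plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

      CNOT = np.array([[1, 0, 0, 0],
                       [0, 1, 0, 0],
                       [0, 0, 0, 1],
                       [0, 0, 1, 0]], dtype=complex)

      state = CNOT @ np.kron(plus, ket0)
      print(state)  # (|00> + |11>)/sqrt(2): a Bell pair

      # The joint state is still pure (rho^2 = rho), so nothing has collapsed:
      # the two qubits can go on to violate a Bell inequality.
      rho = np.outer(state, state.conj())
      print(np.allclose(rho @ rho, rho))  # True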

    So if I understand correctly, the Copenhagen interpretation asserts that wavefunctions collapse only when “large” systems get involved with measurements. Collapsing wavefunctions cause quantum computations to go awry, in testable ways. So if we managed to build a “large” quantum computer that worked, wouldn’t that provide some serious evidence against the Copenhagen interpretation?

    Thanks!

  10. Daniel Brod Says:

    Commenting on and extending comment #9

    As I understand it, the Copenhagen interpretation does not say that collapses happen when large systems get involved with measurements. It says collapses happen when a measurement apparatus interacts with a quantum system, and is annoyingly silent on what makes a ‘measurement apparatus’ qualitatively different from a ‘collection of smaller quantum systems that should be evolving unitarily’.

    However, GRW theories do *precisely* that. They include a timescale for spontaneous collapses of wavefunctions, which is what makes “small” systems behave unitarily for a long time, but “large” systems collapse instantly. So then, I repurpose comment #9 and ask: would a large quantum computer falsify GRW theories (or, conversely, do GRW theories pose an obstacle for scalable QC)?

    My personal intuition is no. I bet we could use error correction to counter it, or topological QC, or measurement-based QC where each new qubit is only entangled with the others as it is needed, or some other smart trick like that, but I might be wrong.
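
    To get a feel for the GRW numbers, here is a back-of-the-envelope sketch in Python, using the commonly quoted localization rate of about 10^-16 per particle per second (the exact parameters vary by author):

      # Each particle suffers a spontaneous GRW collapse at rate lam, so a
      # superposition spread over N entangled particles collapses at rate ~N*lam.
      lam = 1e-16  # GRW localization rate per particle, in 1/s

      for N in (1, 1e3, 1e9, 1e23):
          print(f"N = {N:.0e}: superposition survives ~ {1 / (N * lam):.1e} s")

      # N = 1e+00: ~1e16 s, far longer than the age of the universe
      # N = 1e+23: ~1e-7 s, so a macroscopic pointer collapses essentially instantly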

  11. Scott Says:

    William #9: Yes, there are Many-Worlders who’d happily use an argument like yours to say that a large QC would be serious evidence against Copenhagen. But if you probe them, what they really mean is just that they considered Copenhagen to be vague, quasi-mystical nonsense from the beginning, and a QC would just make the nonsense more blatant.

    Meanwhile, all the “modern Copenhagenists” I know adamantly deny that they think wavefunctions collapse whenever “large” systems get involved—so they would say that scalable QC poses not the slightest challenge to what they believe, and indeed should obviously be possible (at least in principle).

    They would instead say something like: in practice, we can almost always use environmental decoherence to decide when to treat a wavefunction as having collapsed. In general, however, it’s up to the individual user of quantum mechanics to make that determination. And in Wigner’s-friend-type thought experiments, it might be right for one observer to treat a wavefunction as having collapsed but wrong for another observer to do so, but this is not a real problem, since such situations can only persist as long as the observers aren’t communicating. In general, the goal of science is not, as the Many-Worlders claim, to describe the “state of the universe,” which is an ill-defined concept; instead it’s to give individual observers tools to explain and predict their observations.

    I’m not necessarily endorsing this; just trying to tell you what they would say.

  12. Eric Dennis Says:

    Scott,

    I agree that a working, large-scale quantum computer would be some evidence against Bohmian Mechanics, and in favor of any other coherent physical theory (like, say, GRW). My reasons are vaguely related to yours, but perhaps more concrete. Since BM introduces beables distinct from the wavefunction itself, it becomes more natural to question the physical primacy of a gargantuan (entangled) wavefunction. More so than other versions of quantum mechanics, BM leads one to suspect the possibility of a deeper theory in which the wavefunction is in fact emergent. (My own inklings in this direction: http://arxiv.org/abs/1003.3192)

    So while it may be entertaining to speculate about the state of affairs were a big quantum computer to be realized, we must be clear that the existence of mere theories of quantum computing is irrelevant to any evaluation of competing versions of quantum mechanics *right now*. Indeed, the lack of progress over the last 20 years in building credible (i.e., non-D-Wave) machines of more than a couple of qubits counts as evidence *in favor* of BM.

  13. Scott Says:

    Eric #12: I agree with you that, more so than Copenhagen or Many-Worlds, Bohmian mechanics seems to lend itself to speculations about changes to QM itself (Valentini’s are another example). But I would use language differently than you: I would say that, if you want quantum computing not to work, then what you want is not an “interpretation” at all (neither Bohm’s nor any other), but predictively new (and spectacularly important) physics. For most purposes, we can decouple the question of whether textbook QM is predictively exactly right, from the question of how to interpret it assuming it is.

    (Note added for Gil Kalai: here, by “predictively exactly right,” I mean not merely that everything that happens can be described using QM, but also that all the information-theoretic tasks that QM formally allows can in principle be done—i.e., that there’s no additional principle on top of QM that disallows QC or Bell violations or anything else of that kind. If there were such an impossibility principle, I would also count that as “spectacularly important new physics.”)

  14. Job Says:

    From the article:

    It would be the final nail in the coffin of the idea—which many of my colleagues still defend—that quantum mechanics, as currently understood, must be merely an approximation that works for a few particles at a time;

    I admit that I’m still in this camp, precisely because of “Occam’s Razor with Computational Aftershave.”

    Other than an eventual Quantum Computer, what are the best arguments or evidence in favor of QM at large scales?

  15. William Hoza Says:

    Scott #11: Thanks, that helps. I guess I was conflating the Copenhagen interpretation with “objective collapse theories,” a term I just learned.

  16. Scott Says:

    Job #14: In a nutshell, the argument is that there’s already direct experimental evidence that QM works perfectly even for systems of many thousands of entangled particles (to take a few examples: superconductors, buckyball double-slit experiments, various condensed-matter systems…). And even as a pure intellectual exercise, it’s nearly impossible to think of any principled way that Nature could allow all those things, yet not allow scalable QC. It’s rather like trying to invent laws of physics that would allow birds but not airplanes.

  17. wolfgang Says:

    @David #5

    If you make the statement that “Superman is Clark Kent”, what you really mean is that C.K. can “transform” into S., but obviously the two are not identical, because there are several properties P such that P(CK) =/= P(S).

    The mind-body problem, however, is about the question of whether I am identical with my brain, and Descartes argued that this cannot be the case, because D(I) =/= D(B), where D(x) is the doubt I have about the existence of x.

  18. phil Says:

    Everett said the universal wavefunction branched into orthogonal universes.

  19. John Sidles Says:

    Dirac’s Great Truth  (as articulated by Scott) “It’s nearly impossible to think of any principled way that Nature could allow all those things [in essence, the well-verified quantum foundations of the BIPM’s ‘New SI’ roadmap], yet not allow scalable QC.”

    Taking this statement as a Great Truth inspires us to frame (what I understand to be) Gil Kalai’s skeptical postulates as the following dual Great Truth:

    Bill Lawvere’s Great Truth (as articulated by the HoTT/UF community)  “It’s nearly irresistible to frame principled ways that Nature allows all those things [in essence, the unified quantum foundations of the BIPM’s ‘New SI’ roadmap], even when such framings do not permit scalable QC.”

    Here “frame principled ways” amounts in practice to “maximally substitute mathematical considerations of universality and naturality for considerations of physicality”.

    When we implement this principled program, we find that the chief element of the BIPM’s ‘New SI’ that is resistant to principled explanation in mathematical terms of naturality and universality is the physical/thermodynamical restriction that Nature strictly imposes upon quantum Hamiltonians, namely that conserved quantities couple to long-range electromagnetic and gravitational fields.

    Regarding the long-range electromagnetic and gravitational fields: (1) their topological protections are of course vital to the BIPM’s new SI (in particular the AC Josephson and quantum Hall effects, and also absolute gravimeters); (2) the decoherence associated to these same fields is of course vital to the remarkable successes of PTIME quantum dynamical simulations; and finally, (3) the same decoherence is associated to the extreme difficulty that experimentalists encounter in implementing not just scalable QC experiments, but even the (nominally simpler) BOSONSAMPLING experiments.

    It’s striking that canonical QC textbooks (Nielsen and Chuang’s Quantum Computation and Quantum Information for example) implicitly assume that any Hamiltonian can be implemented effectively.

    Postulating “Hamiltonian Egality” is terrifically helpful in proving theorems, but has proven unexpectedly difficult to achieve in engineering practice. So maybe it’s not such a good assumption?

    Conclusion  “Is scalable QC feasible?” is a fundamental question, and so is “Why does Nature restrict our Hamiltonians to long-range field theories that (in practice) are unexpectedly easy to simulate?” The language of Scott’s essay suggests a natural, unifying answer to both questions:

    The unified Aaronson/Kalai principle  “Nature does ~~even more~~ far less computational work than a quantum computer could efficiently simulate — then it hides the ~~fruit~~ shortcut of its labor where no one can ever observe it [namely, at spatial infinity] … and this is why scalable QC is infeasible.”

  20. David Says:

    Scott #11:
    Instrumentalism: QM is a device for predicting observations and it doesn’t tell us anything about unobservables.

    Mysticism: QM says that reality is indeterminate until it is collapsed into determinacy by observation/measurement.

    In both the article and the comments, you seem to go back and forth describing the Copenhagen interpretation as each of these views, presumably because so do the interpretation’s proponents.

    But the views are quite different and should probably be separated. If you really think that QM tells us nothing about unobservables, then in particular it doesn’t tell us that they lack properties (or have contradictory properties at the same time) prior to measurement, or that we make it the case that an electron settles on spin up.

    It sounds like your “modern Copenhagenist” friends are basically instrumentalists, but I don’t see why we need a fancy new name for that view.

  21. Job Says:

    QM works perfectly even for systems of many thousands of entangled particles

    I think I would be satisfied with a strong example of interference at a large scale – e.g., of the kind that drives Simon’s algorithm.

    For example, if we entangle n particles and then send them through the equivalent of a double-slit, do the results indicate that the system of n particles is also behaving as a wave?

  22. David Says:

    Wolfgang #17:

    No, I mean they are identical. ‘Superman’ and ‘Clark Kent’ are two names for the same person, a person who both stops villains and reports news. This isn’t even my example; it’s a commonplace in the philosophy of language.

    But if you think superheroes work differently than other people with two names then you can substitute a regular such person. ‘Mark Twain’ and ‘Sam Clemens’ are a popular choice. It’s not hard to get ‘doubt that’ sentences that are true with the one name and not with the other.

    Also, I’m aware that that was Descartes’ argument. He was confused about intensionality.

  23. Scott Says:

    Job #21:

      For example, if we entangle n particles and then send them through the equivalent of a double-slit, do the results indicate that the system of n particles is also behaving as a wave?

    Yes, for n in the thousands. See the Zeilinger group’s double-slit experiments with large molecules (buckyballs and larger).

  24. wolfgang Says:

    @David #22

    That “same person” X in your example has two different properties “stops villains” and “reports news” in the same sense that my car is blue and at the parking lot. My car is identical with itself, but “blue” and “at the parking lot” are simply different properties of that car.

    However, my car is not identical with your car, because there is a difference in several properties. E.g., while I do not doubt that I own a car, I can doubt whether you own a car. Therefore this difference in doubt is sufficient to know that my car is not yours.

  25. Sandro Says:

    Wolfgang #17, I’m afraid Descartes didn’t prove what you think he proved. Dualism is not as philosophically viable as you imply [1], and neuroscience has plenty of data that casts serious doubt on any sort of dualism [2]. I’m afraid Descartes was simply a fancy machine after all, but with an overinflated sense of importance!

    [1] http://plato.stanford.edu/entries/dualism/
    [2] https://en.wikipedia.org/wiki/Neuroscience_of_free_will

  26. Amit Says:

    On 21st June, two graduate students from TIFR posted the following paper.

    http://arxiv.org/abs/1506.06399

    From the abstract: “We show that there exists a Boolean function F which observes the following separations among deterministic query complexity, randomized zero-error query complexity, … This refutes the conjecture made by Saks and Wigderson that for any Boolean function f, … Independently of us, Ambainis et al. proved that different variants of the function … Viewed as separation results, our results are subsumed by those of Ambainis et al. However, while the functions considered in the work of Ambainis et al. are different variants of F, we work with the original function F itself.”

  27. Mitchell Porter Says:

    Scott #6:

    ‘There’s Bousso and Susskind’s “cosmological interpretation”, for which I personally have a great fondness.’

    The “reasoning” in that paper is worthy of Frank Tipler.

    Tipler once wrote: if event horizons form, unitarity will be violated; therefore no event horizons form; therefore intelligent life lasts forever into the big crunch, shepherding the dynamics of the universe in order to prevent the formation of event horizons.

    Bousso and Susskind write: if a measurement can’t be performed infinitely many times, real-valued probabilities wouldn’t have operational meaning; therefore, there is an observer who can perform infinitely many measurements; therefore this weird version of holographic cosmology we propose, must be the right one.

    Like Y. Nomura’s similar paper, this is actually a “Copenhagen interpretation of Everett” – note the emphasis on operationalism. Whereas similar-sounding stuff by Carroll, Tegmark, Aguirre – that also attaches quantum-interpretational significance to the inflationary multiverse – is closer to being mainline Many Worlds, because it’s unashamedly “realist” and “ontological”.

  28. Jeremy Says:

    I’m confused. I thought the Copenhagen interpretation was the “shut up and calculate” interpretation. This seems to be borne out in the comment section. If that’s not the case, then what do modern Copenhageners believe?

  29. Scott Says:

    Amit #26: Thanks very much for the pointer—I hadn’t seen that!

  30. Scott Says:

    Jeremy #28: I like to say that Copenhagen is “Shut Up and Calculate” minus the “Shut Up.” 🙂

  31. Scott Says:

    Mitchell #27: OK, the part I have fondness for is not at all the argument you quoted, but the idea that looking at cosmological horizons in de Sitter space is the right way to determine whether decoherence has irreversibly occurred and the wavefunction has “collapsed.”

  32. Steve Says:

    Are there any interpretations of QM that would be strengthened or validated if it turns out that functional quantum computers cannot be built to, say, solve currently intractable factoring problems?

  33. Scott Says:

    Steve #32: As I said in comment #13, in that case I would say that we’re no longer dealing with the problem of “interpretation” at all (i.e., “assuming textbook quantum mechanics is the ultimate empirical description of what kinds of preparation, measurement, etc. we can and can’t do, how should we understand the resulting picture of reality?”). Rather, we’d just be dealing with a flat-out huge new empirical discovery in physics!

  34. fred Says:

    If the question at hand is really about the “maximum” underlying computational resources of reality (i.e. can a QC be really faster than a classical one), then “computation” is really about the maximum information density and maximum rate at which that information can be modified or travel in the real world. I.e. all the hard questions around black holes, the big bang, singularities, Planck limits, etc.

  35. Jay Says:

    Great reading! As a related question, how do you interpret the Afshar experiment? E.g., do you think it violates the complementarity principle?

  36. JollyJoker Says:

    Did the idea that there’s an exponential number of things that aren’t just probabilities exist before 1982? From Wikipedia:

    “For instance, Erwin Schrödinger originally viewed the electron’s wavefunction as its charge density smeared across the field, whereas Max Born reinterpreted it as the electron’s probability density distributed across the field.”

    https://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics#History_of_interpretations

    But if they can do stuff to each other they’re obviously “more real” than probabilities. Yet I see nothing about “exponential amount of stuff exists” on the Wikipedia page. Would that be covered by “Wavefunction real”?

    I wonder what the interpretations would look like if someone had come up with Shor’s algorithm in 1930.

  37. Mike Says:

    “I wonder what the interpretations would look like if someone had come up with Shor’s algorithm in 1930.”

    I’ve often wondered whether the MWI would be sitting in a preferred spot (as various CI-related interpretations for the most part have), if someone in 1930 had predicted the results of the single-photon interference experiments and explained the outcome as the inevitable result of multiverse interference, and had gone into the realm of SF by saying that this interference could be used as a vast computational resource.

  38. Scott Says:

    Jay #35: My understanding is that the Afshar experiment correctly shows that Bohr’s verbiage about light “behaving either as a wave or a particle, but never both” was deeply confused. However, one didn’t need the experiment to see that his verbiage was confused! As soon as you have a modern mathematical understanding of QM (which Bohr never really had, though Heisenberg and Schrodinger and Dirac and von Neumann did)—that is, as soon as you agree that the ground truth is neither particles NOR waves (in the classical sense), but simply a unit vector of amplitudes, evolving unitarily—you can’t possibly predict any outcome for the Afshar experiment other than the observed one, and indeed there’s no need even to do the experiment. Where Afshar veers into crackpot territory, is where he loudly proclaims otherwise.

  39. Slaviks Says:

    Scott, to strengthen your #16 and share my bewilderment:

    BM is a dead attempt at interpretation, dating from before the discovery of quantum field theory and of the modern understanding of what an (elementary) quantum particle is. No BM apologist has ever been (and I bet never will be!) able to reconcile the “pilot wave” with spectacularly precise measurements of the Lamb shift, the electron’s anomalous magnetic moment, and other textbook stuff that intimately involves the many-particle entanglement of Fock space (or its mathematical equivalent, necessary to arrive at the same hard, testable, and tested conclusions).

    So how can a person of intellectual integrity seriously suggest that BM has anything to do with physical reality except being a pure exercise for the mind, rarely inspiring and mostly misleading?

  40. jonas Says:

    Thank you for this article, it was good to read. The part that captured my interest is the particular example where Alice sends Bob quantum information about a vector, and then Bob measures it in a direction only he knows. (This shows I have studied almost no quantum TCS, for this is a basic example I should have met earlier.) This seems interesting to me because it sounds like a very practical result.

    I have some questions about this protocol. Firstly, although it seems that this vector subspace problem can’t be solved efficiently with classical randomized protocols, is there a precise proof of that? Secondly, does this result mean that if I have N classical bits and want to compress them such that later any k of those bits can be determined (I choose which k of those bits, but only once), I could use O(k log N) qubits to store that information? If I did this, how many resources would I need on a quantum computer to prepare and later retrieve the bits? In particular, would I need to spend runtime exponential in N?

    Different topic: let’s go back to these “interpretations of quantum mechanics” you’re talking about in the article. Your comment #11 suggests a question. Is there any experiment that could be performed at least in principle, where different interpretations of QM give different predictions, even though all the interpretations agree with the same basic rules of QM? The experiment would be such that in theory the result is prescribed by the rules of QM, but we aren’t smart enough to do the actual calculations to derive the result, so doing the experiment would still help. Such experiments already exist in classical physics, when many particles interact in a way that is too complicated to simulate in practice, even though in theory you could simulate them with a fast enough computer.

  41. Scott Says:

    jonas #40: Yes, the whole technical achievement with that vector/subspace communication problem was to prove that there’s no efficient classical randomized protocol to solve it. See this breakthrough 1999 paper by Ran Raz (which proves a lower bound for a slightly more complicated version of the problem), and this 2010 paper by Klartag and Regev (which proves the lower bound for the problem as I stated it).

    No, for k-out-of-N retrieval (or even, for that matter, 1-out-of-N retrieval), quantum encodings give you no asymptotic advantage over classical encodings: you still need Ω(N) qubits. See this 1999 paper by Ashwin Nayak.
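
    (For reference, Nayak’s bound in its usual form: any quantum encoding of N bits into m qubits, from which each individual bit can later be recovered with probability at least p, must satisfy

      m ≥ (1 − H(p))·N,   where H(p) = −p log2 p − (1 − p) log2(1 − p),

    so for any constant recovery probability bounded away from 1/2, m = Ω(N).)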

    No, again, as I use the word “interpretation,” they’re all accounts that logically imply the same predictions for any experiment, regardless of the computational complexity of calculating those predictions. If you have something that makes different predictions, then (by definition) it’s not an interpretation; it’s a new physical theory.

    There’s one caveat to this: namely, I allow the different interpretations to make different “predictions” about subjective, first-person questions like what it would feel like (or whether it would feel like anything at all) to have the double-slit experiment performed on your own brain—provided the interpretations agree (as they do agree) about what you’d remember about the experience, or be able to publish in a journal about it!

  42. luca turin Says:

    Completely agree with #7 Phil Somerville: best thought-out and best-written article on the subject in this or likely any other universe.

  43. Scott Says:

    luca #42 (and others): Thanks!!

  44. Eric Dennis Says:

    Scott #13: Not sure which language you’re referring to, but I agree that rendering quantum computation infeasible would require an entirely new theory (like my link), not just a new interpretation of the present quantum formalism.

    Against your remark about decoupling, my point was that finding such a theory–in order to resolve, e.g., the massive parallelism storage disaster–may critically depend on starting from the right interpretation in the first place.

    But I’m curious whether you agree with my inference from your essay that lack of experimental progress in quantum computing is evidence in favor of BM.

  45. Scott Says:

    Eric #44:

      But I’m curious whether you agree with my inference from your essay that lack of experimental progress in quantum computing is evidence in favor of BM.

    No, not in the slightest (here I’m interpreting “BM” in the sense you mean by it, as an idea for a genuinely empirically-different theory). To me, that would be like someone in the 1890s reasoning from the lack of progress in building heavier-than-air flying machines to the conclusion that something must be wrong with the laws of aerodynamics.

    I never expected practical QC within some specific timeframe (20 years or whatever), and on this blog and elsewhere, have consistently opposed the hypesters who have promised it. While it might happen a lot sooner, for all I know it could take another hundred years, and I don’t even know if human civilization will last that long. But it seems crazy to me to say “we still don’t have practical QC after 20 years, ergo the problem must be with the laws of quantum physics.” For keep in mind, the experimenters haven’t encountered a shadow of a hint of anything that looks like a fundamental violation of unitarity—just ordinary environmental decoherence, which they knew would be there and was not a surprise to them. And while the qubits still aren’t quite good enough to seriously scale them up, the coherence times have improved by several orders of magnitude since the 1990s. So, by that metric—which seems a lot more scientifically useful to me than the layperson’s metric of “how big a number has been factored?”—there has been remarkable experimental progress, led by brilliant people like Martinis and Schoelkopf and Wineland and Blatt.

  46. Eric Dennis Says:

    Scott #45: I like your aerodynamics analogy, but it has two problems. A minor one: if anything, more like the 1790s, as you seem to acknowledge. A major one: there was no reason to be otherwise suspicious of aerodynamics. Quantum mechanics–even in its best, Bohmian version–is a very awkward physical theory. The inescapable non-locality Bell showed to fester inside it is one thing. But the massive, and FAPP massively gratuitous, parallelism of the wavefunction is extremely suspicious. So if failed Leonardo-style flying machines and the like are all we have right now to guess a direction, we have to consider them, since it is highly unlikely that where we’re standing right now is solid ground.

    Also, my criterion wasn’t successful quantum computing in 20 years. I wouldn’t have thought that likely either, back when I was in the field. My criterion was just some noticeable progress in approaching massive parallelism. Your brushing off qubit-number as a metric seems like a cop-out. I doubt anyone is really suspicious of the possibility of long coherence times for small systems. We’re suspicious of massive parallelism.

  47. Scott Says:

    Eric #46: The trouble is that, if you have long coherence times for small systems (which can also be controlled, and made to interact with other nearby small systems), then it’s extremely hard to understand how scalable QC could not be possible. Under our current understanding of physics, that’s almost the definition of scalable QC. So there’s an enormous burden on anyone who thinks QC is impossible, to construct a theory that blocks it but that also naturally accommodates all the existing experiments with many-body quantum systems, not one of which has shown the slightest hint of a deviation from QM.

    Furthermore, we don’t yet have long enough coherence times for small, controllable quantum systems sitting next to other small, controllable quantum systems to do the fault-tolerance needed for scalable QC (though they’ve been getting closer), which provides a compelling explanation for why we don’t yet have scalable QC. But you yourself said that you “doubt anyone is really suspicious” that there’s more than a technological obstacle there (actually I do know people who suspect that, like Dyakonov, but let’s leave that aside!). And if we don’t suspect more than a technological obstacle there, then it’s hard for most of us to suspect it for scalable QC.

    Anyway, a central point of my essay was that quantum computing is not exponential parallelism in any naïve sense. It’s true, we conjecture that simulating QC efficiently using classical resources would require exponential parallelism—but crucially, that doesn’t imply that the thing itself should be thought of that way. Certainly it lets you do much, much less than “generic” exponential parallelism would let you do.
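
    (To make the simulation cost concrete, a short Python sketch of the memory that a brute-force classical state-vector simulation would need, at 16 bytes per complex amplitude:)

      # A brute-force classical simulation stores 2^n complex amplitudes.
      for n in (20, 30, 40, 50):
          print(f"{n} qubits: {2**n * 16:.1e} bytes")

      # 20 qubits: 1.7e+07 bytes (~17 MB): easy
      # 50 qubits: 1.8e+16 bytes (~18 PB): hopeless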

    It’s similar to how quantum mechanics is still “local” in the sense that it forbids superluminal signalling—it’s only simulating quantum mechanics classically that requires nonlocality, because of the Bell correlations. And of course, this is no longer a matter for speculation, just an observed fact about nature.

    For me, our experience with previous cases like nonlocality, the elegance of QM from a mathematical standpoint (all the “awkwardness,” I’d say, is from a human standpoint), and the weirdly limited capabilities of QCs, so far from what any science-fiction writer would’ve imagined (yes factoring, but probably not NP-complete problems), all feel like indications that QM and QC are here to stay.

    But the nice thing about this disagreement is that, unlike with more philosophical issues, here we might find out who’s right within our lifetimes.

  48. ppnl Says:

    What are your thoughts on the recent paper on gravity causing decoherence that has been in the news? A closed system decohering would seem to break QM. But then I’m not sure it is a closed system.

    And does this give any support for Penrose’s views on gravity collapse?

  49. Job Says:

    Yes, for n in the thousands.

    I’m undecided here, because what I’m still missing is a clear reason for the quantum speedup in Simon’s algorithm.

    I’ve looked at this problem in detail. I’ve implemented a simulation of Simon’s algorithm, right up to the classical step, and I understand how interference plays a key role.

    I’ve also gone through the proofs and the oracle separations using Yao’s principle. They prove that a standard Turing machine can’t match a QC on Simon’s problem, but IMO there is a very subtle problem in that different classical systems may have access to slightly more information (e.g., about the input circuit) than a standard Turing machine – so I’m not totally satisfied.

    Additionally, I’ve seen how a classical probabilistic system can produce the same type of interference behavior and results, e.g., using a Spekkens-type toy model. In fact, the interference pattern (for simple input circuits) is so similar that the classical post-processing step in Simon’s algorithm will work just as well with the toy model.

    On the other hand, as soon as we introduce Toffoli gates, there is a real wall and any classical model will exhibit an error rate that grows very fast.

    It seems that the quantum speedup for Simon’s algorithm is not due to interference alone: it’s interference even when non-parity gates (e.g., Toffoli) are used. I’d like to understand this better.

    For this reason, I think that even experimental results confirming large-scale interference are still inconclusive. I would need large-scale interference in the right context – e.g., with a Toffoli version of a double-slit thrown in, whatever that is.
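
    For anyone who wants to reproduce the interference pattern I keep referring to, here is a minimal state-vector sketch of the sampling step of Simon’s algorithm (Python/numpy, illustrative only; the f-register is treated as already discarded, and the GF(2) post-processing is skipped):

      import numpy as np

      rng = np.random.default_rng(0)

      def hadamard_n(n):
          """The n-qubit Hadamard transform as a 2^n x 2^n matrix."""
          H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
          out = np.array([[1.0]])
          for _ in range(n):
              out = np.kron(out, H)
          return out

      def simon_sample(n, s):
          """One run of the quantum part of Simon's algorithm.

          After f is computed into the second register and that register is
          discarded, the first register is left in (|x0> + |x0 xor s>)/sqrt(2)
          for a random x0. The Hadamards then concentrate all amplitude on
          strings y with y.s = 0 (mod 2): that is the interference.
          """
          assert s != 0
          x0 = int(rng.integers(2 ** n))
          state = np.zeros(2 ** n)
          state[x0] = state[x0 ^ s] = 1 / np.sqrt(2)
          probs = np.abs(hadamard_n(n) @ state) ** 2
          return rng.choice(2 ** n, p=probs)

      n, s = 3, 0b101
      ys = [simon_sample(n, s) for _ in range(20)]
      assert all(bin(y & s).count("1") % 2 == 0 for y in ys)  # y.s = 0 always
      print(sorted(set(ys)))  # only the four y orthogonal to s ever appear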

  50. John Sidles Says:

    Thesis (borrowing the language of Scott’s PBS/NOVA essay)  “The pace of recent advances in [tensor network dynamics] is so striking that even the idea of it — let alone the thing itself — can shift the terms of the [quantum computing] debate, which analogies people use in thinking about it, which possibilities they find natural and which contrived.”

    Some recent work  This week’s essay on Caltech’s Quantum Frontiers weblog, titled “Holography and the MERA” by Aidan Chatwin-Davies (June 26, 2015), discusses the recent preprint “Consistency Conditions for an AdS/MERA Correspondence” (arXiv:1504.06632, 2015) by Bao, Cao, Carroll, Chatwin-Davies, Hunter-Jones, Pollack, and Remmen.

    The still-evolving AdS/MERA quantum framework builds upon the MERA (Multi-scale Entanglement Renormalization Ansatz) work presented by Brian Swingle and colleagues at last month’s KITP in their “S-Sourcery” talk, as reviewed in comment #11 of the recent Shtetl Optimized essay “FCRC Highlights” (June 19th, 2015), and as fleshed-out in the preprint “Area Law for Gapless States from Local Entanglement Thermodynamics” (arXiv:1505.07106) by Swingle and John McGreevy.

    Two dual-ing alternatives

    Alternative A  MERA-mechanics is physically a good practical approximation of (Nature’s exact) Hilbert-dynamics, versus

    Alternative B  Hilbert-mechanics is mathematically the universal and natural resolution of (Nature’s exact) MERA-dynamics.

    These alternatives present young quantum researchers with precisely the choice that Scott’s PBS/NOVA essay spotlights: “In choosing a picture of physical reality, we should be loath to posit computational effort on Nature’s part that vastly exceeds what could ever in principle be observed.”

    Conclusion  Young quantum researchers who read Scott’s PBS/NOVA essay in the still-brightening MERA-light will agree with luca turin #42 (and others, including me) that “[Scott’s essay] is the best thought-out and best-written article on the subject [of quantum computing] in this or likely any other universe.”

    Young quantum researchers need not apprehend a “Quantum Computing Winter” with vibrant ideas like S-Sourcery, AdS/MERA, and MERA-mechanics on the table!

  51. Scott Says:

    Job #49: I’ll observe, for the record, that you indicated in comment #21 that you’d be satisfied with n entangled particles behaving as a wave in the double-slit experiment (for large values of n), I pointed out to you that that’s already been done, and now you’ve changed the requirement. 🙂

    The Toffoli gate, combined with any non-basis-preserving 1-qubit gate, is known to be universal for QC. So your new requirement basically boils down to saying that you’ll believe scalable QC is possible when you actually see a scalable QC. And sure, that’s a consistent position you can take—at least if you’re an outsider, rather than someone trying to build scalable QC (in which case, you’d better at least have hunches about what might work). Of course, taking the attitude “I’ll believe this technology when I see it…” does put you in the position of constantly being surprised when new technologies come along, rather than seeing them as the more-or-less predictable outcomes of previously-understood science.

  52. Scott Says:

    ppnl #48:

      What are your thoughts on the recent paper on gravity causing decoherence that has been in the news? A closed system decohering would seem to break QM. But then I’m not sure it is a closed system.

      And does this give any support for Penrose’s views on gravity collapse?

    No, it’s not “decoherence to nowhere”: it’s two branches falling out of phase because of gravitational time dilation, but if there’s no leakage to the environment, then it’s still unitary and you’d eventually see a recoherence. And it’s entirely a consequence of known physics, so it lends no support to any Penrose-like views.
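
    (Here’s a toy numpy illustration of that dephasing-then-recoherence; the commensurate mode detunings are invented for the demo, not taken from the actual paper:)

      import numpy as np

      # Two branches at different heights: each internal mode i of a composite
      # object picks up its own time-dilation phase delta_i * t, so the
      # interference visibility V(t) = |mean_i exp(1j*delta_i*t)| decays.
      # With no leakage to an environment the evolution stays unitary,
      # and the phases eventually realign: the coherence revives.
      N = 50                               # number of internal modes (toy value)
      base = 1.0                           # base detuning, arbitrary units
      deltas = base * np.arange(1, N + 1)  # commensurate detunings -> exact revival

      t = np.linspace(0, 2 * np.pi / base, 1001)
      V = np.abs(np.exp(1j * np.outer(t, deltas)).mean(axis=1))

      print(V[0], V[500], V[-1])  # 1.0 (coherent), ~0 ("decohered"), 1.0 (recohered)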

    Sabine Hossenfelder has a nice analysis at Backreaction. Here’s her summary:

      it is a nice paper that points out an effect of macroscopic quantum systems in gravitational fields that had not previously been studied. This may become relevant for interferometry of large composite objects at some point. But it is an exceedingly weak effect, and I for sure am very skeptical that it can be measured any time in the soon future. This effect doesn’t teach us anything about Schrödinger’s cat or the measurement problem that we didn’t know already, and it for sure has nothing to do with quantum gravity.

    One of the authors, Igor Pikovski, also wrote to Sabine to confirm this understanding.

  53. yme Says:

    wolfgang #1: It is possible for someone to be unaware of what the millionth digit of pi is, in which case he may doubt that it is greater than 4. On the other hand, it is not possible to doubt that 5 is greater than 4. Therefore, the millionth digit of pi is not identical to 5.

    The only problem is, guess what the millionth digit of pi is? Right. It’s 5.

    So, clearly, something is wrong with this type of reasoning.
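
    (If you don’t want to take my word for the digit, it’s mechanically checkable, e.g. with the mpmath library; a sketch, slow at this precision:)

      from mpmath import mp

      # Compute pi to a bit over a million decimal places, then read off
      # the millionth digit after the decimal point. (Takes a while.)
      mp.dps = 1_000_010
      s = mp.nstr(+mp.pi, 1_000_005)  # "3.14159..."
      print(s[1_000_001])             # s[2] is decimal digit #1, so
                                      # s[1_000_001] is digit #1,000,000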

  54. John Sidles Says:

    As comment #1 on this week’s (admirable) Quantum Frontiers essay “Holography and the MERA”, I’ve aggregated and summarized the AdS/MERA-related elements of the Shtetl Optimized discussion … the gist of the summary is that the quantum community benefits greatly from unresolved tensions that are strikingly evident in Scott’s essays and in the comments that these essays stimulate. Good!

    Please let me join with Scott too (in his comment #52), in praising Sabine Hossenfelder’s “nice analysis” at the quantum weblog Backreaction.

    An engineer’s caveat  Sabine’s essay correctly emphasizes the central role of gravitational potentials in unraveling fundamental puzzling aspects of quantum transport. And it is natural to generalize these investigations to systematically encompass quantum descriptions of transport in other potential gradients: first electric gradients, then magnetic gradients, then temperature gradients, then pressure gradients, then chemical potential gradients … the list of transport-potentials is long. The list of transported quantities includes any quantity that is dynamically conserved; energy, mass, charge, and momentum are Nature’s canonically conserved quantities (others are possible depending upon the circumstances).

    It follows that systematic attempts to “maximally substitute mathematical considerations of universality and naturality for considerations of physicality” (per comment #19) run head-on into very serious tensions between freedom of inquiry and freedom of pedagogy on the one hand, and the interests of state-security and trade-secrecy on the other hand … particularly in regard to ubiquitous problems of transport dynamics in potential gradients.

    As one concrete example, Shtetl Optimized readers are invited to contemplate the Schlumberger Corporation (revenue $49B in 2014) web page “Perforating Gun Systems and Charges.” Here the transport of energy, mass, momentum, and electric charge rivals in complexity and technological sophistication (and in monetary investment) the world’s entire aggregate program of fundamental quantum research.

    We are rightly made to feel uneasy when we contemplate that the purpose of Schlumberger’s technology is to burn as much as possible of the world’s carbon, to enrich as much as possible the corporations, nations, and oligarchs who control those carbon resources, and to disseminate as widely as possible the shaped-charge technologies that have proven to be some of the most cost-effective terror-weapons of the 21st century.

    Given these well-founded tensions, it’s unsurprising — perhaps it’s even a good thing? — that the most interesting and widely-applicable transport-related works of Dirac, von Neumann, Onsager, and even Feynman remain classified even in the present century.

    Uneasy conclusion  STEAM history provides no strong basis to believe that quantum computing research is unentangled from urgent interests of state-security and trade-secrecy.

  55. wolfgang Says:

    @yme #53

    I do not doubt the statement “5 > 4” but I may doubt the statement “the millionth digit of pi …” and thus it follows that these are for me two different statements.
    What was your point again?

  56. wolfgang Says:

    @yme @david etc.

    Descartes’ argument is really just two statements:

    1) There is a function D(x) equal to my doubt about x. Let’s say D=0 means I cannot doubt x, and D>0 means I can have doubts or actually have doubts.

    2) From D(I) =/= D(B) it follows that I =/= B.

    I think you can (try to) attack 1), perhaps by making a clever argument about D(D(I)) or something like that. But you cannot really attack 2) imho, because it is basic math / logic.

  57. Job Says:

    I’ll observe, for the record, that you indicated in comment #21 that you’d be satisfied with n entangled particles behaving as a wave in the double-slit experiment

    Yes, of the kind that drives Simon’s algorithm, as I also mentioned. 🙂

    So your new requirement basically boils down to saying that you’ll believe scalable QC is possible when you actually see a scalable QC.

    In purely theoretical terms, it’s a lot like believing that a given quantum algorithm can solve an optimization problem faster than a classical algorithm.

    The line gets pushed back and forth quite a bit. I can believe that it’s possible, but I wouldn’t accept it without conclusive evidence – which of course can take a while, since both sides are constantly evolving.

    I could come back and say that I’d need a demonstration of an even narrower example of quantum interference, if that were the one case where classical models break down.

  58. Sniffnoy Says:

    Wolfgang: You’re equivocating between statements and objects — between objects and descriptions of those objects.

    Is “x” in D(x) a statement or an object? I assume it must be a statement.

    In that case, (1) is true; and it is true, as you say in #55, that if D(I) != D(B), then I and B are different statements.

    But this does not imply that any objects they refer to in parallel are distinct. To put it in the contrapositive, if you have a statement, and you replace some object reference by a different reference to that same object, you get a different statement, about which the level of doubt may be different. So no non-identity of objects can be concluded this way. Just as you cannot conclude “The millionth digit of pi must be distinct from 5, because my doubt about its being greater than 4 differs” (because your doubt is not a function of the actual objects — the millionth digit of pi, and 5 — but rather the descriptions “the millionth digit of pi”, and “5”), so you cannot conclude that your mind is not a machine (because your doubts are actually about “your mind” and “a machine”, not your mind and a machine).

    There’s actually another lesser problem with what you’re saying, which really should have highlighted the problem to you earlier — existence isn’t a predicate! Or a function, for that matter. In actual logic you don’t consider objects and then talk about whether they exist, you talk about properties and talk about whether there is any object with that property. In the case where such an object is guaranteed beforehand to be unique we often assign it a name beforehand, and talk about “whether it exists”, and generally this is not too dangerous, but formally it is not correct. This should have highlighted to you that everything you’re talking about is dependent on statements and properties — not objects, so non-identity of objects can’t be concluded from it.

    (PS, Scott, would you mind deleting my two stuck-in-moderation comments? They’re screwing up my comment numbering. Thank you! 🙂 )

  59. wolfgang Says:

    @Sniffnoy #58

    >> a statement or an object
    Let me just warn you that the implicit assumption here that there can be things (“statements”) which are not (physical) objects is a big step towards dualism already 😎

    But let us assume you can convince me that I = B, in other words I am my brain.
    Now let me point to that futuristic supercomputer S over there, which is capable of simulating human brains and is right now simulating my brain B.
    Will you also try to convince me that I = S , although it is clear that B is not S ?

  60. Douglas Knight Says:

    Sniffnoy, consider not referring to comments by number, but instead using permalinks, like I did.

  61. James Gallagher Says:

    In the comments to your So you think Quantum Computing is Bunk? post a couple of years ago, I tried to argue that a very simple “interpretation” (I prefer “model”) could weaken QC – a discrete-time, randomly seeded evolution of the entire universe state-vector/wave-function. Such a model would allow all current quantum-mechanical experiments to work (including quantum Zeno experiments), because the mathematical structure is almost precisely equivalent to the standard postulates of quantum mechanics.

    But I was not very clear as to how this would impact Quantum Computing, such as Shor’s method.

    I now realise that it is the discrete time evolution that is the crucial thing. If nature “jumps” fundamentally randomly with a half-life of approx. 10^-43 secs and then rotates the entire universe state-vector, we would have a quantum-mechanical universe like the one we currently observe. But the non-continuous time evolution will impact quantum computing algorithms that are based on an assumed continuous time evolution.

  62. Not a Philosopher Says:

    @wolfgang #59

    David (#5 and #22) made a point that you missed, so I’ll try to make it clearer. You wrote:

    2) from D(I) =/= D(B) it follows that I =/= B

    […]

    But you cannot really attack 2) imho, because it is basic math / logic.

    That’s not true, as Frege[1] noted. A simple example that refutes your statement is to take

    D(x) = Alice doubts that x wrote The Adventures of Tom Sawyer

    If Alice doesn’t know that Mark Twain is Samuel Clemens, then D(Mark Twain) != D(Samuel Clemens) even though Mark Twain = Samuel Clemens (as we know).
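
    In symbols, the principle being leaned on is Leibniz’s law (substitutivity of identicals),

      a = b  →  ( φ(a) ↔ φ(b) ),

    which is fine when φ is an ordinary (extensional) predicate, but fails when φ(x) embeds x inside an attitude report like “Alice doubts that x wrote …”, because there φ operates on a name or description of x, not on x itself.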

    The same thing can be done to the argument in your comment #2.

    [1] https://en.wikipedia.org/wiki/Sense_and_reference

  63. Sniffnoy Says:

    Wolfgang #59:

    Let me just warn you that the implicit assumption here that there can be things (“statements”) which are not (physical) objects is a big step towards dualism already

    I’m not talking about “objects” in some physical sense. I mean “objects” as in the thing that variables (or more generally, terms) in a theory refer to. I don’t see the relevance of your reply.

    But let us assume you can convince me that I = B, in other words I am my brain.
    Now let me point to that futuristic supercomputer S over there, which is capable of simulating human brains and is right now simulating my brain B.
    Will you also try to convince me that I = S , although it is clear that B is not S ?

    Answer: I don’t care, because I’m not arguing about that. I’m just yet another person explaining why your initial argument is wrong. It would be nice to know if you are still claiming that argument.

  64. wolfgang Says:

    @Not a Philosopher #62 and @Sniffnoy #63

    You are making the very valid point that, in general, my doubts about things can change. So if I learn that S.C. really is M.T., my doubts will change, and the identity of S.C. with M.T. can be established to some degree.
    Fair point.

    But Descartes’ argument is that such a change in doubt is not possible for I and B. I *cannot* doubt that I am, but there will *always* be some doubt about B.

    >> I don’t care, because I’m not arguing about that.
    But you should consider it, for this reason:

    The hypothetical supercomputer S (in the imagination) is such that every argument b1, b2, … in favor of I = B has an equivalent argument s1, s2, … in favor of I = S.
    The only way to avoid a contradiction (I=B and I=S, but B is not S) is to argue that S does not really exist; it is only imagined.
    But this means you argue along with Descartes: you reject I = S because you have doubts that S can exist.

  65. Eric Dennis Says:

    Scott #47: Similarly, if you can raise the seat a bit and teach your dog to drive, then it’s extremely hard to understand how you couldn’t make the dog your chauffeur. And I don’t think the burden you refer to is enormous, because I’ve already met it. In that link from #12, I construct a relatively simple model that exhibits ordinary unitary evolution in a certain regime, outside of which non-linearities create an endogenous decoherence that squashes massive parallelism. The effective maximal coherent qubit number is controlled by a new constant, whose zero-limit corresponds to the pure unitary case.

    You argue that because QC doesn’t yield all the fruits of “naive parallelism,” we shouldn’t be too concerned that the exponential storage blowup is actually an aspect of nature, in the same way that no superluminal signalling means we shouldn’t be too worried about quantum non-locality. I hadn’t thought about it before, but I agree these two arguments are connected—and neither makes sense to me. Why in the world would our ability to harness a demonstrable phenomenon—non-locality or massive parallelism—to achieve a certain human purpose—signalling or solving NP-complete problems—have any bearing on the reality and physical significance of that phenomenon itself? Certainly the onus is on you to carefully demonstrate why the non-harnessability of a given physical fact makes that fact any less real, any less calling-out-for-explanation. It stinks of the glib positivism on the part of Bohr and his apologists that Bell took apart many decades ago.

  66. jonas Says:

    Scott #41: Thanks, I see now that I misunderstood the vector subspace problem, even though your description was correct. The references you gave to the articles helped. Still, this is an interesting problem and I’m glad you brought it up.

  67. Sniffnoy Says:

    Wolfgang #64:

    No, I’m afraid you’ve still missed the point. It has nothing to do with change. That the things designated by particular descriptions may change does help demonstrate the distinction between descriptions of objects and the objects themselves, but:
    A. This is in no way essential to the distinction; it is not the distinction itself;
    B. From a mathematical point of view, this isn’t even true — descriptions whose referent can change over time are just incomplete descriptions in the first place, and you should have specified a time to have a complete description!

    The “millionth digit of pi” example serves to demonstrate the distinction in the absence of any element of time or change. The millionth digit of pi is 5, not in a present-tense sense but in a timeless and absolute sense. And yet it is still possible for me to doubt that the millionth digit of pi is bigger than 4, while not doubting that 5 is bigger than 4.
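
    (For anyone who would rather compute than doubt: here is a minimal Python sketch, assuming the mpmath library. Note that the answer depends on whether you count the leading 3 as the first digit.)

        from mpmath import mp

        mp.dps = 1_000_010                 # a little over a million digits
        s = mp.nstr(+mp.pi, 1_000_005)     # "3.14159..." as a plain string

        # s[0] == "3" and s[1] == ".", so the k-th digit after the
        # decimal point sits at index k + 1.
        print(s[1_000_001])                # millionth digit after the point
        print(s[1_000_000])                # millionth digit counting the 3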

    The description is not the object; the map is not the territory. That you have made two distinct marks on your map does not mean there are two distinct objects in the territory. (Conversely, one might try to define an object by a property that doesn’t actually uniquely characterize it; making only one mark on your map doesn’t mean there’s only one object in the territory.) Your argument still fails to go through, because you’re still trying to argue that we can deduce that two objects are distinct because our representations of them have different properties.

  68. James Gallagher Says:

    correction to my post #61 above

    Apparently many Quantum Computing algorithms do not rely on continuous time evolution (of the Hamiltonian)

    Ok, then I need to concede that (large-scale) QC is viable, and almost certainly useful, even in a simplistic “discrete time” model (… and maybe STFU. 😉 )

  69. Jeremy Says:

    Scott, do you have a more detailed explanation of the Afshar experiment? The Wikipedia article lists countless explanations from all different points of view, and I don’t grasp it enough to understand how much is genuine confusion in the field and how much is just different ways of proving the same theorem. Your brief comment about how if you just do the calculations with wavefunctions there is nothing to be confused about makes sense to me, but then why are all these physicists writing different responses and rebuttals to his conclusions?

  70. Scott Says:

    Jeremy #69: I’m very far from an expert on this, so you should probably ask someone else!

    The main thing I can assure you, from long experience, is: never, ever conclude, from the mere fact that there’s a large number of responses and rebuttals written about something, that it must be deep and important.

    Here’s an analogy: suppose some clever individual named Aarchaf puts out a paper purporting to prove that the Pythagorean theorem doesn’t follow from Euclid’s axioms. As such things do, the piece gets widespread media attention, so that motivates mainstream mathematicians to write rebuttal papers. But the rebuttals don’t exactly agree about where the problem lies, so then that motivates a secondary literature analyzing the rebuttals. Some papers point out that, well, OK, yes, Euclid wasn’t fully precise (by today’s standards) in stating his axioms, and there’s a weird (mis)interpretation of them that does lead to something other than what today we would call “Euclidean geometry,” and under that misinterpretation, the Pythagorean theorem (along with most facts of elementary geometry) doesn’t hold. Some people pounce on this and say, “aha, so Aarchaf was RIGHT! no backsies!” But other people say the whole discussion is kind of pedantic and silly, since clearly the misreading isn’t how anyone has understood Euclid’s axioms for the past 2300 years of mathematics, or how Euclid himself would have understood them if asked. Math is a lot more robust than the pouncing laypeople (laysayers?) give it credit for; it doesn’t crumble because of what might have been, or might as well have been, a papyrus transcription error somewhere in ancient Greece.

    Academics will probably continue to write papers about this “controversy” for years, until they get tired of it, because that’s what academics do: they write papers! Indeed, the mere fact that so many papers have already been written is all the justification they need to write further papers, either buttressing or attacking the previous ones.

    So, if you cared, you could spend years of life delving into this literature. As an outsider, however, probably the highest-order bit to take away is that, gee whiz, the Pythagorean theorem does follow from the axioms of Euclidean geometry (as anyone understands them today), and does so by any of the proofs you learned in high school. And to whatever extent this whole big literature made anyone think that was in doubt, it’s been a negative contribution to human knowledge.

  71. prasad Says:

    @wolfgang

    You can doubt that your mother’s husband is your father. But you cannot doubt that your father is your father. Therefore, with perfect generality, all people everywhere know they were conceived out of wedlock 🙂

  72. wolfgang Says:

    @prasad

    Nice joke.

    But notice that Descartes’ doubt is not about whether I am I (or whether my father is my father).

    As for your example, if you have reasonable doubt that your mother’s husband exists, then I would say there is equal suspicion that you were born out of wedlock.

  73. Ben Standeven Says:

    @wolfgang: Ah, maybe I see your problem: Doubting that two things are equal is not the same as denying that they are equal. Descartes’ argument justifies the former, but not the latter.

  74. wolfgang Says:

    @Ben et al.

    Let me explain it one more time, using your biological father F and the husband H of your mother as an example. Write D(X) for my doubt about X.

    D(F) > 0, but to the extent we trust biology your doubt will be quite small; let’s say D(F) = e > 0.
    D(H) may be much larger initially; perhaps you have never even met H. In other words, initially it is unclear whether F = H, but of course we cannot rule it out.

    We cannot rule it out because, as mentioned above, D is time dependent; so if this is an important issue to me, and if F = H, it should be possible, at least in principle, to reduce D(H) down to e (perhaps by visiting H, etc.), so that D(F) = D(H).

    In general, if A = B, then there should be a way to get to D(A) = D(B), at least in principle.

    Even more, in science it should be possible to provide direct evidence that F = H, e.g. with a genetic test, if F is indeed identical with H.

    But D(I) = 0 while D(B) = e > 0, and I can *never* get the two to be equal.
    In addition I provided at least one good argument why I = B leads to contradictions (read the argument above about the supercomputer) *unless* we accept Descartes’ argument in the first place.

    In other words I cannot be B in the same sense F can be H.

    ps: This was really my last comment on this topic.

  75. Gil Kalai Says:

    Hi Scott, I liked the essay. It gives an excellent (and very reasonable) account of how a reality of quantum computers would bear on choosing an interpretation for QM. (And also a very nice description of these interpretations in their own right, and of various other cool ideas.)

    It is also an interesting question what the consequences would be (if any at all) for choosing a QM interpretation from a failure of QC, due to some additional principle on top of QM that disallows QC (or, as Preskill puts this possibility: because the Hamiltonian or the quantum state of the world must impose noise that overwhelms fault-tolerant quantum protocols, and “quantum computational supremacy” in general). I must admit that while devoting much effort to such a possibility, which I find very interesting (and rather plausible), I did not think much about how it would reflect on the various interpretations of QM.

  76. Gil Kalai Says:

    In view of Scott #13 let me say how I see things.

    First regarding the possibility that quantum noise is an obstacle in principle for QC. I see four possible points of view.

    a) Noise in quantum systems is not a basic physics issue but primarily an engineering issue: the quantum evolution of the world itself is unitary; noise is just a “subjective matter,” because in nature you cannot distinguish what the noise is; etc. Consequently, the unitary model of quantum computation is the correct description of the computational world. Everything that a noiseless quantum circuit can do (in a polynomial number of steps) *must* be possible.

    Common to the next points of view is to regard a) as wrong or at least questionable. There are good reasons for that both from the point of view of physics and the point of view of computation, since understanding the modeling for errors in a computational model is crucial and so is modeling noise/approximation in many areas of quantum physics.

    b) It is known (and quite easy to show) that quantum circuits work with a 1/poly(n) noise level, and all our TCS intuitions should tell us that reaching a 1/poly(n) noise level is, in principle, feasible.

    c) Reaching a 1/poly(n) level of noise could represent an exponential effort, which would doom QC to be infeasible even in principle. But reaching a constant fixed level of noise must be only an engineering task, and a constant small fixed level of noise enables quantum fault-tolerance via the threshold theorem. (This can be implemented in software or by some anyonic hardware.) Now that the threshold theorem has been proved, QC is possible in principle.

    d) Reaching a constant level of noise for any implementation of universal quantum circuits or quantum fault-tolerance requires an exponential effort in the number of qubits, and hence is doomed to fail. For any such attempt, the noise level scales up at least linearly with the number of qubits.

    (Of course there could be variations. For example, it can be reasonable for one to say that the threshold theorem and c) reaffirm his/her a priori belief in a).)

    I think that a) is not correct and this is a major issue. I would say that b) was not very convincing even when it was relevant. c) is certainly appealing and powerful. I personally think that d) is the correct alternative. (And it also gives an interesting “regime” of noisy quantum systems for study even if it is incorrect.)

    Putting a) aside (and I think it was *very* instrumental that in the ’90s and ’00s researchers did put a) aside), it is certainly reasonable to think that a constant fixed noise level is more “natural” than a noise level that scales up with the number of qubits. Just as the first guess for gravitation could have been that it scales like 1/D and not like 1/D^2.

    But there are good reasons to prefer d) over c), and, in any case, there is a variety of experimental attempts in the next few years that will put (to some extent, at least) the alternatives c) and d) to the test. (A toy numerical sketch of the contrast follows below.)
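
    To make the contrast between c) and d) concrete, here is a toy Python sketch. (This is only my own illustration: the constants are invented, and the formula is just the usual surface-code heuristic, meaningful below threshold, not a statement about any specific device.)

        # Surface-code heuristic: p_L ~ 0.1 * (p / P_TH)**((d + 1) // 2),
        # meaningful only for p < P_TH. All constants here are invented.
        P_TH = 1e-2

        def logical_error(p, d):
            return 0.1 * (p / P_TH) ** ((d + 1) // 2)

        for d in (3, 5, 11, 21):
            n = 2 * d * d                  # rough qubit count of one patch
            p_c = 1e-3                     # view c): constant physical noise
            p_d = 1e-4 * n                 # view d): noise grows linearly in n
            print(f"d={d:2d} n={n:4d}  c): {logical_error(p_c, d):.1e}"
                  f"  d): {logical_error(p_d, d):.1e}")

    Under c) the logical error rate is driven down rapidly as d grows; under d) the physical rate eventually crosses the threshold, and increasing the code distance then only makes things worse.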

    Second, on the more general issue of quantum physics we can think about

    A) QM is only an approximation for a large number of particles.

    B) Everything can formally be written in the basic language of QM. (Though we often don’t know how, even for well-understood and successful physics theories like the standard model.) But not every law of physics can be derived from the basic language of QM: some laws of physics represent the specific initial conditions and Hamiltonian of our world.

    C) Every law of physics can be *derived* from the basic language of QM.

    Like Scott, I also disagree with people who are blasé about possibility A). (But I don’t mind people who seriously study it.) It is also not clear why a hypothetical failure of QM for a large number of particles would explain the failure of achieving universal computation or BosonSampling on, say, eight qubits/bosons. It is a little like thinking that a failure to compute Ramsey numbers is because the laws of arithmetic are only approximations for very large integers, when, in fact, we may never be able to compute R(7,7) as a matter of principle.

    C) is a fairly common view, but I think C) is incorrect (and not useful) and that B) is a correct view of quantum physics. In any case, even if you think that C) is correct, you may realize that we do not know how to derive even basic laws of physics from the rudimentary parts of QM. (Just as even believers in P=NP should be open to the idea that we do not *yet have* efficient algorithms for NP-complete problems.)

    Finally, regarding Scott’s comment #13:
    Usually I am a little confused by the use of superlatives on Shtetl-Optimized. It is hard for me to tell if exaggerations are meant to be provocative, to entertain, to be nice to people, or to be taken at face value, or all of the above. So I am not sure what Scott precisely means by saying that “If there were such an impossibility principle, I would also count that as ‘spectacularly important new physics.’” Here is my own opinion regarding it: I think that an in-principle reason for the failure of QC would indeed (likely) count as “spectacularly important new physics.”

    Two comments:

    1) Part of the spectacularly important physics has already happened: the notion of quantum computers/circuits; the notions of quantum error-correction and quantum fault-tolerance; and some other ideas and concepts from modern quantum information/quantum computation. Moreover, a principle of no quantum fault-tolerance may have substantial overlap with some other vastly important existing principles and methods needed to go from the very rudimentary parts of QM to more advanced quantum physics. However, even taking these into account, what would be left to do regarding an in-principle reason for the failure of QC would likely be spectacularly important new physics.

    2) Operating quantum computers (and even more modest manifestations of quantum supremacy) would represent a spectacularly important new reality, spectacularly important new physics (and new chemistry), spectacularly important new technology, and a spectacularly important new theory of computing.

    (Of course, we can also simply fail both in progressing the QC agenda and in drawing useful insights from such a failure.)

  77. Jay Says:

    Gil, your opinion is as usual very interesting, but I find it hard to follow the first part of your post #76. Specific questions:

    You say a-d are different points of view. But there seems to be little difference between a (noise is just an engineering issue), b (reaching 1/poly(n) is in principle feasible), and c (a constant level is in principle feasible). Is it correct to say that b and c are just different ways to defend opinion a? In addition, is it true that the various threshold theorems prove c (for a big collection of noise models), but not b?

    Regarding d, is it one point of view or two? E.g., is saying “an exponential effort with the number of qubits” truly the same thing as saying “the noise level scales up at least linearly”?

    Moreover, is point of view d *alone* consistent with threshold theorems, or must it be supplemented by the hypothesis that the noise level is at least somewhat high at the smallest scales? I mean, if the noise level is low enough at the smallest scale, then the properties you suggest couldn’t hold unless the threshold theorems fail. Is that what you’re suggesting, or do you suggest that the noise level would already be too high even at the smallest scales?

    If that’s the former, then do you have a model of noise that escapes threshold theorems? If that’s the latter, do you have some intuition about what kind of Hamiltonian or initial conditions could force this feature?

  78. Gil Kalai Says:

    Hi Jay, I would say that point of view a) is orthogonal to b-c-d.

    a) says that noise in principle is an engineering matter and as far as the laws of physics are concerned it *must* be the case that everything that a noiseless quantum circuit can do can also be achieved in the noisy quantum reality.

    A priori, I think this point of view is not correct. (But as I said, you could regard the threshold theorem as giving it some a posteriori support.)

    Now very briefly: b) says that a) is incorrect, but once we can do universal computation with a 1/poly(n) level of noise, the game is over. c) says that a) and b) are incorrect, but once we can do universal computation with a small constant level of noise, the game is over. And d) says that a), b), and c) are all incorrect.

    At present the main “dispute” would be between c) and d) but let me say a few words about b).

    b) represents a point of view that people could have taken before the threshold theorem was known. Today, people do take a similar view regarding BosonSampling (with a few tens of bosons), where there is no “threshold theorem.” People also take this point of view today to justify the possibility of having “small” universal quantum circuits without full fault-tolerance, say on 10-20 qubits.

    Now for c) and d)

    c) says that with feasible/moderate engineering effort we can create a quantum circuit on n qubits with a small constant noise level. If true, then (by the threshold theorem, under some further assumptions) quantum fault-tolerance is possible. Again, even without fault-tolerance, it suggests that, with feasible engineering effort, you can have universal quantum computers on 5, 10, or even a few tens of qubits.

    d) reflects what I think is the situation. For implementations of universal quantum circuits, or even just the quantum error-correcting codes needed for fault-tolerance, the error rate will scale at least linearly with the number of qubits, and the engineering effort required to push it below a constant is superexponential in the number of qubits. This is my point of view. It has two parts: first, that for given implementations, the error rate scales up at least linearly with n; and second, that the effort required to push the noise below a constant would explode exponentially, making any such implementation fail for a small number of qubits. The second part seems a reasonable (but not a mathematical) consequence of the first.

    You are correct that to forbid quantum supremacy (with the threshold theorem and without it), d) should crucially hold for a rather small number of qubits. (If this is what you mean by “smallest scale.”) This, of course, refers to the constants involved in these asymptotic statements. We know that we can now create single qubits with error rates below the threshold. My point of view says that it will not be possible to group a hundred such qubits to demonstrate a surface code, and it will not be possible to demonstrate universal computation on even a small number of qubits (say 8, as I wrote in the previous comment). There are reasons, in principle, to think that the failure will indeed occur for a small number of qubits (and, for BosonSampling, a small number of bosons).

    “If that’s the former, then do you have a model of noise that escapes threshold theorems? If that’s the latter, do you have some intuition about what kind of Hamiltonian or initial conditions could force this feature?”

    Here I lost you about what the former/latter are. But overall the answer is yes on both counts. There is a modeling of noise* which is relevant for the asymptotic statement d). (I expect it to escape any threshold theorem.) And there are good theoretical reasons why implementations of quantum circuits will fail on a small number of qubits (well before it becomes impossible for a classical computer to simulate the situation).

    (*) There is some dispute, however (e.g. between me and Aram), about whether my modeling, which is given by implicit conditions, is kosher.

  79. Jay Says:

    Thanks, this helps.

    > If this is what you mean

    Yes

    > There is a [somewhat disputed] modeling of noise (…) I expect it to escape any threshold theorem

    Ok. Do you expect it (or a variation on it) to become kosher soon? (from the point of view of, say, most experts discussing this topic)

    > there are good theoretical reasons for why implementation of quantum circuits will fail on a small number of qubits

    Can you tell more? and, more specifically, do you have some intuition about what kind of hamiltonian or initial conditions could force this feature?

  80. Gil Kalai Says:

    Jay, of course there is a completely kosher modeling which supports d), just like the modeling which supports c). The Hamiltonian models for noisy quantum circuits, starting with Terhal and Burkard (2005), assume that the noise rate is a small constant, or that a certain norm describing the noise is bounded by a small constant; all you need to move from c) to d) is to replace this assumption by the assumption that the rate (norm) increases linearly with the number of qubits.

    My (less obvious and more interesting, yet disputed) modeling draws from d) consequences (implicitly described) for general implementations of quantum circuits. (I think that they are kosher as well…)

    Regarding the second question: for all we know, noisy quantum states describing implementations of universal noisy circuits with a small number n of qubits can be compared to implementations of noisy BosonSampling with a small number n of bosons. For the latter model, Kindler and I showed that the states can be approximated by low-degree polynomials, and this is likely to apply also to the first model (for low-entropy states). For our result, all you need to assume is that the initial conditions are subject to Gaussian noise of some kind.
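
    (To give a feel for the mechanism, here is a toy Python sketch, in the Boolean setting rather than BosonSampling itself, so take it only as an analogy. It illustrates the standard fact that rho-correlated noise damps the degree-k Fourier weight of a function by rho^(2k), leaving an essentially low-degree object.)

        import itertools
        import numpy as np

        n, rho = 8, 0.9
        points = list(itertools.product((-1, 1), repeat=n))

        def f(x):                          # majority on 8 bits (ties -> -1)
            return 1 if sum(x) > 0 else -1

        # Fourier weight at degree k: W_k = sum over |S|=k of hat_f(S)**2,
        # where hat_f(S) = E_x[ f(x) * prod_{i in S} x_i ].
        w = np.zeros(n + 1)
        for k in range(n + 1):
            for S in itertools.combinations(range(n), k):
                c = np.mean([f(x) * np.prod([x[i] for i in S]) for x in points])
                w[k] += c * c

        # rho-noise damps hat_f(S) by rho**|S|, hence W_k by rho**(2k):
        noisy_w = w * rho ** (2 * np.arange(n + 1))
        for k in range(n + 1):
            print(k, round(float(w[k]), 4), round(float(noisy_w[k]), 6))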

    This gives a good theoretical reason that breakdown will occur for a handful of qubits, since we do not expect “bounded-depth machines” that in practice compete well with (or are even superior to) the full power of classical computers.

  81. Jay Says:

    >This gives a good theoretical reason that breakdown will occur for handful of qubits

    Gil, again I’m not following your reasoning here. From my layman perspective, it seems very natural that noise should scale up at least linearly* with the number of qubits for boson sampling, absent error correction, or for any quantum circuit, absent error correction.** But this is fully consistent with QC being physically possible; e.g., we could both have polynomial noise for boson sampling, absent error correction, and negligible noise for boson sampling, with error correction. So, what is your line of thought according to which the former is evidence against the latter?

    * actually I would have expected superpolynomial
    ** in the same vein and in the reversed direction, I also don’t get what intuition about the physical world would argue for position b, i.e. that a noise level scaling as 1/poly(n) should be achievable.

  82. Gil Kalai Says:

    Jay, let me explain from scratch the good theoretical reason to think that breakdown in achieving universal or “interesting” states will occur (“in principle”) for a handful of qubits/bosons, well before any interesting states needed for quantum fault-tolerance, and well before any “quantum supremacy” is manifested. (I will not rely on my a)-d) items.)

    Consider the following questions:


    1) Will it be possible to manifest “quantum supremacy” with BosonSampling via 10, 20, or 30 bosons? (My view: no!)

    2) Will it be possible to create small universal quantum computers on 10 or 20 qubits? (Again, my view: no!)

    3) Will it be possible to create, on 100-200 qubits, quantum error-correcting codes which give a substantial improvement in the noise level for the encoded qubits? (Again, my view: no!)

    These are tasks that people are fairly optimistic about demonstrating experimentally in the next few years. The third task (or something analogous in “anyonic hardware”) will be necessary as a step toward quantum fault-tolerance.

    Here is the argument:

    Noisy BosonSampling, and (likely) also noisy realizations of quantum circuits (without fault-tolerance), represent a very, very, very low computational class: much below P, and even well below bounded-depth computation. (This is what my work with Kindler shows for BosonSampling.) The physical implementations of those are therefore, computationally speaking, very primitive computing machines. (For the latter model, when it comes to evolutions leading to low-entropy states.)

    There is no precedent for such low-level computational machines or algorithms performing in practice competitively with a full classical computer, not to speak of superior performance. This is the theoretical reason that “breakdown” (for items 1-3) will occur for a small number of bosons/qubits. Again, success in 1) (and likely 2 and 3) would amount to demonstrating a bounded-depth computational device that performs better *in practice*, in certain interesting regimes, than a classical computer. (This goes against our experience with drawing practical conclusions from asymptotic statements.)

    Quite interestingly, this low computational complexity level (“noisy low-degree polynomials”)  allows creating robust classical information! (And it also describes efficiently-learnable states!)
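
    (As a toy illustration of the classical half of that claim, here is my own sketch, assuming numpy, much cruder than the actual model. Majority-voting over repeated copies is itself a very low-level computation, yet it suppresses classical error exponentially.)

        import numpy as np

        rng = np.random.default_rng(0)
        p, trials = 0.1, 100_000           # 10% flip chance per copy

        for n in (1, 5, 15, 45):           # odd sizes, to avoid ties
            flips = rng.random((trials, n)) < p
            wrong = flips.sum(axis=1) > n / 2    # majority of copies flipped
            print(n, wrong.mean())               # empirical failure rate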

    Of course, these matters are being tested experimentally. I think that experimentalists should take note of this argument (and my other works and models, of course), but they should certainly give priority, at this stage, to their own points of view, intuitions, and expectations. It will be very interesting to see what the experiments tell us.

  83. Vikram Says:

    Hi Dr. Aaronson,

    I really enjoyed this piece and I’m a student myself studying computational biology.

    I have lately become very interested in the public understanding of science, and in writing pieces that explain some of the systems-biology stuff in simple language. In fact, I have written some lately, but I feel that I am running into trouble: my writing needs to evolve.

    I’m a long-term reader of your blog and I’ve seen just how well you can explain concepts to someone not in the field. From your years of writing and experience, can you recommend some books or papers that would be accessible to a student like me and would help me improve my writing? Thanks!

  84. Scott Says:

    Vikram #83: By far the most important thing you can do is read voraciously, and carefully study the writing you like to see what makes it work and what you do or don’t want to emulate about it. Underline sentences. Silently repeat them to yourself; absorb the rhythm, the choice of vocabulary, the order in which ideas enter the stage. Constantly ask yourself: why did the author say it this way, rather than some other way? How could he or she have said it better?

    From reading (e.g.) Carl Sagan, Richard Dawkins, Steven Pinker, Rebecca Goldstein, Steven Weinberg, Alan Sokal, Martin Gardner, and Bertrand Russell, I learned as much about writing as I did about whatever it was they were writing about. Not that you should only read popular science—quite the contrary! But if you’re interested in using language as a vehicle to explain, reason, cut through pomp and obscurity, and sometimes amuse, rather than (say) to orchestrate emotional moods or signal your loyalty to a cause or any of the other things people use language for, authors like the ones I listed could certainly be a place to start.

    As for writing guides, you might enjoy Pinker’s recent The Sense of Style—it makes for a good contrast with Strunk & White, which is the classic of the genre.

  85. Jay Says:

    Well thanks Gil, now I think I see your point (noisy QC likely forms a low complexity class, which for a complexity theorist should suggest that noisy QC won’t be efficient). I’d dispute that this is strong evidence*, but whatever. One last technical point: does noisy BosonSampling (or noisy QC) truly represent a complexity class at all? If the increase in noise is not bounded, then the output will be random for all large enough systems, no?

    * because usual programs on usual computers do run well, even though most don’t bother to include error correction (which means that these programs will fail for large enough n, i.e. are not designed to preserve inclusion in P)

  86. John Sidles Says:

    July’s QIT news  Shtetl-Optimized readers seeking to concretely appreciate Gil Kalai’s postulates (of #75, 76, 78, 80, 82) will be greatly assisted by the links in John Preskill’s recent tweet:

    Verstraete hails Landau, Vazirani & Vidick @IQIM_Caltech: simulating correlated 1D systems ‘has finally been cracked’

    Preskill’s tweet references Frank Verstraete’s “Quantum Hamiltonian complexity: Worth the wait”, which admirably reviews Zeph Landau, Umesh Vazirani and Thomas Vidick’s breakthrough article “A polynomial time algorithm for the ground state of one-dimensional gapped local Hamiltonians” (both in Nature Physics, July 2015).

    In brief, the Landau, Vazirani, and Vidick article surveys a world in which “the physically relevant corner of Hilbert space” is only “a tiny (polynomial) fraction of [Hilbert] exponential space”:

    Think of the Hilbert space as a vast lake that is lightly frozen—the rare solidly frozen stretches corresponding to those states that have a succinct MPS description.

    The goal of the algorithm is to pull a sled to a distant island, representing the actual ground state, during a blizzard. Success requires navigating a route that at all times keeps the sled on solidly frozen ice. The low visibility forces the algorithm to map its route one step at a time.

    At any iteration, the algorithm restricts its exploration to a polynomial dimensional subspace of the Hilbert space [such that] a polynomial time algorithm provably finds the ground state.

    Kudos to Landau, Vazirani, and Vidick for a great mathematical result and for vivid imagery!
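
    For readers who have never met a matrix product state, here is a minimal numpy sketch (my own toy, not from the article) of what a “succinct MPS description” buys: an n-qubit GHZ state, which naively requires 2^n amplitudes, stored as one bond-dimension-2 tensor reused n times.

        import numpy as np

        n = 40                             # 2**40 amplitudes stored naively
        A = np.zeros((2, 2, 2))            # A[left, physical, right]
        A[0, 0, 0] = 1.0                   # the "all zeros" branch
        A[1, 1, 1] = 1.0                   # the "all ones" branch

        def amplitude(bits):
            """Amplitude of |bits> in (|00...0> + |11...1>)/sqrt(2)."""
            v = np.ones(2) / np.sqrt(2)    # boundary vector
            for b in bits:
                v = v @ A[:, b, :]
            return v.sum()

        print(amplitude([0] * n))                  # ~0.7071
        print(amplitude([1] * n))                  # ~0.7071
        print(amplitude([0, 1] + [0] * (n - 2)))   # 0.0

    The rare solidly frozen stretches of the lake are precisely the states that admit this kind of compression.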

    Engineers can be forgiven for wondering “Why is this news?” The idea that (in essence) Gil Kalai’s postulates generically hold within “a physically relevant corner of Hilbert space” (in Verstraete’s phrase) that is cut out by “noisy low-degree polynomials” (in Gil Kalai’s phrase) has been around at least since Verstraete and Cirac’s preprint “Renormalization algorithms for quantum-many body systems in two and higher dimensions” (arXiv:cond-mat/0407066 … a seminal article that is referenced by Landau, Vazirani, and Vidick).

    What’s New  Landau, Vazirani, and Vidick have proved rigorously what practicing simulationists have long “known” effectively: PTIME computational resources suffice to simulate large classes of quantum Hamiltonians … a class so large that it potentially (plausibly?) extends to all the quantum Hamiltonians that nature provides.

    Conclusion  Feynman’s maxim “A very great deal more truth can become known than can be proven” suggests that quantum researchers may come to “know” the plausible truth of Kalai-type conjectures long before they can be proved.

    Moreover, the breakthrough work of Verstraete, Cirac, Landau, Vazirani, and Vidick exhibits a concrete mathematical strategy by which Kalai-type conjectures might eventually be proved. Which would be cool!

  87. Scott Says:

    John Sidles #86: Yes, the Landau-Vazirani-Vidick result is great. But it only works for 1D gapped systems, and there’s an excellent reason for that: as Verstraete points out, the 2D gapped case is known to be NP-hard!

  88. John Sidles Says:

    Scott Aaronson #87: Yes, the 2D gapped case is known to be NP-hard. But don’t Gil Kalai’s “noisy low-degree polynomials” similarly encompass NP-hard ground-state problems? There are natural examples of this: the reduction of 3SAT to ground-state problems in the simulation of classical bits in thermal baths with error-correction, for instance.

    The Kalai-world postulate  For nature’s Hamiltonians, quantum dynamical simulations are only polynomially harder than classical dynamical simulations.

    Implications  Kalai-world quantum SAT-solvers can offer at most a polynomial speedup versus classical SAT-solvers … ditto for quantum factoring engines versus classical factoring engines … ditto for biomolecular ligand-binding simulations on quantum computers versus classical computers.

    It is well to keep in mind, however, that “mere” polynomial speedups have commonly proven to be utterly transformational (fast Fourier transforms are an early paradigmatic example; see the toy timing sketch below).
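
    (A concrete reminder, assuming numpy: even at n = 2048 the O(n log n) FFT already crushes the naive O(n^2) transform, and the gap only widens with n.)

        import time
        import numpy as np

        n = 2048
        x = np.random.randn(n)

        # Naive O(n^2) DFT via an explicit matrix...
        k = np.arange(n)
        W = np.exp(-2j * np.pi * np.outer(k, k) / n)
        t0 = time.perf_counter()
        naive = W @ x
        t1 = time.perf_counter()

        # ...versus the O(n log n) FFT.
        fast = np.fft.fft(x)
        t2 = time.perf_counter()

        print(np.allclose(naive, fast))    # same answer
        print(t1 - t0, "vs", t2 - t1)      # wildly different cost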

    Conclusion The prospect of “mere” polynomial speedups provides a more-than-ample practical reason for complexity theorists to investigate quantum-computing algorithms, even in the event that Kalai’s postulates are someday appreciated to hold for nature’s Hamiltonians.

  89. Gil Kalai Says:

    Jay (#85), what I say is that the basic computational power of noisy quantum systems with a small number of elements (qubits/bosons) is described by a very low-level complexity class: “approximating low-degree polynomials.” This will not allow us to demonstrate quantum supremacy for quantum systems with a small number of elements (without noise, quantum supremacy could be demonstrated for systems with a few tens of elements), and it will not allow us to create quantum error-correction.

    The issue is not some unexpected behavior for quantum systems with many elements, but the rather expected behavior of noisy systems with a small number of elements, a behavior that does not allow scaling.

    John (#88), Actually I expect that for this extremely low computational class, ground states *can* be approximated efficiently. What we can expect for this class is:

    1) Efficient learning is possible,

    2) Robust classical memory is possible,

    3) Robust quantum memory is impossible,

    4) Ground states admit efficient approximation.

  90. Jay Says:

    Dear Gil, does that mean “yes this is a proper complexity class” or does that mean “no this is not but never mind”?

  91. Jay Says:

    Ok, I’ll take that as “no it’s not, but admitting it’s not would be more embarrassing than you think”. 🙂

  92. Gil Kalai Says:

    Jay, yes, my thinking is that the ability to approximate low-degree polynomials gives the limit for the probability distributions that quantum devices can achieve at the small scale of a few qubits/bosons, and this does not allow quantum supremacy or quantum fault-tolerance.

  93. Sam Says:

    Scott,

    I’m quite new to QM and QC, and I didn’t quite know what you were referring to with this:

    “the idea…that quantum mechanics, as currently understood, must be merely an approximation that works for a few particles at a time; and when systems get larger, some new principle must take over to stop the exponential explosion.”

    Is there a better name for the above? I’d like to read up more about it elsewhere. Thanks!

  94. Gil Kalai Says:

    Jay (my answer #92 to #90; #85 was before seeing #91): The computational power described by distributions that can be approximated by low-degree polynomials *is* a non-trivial class, in the sense that the outcome is *not* going to be random for large systems. As I said, I think that

    a) for small systems (e.g. all attempts for BosonSampling; all implementations of quantum circuits) this is the relevant computational class,

    b) it will not allow quantum fault-tolerance

    c) it will not allow manifesting in practice superior-to-classical computation,

    d) because of b) it will still be a very useful class for quantum, approximately pure, distributions at larger scales.

    Your thinking seems to be that in reality, without fault-tolerance, the behavior will be even worse (all that is left is completely random distributions), but that it will be manifested only for large scales (large number of qubits/bosons) like for classical computers.

    Our results don’t support the more pessimistic part of your thinking. (I also see no justification for your pessimistic expectation of a superpolynomial increase of the error rate (*) in #81.)

  95. Scott Says:

    Sam #93: I’d simply call the idea “in-principle quantum computing skepticism,” or “the Extended Church-Turing Thesis.”

    I’m too lazy to look up links, but if you want skeptical viewpoints about QC, you can google for any of the following:
    – Gil Kalai’s blog debate with Aram Harrow
    – Leonid Levin’s “On quantum computing”
    – Oded Goldreich’s essay of a similar title
    – arXiv preprints by Michel Dyakonov and Robert Alicki
    – Gerard ‘t Hooft’s arXiv preprints about cellular-automaton models for physics
    – Stephen Wolfram’s A New Kind of Science

    Meanwhile, for my responses, you can try:
    – My “Multilinear Formulas and Skepticism of Quantum Computing” paper
    – My 3-page paper “Are quantum states exponentially long vectors?”
    – The quantum computing section of “Why Philosophers Should Care About Computational Complexity”
    – My PowerPoint talk “So You Think Quantum Computing Is Bunk”
    – My blog post “Whether or not God plays dice, I do” (and a bunch of other posts)

  96. James Gallagher Says:

    Sam #93, Scott ’95

    Well Scott is being a little evasive in his reply.

    Simply,

    Is there a State Vector and Schrodinger Evolution equation which applies to the entire universe?

  97. John Sidles Says:

    Regarding Kalai’s skeptical postulates (as set forth, e.g., in Gil’s eprints arXiv:quant-ph/0607021 and arXiv:1106.0485), an arc of more than 100 arxiv eprints by Dorje Brody, Lane Hughston, Abhay Ashtekar, and Troy Schilling provides mathematical foundations for these postulates.

    The beginning of the arc is anchored in student-friendly eprints like Ashtekar and Schilling “Geometrical Formulation of Quantum Mechanics” (1997, arXiv:gr-qc/9706069v1) and Brody and Hughston “Geometric Quantum Mechanics” (1999, arXiv:quant-ph/9906086). The arc extends to the present era in articles like Ashtekar’s “Introduction to Loop Quantum Gravity” (2012, arXiv:1201.4598) and Brody and Hughston “Universal Quantum Measurements” (arXiv:1505.01981v1). In aggregate, the arXiv presently offers more than 100 articles by these authors, which can be read as a compendium of post-Hilbert analyses of quantum dynamics.

    The two-decade span of these articles provides insight into career options for skeptical-postulate researchers. Schilling joined the defense/intelligence/cryptography research community (not much is known about his subsequent research). Ashtekar of course has made a brilliant career in quantum gravity. Brody and Hughston have continued to work actively in fundamental physics research, while pursuing with great success parallel academic careers in “city” financial analysis.

    Many more skeptic-friendly math-centric eprint-arcs can be constructed; see for example the already-vast and still-burgeoning literature relating to quantum dynamics on algebraic state-spaces (e.g., Landau, Vazirani, Vidick, Cirac, Verstraete, etc., of #86).

    At present there are no textbooks that accessibly summarize this material … but it will be surprising (to me anyway) if this situation persists much longer. On the other hand, ongoing advances in category theory and algebraic geometry have scarcely begun to be assimilated into this literature, so plenty of lines of investigation remain open and largely unexplored.

    Conclusion  The mathematical methods that are associated to serious investigation of Kalai’s skeptical postulates are proving to be sufficiently interesting and powerful as to support an abundance of academic career opportunities for young investigators.

  98. Scott Says:

    James #96: What you call “evasion,” I prefer to call “bending over backwards to be fair to the other side, and give pointers to its views.” 🙂

  99. John Sidles Says:

    James Gallagher  requests a direct reply to “is there a State Vector and Schrodinger Evolution equation which applies to the entire universe?”

    One such answer, which is mathematically compatible with both sides of the Kalai/Harrow debate and is physically compatible with all existing experiments, is this one: “Yes. Moreover this universal state-vector has a PSPACE algebraic representation whose Hamiltonian evolution can be unraveled with PTIME computational resources.”

    What would it take to refute this hypothesis? Scalable scattershot BosonSampling experiments whose outputs exactly sampled from certain permanent-related distributions would physically generate “state-vectors having a PSPACE algebraic representation whose Hamiltonian evolution cannot be unraveled with PTIME computational resources” (conditioned upon certain plausible complexity theory postulates, as set forth in Aaronson and Arkhipov arXiv:1011.3245 and arXiv:1309.7460).

    Such BosonSampling experiments would in the opinion of many (including me) rank among the most seminal scientific advances of this or any century. So it is a very considerable accomplishment even to conceive such experiments (as Scott and Alex Arkhipov did), and thereby create an ongoing and exceptionally inspirational challenge to experimentalists and engineers.

    Please let me say too, that Shtetl Optimized admirably airs diverse views on this tough subject, and Scott’s walk-the-talk respect for scientific diversity is not the least of his many outstanding contributions to the academic community. Good on ya, Scott!

  100. Gil Kalai Says:

    “Is there a State Vector and Schrodinger Evolution equation which applies to the entire universe?”

    Many (probably even most) skeptics of quantum computers will answer “YES” to this question. Certainly, this applies to Alicki, Dyakonov, and me. The open question is whether the state vector and Hamiltonian of the entire universe allow quantum fault-tolerance and “quantum supremacy” or not.

    Both the optimists and the pessimists regarding QC need to make some extra assumptions on the nature of the quantum evolution describing our universe. (Of course, we do not have and we do not expect direct access to the Hamiltonian and state vector of the entire universe.)

  101. James Gallagher Says:

    Thanks Gil.

    But I now do wonder if you’re relying on anthropic arguments regarding the Hamiltonian?

    In some universes QC works, but in others it fails due to bad fault-tolerance characteristics of the Hamiltonian.

  102. John Sidles Says:

    James Gallagher postulates that  “In some universes QC works, but in others it fails due to bad fault-tolerance characteristics of the Hamiltonian.”

    An even stronger postulate — which is consonant with all experimental evidence and all theoretical understanding to date — might be

    In all universes that are governed by relativistic renormalizable (gauge-invariant) field theories, QC fails due to bad fault-tolerance characteristics of the Hamiltonian.

    If this postulate proves to be correct, then the “noisy low-degree polynomial” state-spaces of Kalai’s postulates — and of much other recent work, e.g. Swingle’s ‘S-Sourcery’ per #82 and Landau, Vazirani & Vidick per #86 — assume the role of non-renormalizable dynamics that Matthew Schwartz vividly characterizes in his free-as-in-freedom course notes for Harvard’s PHYS 253A: quantum field theory:

    Section 23.6
    Summary of non-renormalizable theories

    I want to emphasize again: non-renormalizable theories are very predictive, not just at tree-level (classically) but also through quantum effects. The quantum effects are calculable and non-analytic in momentum space.

    In fact, non-renormalizable theories are so predictive that it is often better to make a non-renormalizable approximation to a renormalizable theory than to use the full renormalizable theory itself.

    Examples include Einstein gravity instead of string theory, Heavy Quark Effective Theory (HQET) instead of QCD, Ginzburg-Landau theories in condensed matter, the Schrodinger equation instead of the Dirac equation, and chemistry instead of physics.

    Conclusion  Even in the event that scalable QC and BosonSampling experiments do someday prove to be feasible, it remains theoretically and experimentally plausible that “noisy low-degree polynomial” state-spaces suffice to describe (in Matthew Schwartz’ vivid phrasing) “all of chemistry but not all of physics.”

    Indeed, researchers like Brian Swingle (#82) are even beginning to argue that the “predict all of chemistry” outcome is not just plausible, but even likely.

    This is why the quantum dynamics and thermodynamics of Gil Kalai’s “noisy low-degree polynomial” state-spaces is well-worth studying.

  103. James Gallagher Says:

    Thanks John #102, I should really read up on some of those references.

    btw To clarify, by “anthropic” I was thinking that maybe processes like photosynthesis would not work in a universe with poor fault-tolerance for QC, and perhaps other crucial biological systems would also not be feasible.

    So (fairly) large-scale QC works in our Universe because otherwise we wouldn’t be here to observe it!

    edit: correct spelling of anthropic

  104. Muhammad A. Tirmazi Says:

    Out of curiosity, aside from Quantum Computing, are you also interested in Artificial Intelligence and Machine Learning? Stuff like Neural Networks?

  105. Scott Says:

    Muhammad #104: Yes! In fact I was an AI / machine learning person before I switched to quantum computing in grad school.

  106. Boris Borcic Says:

    Somehow I feel Many-Worlds conflates two things when it gets presented as the conservative interpretation. The first is the intuition that the Schrödinger equation, in contrast to other equations of physics, allows interpretation as recursive code, with observation and preparation events corresponding to frame changes (recursive calls or returns), while the frames pave and completely cover a vaster space…

    And that’s where I see a hitch: I feel mainstream Many-Worlds jumps to the conclusion that is at first blush strongly suggested by the realization of the “recursion” in Schrödinger’s equation, namely the strong suggestion of a single vaster space to be completely covered or paved by maximal exploitation of the frame-change liberties included with Schrödinger’s equation: the wave-function of the multiverse.

    I believe there’s space for a posture that adheres to the “observations are frame changes of Schrödinger’s equation” part while being wary of over-precipitously going forward to the maximal inductive generalization to a frame of all frames.