What happens when an unstoppable PR force hits an NP-hard problem? The answer’s getting clearer

Update (Jan. 23): Daniel Lidar, one of the authors of the “Defining and detecting…” paper, was kind enough to email me his reactions to this post.  While he thought the post was generally a “very nice summary” of their paper, he pointed out one important oversight in my discussion.  Ironically, this oversight arose from my desire to bend over backwards to be generous to D-Wave!  Specifically, I claimed that there were maybe ~10% of randomly-chosen 512-qubit problem instances on which the D-Wave Two slightly outperformed the simulated annealing solver (compared to ~75% where simulated annealing outperformed the D-Wave Two), while also listing several reasons (such as the minimum annealing time, and the lack of any characterization of the “good” instances) why that “speedup” is likely to be entirely an artifact.  I obtained the ~10% and ~75% figures by eyeballing Figure 7 in the paper, and looking at which quantiles were just above and just below the 10^0 line when N=512.

However, I neglected to mention that even the slight “speedup” on ~10% of instances only appears when one looks at the “quantiles of ratio”: in other words, when one plots the probability distribution of [Simulated annealing time / D-Wave time] over all instances, and then looks at (say) the ~10% of the distribution that’s best for the D-Wave machine.  The slight speedup disappears when one looks at the “ratio of quantiles”: that is, when one (say) divides the amount of time that simulated annealing needs to solve its best 10% of instances, by the amount of time that the D-Wave machine needs to solve its best 10%.  And Rønnow et al. give arguments in their paper that ratio of quantiles is probably a more relevant performance comparison than quantiles of ratio.  (Incidentally, the slight speedup on a few instances also only appears for certain values of the parameter r, which controls how many possible settings there are for each coupling.  Apparently it appears for r=1, but disappears for r=3 and r=7—thereby heightening one’s suspicion that we’re dealing with an artifact of the minimum annealing time or something like that, rather than a genuine speedup.)

There’s one other important point in the paper that I didn’t mention: namely, all the ratios of simulated annealing time to D-Wave time are normalized by 512/N, where N is the number of spins in the instance being tested.  In this way, one eliminates the advantages of the D-Wave machine that come purely from its parallelism (which has nothing whatsoever to do with “quantumness,” and which could easily skew things in D-Wave’s favor if not controlled for), while still not penalizing the D-Wave machine in absolute terms.
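
To make the two comparisons concrete, here is a minimal numpy sketch (my own illustration, not code from the paper) of “quantiles of ratio” versus “ratio of quantiles.”  The arrays sa_times and dwave_times stand for hypothetical per-instance solution times, and the exact way the 512/N normalization enters is my reading of the description above rather than the paper’s procedure.

    import numpy as np

    def compare(sa_times, dwave_times, N, q=0.10):
        """Illustrative only: 'quantiles of ratio' vs. 'ratio of quantiles'.
        sa_times, dwave_times: hypothetical per-instance solution times.
        N: number of spins in the instances being compared."""
        sa = np.asarray(sa_times, dtype=float)
        dw = np.asarray(dwave_times, dtype=float)

        # Per-instance ratios of simulated-annealing time to D-Wave time,
        # normalized by 512/N (convention assumed here: this removes the credit
        # the 512-qubit chip would otherwise get for sheer parallelism on
        # instances with N < 512 spins).
        ratios = (sa / dw) / (512.0 / N)

        # Quantiles of ratio: look at the tail of the ratio distribution that is
        # most favorable to the D-Wave machine (its best q-fraction of instances).
        quantile_of_ratio = np.quantile(ratios, 1.0 - q)

        # Ratio of quantiles: take each solver's own best (fastest) q-fraction of
        # instances and divide; the two sets of instances need not coincide.
        ratio_of_quantiles = (np.quantile(sa, q) / np.quantile(dw, q)) / (512.0 / N)

        return quantile_of_ratio, ratio_of_quantiles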


A few days ago, a group of nine authors (Troels Rønnow, Zhihui Wang, Joshua Job, Sergio Boixo, Sergei Isakov, David Wecker, John Martinis, Daniel Lidar, and Matthias Troyer) released their long-awaited arXiv preprint Defining and detecting quantum speedup, which contains the most thorough performance analysis of the D-Wave devices to date, and which seems to me to set a new standard of care for any future analyses along these lines.  Notable aspects of the paper: it uses data from the 512-qubit machine (a previous comparison had been dismissed by D-Wave’s supporters because it studied the 128-qubit model only); it concentrates explicitly from the beginning on comparisons of scaling behavior between the D-Wave devices and comparable classical algorithms, rather than getting “sidetracked” by other issues; and it includes authors from both USC and Google’s Quantum AI Lab, two places that have made large investments in D-Wave’s machines and have every reason to want to see them succeed.

Let me quote the abstract in full:

The development of small-scale digital and analog quantum devices raises the question of how to fairly assess and compare the computational power of classical and quantum devices, and of how to detect quantum speedup. Here we show how to define and measure quantum speedup in various scenarios, and how to avoid pitfalls that might mask or fake quantum speedup. We illustrate our discussion with data from a randomized benchmark test on a D-Wave Two device with up to 503 qubits. Comparing the performance of the device on random spin glass instances with limited precision to simulated classical and quantum annealers, we find no evidence of quantum speedup when the entire data set is considered, and obtain inconclusive results when comparing subsets of instances on an instance-by-instance basis. Our results for one particular benchmark do not rule out the possibility of speedup for other classes of problems and illustrate that quantum speedup is elusive and can depend on the question posed.

Since the paper is exceedingly well-written, and since I have maybe an hour before I’m called back to baby duty, my inclination is simply to ask people to RTFP rather than writing yet another long blog post.  But maybe there are four points worth calling attention to:

  1. The paper finds, empirically, that the time needed to solve random size-N instances of the quadratic unconstrained binary optimization (QUBO) problem on D-Wave’s Chimera constraint graph seems to scale like exp(c√N) for some constant c—and that this is true regardless of whether one attacks the problem using the D-Wave Two, quantum Monte Carlo (i.e., a classical algorithm that tries to mimic the native physics of the machine), or an optimized classical simulated annealing code.  Notably, exp(c√N) is just what one would have predicted from theoretical arguments based on treewidth; and the constant c doesn’t appear to be better for the D-Wave Two than for simulated annealing.  (For how such a scaling constant gets estimated, see the sketch after this list.)
  2. The last sentence of the abstract (“Our results … do not rule out the possibility of speedup for other classes of problems”) is, of course, the reed on which D-Wave’s supporters will now have to hang their hopes.  But note that it’s unclear what experimental results could ever “rule out the possibility of speedup for other classes of problems.”  (No matter how many wrong predictions a psychic has made, the possibility remains that she’d be flawless at predicting the results of Croatian ping-pong tournaments…)  Furthermore, like with previous experiments, the instances tested all involved finding ground states for random coupling configurations of the D-Wave machine’s own architecture.  In other words, this was a set of instances where one might have thought, a priori, that the D-Wave machine would have an immense home-field advantage.  Thus, one really needs to look more closely, to see whether there’s any positive evidence for an asymptotic speedup by the D-Wave machine.
  3. Here, for D-Wave supporters, the biggest crumb the paper throws is that, if one considers only the ~10% of instances on which the D-Wave machine does best, then the machine does do slightly better on those instances than simulated annealing does.  (Conversely, simulated annealing does better than the D-Wave machine on the ~75% of instances on which it does best.)  Unfortunately, no one seems to know how to characterize the instances on which the D-Wave machine will do best: one just has to try it and see what happens!  And of course, it’s extremely rare that two heuristic algorithms will succeed or fail on exactly the same set of instances: it’s much more likely that their performances will be correlated, but imperfectly.  So it’s unclear, at least to me, whether this finding represents anything other than the “noise” that would inevitably occur even if one classical algorithm were pitted against another one.
  4. As the paper points out, there’s also a systematic effect that biases results in the D-Wave Two’s favor, if one isn’t careful.  Namely, the D-Wave Two has a minimum annealing time of 20 microseconds, which is often greater than the optimum annealing time, particularly for small instance sizes.  The effect of that is artificially to increase the D-Wave Two’s running time for small instances, and thereby make its scaling behavior look better than it really is.  The authors say they don’t know whether even the D-Wave Two’s apparent advantage for its “top 10% of instances” will persist after this effect is fully accounted for.
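
Since item 1 turns on how the scaling constant c is estimated, here is a short sketch (mine, not the paper’s analysis code) of the standard way to do such a fit: regress log T against √N and read off the slope.  The problem sizes and timing arrays in the usage comment are hypothetical placeholders.

    import numpy as np

    def fit_sqrt_exponential(Ns, times):
        """Fit T(N) ~ A * exp(c * sqrt(N)) by linear regression of log T on sqrt(N).
        Ns: problem sizes (numbers of spins); times: typical solution times.
        Purely illustrative; not the paper's methodology in detail."""
        x = np.sqrt(np.asarray(Ns, dtype=float))
        y = np.log(np.asarray(times, dtype=float))
        c, logA = np.polyfit(x, y, 1)   # slope c and intercept log A of the fitted line
        return c, np.exp(logA)

    # Hypothetical usage: compare the fitted constants for two solvers.
    # c_dwave, _ = fit_sqrt_exponential([128, 288, 512], dwave_typical_times)
    # c_sa, _    = fit_sqrt_exponential([128, 288, 512], sa_typical_times)

The headline finding of item 1 is that, when one does something like this for the D-Wave Two, quantum Monte Carlo, and simulated annealing, the fitted constant c comes out comparable in all three cases.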

Those seeking something less technical might want to check out an excellent recent article in Inc. by Will Bourne, entitled “D-Wave’s dream machine” (“D-Wave thinks it has built the first commercial quantum computer.  Mother Nature has other ideas”).  Wisely, Bourne chose not to mention me at all in this piece.  Instead, he gradually builds a skeptical case almost entirely on quotes from people like Seth Lloyd and Daniel Lidar, who one might have thought would be more open to D-Wave’s claims.  Bourne’s piece illustrates that it is possible for the mainstream press to get the D-Wave story pretty much right, and that you don’t even need a physics background to do so: all you need is a willingness to commit journalism.

Oh.  I’d be remiss not to mention that, in the few days between the appearance of this paper and my having a chance to write this post, two other preprints of likely interest to the Shtetl-Optimized commentariat showed up on quant-ph.  The first, by a large list of authors mostly from D-Wave, is called Entanglement in a quantum annealing processor.  This paper presents evidence for a point that many skeptics (including me) had been willing to grant for some time: namely, that the states generated by the D-Wave machines contain some nonzero amount of entanglement.  (Note that, because of a technical property called “stoquasticity,” such entanglement is entirely compatible with the machines continuing to be efficiently simulable on a classical computer using Quantum Monte Carlo.)  While it doesn’t address the performance question at all, this paper seems like a perfectly fine piece of science.

From the opposite side of the (eigen)spectrum comes the latest preprint by QC skeptic Michel Dyakonov, entitled Prospects for quantum computing: Extremely doubtful.  Ironically, Dyakonov and D-Wave seem to agree completely about the irrelevance of fault-tolerance and other insights from quantum computing theory.  It’s just that D-Wave thinks QC can work even without the theoretical insights, whereas Dyakonov thinks that QC can’t work even with the insights.  Unless I missed it, there’s no new scientific content in Dyakonov’s article.  It’s basically a summary of some simple facts about QC and quantum fault-tolerance, accompanied by sneering asides about how complicated and implausible it all sounds, and how detached from reality the theorists are.

And as for the obvious comparisons to previous “complicated and implausible” technologies, like (say) classical computing, or heavier-than-air flight, or controlled nuclear fission?  Dyakonov says that such comparisons are invalid, because they ignore the many technologies proposed in previous eras that didn’t work.  What’s striking is how little he seems to care about why the previous technologies failed: was it because they violated clearly-articulated laws of physics?  Or because there turned out to be better ways to do the same things?  Or because the technologies were simply too hard, too expensive, or too far ahead of their time?  Supposing QC to be impossible, which of those is the reason for the impossibility?  Since we’re not asking about something “arbitrary” here (like teaching a donkey to read), but rather about the computational power of Nature itself, isn’t it of immense scientific interest to know the reason for QC’s impossibility?  How does Dyakonov propose to learn the reason, assuming he concedes that he doesn’t already know it?

(As I’ve said many times, I’d support even the experiments that D-Wave was doing, if D-Wave and its supporters would only call them for what they were: experiments.  Forays into the unknown.  Attempts to find out what happens when a particular speculative approach is thrown at NP-hard optimization problems.  It’s only when people obfuscate the results of those experiments, in order to claim something as “commercially useful” that quite obviously isn’t yet, that they leave the realm of science, and indeed walk straight into the eager jaws of skeptics like Dyakonov.)

Anyway, since we seem to have circled back to D-Wave, I’d like to end this post by announcing my second retirement as Chief D-Wave Skeptic.  The first time I retired, it was because I mistakenly thought that D-Wave had fundamentally changed, and would put science ahead of PR from that point forward.  (The truth seems to be that there were, and are, individuals at D-Wave committed to science, but others who remain PR-focused.)  This time, I’m retiring for a different reason: because scientists like the authors of the “Defining and detecting” preprint, and journalists like Will Bourne, are doing my job better than I ever did it.  If the D-Wave debate were the American Civil War, then my role would be that of the frothy-mouthed abolitionist pamphleteer: someone who repeats over and over points that are fundamentally true, but in a strident manner that serves only to alienate fence-sitters and allies.  As I played my ineffective broken record, the Wave Power simply moved from one triumph to another, expanding its reach to Google, NASA, Lockheed Martin, and beyond.  I must have looked like a lonely loon on the wrong side of history.

But today the situation is different.  Today Honest Abe and his generals (Honest Matthias and his coauthors?) are meeting the Wave Power on the battlefield of careful performance comparisons against Quantum Monte Carlo and simulated annealing.  And while the battles might continue all the way to 2000 qubits or beyond, the results so far are not looking great for the Wave Power.  The intractability of NP-complete problems—that which we useless, ivory-tower theorists had prophesied years ago, to much derision and laughter—would seem to be rearing its head.  So, now that the bombs are bursting and the spins decohering in midair, what is there for a gun-shy pamphleteer like myself to do but sit back and watch it all play out?

Well, and maybe blog about it occasionally.  But not as “Chief Skeptic,” just as another interested observer.

185 Responses to “What happens when an unstoppable PR force hits an NP-hard problem? The answer’s getting clearer”

  1. D Says:

    Ok, comparing D-Wave to the Confederacy is pretty tasteless, but it’s forgivable for the chance to refer to “the Wave Power”.

  2. rrtucci Says:

    “The Bourne Conspiracy”

  3. Scott Says:

    D #1: Yeah, that was my calculation too.

    Since no pun I’ve made on this blog has ever gone un-pun-ished, let me state categorically, for the record, that I didn’t intend any moral comparison at all between the slavery of the antebellum south and the wavery of the Canadian northwest.

  4. John Sidles Says:

    A deplorable aspect of Michel Dyakonov’s e-print (arXiv:1401.3629) is its neglect of commendable analyses like Yaakov Weinstein’s recent Quantum Error Correction During 50 Gates (arXiv:1312.1245); in this e-print Weinstein supplies precisely the detailed noise modeling that Dyakonov requests.

    A Hopeful Proposition  Public appreciation of informed, in-depth (and eminently collegial) analyses like Weinstein’s may help to sustain progress in quantum information theory.

    Thank you for your efforts, Yaakov Weinstein.

  5. Anonymous Says:

    Hey Scott,

    Seth Lloyd was just on NPR talking about quantum computers and their implications for encryption and NSA surveillance:

    http://onpoint.wbur.org/2014/01/08/nsa-cryptography-quantum

    He really came across as rather certain of the near-term inevitability of quantum computers! Do you think he went overboard?

  6. Matt Leifer Says:

    It is worth mentioning, for those who can’t be bothered to RTFP, that Rønnow is giving the next Q+ talk on 28th Jan. https://plus.google.com/events/c7prl3ob61i2cr6sglu5s0tmm0c Hope to see some of you there.

  7. George W Harris Says:

    Tsk. An American Civil War metaphor is all very well, but “bombs bursting in air” is from the War of 1812.

  8. Scott Says:

    Anonymous #5: I haven’t listened to that NPR segment. But if Seth “came across as rather certain of the near-term inevitability of quantum computers,” then yes, it sounds like he went overboard.

  9. Scott Says:

    George Harris #7: Yeah, yeah, I know — there’s a whole museum about the War of 1812 that I enjoy stopping at while walking along the Charlestown harbor. And Gary Larson drew cartoons involving both polar bears and penguins, despite knowing full well that they don’t exist at the same pole. By the time you’d noticed the problem, you’d already laughed at the cartoon.

  10. Sid Says:

    Scott #3:

    Did you intend the pun when you wrote un’pun’ished? Either way, it was very punny.

  11. Raoul Ohio Says:

    Help me out. I don’t get “the Wave Power” pun.

  12. Rahul Says:

    Hey Scott,

    Is there a discrete way of thinking about space? I do not mean to say that you just (trivially) discretize space or something like that. I mean, is there a discrete (and hopefully algorithmic) way of thinking about space? Is it theoretically impossible for something discrete (in any sense) to be able to generate something continuous?

  13. Scott Says:

    Sid #10: I didn’t intend it, but I inserted the hyphen after noticing it.

  14. Scott Says:

    Raoul #11: http://en.wikipedia.org/wiki/Slave_Power

  15. Scott Says:

    Rahul #12: That’s an enormous question, and very far outside the scope of this post! A short answer is that most of the non-string approaches to quantum gravity (loop quantum gravity, spin foams, etc.) aim at precisely what you’re asking for, a discrete way of thinking about space. The hope is that, once you get down to the Planck scale of ~10^-33 cm or ~10^-43 sec, spacetime will reveal a “quantum foamy” nature that prevents you from resolving distances or times to any greater precision. Of course, it’s nothing as simple as spacetime being a lattice—that wouldn’t even be rotationally invariant, let alone Lorentz-invariant. And it’s still quantum-mechanical, so you still have amplitudes, which are continuous complex numbers. But in some sense, the goal is to build up a spacetime out of discrete, indivisible “pixels.” Of course, the goal is to recover something like a smooth classical spacetime obeying general relativity when you look at sufficiently many pixels. I should tell you that physicists strongly disagree on the extent to which the LQG / spin foam programs have succeeded (or can succeed) at that goal.

    In string theory, the gravitational degrees of freedom are thought to be much more nonlocal, so that there wouldn’t be anything like “pixels” of spacetime. But even string theory implements the holographic entropy bound, which implies that the number of qubits describing any bounded region of spacetime has to be finite, and be upper-bounded by ~10^69 qubits per square meter of surface area (i.e., ~1 qubit per Planck area). So, that’s a sort of “discreteness,” albeit not implemented locally, by anything like pixels or a cellular automaton.
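
    For concreteness, the ~10^69 figure is just the number of Planck areas in a square meter; a rough order-of-magnitude check, using \(\ell_P \approx 1.6\times 10^{-35}\) m for the Planck length:

    \[ \frac{1\ \mathrm{m}^2}{\ell_P^{\,2}} \;\approx\; \frac{1}{(1.6\times 10^{-35})^{2}} \;\approx\; 4\times 10^{69}. \]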

  16. J Says:

    Scott Says:
    Comment #14 January 17th, 2014 at 2:34 am
    Raoul #11: http://en.wikipedia.org/wiki/Slave_Power

    “The argument was that this small group of rich men had seized political control of their own states and were trying to take over the national government in an illegitimate fashion in order to expand and protect slavery.”

    A small group of tea partiers does somewhat amazing stuff now too, right?

  17. Rahul Says:

    Scott #15:
    Thanks Scott. I will follow up on the links. And sorry about posting in the inappropriate place.
    However, I was wondering more along the lines of whether the mathematical idea of continuous space itself can be understood in a discrete algorithmic way, and not just the physical idea. Anyway, maybe the links you mention might make this clearer.

  18. Alexander Vlasov Says:

    Supposing QC to be impossible, which of those is the reason for the impossibility?

    A reason for doubts about QC (at least for CS people) could be some form of “Occam complexity razor,” e.g.: our theories should not posit an increase in the complexity of Nature without direct experimental evidence. QC requires the introduction of a new complexity class, so such a principle is relevant. BTW, from that point of view, both the D-Wave activity and the boson sampling experiments can be considered indirect support for standard QC.

  19. Daniel Burgarth Says:

    We have a Q+ hangout upcoming on this subject on Tuesday the 28th, 14:00 London Time; http://goo.gl/ZdjQph

    SPEAKER: Troels Frimodt Rønnow

    TITLE: Quantum annealing on 503 qubits

    ABSTRACT: Quantum speedup refers to the advantage quantum devices can have over classical ones in solving classes of computational problems. In this talk we show how to correctly define and measure quantum speedup in experimental devices. We show how to avoid issues that might mask or fake quantum speedup. As an illustration we will compare the performance of a D-Wave Two quantum annealing device on random spin glass instances to simulated classical and quantum annealers, and other classical solvers.

  20. Daniel Burgarth Says:

    Sorry I should have seen that Matt has already posted this.

  21. Scott Says:

    Rahul #17:

      However, I was wondering more along the lines of whether the mathematical idea of continuous space itself can be understood in a discrete algorithmic way, and not just the physical idea.

    Well, there are lots of different things you might mean by that, so I’m not sure I know what kind of answer you’re looking for.

    The standard mathematical definition of the real numbers—namely, via Cauchy sequences—defines them as the limits of infinite sequences of rational numbers. In that sense, one could say that one “builds up the continuous out of the discrete.” On the other hand, in the standard view, it really is important to take infinite limits: if you don’t, then you’ll get only a countably infinite set, with very different properties than the full set of reals.

    On the third hand, another way to approach your question is through fields such as computable real analysis. There, one observes that, for almost everything one wants to do in practice with the reals, it suffices to consider a countable subset of reals only (e.g., the computable reals, those for which there exists an algorithm to approximate them to any desired precision). So, that would again introduce a sort of “discreteness” (or at least countability) into one’s view of continuous space.
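
    As a toy illustration of that last notion: √2 is a computable real because there is a simple algorithm that approximates it to any desired precision. Here is a little bisection sketch in Python (purely illustrative, using exact rational arithmetic):

      from fractions import Fraction

      def sqrt2(precision_bits):
          """Return a rational within 2**(-precision_bits) of sqrt(2), by bisection.
          The existence of such an algorithm is what makes sqrt(2) a computable real."""
          lo, hi = Fraction(1), Fraction(2)   # sqrt(2) lies between 1 and 2
          while hi - lo > Fraction(1, 2**precision_bits):
              mid = (lo + hi) / 2
              if mid * mid < 2:
                  lo = mid
              else:
                  hi = mid
          return lo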

  22. Scott Says:

    Matt #6 and Daniel #19: Thanks very much for the announcement! I encourage people to attend, and am sorry I won’t be able to make it myself.

  23. Scott Says:

    Alexander Vlasov #18:

      A reason for doubts about QC (at least for CS people) could be some form of “Occam complexity razor,” e.g.: our theories should not posit an increase in the complexity of Nature without direct experimental evidence.

    Sure, but the Extended Church-Turing Thesis (i.e., the belief that polynomial classical complexity suffices to simulate Nature) is just a “meta-belief” about how the laws of physics ought to be—and a meta-belief that seems to contradict quantum mechanics. So, given the immense experimental evidence for QM, my personal view is that the meta-belief would only turn into a serious argument if one could go further, and describe actual candidate laws of physics that implemented the meta-belief while also remaining consistent with observation.

  24. John Sidles Says:

    Rahul asks “Is there a discrete (and hopefully algorithmic) way of thinking about space?”

    Rahul, both art and history serve to guide us toward a concrete answer “yes” to your fine question.

    To begin with art, Jorge Luis Borges’ short story Averroës’s Search contains the passage:

    He [Averroës] told himself, without excessive faith, that what we seek is often nearby.

    Here Borges inspires us to appreciate that already, in our daily scientific lives, it is routine practice to computationally synthesize spatial properties from underlying dynamics that is discrete and algorithmic. Consider for example a discrete set of interacting qudits coupled pairwise by Hamiltonian interactions. The energy is of course conserved, and other thermodynamic quantities may be conserved too (depending upon the form of the Hamiltonian couplings); thus the transport of conserved thermodynamical quantities (energy, charge, mass, momentum) may be unravelled computationally (Carmichael-style) as a dynamical process (of Lindblad-form).

    In this manner a coarse-grained notion of space (in its ordinary sense) either emerges or doesn’t from a microscopic discrete/algorithmic model, as a coarse-grained description of thermodynamic transport, with macroscopic properties like “dimension” and “isotropy” and “boost invariance” all depending upon the qudit couplings chosen.

    These ideas are of course familiar to us all … they are extant already in multiple branches of science, engineering, and mathematics. Thus the synthesis of space-time from underlying nonspatial dynamics already is routine practice.

    For further illumination we turn to history, as authorized by (for example) Tony Zee in his Quantum Field Theory in a Nutshell

    “Any self-respecting physicist should learn about the history of physics, and the history of quantum field theory is among the most fascinating. … In all previous revolutions in physics, a formerly cherished concept has to be jettisoned. If we are poised before another conceptual shift, something else might have to go. Lorentz invariance perhaps? More likely, we will have to abandon strict locality.”

    In particular, Borges’ story inspires us to reflect that Zee’s “abandonment of strict locality” already is concretely and routinely achieved, by scientists and engineers of many disciplines, by constructing the notion of “locality” as an explicitly macroscale thermodynamical concept that is not associated to any microscale notion of space-time.

    More broadly, uniting Borges’ art-and-philosophy insights with Tony Zee’s math-and-science insights helps us to appreciate that discrete / algorithmic / computational / informatic / synthetic reconceptions of space-time already are extant in the literature of thermodynamic transport. Indeed the authors of these works include luminaries like Dirac, von Karman, von Neumann, Onsager, Kolmogorov, Landau, Sakharov, and even Feynman.

    Regrettably (for students) their primary works relating to transport theory have remained state-secrets and/or trade-secrets — hence unpublished — and an unfortunate consequence has been that the traditional evolutionary process of distillation / clarification / naturalization / unification of transport-related physical intuitions and mathematical frameworks has been slow and Balkanized.

    That is why (disadvantageously for students) even the best-selling textbooks on transport theory are (mathematically speaking) disorganized accretions of ad hoc recipes. Ouch!

    A paradoxical consequence is that research topics like scalable BosonSampling have become crucial to our evolving understanding of quantum information theory precisely because (as Scott aptly says) they have “a grand total of zero known practical applications” … and so the log-jam issues of state-secrets and/or trade-secrets and/or math-secrets need not be directly confronted.

  25. Alexander Vlasov Says:

    It is rather a meta-suggestion about how our theories should be…

  26. Michael Marthaler Says:

    Did anybody here read the latest D-Wave paper? I really don’t understand their spectroscopic data:

    Fig 3 in
    http://arxiv.org/abs/1401.3500

    It looks to me as if states which are higher in energy are well occupied despite the fact that the temperature is very low. Additionally, Figs. 3c and 3d do not quite show an avoided crossing.

  27. Scott Says:

    Alexander #25: Right, but my point was that we don’t currently have any viable theories that implement the meta-suggestion (i.e., that reproduce the experimentally-tested predictions of QM without also predicting exponential classical complexity), and it seems unhelpful to keep repeating the meta-suggestion without grappling with that fact.

  28. John Sidles Says:

    Scott asserts: “We [don’t → do] currently have viable [theories → computational simulations] that reproduce the [predictions → observations] of QM without predicting exponential complexity.”

    Scott, didn’t the 1998 and 2013 Nobel committees appreciate that changing “predictions” to “observations” and “theories” to “computational simulations” suffices to reverse the assertion?

  29. Scott Says:

    John #28: Even after your edits, I believe the answer is still no. Density Functional Theory and the like are wonderful approximation heuristics, but they don’t always give the right answers even in the practical situations of interest to chemists, so they’re not candidates for the general-purpose methods that would be needed to restore the Extended Church-Turing Thesis.

  30. Alexander Vlasov Says:

    Scott #27, I cannot argue with that, because viable theories are extremely rare and complicated things. It would be more constructive, e.g., to show that some set of reasonable axioms, together with the suggestion that QC is impossible, leads to a contradiction.

  31. John Sidles Says:

    Speaking less seriously, in regard to the feasibility of large-scale quantum simulation, doesn’t the Zeno’s Paradox joke (“close enough for all practical purposes”) apply?

    More seriously, for how many more doublings can we plausibly sustain the present 10-30 year doubling times for “number of qubits” in quantum computing, versus the 3-5 year doubling times for “number of atoms” in chemical-accuracy quantum simulation?

    Most seriously, aren’t the known computational barriers to obtaining chemical accuracy in large-scale quantum simulation plausibly polynomial rather than exponential?

    Consider for example recent surveys like Bochevarov et al “Jaguar: A high-performance quantum chemistry software program with strengths in life and materials sciences” (2013):

    “Jaguar was originally designed as an ab initio quantum chemical program that would be able to routinely work with larger molecules (100 atoms and more). For this reason, the developers ruled out an implementation of post-SCF methods such as Møller–Plesset perturbation theory, configuration interaction, and coupled cluster theories, which all suffer from computational cost scaling as the fifth or higher power of the molecular size N.

    Instead, our efforts were concentrated on methods which scale as N^3, at most. Only such approaches would allow one to routinely process molecular systems involving at least a thousand basis functions, which is what one expects to use in a hundred-atom calculation.”

    And moreover:

    “Despite the widely held claim that (individual) DFT [density functional theory] functionals, in contrast to multideterminant ab initio theories, cannot be improved in a systematic way, conceptual DFT development and improvement do not seem to slow down.”

    In view of this tremendous progress in simulation capability, which has been sustained for 49 years through many Moore’s Law doublings, it’s unsurprising that the Kohn-Sham DFT articles are the most-cited in the entire physics literature.

    Conclusion  In regard to large-scale quantum simulations to chemical accuracy, the present literature affirms (empirically, but strongly) the Extended Church-Turing Thesis. That is why sustaining the ECT through multiple further capability-doublings poses, not a problem, but rather a marvelous opportunity for quantum information theory.

  32. Gil Kalai Says:

    Scott #23: Sure, but the Extended Church-Turing Thesis (i.e., the belief that polynomial classical complexity suffices to simulate Nature) is just a “meta-belief” about how the laws of physics ought to be—and a meta-belief that seems to contradict quantum mechanics.

    (This comes very close to my own understanding of Scott’s position over the years: “A central claim that Scott has been making for many years is that the possibility of quantum computers is a logical consequence of quantum mechanics.”)

    I am always curious what the reason is to think so. Why does the extended Church-Turing thesis “seem to contradict” quantum mechanics, any more than the laws of thermodynamics “seem to contradict” quantum mechanics? Is it simply because you cannot *derive* the extended Church-Turing thesis from QM?

    (Let me also mention that understanding the cognitive barriers that don’t allow donkeys and other animals to talk is a very important intellectual challenge.)

  33. Scott Says:

    Gil #32: No, it’s also because we have (what look for all the world like) conditional hardness results for the general problem of simulating quantum mechanics on a classical computer. As you’re well aware, Shor gave the most famous such result, while Arkhipov and I (and many others) have given additional hardness results.

  34. Gil Kalai Says:

    Yes sure. The extended Church-Turing thesis as you described it (namely, the belief that polynomial classical complexity suffices to simulate Nature) is indeed in contradiction with the possibility of Nature (even with our help) implementing Shor’s algorithm or BosonSampling. But what is the reason to think (and what is the precise meaning of the statement) that the extended Church-Turing thesis contradicts QM itself?

  35. Scott Says:

    Gil, I’d prefer that you didn’t ask questions for which you already know my answers perfectly well! Shor’s algorithm and BosonSampling strike me as predictions of quantum mechanics, in much the same sense that Bell inequality violation was a prediction of QM. I.e., all of these things describe physical setups that seem to be perfectly within “the rules of the game.” They then predict, both that such a setup can indeed be built, and that its behavior once built will follow exactly the same rules that apply to other cases.

    Now, if you think that the described experiment is not physically possible, then I’d say that the burden falls squarely on you to propose alternative rules, and show both that your rules are empirically superior to the accepted rules, and that they do indeed disallow the experiment. Listing various experimental difficulties, reminding people of the difference between mathematical models and reality, etc. etc. (as Dyakonov does) is emphatically irrelevant. What’s needed is a new set of rules that disallow the experiment, and the new rules have to be empirically successful and internally mathematically coherent just as the old rules were.

    I admire that, nearly alone among QC skeptics, you seem to appreciate the weight of this burden. On the other hand, from what I’ve seen, I’d say that you haven’t yet come close to meeting the burden. My conjecture, of course, is that you won’t meet it—i.e., that Shor’s algorithm, BosonSampling, etc. are indeed physically realizable just as the Bell experiment was physically realizable, and they’ll uphold the predictions of QM just as the latter did.

  36. Alexander Vlasov Says:

    Scott #35, Axioms of QM are not enough to ensure Shor’s factoring algorithm (more generally, quantum circuit model). It may be compared with the periodic table – the regular structure cannot be derived directly from the axioms and needs additional investigation (e.g., the very specific symmetry of the Schrödinger equation with the Coulomb potential).

  37. matt Says:

    Regarding density functional theory (DFT), I think the readers should know a few things. John Sidles is completely correct that it is an extremely important and practical algorithm. However, one should also be aware that it is limited in some ways. While very high accuracy is attained for certain molecules, the algorithm thus far has no hope of obtaining even qualitatively correct results for something like the two dimensional Hubbard model or many other strongly interacting systems; there are even serious issues with computing excited state or transport properties for many weakly interacting systems although there is progress on that. The reason for this has to do with the difficulty of constructing the functional. DFT is commonly taught in a rather bad way in most physics courses; the original Kohn-Sham derivation is covered to explain that some functional exists with certain desirable properties. However, this derivation completely ignores the question of whether we have a good approximation to this functional. One can give this a complexity theoretic flavor, giving hardness results to computing this functional. However, these hardness results also ignore the main point as they rely on situations that are artificial compared to the situations of real interest. That is, there appears to be no complexity theoretic barrier to constructing a functional that will capture much of the physics of a certain strongly interacting system like the Hubbard model. The trouble is just that we don’t have such a functional.

    I don’t know how to add italicized and boldfaced quotes and “Conclusion” at various points, and I won’t talk about the bright STEM future. But I will say that continued progress on chemical accuracy for certain molecules may have no bearing on obtaining even qualitative results for other molecules.

  38. John Sidles Says:

    Scott requests  “Gil, I’d prefer that you didn’t ask questions for which you already know my answers perfectly well! … What’s needed is a new set of rules that disallow the [BosonSampling-type] experiment, and the new rules have to be empirically successful and internally mathematically coherent just as the old rules were.”

    Gil’s thesis can be defended by embracing Borges’ Principle (of #24)  — “Que suele estar muy cerca lo que buscamos” [what we seek often is near to us]  — as follows:

    (1)  We postulate that the infeasibility of Shor Factoring / BosonSampling / general quantum simulation arises not by any alteration of quantum theory, but rather from a familiar restriction of it.

    (2)  The familiar restriction is precisely the restriction that Nature imposes (via relativistic field theory): long-range interactions are mediated solely by gauge bosons.

    (3)  Three consequences of Nature’s QED restriction are:

      (physical)  scalable quantum computing, Shor factoring, and BosonSampling are infeasible, because

      (mathematical)  low-dimension varietal dynamics (as induced by the localizing long-range interactions of gauge field theory) provides concrete models for the Kalai postulates and Nature’s state-spaces, such that

      (computational)  the Extended Church-Turing thesis is true, both in abstract principle and in concrete practice.

    This emerging world-view naturally explains two salient features of the 21st century scientific world:

      (unforeseen difficulties)  progress toward scalable quantum computing has been far more slow and difficult than was foreseen in the 20th century, and

      (unforeseen successes)  progress in the simulation of large-scale dynamical systems has been far more rapid and sustained than was foreseen in the 20th century.

    What’s Next?  Art, history, and cognitive science — and as it seems to me, traditional Enlightenment optimism too — all indicate that 21st century quantum information theory is evolving toward a Grothendieck-style mathematical naturality:

    The sea [of mathematical naturality] advances insensibly in silence, nothing seems to happen, nothing moves, the water is so far off you hardly hear it, yet it finally surrounds the resistant substance. […] It will be difficult for future mathematicians to do without this new effort at abstraction, which is perhaps quite small, compared to that our predecessors faced.

    Transformational advances in our integrative mathematical understanding (e.g. Chow’s Theorem and GAGA for example) ensure the self-consistency of these trends, which (as it seems to me) are reflected equally in the mathematical community’s ever-more-elegant formalisms, and in the engineering community’s ever-more-efficient dynamical simulation algorithms, and in the medical / enterprise / investment community’s ever-increasing optimism that these integrative advances will help concretely in addressing the urgent challenges of the 21st century, and in fulfilling its wonderful opportunities.

    Conclusion  A royal road toward grasping these lessons “in width, in depth, and in context” (to borrow a phrase from the historian Michael Howard), unencumbered by considerations of state-secrets and trade-secrets, is to pursue scalable BosonSampling research vigorously, both experimentally and in large-scale computational simulations.

    In journeying along this wonderful royal road (as it seems to me) Gil and Scott — and colleagues like Alex Arkhipov and Aram Harrow too — surely can find much to enjoyably discuss, and plenty of marvelous discoveries to share. Good!

  39. Ian Durham Says:

    I’d like to point out (and this doesn’t excuse it) that D-Wave is a company and the purpose of companies is to make money. Ergo, it is no surprise that there are people at D-Wave who are more interested in PR than in science. It is to be expected from such entities. It doesn’t necessarily excuse any grandiose claims, but it does put it into perspective. They’re in the business of making money and, these days, making money requires hype. Again, I’m not excusing anything, just noting it for posterity.

  40. Mike Says:

    “They’re in the business of making money and, these days, making money requires hype.”

    The right kind of hype can certainly enhance the chances of making money. But if you’re talking about “real” money — not just one or three or six D-Wave computers — it will take a lot more than hype. I suspect that at some point it’s going to take a D-Wave product the market believes in based on actual performance, and at this point I think it’s fair to say that the results so far do not seem to be moving in the right direction. Of course, as some have said, the absence of evidence is not evidence of absence. However, you can say that about almost any incorrect idea that you’re not yet capable of definitively disproving.

  41. Scott Says:

    Ian Durham #39:

      I’d like to point out (and this doesn’t excuse it) that D-Wave is a company and the purpose of companies is to make money.

    You’re about the 70 millionth person to have made that point to me, and I’m glad you agree that it doesn’t excuse anything. When people argue to me that it does excuse things, I like to answer: “Well, of course I lied about murdering those people! Try to see things from my perspective: if you’d murdered them, you would’ve lied about it too!” 🙂

    Or I say: “yes, I fully understand that they’re a company, and producing cringe-inducing hype is just what’s done in the startup world. But you shouldn’t criticize me either, for paying D-Wave the compliment of holding them to the standards to which I’d hold any QC research. For you see, I’m a procrastinating academic QC blogger, and that’s just what we do in my world!”

    Look: the path of intellectual honesty is a tough one, and in the current science funding environment, there are endless temptations for all of us to wander off it. But I remain firmly convinced that it’s the only path for QC that isn’t a complete dead-end. If nothing else, it’s the only path that lets me argue against Dyakonov with conviction…

  42. Ian Durham Says:

    Look: the path of intellectual honesty is a tough one, and in the current science funding environment, there are endless temptations for all of us to wander off it. But I remain firmly convinced that it’s the only path for QC that isn’t a complete dead-end. If nothing else, it’s the only path that lets me argue against Dyakonov with conviction…

    Sure, and these latest results certainly have caused me to rethink my initial assessment (which was not wholly supportive, but was not entirely dismissive either, after having talked to Daniel Lidar about it). I also completely agree with the bit about intellectual honesty. But, again, my point was never to excuse what they’re doing. It is simply to say we shouldn’t be surprised that a for-profit company is doing this. The more interesting and provocative question is whether this whole episode is indicative of an inherent dissonance between fundamental science and capitalism [note: I have no opinion one way or the other on this].

  43. quax Says:

    Not the first to pick up on this but I cannot resist …

    “The truth seems to be that there were, and are, individuals at D-Wave committed to science, but others who remain PR-focused.”

    Surprising isn’t it 🙂

    Never heard of such a thing before in the high tech venture business!

    Snark aside, I will obviously have to blog about these results and the “inherent dissonance between fundamental science and capitalism” as Ian Durham so aptly put it.

  44. Mike Says:

    Looking at science through business-tinted glasses and thinking deeply about the broader question whether there is an “inherent dissonance between fundamental science and capitalism” are certainly worthy topics for debate, but let’s not lose sight of the main question at hand: the actual nature of D-Wave’s machines and whether they will ultimately succeed in business because of the technical success of their machines — which is the only way it will happen. I guess for purposes of these types of forums, I vote for more science and a bit less business. 🙂

  45. Scott Says:

    Alexander #36:

      Axioms of QM are not enough to ensure Shor’s factoring algorithm (more generally, quantum circuit model).

    Yes, that’s true, but the additional axioms that you need (to give some examples: there exists a way to apply a CNOT gate, decoherence can be decreased below ~0.1% per qubit per gate…) all seem so basic that their failure would be even more surprising in some ways than the failure of QM itself.

    Or am I wrong? Which axiom am I missing? And—here’s the crucial question, the one the QC skeptics never want to answer—if your axiom failed to hold, could you then simulate Nature in classical polynomial time? how? what’s the algorithm?

  46. John Sidles Says:

    Scott asks three ultra-concise questions:

    Q1  Which axiom [generically obstructs scalable quantum computing and/or scalable BosonSampling]?

    Q2  If [that] axiom fails to hold, could we then simulate Nature in classical polynomial time?

    Q3  How? What’s the algorithm?

    The analysis of the Yaakov Weinstein/Mitre preprint (arXiv:1312.1245, per #4) specifies concretely one such axiom:

    We simulate 50 single-qubit gates on the ((7,1,3)) [quantum error-correcting] code in a nonequiprobable Pauli operator error environment with non-correlated errors.

    Here the key axiom is that errors are “non-correlated.” Building upon Weinstein’s axiom, ultra-concise answers to Scott’s three questions are:

    A1  Massless quantum field theory generically predicts, and laboratory experience generically affirms, that the axiom of qubit error-independence fails to scale (as do assumptions of, e.g., photon-source/sink independence in BosonSampling experiments).

    A2  Yes. PTIME quantum dynamical simulation is generically feasible, provided that non-local error-dynamics are explicitly included in the simulation.

    A3  Carmichael-style trajectory unravelling of noisy quantum dynamics, including in particular Lindblad-style (non-Hamiltonian) stochastic flow, acts to compress trajectories onto low-dimension subvarieties of Hilbert space, that can be integrated with PTIME resources.

    The program of A1-A3 is not a vision of some distant mathematical future; rather it concisely describes an inexorably rising tide of Grothendieck-style mathematical naturality that already dominates — and increasingly unifies — the burgeoning global enterprise of large-scale quantum simulation.

    In regard to Gil Kalai’s postulates (of arXiv:1106.0485, arXiv:0904.3265, and arXiv:quant-ph/0607021 for example), the above class of simulations concretely model these postulates, and thus serve to establish (albeit non-rigorously) both the mathematical consistency and physical plausibility of these postulates.

    Moreover (to address Matt’s comment of #37, relating to the Bochevarov survey article of #31), a key issue nowadays is not whether n-atom quantum simulation is feasible in PTIME, but rather whether the optimal PTIME exponent is n^3 (of DFT methods) versus n^4, n^5, n^7 (etc.). Here the pace of progress is so rapid, that the scaling exponents and their coefficients change year-by-year, and no obvious limits to sustained progress in simulation capability are presently known.

    To quote from Victor Hugo’s Toilers of the Sea:

    La mer était étale, mais le reflux commençait à se faire sentir; le moment était excellent pour partir. [It was slack water, but the tide was beginning to make itself felt; the moment was favorable for setting out.]

    (indeed, Grothendieck’s notion of “étale cohomology” is believed to originate in this passage).

    Conclusion  For quantum simulationists the present decade’s moment is favorable for setting out. So much so, that the global scientific community already is well-embarked upon this voyage of discovery, whose compass and destinations none of us can foresee. Good!

  47. Alexander Vlasov Says:

    Scott #45. I think two important additional axioms may be:
    1) The embedding of the finite-dimensional quantum circuit model into the infinite-dimensional Hilbert space of realistic quantum theory is stable with respect to the evolution of the bigger theory.
    2) Regular control of the evolution in an exponentially big parameter space is possible.

    Regarding simulation of Nature in polynomial time – I agree, it would be nice indeed, but the Occam complexity razor, unlike the ECT, states something slightly different: if we cannot simulate Nature in polynomial time, please show a direct and clear reason why (complaining about our difficulty in inventing a good theory for polynomial simulation is not enough).

  48. Fernando Says:

    Changing gears a bit, but wouldn’t it be possible to encode a problem in D-Wave’s computer which is not NP-complete, but is still believed to be in NP but not in P?

  49. wolfgang Says:

    >> if one considers only the ~10% of instances on which the
    >> D-Wave machine does best

    Perhaps a stupid question (and already answered somewhere), but what happens if one selects some of those 10% problems and runs the D-Wave optimization again? Would it solve the problem the 2nd time with the same (faster) speed?

  50. Scott Says:

    wolfgang #49: I think they automatically account for that, by averaging over many runs on the same instance.

  51. Scott Says:

    Fernando #48: Well, that’s a big question! If one could do it, then it would certainly count as a “quantum speedup,” regardless of whether the particular NP-intermediate problem was useful for anything or not.

    The trouble is that we don’t currently know any good candidate for an NP-intermediate problem where quantum annealing would give you a speedup. For universal quantum computers, of course there are problems like factoring, but the D-Wave people themselves say explicitly that they’re not interested in factoring or universality, and that’s not what their machine is designed for.

    Of course, if it turns out that indeed the machine is efficiently classically simulable using Quantum Monte Carlo, then we’ll have a ready explanation for the failure to find an NP-intermediate problem for which the machine gives a superpolynomial speedup: namely, that there isn’t one! Anyway, this is an obvious direction for research in the near future.

  52. Gil Kalai Says:

    Hi Scott, regarding your comment #35, I did not know what your answer would be! Indeed the main question is if Shor’s algorithm, BosonSampling, etc. are physically realizable. (I conjecture that the answer is negative, and the way I see it, for a negative answer we need “just” better rules for modeling noisy quantum systems, so the issue is not the rules or predictions of QM, but the rules of quantum noise/approximation.)

    A year ago, when we discussed it on your blog, you and I agreed that the question, while profoundly related to QM (and a negative answer would have a profound impact on our understanding of QM), is external to the basic framework of QM, and depends on additional layers of physical law. Now it looks as though you have moved back to a more “fundamentalistic” point of view (or rhetoric) that equates QC skepticism with QM skepticism.

    (Your later comment #45 (partially) clarified to me your point of view.)

  53. Gil Kalai Says:

    Scott (#45) : “Yes, that’s true, but the additional axioms that you need (to give some examples: there exists a way to apply a CNOT gate, decoherence can be decreased below ~0.1% per qubit per gate…) all seem so basic that their failure would be even more surprising in some ways than the failure of QM itself.” (emphasis mine)

    This is a very unreasonable point of view.

    “Or am I wrong? Which axiom am I missing?”

    I think that you are wrong, and I will be happy to discuss what you are missing if you wish. But do you really think that the failure of QM is more likely than the possibility that you, Scott, are wrong or are missing something?

    “here’s the crucial question, the one the QC skeptics never want to answer—if your axiom failed to hold, could you then simulate Nature in classical polynomial time? how? what’s the algorithm?”

    What precisely do you mean by “never?”

    I have addressed this question many times, including here and in a lecture at MIT a year ago. My short answer is yes. If computationally superior quantum computers are not possible, then any robust physical process can be simulated, in principle, by a classical algorithm. (And I have quite a bit more to say about it.) Robert Alicki raised a different possibility that I missed. Robert sees the possibility that some quantum processes (like those described by computations from quantum field theory) indeed require exponential (in some parameters) classical resources, but that every physical experiment to probe the results accurately also requires exponential resources (in these parameters).

    I still prefer my answer but I was pleased to see a possibility that I missed. (This is the fun aspect of our business.)

  54. Scott Says:

    Gil #53: My logic is pretty simple. To derive the possibility of QC, you need:

    (1) QM, which is something that I at least know how to define, and whose failure is a logical possibility I know how to contemplate.

    (2) Other hypotheses that, in general, seem so fundamental to how the world works that if someone asked me to make a complete list of these other hypotheses, I couldn’t even come close. (“Err, is ‘no invincible evil gremlins changing your Hamiltonian from one time step to the next, with the sole goal of preventing you from factoring numbers’ something I need to write down explicitly? What about the possibility of preparing a bunch of qubits in the 0 state?”)

    A failure of QC means either a failure of (1) or a failure of (2). But almost by definition, a failure of (2) would be even more scientifically amazing to me than a failure of (1)—since with (2), we’re talking about beliefs that all seem so “obvious” that I don’t even know which ones need to be listed!

    Incidentally, thanks for reporting Robert Alicki’s speculation; it helps to clarify his views for me. But I don’t see his idea as a separate possibility: rather, I consider it 100% compatible with the classical Extended Church-Turing Thesis. The time needed for measurement has to be counted toward the total resource requirements for your computation—and if measuring takes exponential time, then I’d say there’s no problem whatsoever (even under the ECT) with the results of the measurement being exponentially hard to simulate classically.

  55. Rahul Says:

    What if there is no demonstrable failure of QM nor a failure of any more fundamental axiom and yet scalable, computationally superior quantum computers remain impossible to actually build?

    Is this not a possibility at all? No chance of us being perpetually plagued by the curse of de-coherence?

  56. Scott Says:

    Rahul #55: At this point we’re no longer discussing the real issue, but only the meaning of the word “impossible.”

    Yes, it’s entirely conceivable that no fundamental obstacle exists to building useful QCs, but in fact useful QCs will never be built. They might turn out to be too hard or too expensive, people might lose interest in them, civilization might collapse first, etc.

    But would that mean QC had turned out to be “impossible”? I’d say: obviously not! It would just mean that QC was one of the vast number of possible things that were never in fact done. I don’t think the distinction between “can happen” and “will happen” is really so hard to grasp; it’s even built into our language… 😉

  57. Gil Kalai Says:

    Scott #54: wouldn’t the same simple “logic” have suggested, at the time, that the most plausible reason why perpetual motion machines are impossible is the failure of Newtonian mechanics?

  58. Scott Says:

    Gil #57: No, it would suggest it was either a failure of Newtonian mechanics, or else a failure of something even more fundamental than the specific Newtonian rules. And in fact it turned out to be the latter. Presumably you agree that thermodynamics is not some engineering detail, but one of the pillars of modern physics. Likewise, if another “thermodynamics-like pillar” needed to be erected to explain the impossibility of QC, then I’d regard that not as an engineering detail, but as a development for fundamental physics at least as exciting as the overthrow of QM.

  59. Gil Kalai Says:

    Yes, but before you knew about thermodynamics, the alternative to “failure of Newtonian mechanics” could look like a list of “obvious” things in your option (2), including the evil gremlins changing your Hamiltonian from one time step to the next, with the sole goal of preventing perpetual motion.

    In any case, I regard the possibility of another “thermodynamics-like pillar” as much more reasonable compared to the possibility of “break-down of QM”.

  60. John Sidles Says:

    Scott asks a good question: “To derive the possibility of [scalable quantum computing], you need […] hypotheses that, in general, seem so fundamental to how the world works that if someone asked me to make a complete list of these other hypotheses, I couldn’t even come close. (“Err, is ‘no invincible evil gremlins changing your Hamiltonian from one time step to the next, with the sole goal of preventing you from factoring numbers’ something I need to write down explicitly? What about the possibility of preparing a bunch of qubits in the 0 state?”)”

    Scott, the quantum optics literature plainly points to precisely the “other hypothesis” that you seek. But rather than “evil gremlins,” it’s more fun (and more accurate) to read this literature as reflecting not an evil gremlin but a more subtle adversary, along the lines of the Dilbert cartoon’s character Mordac, the Preventer of Information Services.

    Mordac (memorandum to all scientists)  Hamiltonians for BosonSampling and/or quantum computing devices must be ordered exclusively from the quantum electrodynamics (QED) parts-catalog.

    Alice  Restricting our design-space to Mordac’s QED-catalog wasn’t easy, but none the less, my team of quantum engineers has designed and experimentally validated on-demand \(n\)-photon sources that scale to arbitrarily large \(n\) with lower-bounded probability of successfully emitting quantum-coherent \(n\)-photon states.

    Bob  Restricting our design-space to Mordac’s QED-catalog wasn’t easy, but none the less, my team of quantum engineers has designed and experimentally validated \(n\)-photon detectors that scale to arbitrarily large \(n\) with lower-bounded probability of successfully detecting \(n\)-photon incoming states.

    The Pointy-Haired Boss  Wonderful! Alice and Bob, you are hereby directed to perform large-\(n\) BosonSampling experiments.

    Mordac (says nothing, but laughs evilly)

    Howard Carmichael’s introductory Preface to his two-volume Statistical Methods in Quantum Optics is a good place for students (especially) to begin gaining an appreciation of the deep physical and mathematical foundations that ground Mordac’s laughter:

    “As a graduate student working in quantum optics I encountered […] deep irritation caused by the work I was doing, something quite fundamental that I did not understand. […] Certain elementary notions that are accepted as starting points for work in quantum optics somehow had no fundamental foundation, no verifiable root. My inclination was to mine physics vertically, and here was a subject whose tunnels were dug horizontally. […] I now appreciate more clearly where my question was headed: Yes it does head downward, and it goes very deep. What is less clear is that there is a path in that direction understood by anyone very well. […] Here one must face those notorious issues of interpretation that stimulate much confusion and contention but few definite answers.”

    Conclusion  Young quantum systems engineers are well-advised to study Howard Carmichael’s assessments carefully  … and to respect the impressive (as they seem to me) width, depth, and context of Gil Kalai’s quantum postulates … and most of all, to be wary of Mordac’s evil laughter.

  61. Rahul Says:

    But maybe it won’t be a “thermodynamics-like pillar” this time around? Couldn’t the impossibility of scalable QC result from an engineering-like detail?

    “as exciting as the overthrow of QM” sounds like a very high bar. Perhaps the end of QC pursuits will be a whimper & not a bang?

  62. Alexander Vlasov Says:

    Gil Kalai #59, I could understand your concern about QC failing due to noise or error-correction problems if we already had working quantum computing circuits with many qubits that failed at Shor’s factoring algorithm only because of errors or noise.

  63. Gil Kalai Says:

    Rahul: “But maybe it won’t be a “thermodynamics-like pillar” this time around? Couldn’t the impossibility of scalable QC result from an engineering-like detail?”

    Yes, this is possible. For my own work of modeling mathematically the “quantum fault-tolerance barrier” it does not matter if it is rooted in physics or engineering. Mathematically, it represents a major and exciting phase transition.

  64. Scott Says:

    Rahul #61: I’d say that anything big enough to reduce the computational power of the universe from BQP down to BPP, by definition would not be an “engineering-like detail,” but a major pillar of physics. We get to choose when to use words like “pillar” or “detail” to describe a physical effect, and the only reasonable basis for such a decision is how important the effect’s consequences are. And changing the complexity class of the universe is one of the more dramatic consequences I can think of.

    (and that’s the last I’d like to say about this; there are more interesting/meaty/fruitful things to discuss…)

  65. Gil Kalai Says:

    (Regarding #45 and #54) Let me consider just one of the list of things that Scott regards as so basic that their failure would be even more surprising in some ways than the failure of QM itself (no, not the one about gremlins).

    (*) decoherence can be decreased below ~0.1% per qubit per gate…)

    The negation of (*) includes

    (**) For all attempts at universal quantum computing devices, decoherence cannot be decreased below ~0.1% per qubit per gate…. In fact, decoherence will scale up linearly with the number of qubits.

    I can believe that Scott regards (*) as more likely than (**) and we can agree to disagree on this matter. But it seems very unreasonable to regard (**) as even more surprising than the failure of QM itself.

    (In the end I find myself puzzled by the “failure of QM” rhetoric…)

  66. Rahul Says:

    Gil Kalai #65:

    Loved your comment.

    My gut feeling is that you are getting it exactly right. (apologies if I’m too presumptuous saying this)

  67. Scott Says:

    Gil #65: I absolutely would regard (**) as even more surprising than the failure of QM itself. And the reason is that you, or Rahul, don’t have the liberty to just declare something like (**) a “candidate law of nature.” You have to explain it, by deriving it from a state space, dynamics, and/or initial boundary conditions—the form that physical laws take. And personally, based on what else I know about the physical world (e.g., locality), I don’t understand what kind of fundamental explanation there could possibly be for (**), that wouldn’t basically boil down to “evil gremlins.” So I’m afraid I remain as puzzled by your position as you are by mine.

  68. Rahul Says:

    Scott #67:

    I can’t speak for Gil but I think your comment is mis-interpreting (misrepresenting?) his position.

    No one’s declaring Gil’s (**) as a law just yet. Not at all.

    What he seems to be saying is, were there to exist such a derivation (of the form you allude to), that fact, although undeniably astonishing, would be less surprising than discovering a fatal flaw in the edifice of QM itself.

  69. Scott Says:

    Rahul #68: OK. All I can say is that, while I can contemplate the logical possibility of a derivation of (**) that would be “less astonishing” than the overthrow of QM, I can’t understand how such a derivation could go in practice without being more astonishing. We may have hit a bedrock of differing intuitions here where there’s nothing more to say…

  70. matt Says:

    I will be very happy if John Sidles can use DFT, or if Gil Kalai can use some undisclosed algorithm, to definitively answer the question: what is the phase diagram of the two-dimensional Hubbard model, including long-range Coulomb interaction, as a function of doping and interaction strength? Perhaps even a more finitary question: for such a Hubbard model with 10^3 sites, what is the superconducting pair correlation function? Surely with all the PTIME advances going on and all the rapid reductions in exponents and all that, this question is easy?

  71. Gil Kalai Says:

    Rahul (#66): Thank you very much.

    Scott (#67): “I absolutely would regard (**) as even more surprising than the failure of QM itself.”

    Ok, I suppose this view applies to (***) as well.

    (***) For all attempts at universal quantum computing devices with n qubits, decoherence cannot be decreased below 1/n^3 per qubit per gate….

    Anyway, your point of view is interesting. It is consistent with things you wrote over the years. I simply disagree.

    (A good reason to refrain from declaring (**) a law of physics just yet 🙂 is that it refers to hypothetical objects. We don’t want to formulate laws of physics whose scope is *just* perpetual motion machines, or quantum computers for that matter. So the issue is not so much deriving (**) as describing a sufficiently general law that applies not only to counterfactuals.)

  72. Joe Fitzsimons Says:

    Gil, I think the linear increase in decoherence with the number of qubits is basically impossible on physical grounds. In order to have the rate of decoherence increase linearly, you necessarily need the energy levels to broaden linearly in the number of qubits (due to the Margolus-Levitin theorem). But this seems basically impossible to accomplish: the fundamental forces in nature fall off too rapidly. The electric field potential is probably the most relevant and longest ranged, and falls off as 1/r for each particle. If you assume some constant maximum density of qubits, then even this leads to sub-linear scaling. Indeed, line broadening is something that has been well documented, and for most systems of interest, you tend to basically just get some constant broadening. It seems strange to think that every possible system considered for quantum computation must necessarily behave differently.
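
    For concreteness, here is a minimal numerical sketch of that sub-linear-scaling claim — a toy model only (qubits on a 3D cubic lattice with unit spacing, and the dimensionless sum of 1/r evaluated at the central site; the geometry and normalization are purely illustrative):

        import numpy as np

        # Toy model: qubits at constant density on a 3D cubic lattice; add up the
        # 1/r "Coulomb-like" potential felt at the central site.
        for side in (5, 10, 20, 40):
            xs = np.arange(-side, side + 1)
            X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
            r = np.sqrt(X**2 + Y**2 + Z**2).ravel()
            r = r[r > 0]                     # exclude the central qubit itself
            N = r.size + 1                   # total number of qubits
            V = np.sum(1.0 / r)              # dimensionless potential at the center
            print(f"N = {N:7d}   V = {V:9.1f}   V / N^(2/3) = {V / N**(2/3):.3f}")

    The last column approaches a constant: the accumulated 1/r potential grows only like \(N^{2/3}\), i.e. sub-linearly in the number of qubits, which is the point above.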

  73. Sandro Says:

    @Scott & Rahul, Interesting aside on the discreteness of spacetime. I’ve been wondering about similar issues myself recently, particularly given ‘t Hooft’s efforts to reproduce QM with cellular automata. The infinities and continuities of physics seem ultimately to be convenient symbolic devices rather than fundamental.

  74. Scott Says:

    Sandro #73: As a computer scientist, believe me that I share your prejudice in favor of discreteness! However, it’s important to understand that what ‘t Hooft wants to do is way more radical than “merely” discretizing spacetime (as in the LQG program). ‘t Hooft wants to get rid of the entire framework of quantum mechanics, with its amplitudes used to calculate probabilities. And unfortunately, that then leaves him totally unable to account for observed facts like the Bell inequality violation—a problem that he’s written a long series of papers for the last decade trying to talk around, which to my mind have been utterly unconvincing. Personally, I’m partial to the speculation that physics at the Planck scale should look vaguely like a quantum cellular automaton—in which case, there would still be continuity, but only at the “benign” level of amplitudes, and not at the level of any observable quantity.

    Incidentally, regarding the infinities of QFT, I think almost all physicists these days accept the Wilsonian perspective, according to which the infinities are indeed “merely” an artifact of our calculational tools, and the way those tools trace out our ignorance of what’s going on at extremely small distance scales.

  75. Gil Kalai Says:

    Joe (#72): “Gil, I think the linear increase in decoherence with the number of qubits is basically impossible on physical grounds. In order to have the rate of decoherence increase linearly, you necessarily need the energy levels to broaden linearly in the number of qubits (due to the Margolus-Levitin theorem). But this seems basically impossible to accomplish: the fundamental forces in nature fall off too rapidly.”

    Joe, For the context of my discussion with Scott, his arguments apply to
    (***) For all attempts at universal quantum computing devices with n qubits, decoherence cannot be decreased below 1/n^3 per qubit per gate….

    just as much as it applies to (**).

    But your argument is also incorrect. Here are two comments:

    1) The rate of noise in terms of qubit-errors scales up linearly with the rate of noise in terms of trace-distance when the errors are very correlated. Does your argument explain why errors cannot be correlated?

    2) As I pointed out, when you consider models or claims or proposed “laws” that exclude quantum fault tolerance and universal quantum computing, and apply them just to the hypothetical case of universal quantum computers, you may get some properties that seem “unphysical.” This simply expresses the fact that (given these models/proposed laws) universal quantum computers themselves are “unphysical.”

    Here is an analogy: When you try to study why perpetual motion machines cannot work you need to enlarge the discussion to general machines.

    If you have a law of physics for general machines that leads to an absurd conclusion for perpetual motion machines, the conclusion is not that the law is wrong but rather that perpetual motion machines are impossible.

    (Of course you need to carefully check your law for general machines.)

    Matt (#70): “Surely with all the PTIME advances going on and all the rapid reductions in exponents and all that, this question is easy?”

    I am not aware of P-time advances for the 2-D Hubbard model. (I asked a related question on TCS overflow about approximating the ground state.) “…what is the superconducting pair correlation function?” hmm, I will need more explanation to even understand the question.

  76. Gil Says:

    One more thing: (Joe #72) “It seems strange to think that every possible system considered for quantum computation must necessarily behave differently.”

    Ok, strange it may seem! But do you share, Joe, Scott’s opinion that the validity of a seemingly strange possibility of this kind will be more surprising than the failure of QM?

  77. Joe Fitzsimons Says:

    Gil, I must admit I am rather lost. You seem to have simply asserted that I am wrong and then made two slightly odd comments, neither of which seems to point to a flaw in the argument.

    Regarding the first comment, you are conflating two different issues. The minimum timescale on which a qubit can be rotated by pi in any direction is governed by the reciprocal of the gap between energy levels for the qubit. This isn’t speculation, it’s a theorem. Thus the timescale on which a qubit decoheres is governed by the local energy scale. I cannot find a physical interaction which allows this to grow linearly in the number of qubits in spatial dimensions <=3 if the qubits have a constant packing density. Indeed, there doesn’t seem to be one, since it should suffice to consider the four fundamental interactions (EM, gravitational, strong and weak).

    The correlation argument is flawed, because you are abstracting too far. By abstracting away the physics, you have basically allowed yourself arbitrarily strong arbitrary ranged interactions which are not possible in nature. You are implicitly assuming that qubits can be flipped at a rate which scales at least linearly in the system size, and this statement does not appear to be compatible with known physics.

    The second comment seems to be rather circuitous reasoning, effectively saying that I shouldn’t be allowed to apply physics to the question of why QCs shouldn’t work. I am not applying the argument to assumed QCs, I am applying it to any physical system: the gap between maximum and minimum energy levels of any physical system does not seem to grow linearly in the volume of the system.

    I must admit, I find it extremely puzzling that you seem content to ignore the fact that we do in fact know some physics beyond solely the Schroedinger equation (i.e. we know something about the kind of Hamiltonians that can exist which goes beyond a mere restriction to Hermitian matrices). You seem to be suggesting that your conjectures are somehow above the scrutiny of physics, although what we are currently discussing is clearly a physical hypothesis not a purely mathematical claim.

  78. Alexander Vlasov Says:

    Joe #72, I have had some difficulty understanding the global picture of the quantum computing battle, but my impression was that Gil Kalai suggested some new kind of dynamics (SLE?) that may prevent QC from working fault-tolerantly.

  79. Joe Fitzsimons Says:

    Gil, regarding your later post, I do not consider it possible. Allowing the possibility means ignoring knowledge we already have from many areas of physics.

  80. Gil Kalai Says:

    Hi Joe, indeed my response to your post was a little too quick: it assumed that the discussions from our quantum debate two years ago are fresh in our minds, and perhaps even assumed my own interpretation of some of those discussions, which you may not share.

    As far as I can see, Scott’s argument and comments regarding (**) apply equally well to a statement like this:

    (***) For all attempts at universal quantum computing devices with n qubits, decoherence cannot be decreased below 1/n^3 per qubit per gate….

    In other words, Scott regards quantum fault-tolerance as some sort of logical consequence of quantum mechanics. I disagree with this.

    My specific conjectures are not so relevant to the discussion I had with Scott.

    Coming back to the two points I raised and some of the points you have raised:

    1) We had a long discussion in the QC debate about my correlation conjecture, which also implies scaling up of errors (measured in terms of qubit-errors but not in terms of trace-distance). So my “correlation argument” was simply to tell you that we discussed it already. You made arguments similar to the one you made here (even with more details) and we looked at them very carefully. In particular, the references to the four basic forces of nature and to the standard model that you made then and now do not seem to be that relevant to quantum computers. But I admit that your argument now is different. Maybe you now have a new “killer” argument, but I don’t see it. I will be happy to discuss it further with you privately.

    2) Here is the logical structure of my argument:

    “It is quite possible that any attempt by us (or by nature) to build a universal quantum computer, or even just a system that realizes the error-correcting codes needed for quantum fault-tolerance, will lead to the error rate scaling up linearly, and hence to failure. That failure will occur well before you witness any violation of the known physics you talk about.”

    If this logical possibility violates QM or some other basic physics principle I will be very happy to know about it!

    If it just “seems strange” to you, then I can join you with this sentiment.

  81. Joe Fitzsimons Says:

    Hi Gil, I apologise, I somewhat misunderstood. I do not recall the claim about linear scaling of decoherence before, just a claim that we could not go below the level necessary for fault-tolerance at least at large scales, which is far weaker. I think we are going to disagree about the status of my prior comments about that conjecture. However, leaving that aside, the new claim that there should be a linear increase in the decoherence rate seems fatally flawed for the reasons I mentioned above. While these comments may sound similar to comments about previous conjectures, it is not the same.

    To get a linear increase, you need the local energy scale to increase linearly in qubit number. Do you agree with this? It is simply a restatement of the Margolus-Levitin theorem. For this to be true for quantum computing devices, it must be true for at least some physical arrangement of matter. However, it does not appear to be possible to construct such an arrangement of matter. Perhaps I am missing something, and if you can describe any arrangement of matter where the local potential grows linearly in the number of particles, I will happily drop this objection. However, as things stand, I believe that my argument is correct and you cannot get such a linear increase without postulating new fundamental forces, and that way lies madness.
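
    For reference, the bound I am invoking (stated here for convenience — this is just the standard Margolus–Levitin theorem, not anything specific to Gil’s conjectures) says that a quantum state whose mean energy lies \(E\) above the ground-state energy needs time at least

    \[ \tau_{\perp} \;\ge\; \frac{\pi\hbar}{2E} \]

    to evolve into an orthogonal state. So driving qubit flips — or, on my reading, decoherence events — at a rate that grows linearly in \(n\) requires the relevant local energy scale \(E\) to grow linearly in \(n\) as well, which is exactly the arrangement of matter I am asking for above.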

  82. Gil Kalai Says:

    Hi Joe, I will be happy to think about the possible relevance of the Margolus-Levitin theorem to my point of view. (My feeling is that it would not be relevant, as my conjectures exclude quantum fault-tolerance already for a small number of qubits, but given that I don’t remember hearing about this theorem before, let me think about it offline.)

    “I find it extremely puzzling that you seem content to ignore the fact that we do in fact know some physics beyond solely the Schrodinger equation”

    This is precisely where we fundamentally agree. (But Scott’s point of view puzzles me.) I don’t regard any prediction allowed by the basic framework of QM (such as QC) as a “prediction of QM whose violation means the failure of QM or something more surprising.” There are layers of physical laws on top of the basic QM framework. One of the purposes (at least my purposes) of the long discussion two years ago was to see if my conjectures conflict with “known physics beyond the Schrodinger equation,” and also to examine, in view of the conjectures, some “physics beyond the Schrodinger equation”—specifically how to model quantum noise. Beyond that, I suppose that we disagree on the details.

    Regarding the earlier discussion (#36, #45). What is the “extra axiom?” My short answer is “no quantum fault-tolerance.” Will it be more surprising than the breakdown of QM? Of course not, I don’t see any reason to think so. Will it be surprising/important? You bet.

    In my view “no quantum fault-tolerance” will be less surprising than quantum computers, but here we move again into the area of counterfactuals. The way I see it, quantum fault-tolerance will be more important to physics than quantum computation, and its importance to physics will be similar to the importance of non-deterministic computation in the theory of computing: their importance is that they cannot be achieved.

  83. Rahul Says:

    One point that annoys me in this discussion is how often academics brush away the question of “Will we have a working computationally superior QC in a certain time frame?” as an irrelevant aside that only some moronic materialistic philistine would ask.

    I wish one wasn’t forced to be so defensive about asking practical questions every once in a while. I know John Sidles keeps harping on his “QC Roadmaps” but perhaps for once he’s right & someone needs to lay out some projections or expectations. Maybe we won’t meet them but let’s at least see the current plan of attack?

  84. Rahul Says:

    I think I agree with Scott that this is fundamentally a chasm of viewpoints.

    Case in point is something Scott wrote on the blog a while ago:

    “When I talk to physicists like the ones at BNL, they often find it faintly ridiculous that anyone would doubt quantum mechanics. But people certainly do — even when they don’t admit that that’s what they’re doing — when the alternative is accepting that integers are efficiently factorizable in the physical world. “

    Indeed even to me it is intuitively far easier to accept integers being efficiently factorizable than QM being wrong. I wonder where Gil’s intuition stands on this question.

  85. John Sidles Says:

    Rahul  comments “John Sidles keeps harping on his “QC Roadmaps” but perhaps for once he’s right & someone needs to lay out some projections or expectations.”

    An undoubted virtue of roadmaps in general (perhaps their greatest generic virtue, as it seems to me) is that ten years on, they provide concrete foundations for discussing “what went right and what went wrong.”

    That is why (as it seems to me) the chief value of the 2002 and 2004 QIST Roadmaps (reports LA-UR-02-6900 and LA-UR-04-1778) will be realized only when the initial QIST intent “to revisit these important issues in future updates” is followed-up.

    A thought-provoking essay upon the topic of “revisiting roadmaps” is MajGen (and PhD historian) H. R. McMaster’s recent How militaries learn and adapt. The isomorphism \(\text{militaries}\leftrightarrow\text{scientists}\) turns out to be natural; for example the Yaakov Weinstein/Mitre preprint (of #48) is exemplary of good “revisiting” quantum information science.

    Specifically in regard to simulation, QIST 2004 (LA-UR-04-1778) accurately foresaw that

    “Quantum simulation represents, along with Shor’s and Grover’s algorithms, one of the three main experimental applications of quantum computers.”

    What was not foreseen by the 2002 and 2004 QIST Panels was the subsequent decade of thousand-fold capability increases and thousand-fold cost decreases in quantum simulation by classical/PTIME algorithms.

    Obviously it’s an open question — and a strategically/medically crucial one — whether comparable rates of progress in quantum simulation capability can be sustained in coming decades; moreover everyone appreciates that QIST-related math-and-physics insights are essential to answering this question “in width, in depth, and in context” … to borrow the language of McMaster’s essay.

    Needless to say, it is neither necessary, nor feasible, nor desirable that everyone think alike in these matters. And that is why I am grateful to researchers like Gil Kalai, Aram Harrow, John Preskill, and the many folks who contribute to Shtetl Optimized, and especially to Scott Aaronson for his enduring effort and commitment, that sustain the collegial diversity-of-thought, upon which rests the shared hopes, that we all cherish, of continued progress in quantum computing and quantum simulation capability.

  86. Alexander Vlasov Says:

    Sorry, it is partially my fault for steering the discussion away from the long-awaited paper about the D-Wave benchmark, which would be more interesting for me as well. Regarding Gil Kalai’s conjecture, I do not remember an explanation of the reason for such a conjecture, e.g. whether processes like those suggested by Gil Kalai have been observed earlier in nature.

  87. Scott Says:

    Rahul #83:

      One point that annoys me in this discussion is how often academics brush away the question of “Will we have a working computationally superior QC in a certain time frame?” as an irrelevant aside that only some moronic materialistic philistine would ask.

    If I get as annoyed by people asking this question as you do by people like me getting annoyed about it, the reason is that I’ve learned through experience that the demand for “QC roadmaps” is nothing more than a three-ring circus.

    In the first ring are the pointy-haired bosses who conceive of quantum computing as basically a next-generation iPad, and whose idea of “fundamental research into the nature of reality” is something that might take as long as 5 years to hit the store shelves.

    In the second ring are the shortsighted academics who say to the pointy-haired bosses, “wait, so all I have to do is promise you a useful QC in five years, and you’ll give me money now? wow, count me in! I can promise anything!”

    In the third ring are the QC skeptics like Dyakonov, who watch gleefully as the shortsighted academics inevitably fail to deliver on schedule, and exclaim “aha! we told you so! QC must be fundamentally impossible!”

    Meanwhile, far away from the din of the circus tent lies the actual truth of the matter: that we’re in more-or-less the same situation with QC that Charles Babbage was with classical computing in the 1830s. We know what we want and we know why the laws of physics should allow it, but that doesn’t mean our civilization happens to have reached the requisite technological level to implement it. With unforeseen breakthroughs, maybe it could happen in 20 years; without such breakthroughs, maybe it could take 200 years or longer. Either way, though, I’d say that the impact QC research has already had on classical computer science and on theoretical and experimental physics, more than justifies the small number of people who work on it (fewer, I’d guess, than the number of people whose job is to add pointless features and bugs to Microsoft Office) to continue doing so.

  88. Rahul Says:

    John Sidles #85:

    “An undoubted virtue of roadmaps in general (perhaps their greatest generic virtue, as it seems to me) is that ten years on, they provide concrete foundations for discussing “what went right and what went wrong.””

    Indeed! They also impose a sort of discipline on the experimental program. People focus on the key bottlenecks (e.g. decoherence, scaling, error correction, optics etc.) rather than fritter away time & resources on frills & second order problems.

  89. rrtucci Says:

    The Google Empire (or Neven’s Boys) Strikes Back
    https://plus.google.com/+QuantumAILab/posts/DymNo8DzAYi

  90. Rahul Says:

    Sadly I suspect a lot of readers will take this snippet from Google’s BlogPost as the take home message:

    “At 509 qubits, the machine [D-Wave] is about 35,500 times (!) faster than the best of these solvers [popular off-the-shelf solvers — Tabu Search, Akmaxsat and CPLEX]….”

  91. Sandeep Says:

    Hello Scott,
    I found the discussion started by Rahul quite interesting. I was reading one of the old Feynman papers and was completely impressed with what he said about discreteness.


    “All these things suggest that it’s really true, somehow, that the physical world is representable in a discretized way, because every time you get into a bind like this, you discover that the experiment does just what’s necessary to escape the trouble that would come if the electric field went to zero, or you’d never be able to see a star beyond a certain distance, because the field would have gotten below the number of digits that your world can carry.
    If we talk about discretizing…, of course I pointed out that we’re going to have to change the laws of physics. Because the laws of physics as written now have, in the classical limit, a continuous variable everywhere, space and time.”

    I wanted to ask you about the discrete theories you mentioned. Are there discrete equivalent theories of Maxwell’s theory or Quantum Mechanics?

  92. John Sidles Says:

    On his own weblog, Gil Kalai has posted what seems to me to be one of the very best commentaries of the entire Harrow/Kalai debate.

    Gil Kalai comments  “Many thanks for defending my postulates! Of course, I also thank [critics] for criticizing my postulates! best Gil”

    That was well-said, Gil Kalai!

    With similar intent to Gil’s, please let me defend the QIST roadmaps (of LANL reports LA-UR-02-6900 and LA-UR-04-1778) from Scott’s three charges (of #87).

    Charge 1  “Pointy-Haired Bosses” chartered QIST, and then

    Charge 2  “Shortsighted Academics” wrote QIST, and then

    Charge 3  “Gleeful Skeptics” exploited QIST.

    A plain reading of the QIST reports — which readers of Shtetl Optimized are encouraged to undertake — provides ample grounds for immediate dismissal of Scott’s three charges, upon the following findings:

    Finding 1  Responsible (and admirably foresighted) agencies chartered the QIST reports with an explicit goal of increasing the appreciation of quantum information science — by scientists and nonscientists alike — “in width, in depth, and in context.” And this is Good.

    Finding 2  The QIST reports accurately (and with admirable clarity) reflected the best then-available appreciation of quantum information science (of 2002 and 2004) “in width, in depth, and in context.” And this is Good.

    Finding 3  The best available skepticism has drawn upon the QIST reports to increase our collective appreciation of quantum information science (while amplifying our mathematical and medical ambitions for it) in ever-deepening “width, depth, and context” … and this deepening process continues to the present day. And this is Good.

    Concluding Proposition  A retrospective evaluation of the 2002 and 2004 QIST roadmaps, accompanied by a prospective evaluation of the prospects for feasible BosonSampling experiments, accompanied by Weinstein / Mitre-style simulations of those experiments, would: (1) be nominally unimpeded by considerations of state-secrecy and/or trade-secrecy, upon the concrete grounds that BosonSampling has “a grand total of zero known practical applications” (in Scott’s memorable phrase); and (2) contribute substantially to deepening the “width, depth, and context” of quantum information science, upon which the career-hope of young quantum scientists, mathematicians, and engineers chiefly depends, along with all our shared interest in beautiful mathematics, surprising scientific discoveries, and transformative practical applications.

  93. Scott Says:

    Sandeep #91:

      I wanted to ask you about the discrete theories you mentioned. Are there discrete equivalent theories of Maxwell’s theory or Quantum Mechanics?

    It depends a lot on what you’re trying to discretize.

    1. You can take any continuous field theory and “discretize” it by putting everything on a lattice. You can then hope to recover the continuum theory, in the limit as the lattice spacing goes to 0. This is what’s actually done in practice with Maxwell’s equations, QFT, GR and more. The main problem is that a lattice violates what we think are the symmetries of physical law (e.g., rotational and Lorentz symmetry), so this sort of thing is considered “non-fundamental,” just a calculational tool.

    2. In quantum gravity, on the other hand, it’s believed that space and time themselves should become “quantumly foamy” at the Planck scale of 10^-33 cm or 10^-43 sec. And some physicists—not all!—think that foaminess might actually correspond to spacetime being “discrete” (in some sense) at that scale. This is what the loop quantum gravity and spin foam programs try to realize, for example. Note that, because of the symmetry problem mentioned above, the form the “discreteness” takes would need to be much more subtle than just a lattice (like in Conway’s Game of Life).

    3. Much more radically, some people want to get rid even of the amplitudes in quantum mechanics (which are continuous complex numbers, but which are never directly observed, only used to calculate the probabilities of discrete outcomes). My personal view, however, is that no one has managed to get anything sensible that way. And one reason I’d give is that the unitary group has no “nice” enough discrete subgroups: one thing we learned early on in quantum computing is that most discrete sets of unitaries actually generate a dense subgroup of the full unitary group.
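
    For the curious, here’s a toy numerical illustration of that last point (a quick sketch of my own, using the standard Hadamard and T gates; the counting scheme is just for illustration): track where products of H and T send the state |0⟩ on the Bloch sphere, and watch the number of distinct points keep growing with the word length — consistent with (though of course not a proof of) the fact that H and T generate a dense subgroup of the single-qubit unitaries modulo global phase.

        import numpy as np
        from itertools import product

        # Single-qubit Hadamard and T gates.
        H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
        T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])
        gates = {"H": H, "T": T}
        ket0 = np.array([1.0, 0.0], dtype=complex)

        def bloch_point(psi):
            # Bloch-sphere coordinates of a single-qubit pure state, rounded for hashing.
            a, b = psi
            return (round(float(2 * (np.conj(a) * b).real), 6),
                    round(float(2 * (np.conj(a) * b).imag), 6),
                    round(float(abs(a) ** 2 - abs(b) ** 2), 6))

        for length in range(2, 13, 2):
            points = set()
            for word in product("HT", repeat=length):
                psi = ket0
                for g in word:
                    psi = gates[g] @ psi       # apply the gates in sequence
                points.add(bloch_point(psi))
            print(f"products of length {length:2d}: {len(points):4d} distinct Bloch points")

    If the generated gates formed a finite group, the count would saturate; instead it keeps climbing as the words get longer.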

  94. Sol Warda Says:

    Scott: See if you can poke any holes in this Google argument: https://plus.google.com/+QuantumAILab/posts/DymNo8DzAYi

  95. Clive Says:

    From: http://www.bbc.co.uk/news/science-environment-25787226

    Mr Hilton commented: “An important element of D-Wave’s technology is our roadmap and vision. We are laser focused on the performance of the machine, understanding how the technology is working so we can continue to improve it and solve real world problems.”

    He added: “Our customers are interested in solving real world problems that classical computers are less suited for and are often more complex than what we glean from a straightforward benchmarking test.”

    D-Wave says it currently has a 1,000 qubit processor in its lab and plans to release it later in 2014.

    “Our goal with the next generation of processors is to enhance quantum annealing performance, such that even benchmarks repeated at the 512 qubit scale would perform and scale better. We haven’t yet seen any fundamental limits to performance that cannot be improved with design changes,” Mr Hilton explained.

  96. Rahul Says:

    D-Wave says it currently has a 1,000 qubit processor in its lab and plans to release it later in 2014.

    For all my skepticism about D-Wave, one thing I appreciate is that they do provide concrete expectations. At least we have some waypoints to hold them accountable to.

  97. Douglas Knight Says:

    Rahul, why should I care whether the meaningless numbers they announce for a “new machine” match the prediction they set now? That’s like complaining that Apple released the iphone 4S rather than the iphone 5. Which was a common complaint, because people are idiots.

  98. Scott Says:

    Douglas #97: The common thread in almost ALL of Rahul’s comments on this blog, for maybe a year, has been a refusal to accept quantum computing for the basic research field that it is, and an insistence on treating it like an applied engineering project that can then be faulted for failing to produce “deliverables” on schedule. No matter how many times I or others try to explain the real situation to him, he just keeps on reverting back to that view.

    The reason D-Wave is able to promise 1000 qubits by such-and-such date is, of course, that they’re scaling up a technology that already exists. The “only” problem is that that technology (nanosecond coherence times, stoquastic Hamiltonians, no error-correction) does not appear to suffice for a speedup. Hence all those dithering, non-roadmap-driven academics who persist in trying to invent better technologies.

  99. Rahul Says:

    @Douglas Knight

    You should care because this announcement makes it possible for Matthias Troyer or some other bench-marker to compare the 1000 qubit machine against a classical competitor and yet again burst DWave’s hype bubble that they really do have a fast, useful, scalable Quantum Computer that the world ought to be buying for 10 million dollars.

    I appreciate DWave specifically for facilitating the benchmarks to prove DWave wrong. For giving yet another chance to falsify the claim that the black box contains a real, working QC. The unscrupulous ones don’t usually make it so easy (look at the guys promising fusion energy for example).

  100. John Sidles Says:

    Scott critiques the critics: “A common thread [is] a refusal to accept quantum computing for the basic research field that it is, and an insistence on treating it like an applied engineering project that can then be faulted for failing to produce ‘deliverables’ on schedule.”

    The American president Bill Clinton was celebrated — and reviled too — for his pragmatic ingenuity in triangulating problems that ideology-driven pundits (equally on the left and the right) deemed irreconcilable.

    We can all hope that Nature, being no less ingenious than Bill Clinton, will triangulate the various issues associated to quantum computation. One such happy triangulation, which would be strategically, economically and technologically transformational, *and* plenty fun for young mathematicians, scientists, and engineers, might be as follows.

    Acknowledgement  The following “quantum virtuous cycle” scenario owes much to the International Roadmapping Committee’s (IRC) sensible and cheerful analysis in the More-than-Moore White Paper (2010).

    (1)  D-Wave exhibits a heterogeneous \(k\)-member class of simulation and/or optimization problems \((S_i; i\in 1,k)\), chosen for their practical importance.

    (2)  D-Wave demonstrates (empirically) that “real world” instances of problems in each set \(S_i\) can be solved with \(n^{k_i}\) D-Wave qubits.

    (3)  New generations of D-Wave machines, having ever-increasing \(n\), are embraced by eager customers, in a paradigmatic virtuous cycle of quantum technology development.

    (4)  Simulationists meanwhile improve their classical algorithms, such that “real world” instances of problems in the D-Wave problem-set \(S_i\) can be solved with (let us say) \(n^{k_i+1}\) classical operations.

    (5)  New generations of classical algorithms also are embraced by eager customers, in a parallel paradigmatic virtuous cycle of classical technology development.

    (6)  A crucial contribution of (what Scott calls) “basic” quantum computing research is to provide the mathematical foundations and physical insights that sustain both the classical and quantum virtuous cycles.

    (7)  Progress toward the research objectives of error-corrected quantum computing and/or scalable BosonSampling continues at its historical and stately (but vital) one-doubling-per-scientific-generation pace … with no artificial pressure to accelerate that pace.

    Conclusion  It is plausible (even likely?) that Nature already has triangulated the debate between quantum computing advocates and skeptics, by sustaining a “virtuous cycle” of classical-quantum synthesis that already is generating transformational advances in simulation and optimization capability, such that we already live in a world in which the Extended Church-Turing Thesis (ECT) is effectively true.

  101. Douglas Knight Says:

    “yet again burst DWave’s hype bubble”

    What does that metaphor mean? When I burst a soap bubble, it stays burst. Do you mean like how lots of people burst Madoff’s bubble?

  102. Gil Kalai Says:

    #54 Scott: “Incidentally, thanks for reporting Robert Alicki’s speculation; it helps to clarify his views for me. But I don’t see his idea as a separate possibility: rather, I consider it 100% compatible with the classical Extended Church-Turing Thesis.”

    Yes this is 100% compatible with the thesis but still a separate possibility. There is a difference between the two ways that ECT can be satisfied in the context of quantum field theory computations.

    One possibility is that certain computations from quantum field theory are computationally hard and give a precise description of how nature works, but in order to probe the outcome, either experimentally or computationally, we need exponential resources.

    Another possibility is that certain computations from quantum field theory are computationally hard, and this indicates (because of the ECT) that they simply become irrelevant to the physics (error terms become larger than the main term) for complex systems.

    I still think that the second possibility is more likely, but I missed the first possibility, before Robert mentioned it to me.

  103. John Sidles Says:

    Gil Kalai postulates  (**): “For all attempts at universal quantum computing devices […] decoherence will scale up linearly with the number of qubits.”

    Scott Aaronson opines  “I absolutely would regard (**) as even more surprising than the failure of QM itself.”

    Daniel Gottesman postulates  (***): “No geometric restrictions on gates [in particular] interacting arbitrary pairs of qubits, no matter where the two qubits are physically located.”

    Reference  The Gottesman postulate is stated in his recent (excellent!) e-print “What is the Overhead Required for Fault-Tolerant Quantum Computation?” (arXiv:1310.2984v1).

    Observation  Nature imposes upon us the restriction that qubit interactions (both long-range and short-range) are mediated by QED field theory. And within the restricted world of QED, Gil Kalai’s (**) postulate is comparably plausible — from experimental experience, and from physical expectations, and from abstract mathematical arguments too — to Daniel Gottesman’s (***) postulate.

    Similar considerations apply to scalable BosonSampling:

    Scalable BosonSampling’s independence postulate  (****) “If we observe high-coherence \(n\)-photon emission into low-efficiency detectors, and we observe high-efficiency \(n\)-photon detection from low-coherence emitters, then we can attain high-coherence and high-efficiency performance by coupling the photon-sources to the photon-detectors.”

    The teaching of quantum optics (per the “Mordac” argument of #60) is that (****) is the least plausible postulate of all.

    Conclusion  The Kalai postulates may or may not prove to be correct — we don’t know as yet — however our present understanding of QM provides no obvious reason(s) for us to regard them as implausible.

  104. Rahul Says:

    Scott #98:

    I think that criticism is absolutely unfair. I have no problem with QC as basic research (ok, aside from how much we ought to be spending). But are you really saying that it’s me who’s pushing to treat QC as applied engineering? And not the hundreds of others out there (including DWave, Google, NASA, media, academics etc.) who are selling it as a prototype of a just-around-the-corner useful technology?

    Sure, I gloat with glee when the ones promising it as a solution fail to deliver & in that respect I like the ones who quantify their claims better since that sort of hype is easier to refute than some vague hype. I’d gloat as much over the failure of someone promising a car that runs on water or a fusion generator, or a perpetual motion machine or some such.

    There are some, like Scott, who are frank & honest about QC being basic research & I admire that greatly. But if you are claiming that no QC enthusiast / article / firm is flogging QC as a potential revolutionary technology to crack codes or elucidate protein structures etc., then you’ve got to be kidding me.

  105. Scott Says:

    Rahul #104: Dude, a large part of what I’ve been doing on this blog for almost a decade—and with my book, and with my New York Times and Scientific American articles, etc. etc.—is trying to fight the view of QC as applied engineering (and all sorts of other related misconceptions, like that QCs can solve NP-complete problems with exponential parallelism). I try to redirect people’s excitement to what I see as the real reasons to be interested in this field, which have to do with enormous open questions both in complexity theory and in quantum foundations.

    And no, I don’t blame you for the prevalence in the wider world of the “applied engineering” view, the thing I’ve been fighting all these years: I blame the hype circus that I described in comment #87. I’m only focusing on you because you’re here on this blog! 🙂

    Regarding decreasing the amount of funding for basic research in QC, let me submit the following thought: when scientists can’t get funded for making an honest case—“we want to try this and see where it will lead, and we don’t actually know; we only know that people asking these sorts of questions has been pretty damn successful for the last 400 years”—that’s precisely what pushes some fraction of scientists into making dishonest, hype-laden cases. Which doesn’t excuse their behavior, but which suggests that, if you’re against QC hype, then it’s in your own interest to see that people can get funded just by promising to explore the frontiers of computer science and physics, and not promising a useful device in X number of years.

  106. Fred Says:

    #93
    “Much more radically, some people want to get rid even of the amplitudes in quantum mechanics (which are continuous complex numbers, but which are never directly observed, only used to calculate the probabilities of discrete outcomes).”

    It’s quite interesting that while QM offers fundamental “quanta” which are just perfect for representing “pure” bits for classic digital computers, QC is really about taming mysterious continuous and non-directly-observable amplitudes…
    Seems like a throwback to the 1920’s and the whole particle/wave duality debates between Bohr, Pauli, Einstein, Heisenberg, Schrodinger, etc 🙂

  107. John Sidles Says:

    Fred (#106) quotes Scott (#93)  “Much more radically pragmatically, some people want to already get rid even of the amplitudes in quantum mechanics.”

    Students (especially) may wish to consult the burgeoning 21st century literature that amply justifies the maps \(\text{“radically”}\to\text{“pragmatically”}\) and \(\text{“want to”}\to\text{“already”}\). Two starting-points are Carlton Caves’ on-line notes Completely Positive Maps, Positive Maps, and The Lindblad Form (2000) and Howard Carmichael’s Statistical Methods in Quantum Optics 2: Non-Classical Fields (2007).

    Carmichael’s final chapter Quantum Trajectories III: More Examples especially is recommended; it begins:

    “Much more could be said about methods of calculation and simulation within the quantum trajectory approach [and there is] therefore a great deal more that could be done. There is only one chapter left, however, in which to do it, and many things must be set aside.

    In this final chapter we explore just a small selection of the possible extensions and applications of quantum trajectory theory.”

    Shtetl Optimized readers are invited to verify for themselves that Caves’ Lindbladian unravelling(s) of Carmichael’s quantum trajectories are specified wholly in terms of the local structures (metric & symplectic) of quantum state-space, with no need for particular assumptions regarding the global structure (or even dimensionality) of that state-space.

    History provides plenty of precedent for radical ideas in physics and mathematics to originate humbly, in (what Scott calls) “applied engineering”. Shtetl Optimized readers are invited to consult Nathaniel Bowditch’s New American Practical Navigator of 1807 for a startlingly modern exposition of the key principles of differential geometry:

    “In sailing on [any course] the surface of the globe may be supposed to be divided into an indefinite number of small surfaces, as square miles, furlongs, yards, &c. which on account of their smallness, in comparison with the whole surface of the earth, may be esteemed as plane surfaces, and the difference of latitude and departure (or meridian distance) made in sailing over each of these surfaces, may be calculated by the common rules of Plane Sailing, and by summing up all the differences of latitude and departures made on these different planes, we shall obtain the whole difference of latitude and departure nearly.”

    A subsequent generation of geometers (Gauss, Bolyai, Lobachevsky, Riemann, etc.) merely formalized the non-Euclidean mathematical concepts and efficient computational algorithms that Bowditch (a fanatical educator) had already communicated to tens of thousands of common sailors.

    Conclusion  The doors already stand wide-open for 21st century researchers to radically pragmatically and routinely “get rid even of the amplitudes in quantum mechanics.”

    Conclusion  In researchers like Carlton Caves and Howard Carmichael, the 21st century quantum research community already has its Nathaniel Bowditches … as a vigorous cohort of “tensor practitioners” (in Joseph Landsberger’s kindly phrase) who already are embracing radically post-Hilbert quantum mathematical ideas in service of pragmatical engineering objectives.

    Our 21st century’s Carl Gauss, János Bolyai, Nikolai Lobachevsky, and Bernhard Riemann of (post-Hilbert) quantum dynamics have not yet appeared on-stage … but surely we can appreciate Gil Kalai as her/his/their prophet!

  108. rrtucci Says:

    [Luke] trying to fight the view of QC as applied engineering …
    the real reasons to be interested in this field, which have to do with enormous open questions both in complexity theory and in quantum foundations.

    [Yoda] That is why you fail, Luke. Quantum Computer others like China will get first if you persist in your complacency. You must unlearn what you have learned.

    Luke: I want my lamp back. I’m gonna need it to get out of this slimy mudhole.
    Yoda: Mudhole? Slimy? My home this is!

  109. quax Says:

    rrtucci #108, well at least we know that in this drama, Geordie cannot turn out to be Scott’s father. The age difference doesn’t allow for that.

  110. quax Says:

    Scott #98, I thing at the end there you meant to write “quantum speed-up”, instead of speed-up.

    From a simplistic applied engineering point of view, any practical speed-up will suffice quite nicely to ensure D-Wave’s business success.

  111. quax Says:

    “I thing therefor I am?”

    Despite the beautiful preview function I still manage to make spelling mistakes …

  112. Scott Says:

    quax #110: No, I meant what I said. As I explained in the post, there’s currently no evidence for ANY speedup of the D-Wave Two over a decent classical algorithm, regardless of its source (slightly outperforming simulated annealing on an unpredictable 10% of instances is totally consistent with the “random noise” you’d see were one classical heuristic compared against another). Furthermore, it seems to me that a quantum speedup WILL ultimately be needed, because that’s the only kind that will be non-ephemeral and Troyer-resistant.
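
    To make the “random noise” point vivid, here’s a toy simulation (the parameters are entirely arbitrary and this is not the Rønnow et al. methodology — just two fictitious classical heuristics whose per-instance runtimes are drawn from the same distribution, so that by construction neither is genuinely better):

        import numpy as np

        rng = np.random.default_rng(0)
        n_instances = 10_000

        # Per-instance solve times for two made-up heuristics, drawn from the SAME
        # lognormal distribution: neither solver has any real advantage.
        t_a = rng.lognormal(mean=0.0, sigma=1.0, size=n_instances)
        t_b = rng.lognormal(mean=0.0, sigma=1.0, size=n_instances)

        ratio = t_a / t_b            # ratio > k means solver B was k times faster
        for k in (2, 5, 10):
            print(f"B at least {k:2d}x faster on {np.mean(ratio > k):.1%} of instances")

    Even with zero real advantage, “solver B” comes out several times faster on a healthy fraction of instances, purely by the luck of the draw.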

  113. Rahul Says:

    quax #110

    “From a simplistic applied engineering point of view, any practical speed-up will suffice quite nicely to ensure D-Wave’s business success.”

    Even a practical non-quantum speedup seems impossible. Remember that black box costs $10 million! DWave’s practical competition is a $10 million conventional cluster. That’d probably be ~100 TeraFlops.

  114. Fred Says:

    One of my favorite observations from Feynman was that at any given point in space the light/EM fields from all the stars in the observable universe are in superposition (and they do not interfere). That’s about 10^20 fields in superposition. And we’re able to tune out all of them and focus on just one (with a telescope).
    If I understand correctly, a QC with a thousand qubits would require the simultaneous superposition and manipulation of 2^1000 amplitudes, and unlike the example with light, those amplitudes interfere with each other.
    That number is just mind-boggling (what is the underlying hardware nature is using to do all this?!). Maybe the amplitudes can be replaced by something else, but the size of the state would stay the same regardless, no?
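
    (Just to put that number in perspective — a quick back-of-the-envelope of my own, assuming a brute-force classical state vector at 16 bytes per complex amplitude:)

        # Classical memory needed to store a full n-qubit state vector,
        # assuming 16 bytes per complex amplitude (complex128).
        for n in (10, 30, 50, 300, 1000):
            amplitudes = 2.0 ** n
            print(f"n = {n:4d} qubits: ~{amplitudes:.2e} amplitudes, ~{16 * amplitudes:.2e} bytes")

    Already around n = 50 you’re past the ten-petabyte mark, and by n = 300 the byte count dwarfs the roughly 10^80 atoms in the observable universe.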

    On one hand QM works beautifully to explain many physics experiments, but on the other hand we’re talking about taming 2^1000 amplitudes.
    Isn’t this a bit like saying that Einstein’s theory of gravity predicts black holes and wormholes, and then jumping to conclusions about the actual feasibility of space travel across the galaxy?

    As another example, going back to Feynman, electrodynamics is very successful in understanding how we can manipulate the fields from 10^20 stars or send radio signals across the earth, but the theory leads to all sorts of embarrassing dead ends when trying to apply it to figure out the effects of an electron on itself:
    http://www.feynmanlectures.caltech.edu/II_28.html#Ch28-S4
    (btw, maybe this has been solved since Feynman wrote this, I have no idea).

    The beauty of a classical “digital” computer is that we are able to take a bottom-up approach by building strongly independent abstraction layers: transistors are used to implement robust on/off switches. Switches are used to implement logic gates. At that point we’re in the realm of pure logic and the underlying hardware can be fully ignored (until you have to reboot your computer, of course).
    But with QC it doesn’t seem that obvious that we’ll ever be able to “abstract out” the manipulation of 2^1000 continuous amplitudes in a similar way.
    In many aspects QC are more like “analog” computers where errors due to noise and non-linearities have to be tamed throughout the whole system – I guess that’s where error-correction theories come in, but they don’t seem to convince everyone yet.

  115. Gil Kalai Says:

    A few responses to questions addressed to me:

    1) Alexander Vlasov (#62): “Gil Kalai #59, I could understand your trouble about problem with QC due to noise or error correcting problem if we would have working quantum computing circuits with many qubits failing with Shor’s factoring algorithms only due to errors or noise.”

    This is an important point: My prediction is that we will not have working quantum computing circuits with many qubits at all. Understanding the relation between evolution and noise for general quantum devices holds the key for understanding why this is the case.

    2) Matt (#70): These are good questions and I have a few further things to say about them. But it is not clear that this is a good place and time for it and I don’t want to stretch Scott’s hospitality. You are welcome to ask me privately or on my blog.

    3) Alexander Vlasov (#86): “Relating Gil Kalai conjecture I did not remember explanation of a reason for such conjecture, e.g. if processes like suggested by Gil Kalai were observed earlier in the Nature.”

    Again, this is an important question, but I don’t think we should discuss it here and now. The short answer is that I expect that all processes we observe in nature, or create ourselves satisfy my conjectures. (Again, we can discuss it privately.)

    4) John Sidles referred #92 to a comment where I thanked him for defending my conjectures and Joe for attacking them as “one of the very best commentaries of the entire Harrow/Kalai debate.”

    Ha ha, I take this as a compliment – knowing John who while often subtle is never never malicious. 🙂 To the point, Joe’s contributions in the debate and his criticism on my conjectures were very helpful to me, and indeed I am very thankful.

    5) Rahul (#84): “(quoting Scott) ‘When I talk to physicists like the ones at BNL, they often find it faintly ridiculous that anyone would doubt quantum mechanics. But people certainly do — even when they don’t admit that that’s what they’re doing — when the alternative is accepting that integers are efficiently factorizable in the physical world.’

    Indeed even to me it is intuitively far easier to accept integers being efficiently factorizable than QM being wrong. I wonder where Gil’s intuition stands on this question.”

    I also regard it as much more likely that integers are efficiently factorizable than that QM is wrong. As far as I am concerned, QM being wrong is not the issue in the QC debate. In my opinion, “Scott’s alternative” (that either QM is wrong or QC can be built) is simply incorrect, and is based on an incorrect understanding of QM. (I am still truly curious to understand this point of view.)

    Greg Kuperberg offered, in some earlier discussion on this blog, a different “alternative” that we can refer to as “Greg’s alternative”: Either a) QC will be built, or b) a new law of thermodynamics will forbid QC. Greg wrote that he does not believe that a new law of thermodynamics forbidding quantum computers will be established.

    I regard “Greg’s alternative” as more reasonable than “Scott’s alternative,” but, here, I disagree with Greg’s judgment: From (just) these two possibilities, that QC will be built or that a new principle of thermodynamics will forbid QC, I regard the second possibility as more likely.

  116. Rahul Says:

    Gil Kalai #115:

    Either a) QC will be built, or b) a new law of thermodynamics will forbid QC.

    I’m putting my money on (c): No fundamental obstacle exists to building useful QCs, but in fact useful QCs will never be built.

    To misquote Scott: “QC will end up being one of the vast number of possible things that were never in fact done.” This may not be intellectually satisfying but reality is not obligated to always provide us such satisfaction.

    People may discover lots of interesting things along the way, but no major pillar shall fall, and eventually interest & money in actually building a useful QC will peter out, more out of frustration and attrition than anything else, and so it shall stand. Again, that’s only my speculation.

  117. Scott Says:

    Fred #114:

      On one hand QM works beautifully to explain many physics experiments, but on the other hand we’re talking about taming 2^1000 amplitudes.
      Isn’t this a bit like saying that Einstein theory of gravity is predicting black holes and worm holes, and then jumping to conclusions about the actual feasibility of space travel across the galaxy?

    That’s a very interesting analogy, because in the latter case, we actually more-or-less know the physical reasons why it wouldn’t work. Namely, at least in classical GR, wormholes can’t be maintained unless you have matter with negative energy density. And in quantum field theory, negative energy does sometimes get introduced as a “bookkeeping device,” but you never see it in the form that would be needed to keep a wormhole open.

    Note also that, if traversable wormholes did exist, they would let us violate causality (i.e., build closed timelike curves), so we have a strong a-priori theoretical reason why the laws of physics should conspire to prevent them.

    With quantum computing, the situation is very different: no one has managed to articulate any clear, principled obstruction to building one, and we also don’t know any principle in any area of physics that they would violate.

    Your comment also illustrates something I’ve noticed many times: how QC skepticism is tied to exaggerated views of what a QC could do if we had one. Saying that you’d have to “tame 2^1000 amplitudes” does make it sound pretty implausible! But in fact, you don’t have to do that in a much stronger sense than a classical randomized algorithm would “tame 2^1000 probabilities.” In both cases, the actual operations that you perform are simple to describe and act on only 1 or 2 bits at a time. The amplitudes or probabilities only arise at a more abstract level of description—namely, if you want an object (a “state”) that you can use to predict the probability of any possible measurement outcome.
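
    (To make that concrete, here is a minimal, purely illustrative sketch of a brute-force state-vector simulation, not code from the post or the paper: the state does have 2^n amplitudes, but each gate is specified by a 2x2 matrix and touches only one qubit at a time.)

      import numpy as np

      def apply_single_qubit_gate(state, gate, target):
          """Apply a 2x2 unitary `gate` to qubit `target` of an n-qubit state vector."""
          n = state.shape[0].bit_length() - 1        # the state has 2**n amplitudes
          psi = state.reshape([2] * n)               # one array axis per qubit
          psi = np.tensordot(gate, psi, axes=([1], [target]))
          psi = np.moveaxis(psi, 0, target)          # restore the qubit ordering
          return psi.reshape(-1)

      # Example: turn |00...0> into a uniform superposition with n one-qubit Hadamards.
      n = 10
      H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
      state = np.zeros(2 ** n, dtype=complex)
      state[0] = 1.0
      for q in range(n):
          state = apply_single_qubit_gate(state, H, q)
      print(state[:4])   # every amplitude equals 1/sqrt(2**n) = 0.03125

    (The classical analogue would be a Markov chain on n bits: 2^n probabilities in the abstract description, yet every update rule acts on only a bit or two.)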

      In many aspects QC are more like “analog” computers where errors due to noise and non-linearities have to be tamed throughout the whole system – I guess that’s where error-correction theories come in, but they don’t seem to convince everyone yet.

    You could very easily get a skewed impression by reading this blog! Among people who really understand quantum error-correction, I’d guess that there are 99 who are completely convinced for every 1 who isn’t. (Most people would say that, because of quantum error-correction and other insights, we now understand that a QC is fundamentally not like an analog computer: instead, it’s much more like a probabilistic digital computer, except with the probabilities replaced by amplitudes.) It’s just that the few people who aren’t convinced are disproportionately subjects of or participants on this blog! 🙂

  118. John Sidles Says:

    Gil Kalai praises  “Greg Kuperberg’s Alternatives:  Either (A) QC will be built, or (B) a new law of thermodynamics will forbid QC.”

    Gil’s long post (#117) overlapped with my long post (#118), and so please let me briefly endorse Gil’s admirably concise framing of the discussion in terms of Kuperberg’s Alternatives, and join with Gil in setting forth reasons why Alternative B is comparably plausible to Alternative A.

    For me, the plausibility of Kuperberg Alternative B is illuminated by the century-old and still-ongoing struggle to reconcile microscale quantum dynamics with macroscale thermodynamics.

    Nature requires this reconciliation both when we journey (in our scientific imagination) to black hole event horizons, and when we seek to reconcile (in practical engineering devices) quantum-coherent spin-imaging channels with the messy, macroscopic, thermodynamic realities of the human biology under observation.

    Important Conclusion  Every quantum researcher owes Scott Aaronson a debt-of-gratitude for so patiently and tolerantly hosting this fine open discussion, in which many diverse contributors have set forth many cogent points.

  119. quax Says:

    Rahul #113, this black box costs $10M now. They assemble them the way cars were built before Mr. Ford came along. Every single one is hand-crafted at this point.

    Scott, I certainly have the greatest respect for Matthias Troyer, but to the extent that semiconductor-based CPU integration density is reaching maturity while superconducting ICs are just coming out of the gate, my bet is based not solely on the ‘quantumy’ part but on the underlying hardware potential as well.

    Your argument surely will be that you can just as well implement classical superconductive computing. And you are right, of course; yet when it comes to building these kinds of chips, D-Wave has amassed a huge amount of know-how, so this is also playing to their strength as a business venture.

  120. Gil Kalai Says:

    Let me add to #115, #118 my own thanks to Scott. While Scott rarely refers to the specifics of my work, Scott’s papers, writings (and blog) provide valuable input to my own work, however worthless it may or may not be.

  121. John Sidles Says:

    Scott claims  “Among people who really understand quantum error-correction, I’d guess that there are 99 who are completely convinced for every 1 who isn’t.”

    Scott, doesn’t common-sense tell us that any estimate of this ratio (however large it may be) requires a very considerable correction for self-selection effects?

    After all, how many folks are motivated to critically consider recent work — Daniel Gottesman’s “What is the Overhead Required for Fault-Tolerant Quantum Computation (FTQC)?” (arXiv:1310.2984v1), for example, or Yaakov Weinstein/Mitre’s recent Quantum Error Correction During 50 Gates (arXiv:1312.1245) — without prior conviction of the feasibility of scalable FTQC?

    History, cognitive science, and common sense all teach that neither intelligence nor training is substantially protective against prejudice (scientific or otherwise). An inexhaustible source of humor is that everyone perceives this fault more clearly in other folks than in themselves, which is why Feynman’s double-edged aphorism receives universal acclaim:

    “The first principle is that you must not fool yourself — and you are the easiest person to fool.”

    Fortunately for science, reasoned debate is moderately protective against prejudice, and the evaluations of the market also can provide dispassionate grounds for the reconsideration even of cherished scientific beliefs.

    Conclusion  Jeff Bezos’ repeated rounds of investment in D-Wave and Bill Gates’ repeated rounds of investment in Schroedinger LLC supply dispassionate grounds (to young researchers in particular) for (re)assessing the wonderful (as they seem to me) scientific plausibilities, mathematical naturalities, engineering potentialities, enterprise possibilities, and global strategic realities that are inherent within (what Gil Kalai so nicely set forth in #115) “Kuperberg Alternative B.”

  122. Mike Says:

    quax,

    It may turn out that D-Wave does build a better mousetrap (a classical computer) that is more cost-effective than, say, classical superconductive computing, and there may be a market for it as well.

  123. quax Says:

    Mike #122, I doubt D-Wave will give up on quantum annealing, but since the D-Wave machine requires classical processing for set-up and read-out, it’ll make perfect sense to integrate this into the fridge.

    I think the most likely development is what Scott dubs the ‘stone soup’ scenario. He kinda seems to imply this is a bad thing; I, on the other hand, like it a lot, because it’ll draw more industry R&D money and incrementally improve on an entirely new computing platform, until it’ll be quantum enough for Scot’s taste. All the cheerleading and sales pitching that’ll go on along the way will probably make him very cross 🙂

  124. quax Says:

    Meant of course to write ‘Scott’s taste’ really need to work on my speed typing skills …

  125. rrtucci Says:

    quax said
    “I think the most likely development is what Scott dubs the ‘stone soup’ scenario. He kinda seems to imply this is a bad thing; I, on the other hand, like it a lot, because it’ll draw more industry R&D money and incrementally improve on an entirely new computing platform, until it’ll be quantum enough for Scot’s taste. All the cheerleading and sales pitching that’ll go on along the way will probably make him very cross”

    Henning, this has happened before in recent history. For example, the Star Wars Defense Initiative, first proposed by Reagan in 1983, is an idea which is considered ridiculous by MANY smart, honest scientists but which continues to be funded to this day. God knows how many hundreds of billions of dollars the US has poured into it, even though it’s so error-prone and easy to foil (which reminds me of quantum cryptography, a mini star wars initiative being foisted by some complexity theorists).

  126. rrtucci Says:

    I wonder how many D-Wave computers the NSA has. Heck, they probably spend 10 million in electricity in just a week. I saw in some newspaper (NYT?) aerial photos of the cooling plants for the computer farms in the NSA Utah facility, and it looks like something that could cool the entire planet.

  127. Gil Kalai Says:

    Rahul (#116): “I’m putting my money on (c): No fundamental obstacle exists to building useful QCs, but in fact useful QCs will never be built… Again, that’s only my speculation.”

    Rahul, this is certainly a possibility, and, in fact, Michel Dyakonov, who is now visiting Jerusalem and giving a condensed-matter seminar today, largely holds this view (contrary to what Scott attributes to him in #87). But I disagree that “you can put your money on it,” since I do not see how such an opinion can be verified. (At least if “fundamental principle” means “definite principle,” or just “principle.”)

    This is a reminder that for a useful scientific statement we need both the statement and its negation to be falsifiable, and that a scientific theory needs to be falsifiable as well as verifiable.

    I also don’t regard your evaluation as a speculation. To say that quantum computers will be built is a speculation; to suggest, as Scott does, that it will be easier to demonstrate superior computational power with non-useful machines is a speculation; to say, as Geordie does, that we will see useful optimization with quantum annealing machines is a speculation; to say, as John does, that in ways we cannot understand at present both conflicting views in the quantum computer debate are right is a (sweet) speculation; to say, as I do, that our future understanding of why quantum computers cannot work is going to change large areas of quantum physics is a speculation; but to simply say that we are going to fail is a reasonable opinion, though I won’t call it a speculation, no matter how unlikely it may or may not be.

  128. Scott Says:

    rrtucci #125:

      quantum cryptography, a mini star wars initiative being foisted by some complexity theorists

    That’s an absurd characterization: QKD has essentially nothing to do with complexity theory; in fact the whole point of it is to sidestep the need for complexity-theoretic cryptography. Is “complexity theorist” just your term for anyone you dislike?

  129. Scott Says:

    rrtucci #126:

      I wonder how many D-Wave computers the NSA has

    My guess: somewhere in the vicinity of 0.

  130. Had Says:

    Some statements I’m putting out there for criticism and correction.

    The annealing schedule is fixed for all of these random instances. Troyer et al. talk about being honest with classical algorithms, but this is (in a sense) being a little dishonest with the AQO algorithm. Everyone knows that a stupid annealing schedule will never get you a speedup, and in fact it takes some care to get the quantum speedup (theoretically) in the case of Grover search (I’ll get you the ref if you want it).

    To me, this is the overarching problem of AQC. We’ll never get speedups unless we come up with a good way to choose the annealing schedules. And then, of course, it’s up to D-Wave to make their hardware capable of implementing arbitrary schedules…

    So what am I missing here?

  131. Had Says:

    BTW, I meant to note that I’m talking about nonlinear annealing schedules, as opposed to strictly linear ones.
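
    (For what it’s worth, here is a toy, purely illustrative sketch of the “local adiabatic” idea from the theoretical Grover-search analysis alluded to above: instead of the fixed linear schedule s(t) = t/T, one spends time in inverse proportion to the square of the spectral gap, so the evolution slows down exactly where the gap is small. The gap function below is made up for illustration; it is not D-Wave’s schedule or anything from the paper.)

      import numpy as np

      def gap_adapted_schedule(gap, total_time, n_steps=10_000):
          """Build s(t) numerically with ds/dt proportional to gap(s)**2."""
          s = np.linspace(0.0, 1.0, n_steps)
          dt_ds = 1.0 / gap(s) ** 2                  # time density: large where the gap is small
          t = np.concatenate(([0.0],
                              np.cumsum(0.5 * (dt_ds[1:] + dt_ds[:-1]) * np.diff(s))))
          return t * (total_time / t[-1]), s         # the pairs (t_i, s_i) define s(t)

      def toy_gap(s):
          # Illustrative only: gap of order 1 at the endpoints, small near s = 1/2.
          return np.sqrt(1.0 - 0.99 * 4.0 * s * (1.0 - s))

      t_nonlinear, s_vals = gap_adapted_schedule(toy_gap, total_time=100.0)
      t_linear = 100.0 * s_vals                      # the one-size-fits-all linear ramp, for comparison

    (In the adiabatic Grover analysis it is precisely this kind of gap-adapted schedule that recovers the quadratic speedup; a fixed linear ramp does not.)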

  132. Rahul Says:

    Gil #127 says:

    “But I disagree that “you can put your money on it,” since I do not see how such an opinion can be verified. “

    True. Well, I could amend it slightly to: “(c) No fundamental obstacle to building useful QCs will ever be discovered, but in fact useful QCs will not be built in the next 100 years.”

    (I’m tacitly assuming 100 years of failures would be enough to dissuade most people)

  133. Rahul Says:

    Scott #117:

    “Among people who really understand quantum error-correction, I’d guess that there are 99 who are completely convinced for every 1 who isn’t.”

    Stupid question. Convinced of what exactly?

  134. rrtucci Says:

    Okay Scott, I stand corrected, that was an unfair characterization of complexity theorists

  135. quax Says:

    Had #130, that’s a good point. I’ve heard a lot about error correction in the adiabatic regime, but I am not aware of any work on how to improve the annealing schedules.

  136. Scott Says:

    Rahul #133: Convinced that scalable QC is possible in principle, and that it wouldn’t at all be analogous to analog computing—that that comparison is purely a symptom of one’s not having sufficiently thought through the linearity of quantum mechanics and what it entails.

  137. Rahul Says:

    Scott #136:

    Thanks! No argument against that.

  138. Rahul Says:

    “Daniel also wanted me to mention what he saw as an important point in the paper: namely, all the ratios of simulated annealing time to D-Wave time are normalized by 512/N…..In this way, one eliminates the advantages of the D-Wave machine that come purely from its parallelism”

    Another way to have a fair comparison might be to have D-Wave compete against an N-core parallel conventional algorithm?

    Or is that too much of a hassle to implement?

  139. Fred Says:

    Scott #136: 🙂 I guess it’s time I read again my copy of
    http://www.amazon.com/Quantum-Computing-Computer-Scientists-Yanofsky/dp/0521879965

  140. Mike Says:

    Scott@Update (Jan.23):

    There you go again. You may delude yourself into thinking that clear, logical and straightforward analysis will carry the day, but unfortunately you are (i) once again confusing fundamental science and capitalism, (ii) failing to appreciate that any practical speed-up will suffice quite nicely to ensure D-Wave’s business success, (iii) being blind to the fact that the underlying hardware potential is what matters, not whether or not there is any quantum speed-up, and (iv) not understanding that this has happened before, with the Star Wars Defense initiative; here too the idea is considered ridiculous by many smart and honest scientists like yourself, but it continues to be funded to this day. Wait a second, somehow I don’t think (iv) is such a good argument. Well, anyway, it’s very clear that you just don’t get it 😉

  141. quax Says:

    Mike #140, of course quantum speed-up is still the ultimate prize.

    If you want to break it down into a black-and-white issue, there is Scott’s camp, which you summarized as considering D-Wave’s approach ridiculous, i.e. the whole thing should be scrapped. That is apparently your read on his stance; I’m not sure he’d actually go that far.

    And then there are the D-Wave hopefuls, such as myself, who consider their approach as building a bridge to universal adiabatic quantum computing.

    Scott fears they’ll fizzle out and damage the entire QC endeavor; I don’t think that’s going to happen. My expectation is the opposite there as well: I expect that D-Wave’s activities will get more commercial R&D money flowing into the field.

  142. Curious Says:

    quax@141,

    Yeah, I wish Scott had an edit function because after posting I realized that I was putting words in his mouth (for effect), and I agree that that does not seem to be his take — mea culpa.

  143. John Sidles Says:

    Mike deplores “the Star Wars Defense initiative [which] is considered ridiculous by many smart and honest scientists like yourself”

    Mike, please let me commend to your reading attention an outstanding survey of systems engineering principles: Stephen B. Johnson’s The Secret of Apollo: Systems Management in American and European Space Programs (2002). In Johnson’s front matter we read:

    In a hotly contested Cold War race for technical superiority, the extreme environment of space exacted its toll in numerous failures of extremely expensive systems. Those funding the race demanded results.

    In response, development organizations created what few expected and what even fewer wanted — a bureaucracy for innovation.

    To begin to understand this apparent contradiction in terms, we must first understand the exacting nature of space technologies and the concerns of those who create them.

    Systems approaches emphasize integrative features and the elements of human cooperation necessary to organize complex activities and technologies. Believing that humans are irrational, I find the creation of huge, orderly, rational technologies almost miraculous. I had never pondered the deeper implications of cooperative efforts amid irrationality and conflict, and this project has enabled me to do so.

    I sincerely hope that this work helps others recognize that the ‘systems’ in which we all take part are our own creations. They help or hinder us, depending upon our individual and collective goals.

    Regardless of our feelings about them, they are among the pervasive bonds that hold our society together.

    Here we see that the Apollo program shares a great virtue with BosonSampling: going to the moon and estimating permanents equally serve no useful purpose … *except* Stephen Johnson’s grand purpose of fostering “cooperative efforts amid irrationality and conflict.”

    Which is a plenty good purpose for both classical and quantum systems engineering (as it seems to me).

    As for the strictly technical feasibility of missile defense, no-one whose family-members have sheltered under Iron Dome or C-RAM/Phalanx-B doubts it.

  144. quax Says:

    Curious #141, you are confusing me, are you posting under two different handles i.e. Mike == Curious?

  145. Scott Says:

    John Sidles #143:

      As for the strictly technical feasibility of missile defense, no-one whose family-members have sheltered under Iron Dome or C-RAM/Phalanx-B doubts it.

    My understanding is that currently, missile defense systems are just barely good enough to partially protect a very small area (like Tel Aviv) from a very crude adversary (like Hamas). Which is no mean achievement! But they’re nowhere near good enough—nor will they be for the foreseeable future—to protect an area the size of the continental US from an adversary like Russia or China. And the reason is basically the one Carl Sagan and all those other carping scientists pointed out in the 80s: an adversary who’s even slightly sophisticated can take trivial countermeasures, like sending lots of decoy missiles.

  146. John Sidles Says:

    Scott, your analysis (#145) indefensibly assumes that the primary role of missile defense is defense against missiles. Shtetl Optimized readers should be aware of analyses (like Stephen Johnson’s cited in #143) that embrace a broader perspective.

    You wouldn’t claim that the primary role of BosonSampling research was to estimate permanents, would you?

  147. Scott Says:

    John #146: The primary role of BosonSampling is to solve the BosonSampling problem. What, in your view, is the primary role of missile defense, if not to defend against missiles? To make people feel safer (even though they aren’t)? To make them feel like they came together to solve a hard problem (even though they didn’t solve it)?

    Let’s also note that you seem to have switched without even blinking from your first argument, which concerned, as you put it, the “strictly technical feasibility” of missile defense.

  148. Rahul Says:

    @John Sidles: “Scott, your analysis (#145) indefensibly assumes that the primary role of missile defense is defense against missiles.”

    What is the primary role of missile defenses in your opinion? I am confused.

  149. quax Says:

    Always assumed that it was common wisdom that the primary role of the missile defense program was to line the pockets of the defense contractors. Guess I was mistaken.

    Must say I am now looking nostalgically at those times when the wars stayed mostly cold. I’d much rather have an overinflated defense budget spent on dubious science (which is still science, after all) than actual war.

  150. John Sidles Says:

    In reference to Scott’s #151 and Rahul’s #152, the plain teaching of Stephen Johnson’s The Secret of Apollo (p.7) conveys valuable insights in regard to QC research:

    “These groups [developing space programs] found that changes in organization and management were crucial. […] The unique technical problems of spaceflight posed difficulties requiring social solutions — changes in how people within organizations related to one another.”

    How might these systems-level “social solutions” work concretely, in the context of quantum research? One approach is to read works like Colin McLarty’s well-regarded “The rising sea: Grothendieck on simplicity and generality” (Google finds it) as a mathematical sequel to Stephen Johnson’s systems-engineering analysis; in effect we can read McLarty’s work as a disclosure of “The Secret of Grothendieck”.

    Concretely this means (for our SHS seminar) that we can systematically work to follow-up the 2013 SHS ‘Green Sheet’ seminar notes, which provide a Johnson-style geometrized account of macroscopic quantum thermodynamics, with (what we plan as) the 2014 SHS ‘Gold Sheet’ seminar notes, that (as we intend) will provide a Grothendieck-style algebrized account of microscopic quantum dynamics. Already it is clear that the geometric Green Sheet world-view is going to mesh naturally with the algebraic Gold Sheet world-view.

    It is clear too, that the Gold Sheet notes will require rather little in the way of discovery or invention; the challenges mainly reside in mathematical translation, transcription, and notation. Here McLarty quotes Grothendieck to good effect:

    “The sea advances insensibly in silence, nothing seems to happen, nothing moves, the water is so far off you hardly hear it … yet it finally surrounds the resistant substance.”

    As for individual mathematical questions and challenges that arise in regard to R&D programs similar to ‘Gold Sheet’, forums like MathOverflow and TCS StackExchange are more suitable than Shtetl Optimized, and we expect to be pretty active in these forums in 2014.

    Of course, the Green Sheet/Gold Sheet geometrization / algebrization program represents just one approach to appreciating quantum dynamical phenomena “in width, in depth, and in context” (per #38). As was said before (in #86), it is neither necessary, nor feasible, nor desirable that everyone think alike in these matters. And surely it is no bad thing for young researchers (especially) to familiarize themselves with the abundant & diverse & wonderful mathematical and historical literature on these topics.

  151. Scott Says:

    John Sidles: In light of your last comment, in which—having been caught flat-out contradicting yourself in your absurd statements about missile defense—you “answer” direct questions by changing the subject like a toddler, I regret to say that you’re once again banned from this blog for 3 months. If you don’t want the endless cycle of bannings to continue, then you’ll need to adopt a different writing style, one in which you respond clearly to what other people say to you and address the issues at hand.

  152. Fred Says:

    Besides D-Wave and all its hype, what other note-worthy private or academic efforts at practical QC are out there?
    (hopefully more open and systematic in their approach)

  153. Scott Says:

    Fred #152: It’s hard to know where to begin with that question! In ion traps, there’s David Wineland (recipient of last year’s Nobel Prize) and his group at NIST, and Rainer Blatt and his group in Innsbruck. And there’s the group centered around University of Maryland, which got a lot of attention recently because of the funding they receive from NSA. 🙂 In superconducting qubits, there’s John Martinis and his group at UC Santa Barbara, and Robert Schoelkopf and his group at Yale—both of whom now have superconducting qubits with something like 10,000 times the coherence times of D-Wave’s (!!), though obviously they haven’t had D-Wave’s resources and so on to try to scale things up. Then there are excellent optics groups at Oxford, Bristol, Brisbane, and elsewhere. And there’s plenty more—maybe experimentalist readers can fill us in (anonymously, if they like 🙂 ) on what they think are the most noteworthy experimental efforts that I missed.

  154. Rahul Says:

    I’ve always wanted to hear more from the experimental guys. It is so hard, though! Are they all reticent/modest, or just too busy? It’d be nice for a change to hear about the practical aspects from someone other than D-Wave!

  155. Scott Says:

    Rahul #154: It’s true that experimentalists seem to have much less culture of mouthing off randomly on blogs than theorists have! One reason, I think, is that experimental collaborations are basically like mini-corporations, so there’s a question of who’s “authorized” to speak for the whole corporation, especially if the boss doesn’t have time. A second problem is the embargo policies of Science and Nature, which forbid the experimentalists from saying anything about papers currently under review. Maybe there are other factors that I’m less aware of.

  156. Rahul Says:

    Scott #154:

    Indeed. Good points. But perhaps not the whole story, since they can always talk about papers after the embargo, or comment on other people’s experimental work, or even comment anonymously. Again, it’s my view that we ought to be pumping far more resources into the experimental side of QC.

  157. Rahul Says:

    As an aside Wikipedia has this factoid “As of May 2011, the largest number of particles to be controllably entangled is 14 trapped ions”

    Does anyone know what the post-2011 numbers are on this? I’m curious to know how things have evolved since 2011. I’m assuming this is one relevant metric to track on the experimental scaling side? Another experimental quantity whose yearly trend I’d be eager to know is the coherence time Scott mentioned in # 153.

    PS. Does anyone know of a website or repository that tracks such quantitative parameters for QC projects? Or are such numbers too simplistic to be relevant. e.g. The largest Shor factorized integer. Or largest Boson Sampling problem solved.

  158. rrtucci Says:

    As Scott has described, there are some outstanding university groups doing experimental research in QC. What is glaringly missing is one or more (preferably at least 2 more, IMHO) private companies competing with D-Wave for federal and private money and pursuing the gate model. Once this vacuum is filled, Scott will be happier because the gate model is being given some respect, like Rodney Dangerfield never got; Henning will be happy because more QC work is being conducted in the private sector; and I’ll be happy because Scott and Henning are happy.

    Two other things that would IMHO be highly beneficial (is there any Aaron Swartz spirit left at MIT?):

    (1) Demand that the NSA make all QC research, both in devices and algorithms, unclassified. I suspect that if Bruce Kane’s research at LPS were made public, Intel, the 1000-pound gorilla, might FINALLY get interested in QCs and begin doing some R&D into semiconductor QC. The internet was developed mostly as an unclassified project (correct me if I’m wrong) and that worked pretty well.

    (2) Start an X Prize for quantum computing (called the Gil Kalai Prize?).
    Small perturbations sometimes cause landslides.

    If things continue the way they are now, it’s very possible that QC development will degenerate into a mini-Star Wars defense initiative. SDI achieved very little, at an expense of hundreds of billions of dollars, over many barren decades.

  159. Scott Says:

    rrtucci #158: Would I be happy if a company were founded to do gate-model QC? That would depend entirely on the details of the company, and how serious/honest it was! Unfortunately, my guess is that any quantum computing company founded in the next decade (at least) will face nearly-insurmountable pressure to distort the truth, in the direction of things being much, much closer to usefulness than they are.

    Actually, QC seems to me like a textbook example of the continuing role for government-, foundation-, and corporate-lab sponsored basic research in technology. There are many, many things that startup companies do best, but I’m skeptical that feats like the invention of the first laser, transistor, spacecraft, or scalable QC are among them.

  160. joe Says:

    rrtucci #158: HRL has exactly what you’re asking for. They have been running a QC program there for at least 5 years. They also occasionally publish their research (although their publication policy is apparently “we publish the stuff we’ve done a while ago that we don’t think will be useful, or is so far behind what we have now, that we don’t mind giving it away”), e.g. http://www.nature.com/nature/journal/v481/n7381/abs/nature10707.html — supposedly their effort is supported by a $70MM DARPA grant, but I haven’t seen anything on the Internet to substantiate that figure. It doesn’t seem unlikely though, given how many people they hire, and the complexity of running a long-term experimental effort.

    Since HRL’s project is classified, we don’t have much information about how they’re doing, but I’d be surprised if they were anywhere close to building a 50-(physical)-qubit machine, let alone the >1000-logical-qubit machine you need to build a useful factoring machine (where, with current overheads for fault tolerance, a single logical qubit probably needs ~1000 physical qubits). As Scott mentions, this kind of effort will probably require more than a decade of (very well-funded) work, and even then, it’s a massive gamble.
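
    (Spelling out the arithmetic behind that estimate, using only the ballpark numbers in the comment above:)

      # Ballpark figures from the comment above, not precise engineering numbers.
      logical_qubits = 1_000         # rough size of a cryptographically useful factoring machine
      physical_per_logical = 1_000   # rough current fault-tolerance overhead per logical qubit
      print(logical_qubits * physical_per_logical)   # ~1,000,000 physical qubits,
                                                     # versus the ~50 mentioned above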

    There are a few other companies and labs pursuing gate-model QC, including a fairly serious effort at IBM (superconducting qubits), BBN (also superconducting qubits), and Sandia (ion traps). (If you count theory efforts, Microsoft Research, and a few defense contractors, also have small teams.)

  161. rrtucci Says:

    Wow, Joe, it seems that all the examples you are mentioning, except for IBM, are classified. The situation is much more serious than I realized. (I don’t think Microsoft is doing experimental work. Even if they are doing an epsilon, it certainly is way too little to accomplish the job)

    Why, pray tell, should the public get to know who is being funded for QC research, and by how much, only if they reach a breakthrough… Oh, but wait: according to the Washington Post article, if they do reach a breakthrough, then they are mandated to classify their work as top secret.

    And what about small entrepreneurs and academics that are not part of the military industrial complex or the old boys network?

    Would you say that your examples—Hughes, Raytheon BBN and Sandia—competed hard in a transparent, open, fair system to get their grants? (Not to mention Lockheed Martin, MIT Lincoln Lab, MITRE, LPS, Booz Allen Hamilton…) If it’s all fair, honest competition, why can’t we find out how much money they are getting? The NSF discloses on a nice website how much money Scott is getting from them for QC research. And Scott publishes all his research on the arXiv. Why can’t we know exactly how much money each researcher at Hughes is getting from the NSA and other corrupt secret government agencies? Why shouldn’t all their research be put on the arXiv?

  162. Rahul Says:

    @rrtucci:

    Indeed! Why shouldn’t the world be a nice, ideal, transparent, just place with no intrigue, corruption, politics, nepotism etc…….

  163. quax Says:

    Rahul #162, to me it seems rrtucci has a perfectly valid point. Things used to be run differently. The US’s overarching obsession with security is a fairly recent phenomenon, in no small part courtesy of Bin Laden. I guess the latter really hated our academic freedoms.

  164. Fred Says:

    QC is definitely tricky – how do you convince investors to put their money into producing something that only happens when absolutely no-one’s looking at it?

  165. Rahul Says:

    @quax:

    He was talking about classified research. There’s no way that’s a recent phenomenon.

  166. rrtucci Says:

    No, Rahul, I was talking about QC classified research.

  167. Rahul Says:

    What makes QC so special that it should not be part of the classified defense enterprise? A lot of Govt.-sponsored research into potentially mega-powerful technology is classified.

    Working QC does have major cryptography implications. Breaking codes & eavesdropping is a big chunk of what these agencies do.

  168. rrtucci Says:

    Rahul, I find a lot of pitfalls in your argument. Let me point out just one.

    According to the Washington Post article, the NSA wants to classify any “cryptographically useful quantum computer” for 25 years! In other words, they want to classify any QC that works well and declassify any QC that works like crap. Gee, thanks. I just wanted to use the crappy ones.

    By the way, the NSA is not even doing that, because they won’t tell us how many D-Wave computers they own, and Scott would say that D-Wave computers are in the crappy category.

    What if I have no interest in using QCs for cryptography? For example, what if I want to use them for machine learning? Suppose I have a batch of IDENTICAL QCs and I stamp half of them top secret because Captain Kirk (Little Keith Alexander) wants to use them to decode RSA and I stamp the other half unclassified because Scott promises NOT to use them to break codes.

  169. rrtucci Says:

    Since LPS (or Hughes or Sandia or MIT Lincoln Lab, or…) has not achieved a “cryptographically useful quantum computer” yet, none of the work that they have done so far should be classified, but it is. Contradiction? This is to be expected, because at this point the NSA has been caught lying so many times that Pinocchio is in awe of their audacity.

  170. Rahul Says:

    @rrtucci:

    I am confused. If Scott tomorrow succeeds in suddenly making a “cryptographically useful quantum computer”, he can release it into the public domain, or distribute it for free, or start a new Microsoft, or just burn the blueprints and retire to Hawaii. Right? No legal obstacle? NSA can do nothing about it.

    Now, what they can indeed keep classified is merely the stuff funded by them under NSA contracts, which seems perfectly fine to me. Even if the NSA funds a super-duper magic ink or an egg-peeler project and decides to keep it classified, fine, it’s their funeral. Once we agree on the principle that some govt.-funded research programs can be kept classified, I don’t see why the NSA including QC under the classified tag is sacrilegious.

    Now, if you want to say, “Aha! But they use public money after all!”, OK, there’s an argument. But that’d apply generically and there shouldn’t be any classified programs at all.

    You also say, “what if I want to use them [QC’s] for machine learning?” Sure, go ahead, just build them with your own money please (or pull a D-Wave and manage to con some rich guys into funding your mega project).

  171. Rahul Says:

    rrtucci #168 says:

    By the way, the NSA is not even doing that because they won’t tell us how many D-wave computers they own

    I don’t see what’s so astonishing. Does the NSA tell you (truthfully) how many conventional, plain-old, boring, non-quantum Dell servers they own? Nope. They probably won’t even tell you what brand of hand soap they buy, just out of paranoia.

    I have no clue where you got the idea that the mission & idea of something like the NSA is fundamentally compatible with voluntary self disclosure of internal assets.

    Hell, that’s why we needed Snowden after all, eh?!

  172. Douglas Knight Says:

    rrtucci, no: what’s top secret is whether LPS is making a serious attempt at a cryptographically useful QC, or just doing preliminary experiments. The results of preliminary experiments may or may not be top secret.

    Rahul: Right? No legal obstacle? NSA can do nothing about it.

    Lots of things discovered outside the NSA have become classified, like differential cryptanalysis in the mid-70s. How? I certainly don’t know. Lots of nuclear facts have been classified by force. The legal situation for nukes is probably special, but if there’s anything you should have learned about the NSA last year, it’s to stop asking questions with “legal” in them.

  173. quax Says:

    Rahul, I think at issue here is the dual-use nature of QC; if the NSA were trying to keep universal QC from the public generally, then that would be very problematic.

  174. Rahul Says:

    quax #173:

    if the NSA were trying to keep universal QC from the public generally, then that would be very problematic.

    Is there any sign that they are? Have they passed any gag orders? Threatened any researchers? All I read is about how they are funding tons of QC research with their own funds and keeping these projects classified.

    I don’t see the problem.

  175. Michael Bacon Says:

    A new hat in the ring:

    http://arxiv.org/pdf/1401.7087.pdf

  176. Scott Says:

    Michael Bacon: Thanks for the link! Yeah, I saw that on the arXiv last night and started reading it. We now have what look like the makings of a real scientific dispute here, from which all sides stand to learn something! Personally, I’m withholding judgment until I study the new paper and (especially) see the Troyer group’s response to it.

  177. Michael Bacon Says:

    “We now have what look like the makings of a real scientific dispute here, from which all sides stand to learn something!”

    Well, I hope we don’t have to wait for a paradigm shift. 😉

  178. rrtucci Says:

    Come on Scott, get real: it takes more than an academic “dispute” to derail a Star Wars Initiative. It’s like announcing that you’ve calculated that spitballs are just as efficient at bringing down Russian or Chinese MIRVs as Scuds are. The military-industrial complex will ignore you, because they can’t line their pockets by selling spitball guns.

  179. Michael Bacon Says:

    rrtucci,

    Now I am confused 🙁 I thought you supported D-Wave, at least generally, based on the prospect that enough science was on their side. In your last comment are you saying that, like Star Wars, they’ll keep going even without the science, because the MIC can continue to line its pockets? Or did you have another point? Or no point, except a stab at humor? Anyway, like I said, I am confused . . . .

  180. Scott Says:

    rrtucci #178: Dude, I never claimed that this dispute would derail the Wave Power. My point was more like, “this is what the terms of the debate might look like in a more rational world.”

    And Michael #179, I became better able to appreciate what humor there occasionally is in rrtucci’s snarks once I stopped looking for intellectual coherence in them!

  181. rrtucci Says:

    Michael,

    I’m for unclassified quantum computing, both in devices and algorithms, for all institutions (universities, defense labs, defense contractors, secret corrupt government agencies that spend 1 billion dollars trying to hack into my smart phone and my Angry Birds).

    I’m for an NSF-style, transparent website where all QC research funding particulars (who and how much) are reported.

    I’ve never been totally pro or con D-Wave. I criticize them for some things and praise them for others. However, I would like there to be 2 or more private companies doing gate-model QC, competing against D-Wave for private and federal funding. Because the more competition we have on a transparent, level playing field, the faster we will develop QCs.

    I’m for the Gil Kalai X Prize in quantum computing, because Gil should put up or shut up. If he is so sure QCs are impossible, then he shouldn’t mind betting his house that they’re impossible, just like Scott did with Deolalikar 🙂

  182. Michael Bacon Says:

    rrtucci,

    You play Angry Birds? 🙂

  183. rrtucci Says:

    Yes. It’s good for people with anger management issues

  184. quax Says:

    Rahul #174, my point is to be wary and not put any misplaced faith in an institution like the NSA. Academic freedom, like any other freedom, can be eroded over time. Better to be vigilant.

    And yes, this is a slippery slope argument, so feel free to ignore it 🙂

  185. Matthias Troyer on Quantum Computers - Machine Intelligence Research Institute Says:

    […] much attention recently. Our readers can get up to speed on that story via your arxiv preprint, its coverage at Scott Aaronson’s blog, and Will Bourne’s article for Inc. For now, though, […]