Umesh Vazirani responds to Geordie Rose

You might recall that Shin, Smith, Smolin, and Vazirani posted a widely-discussed preprint a week ago, questioning the evidence for large-scale quantum behavior in the D-Wave machine.  Geordie Rose responded here.   Tonight, in a Shtetl-Optimized exclusive scoop, I bring you Umesh Vazirani’s response to Geordie’s comments. Without further ado:

Even a cursory reading of our paper will reveal that Geordie Rose is attacking a straw man. Let me quickly outline the main point of our paper and the irrelevance of Rose’s comments:

To date, the Boixo et al. paper has been the only serious evidence in favor of large-scale quantum behavior by the D-Wave machine. We investigated their claims and showed that there are serious problems with their conclusions. Their conclusions were based on the close agreement between the input-output data from D-Wave and quantum simulated annealing, and their inability, despite considerable effort, to find any classical model that agreed with the input-output data. In our paper, we gave a very simple classical model of interacting magnets that closely agreed with the input-output data. We stated that our results implied that “it is premature to conclude that D-Wave machine exhibits large scale quantum behavior”.

Rose attacks our paper for claiming that “D-Wave processors are inherently classical, and can be described by a classical model with no need to invoke quantum mechanics.”  A reading of our paper will make it perfectly clear that this is not a claim that we make.  We state explicitly “It is worth emphasizing that the goal of this paper is not to provide a classical model for the D-Wave machine, … The classical model introduced here is useful for the purposes of studying the large-scale algorithmic features of the D-Wave machine. The task of finding an accurate model for the D-Wave machine (classical, quantum or otherwise), would be better pursued with direct access, not only to programming the D-Wave machine, but also to its actual hardware.”

Rose goes on to point to a large number of experiments conducted by D-Wave to prove small scale entanglement over 2-8 qubits and criticizes our paper for not trying to model those aspects of D-Wave. But such small scale entanglement properties are not directly relevant to prospects for a quantum speedup. Therefore we were specifically interested in claims about the large scale quantum behavior of D-Wave. There was exactly one such claim, which we duly investigated, and it did not stand up to scrutiny.

555 Responses to “Umesh Vazirani responds to Geordie Rose”

  1. Sol Warda Says:

    Dr. Vazirani: As a non-expert in computer science, I have one simple question for you, which I will get to in a second:
    As you know, the original paper of Boixo et al. was challenged by your co-authors, Smith & Smolin, here:
    Then Boixo et al. responded to it here:
    Apparently, Smith & Smolin were not happy or satisfied with their “comment”. My question to you is this:
    Who approached whom to re-visit the subject of this paper?
    Was it you that approached Smith & Smolin, or was it the other way around? The reason I ask you this simple question
    is because there appears to be a pattern of a concerted effort by IBM scientists to challenge any claims by D-Wave scientists and others, about speedup, quantumness…etc. of DW2. I believe this is the third or fourth attempt, by IBM scientists, to challenge such papers. Why so much interest from mighty IBM scientists about this “pipsqueak” upstart? Thank you.

  2. Rahul Says:

    @Sol Warda:

    You really think this skepticism is an IBM conspiracy? What about Scott, Gil Kalai, and literally hundreds of other academic skeptics who question D-Wave’s speedup & quantumness? They are IBM instigated too?

    In any case, since when did “attempt to challenge a scientific claim” become a nefarious undertaking? IBM scientists are not even 10% of the D-Wave skepticism wave really.

    Furthermore, leave all QC aside, let me ask you in return: Who chose CPLEX to show off the D-Wave machine’s performance against? Well, if you choose to benchmark your product’s performance against a commercial IBM product, how can you then whine about the big guy picking on you? Of course he’s going to try to demolish your claims, and especially so if they turn out to be false advertising.

    There’s rock solid commercial, scientific and perhaps even ethical reasons for anyone to critique D-Wave.

  3. Sol Warda Says:

    Rahul: It’s perfectly alright for you, Scott, Gil…..etc. to critique D-Wave or anything else in an open forum such as this. Everybody knows that Scott is very generous in allowing this kind of banter & arguments…etc. on his blog. But, when it comes to highly-paid scientists of a mighty corp. going out of their way, and presumably putting their research on hold, to challenge a number of peer-reviewed papers published in a prestigious journal, especially if they come from scientists of a small start-up co., then there is more to it than purely scientific integrity. By the way, ALL these challenges came in the last year! I believe futuristic commercial motives are at stake, from their point of view.

  4. Sol Warda Says:

    Scott: You should have an “edit” button on your blog, so we can correct our stupid misspellings or grammatical errors! Thanks.

  5. Alexander Vlasov Says:

    OK, but, e.g., already in the abstract of the paper we may read:

    Based on these results, we conclude that classical models for the D-Wave machine are not ruled out.

    Such a sentence can make a person with a physics background rather upset, because it may sound like a claim that classical models of superconductivity are not ruled out.

    Maybe it would be reasonable to use more exact wording?

  6. Dániel Says:

    Alexander Vlasov: Classical models of a single hydrogen atom of the machine are ruled out. I think it’s perfectly clear from the context that classical models for the D-Wave machine’s input-output behavior are the issue.

  7. Scott Says:

    Sol #1: Umesh is on a plane now. When he lands, I can ask him about the genesis of their paper—but in the meantime, he asked me to answer comments for him, so let me try.

    I think your comment betrays a fundamental misunderstanding of what IBM Research (or comparable organizations, like Microsoft Research or the old Bell Labs) is actually like. Such labs are essentially like universities. People move back and forth between them and universities. They study more-or-less whatever they’re interested in, and they publish what they want. The biggest differences are that

    (1) they don’t have to teach,
    (2) people doing “practical” things for the company might use them as resources,
    (3) if someone visits them, the visitor has to wear a badge, and
    (4) if the parent company goes under, they could lose their jobs.

    I know Graeme Smith and John Smolin, I’ve visited IBM Yorktown Heights, and the idea that Graeme or John would be taking orders from higher-ups at IBM to “write papers crushing D-Wave” is, frankly, preposterous.

    What’s far more plausible is that researchers in a corporate lab might get inquiries from the pointy-haired bosses in HQ: “what is this D-Wave we keep reading about in the news? should we be worried about it? oh, and why are you guys mucking around with 2 or 3 qubits, if D-Wave claims it has 512 qubits?” And such inquiries might motivate the researchers to examine D-Wave more carefully. I have no idea whether anything like that ever happened at IBM or Microsoft, but even if it did, I wouldn’t consider it the slightest bit nefarious.

  8. Scott Says:

    OK, Umesh just answered my email from the airport lounge. He explains that the history is this: Seung Woo Shin, a student at Berkeley (and a TA for Umesh’s quantum computing course), got interested in writing classical codes to try and outperform the D-Wave machine. Umesh helped out in an advising capacity. Later, at the IBM conference “What can we do with a small quantum computer?,” Umesh and Seung Woo talked to John Smolin, and discovered that he and Graeme Smith had been trying out similar ideas. So, given that the earlier Smith-Smolin paper was a major inspiration for what Umesh and Seung Woo were doing, they invited John and Graeme to join their paper.

  9. sflammia Says:

    What is the big deal even if IBM is giving such orders to “crush” D-wave? They are a company in direct competition with D-wave for customers for optimization software. If IBM thinks that D-wave is misrepresenting its product, then they are very well justified in directing their researchers to publicly debunk those claims. If IBM were lying or smearing D-wave then there would be a scandal. But in each of the papers that I’ve read from IBM there appears to be no malicious intent, but rather just standard scientific inquiry.

  10. Mike Says:

    ” . . . the idea that Graeme or John would be taking orders from higher-ups at IBM to “write papers crushing D-Wave” is, frankly, preposterous.”

    Unfortunately, it’s the sort of thing one hears when the speaker is unwilling or unable to argue the science on the merits.

  11. Rahul Says:

    Is the success probability histogram on a random problem set a typical way to fingerprint these algorithms?

    I was a bit surprised that the test of quantum-vs-classical hinged on comparing bimodal vs unimodal distributions of success probability data of DWave vs Simulated Annealing etc.

    Further, is the bimodal shape of say Simulated Quantum Annealing or Spin Dynamics approaches expected by logical considerations? Why?
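
For context on the methodology being asked about: in the Boixo et al. setup, one generates many random Ising instances, runs each instance many times, records the fraction of runs that find a ground state, and then histograms those per-instance success probabilities. Below is a minimal, purely illustrative Python sketch of that pipeline (tiny chain instances with exact ground states by brute force; none of the parameters, instance sizes, or instance classes here are those actually used by Boixo et al.):

```python
import itertools
import math
import random

def energy(J, h, s):
    """Ising-chain energy E = -sum_i J_i s_i s_{i+1} - sum_i h_i s_i."""
    n = len(s)
    return (-sum(J[i] * s[i] * s[i + 1] for i in range(n - 1))
            - sum(h[i] * s[i] for i in range(n)))

def ground_energy(J, h, n):
    """Exact ground-state energy by brute-force enumeration (tiny n only)."""
    return min(energy(J, h, s) for s in itertools.product([-1, 1], repeat=n))

def anneal_once(J, h, n, sweeps=100, beta_max=3.0):
    """One simulated-annealing run with a linear cooling schedule;
    returns the energy of the final spin configuration."""
    s = [random.choice([-1, 1]) for _ in range(n)]
    for t in range(sweeps):
        beta = beta_max * (t + 1) / sweeps  # temperature slowly decreases
        for i in range(n):
            f = h[i]  # local field on spin i from bias + chain neighbours
            if i > 0:
                f += J[i - 1] * s[i - 1]
            if i < n - 1:
                f += J[i] * s[i + 1]
            dE = 2 * s[i] * f  # energy change if spin i flips
            if dE <= 0 or random.random() < math.exp(-beta * dE):
                s[i] = -s[i]
    return energy(J, h, s)

def success_probabilities(n_instances=20, n=8, runs=20, seed=0):
    """Per-instance success probability: fraction of annealing runs that
    reach the exact ground-state energy.  Histogramming these values
    across instances gives the unimodal-vs-bimodal 'fingerprint'."""
    random.seed(seed)
    probs = []
    for _ in range(n_instances):
        J = [random.choice([-1, 1]) for _ in range(n - 1)]
        h = [random.choice([-1, 1]) for _ in range(n)]
        e0 = ground_energy(J, h, n)
        hits = sum(anneal_once(J, h, n) == e0 for _ in range(runs))
        probs.append(hits / runs)
    return probs
```

On instances this easy, simulated annealing tends to pile up near success probability 1; the claimed fingerprint in the debate above was that the D-Wave data and simulated quantum annealing showed a bimodal histogram across instances, while classical simulated annealing did not.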

  12. Alexander Vlasov Says:

    Dániel #6, I think the discussion above illustrates that it is not perfectly clear.

  13. Scott Says:

    Alexander #12: I agree with Dániel. If you wanted to be willfully obtuse, you could say that even a classical computer (like your laptop) has no “classical model,” since you need QM to explain how the individual transistors work! For that reason, it seems clear from context that Shin et al. are talking about classical models for the higher-level behavior of the D-Wave machine—e.g., models that don’t involve any global entanglement.

    (And in any case, since Umesh explicitly clarified in the last paragraph of this post that that is what they meant, maybe we should move on… :-) )

  14. rrtucci Says:

    Rahul, I think that what Sol is saying is that IBM is suffering from classic Freudian penis envy

  15. Alexander Vlasov Says:

    Scott #13, I think that perhaps among the reasons to use a superconducting design, instead of a more compact solid-state one à la Kane, was a hope to avoid comparisons with laptops.

  16. Scott Says:

    rrtucci #14: Sometimes a superconducting qubit really is just a superconducting qubit.

  17. Joe Fitzsimons Says:

    And IBM has much better qubits…

  18. Sol Warda Says:

    Scott: Thank you for answering my question to Dr. Vazirani. By the way, who is the “leading author” of that paper? Isn’t it Dr. Shin? And if so, why didn’t he see fit to answer Dr. Rose, instead of Dr. Vazirani? Dr. Rose NEVER mentioned Dr. Vazirani’s name in his blog.
    I just found it very curious that these two scientists from IBM should spend close to a year trying to poke holes in, or even shoot down, this particular paper! Remember D-Wave scientists, USC scientists & others have put out close to 70+ peer-reviewed papers in the last ten years, and nobody ever heard from IBM. Yet suddenly, in the last year, at least three attempts were made by IBM scientists to discredit a couple of papers. All this came on the heels of the announcement by D-Wave of their sale (lease?) of a DW2 to a consortium of blue-chip names, i.e. Google, NASA, USRA…etc. Was it a pure coincidence? Perhaps, but I doubt it. I think it more likely that it’s a story of David & Goliath, and you know the rest of it. I thank you once again.

  19. Scott Says:

    Sol #18:

      By the way, who is the “leading author” of that paper? Isn’t it Dr. Shin?

    The order of the authors’ names is alphabetical. In theoretical computer science, we don’t normally have “lead authors.” (And not that it matters, but Seung Woo Shin is not yet a “Dr.,” I don’t think.)

      And if so, why didn’t he see fit to answer Dr. Rose, instead of Dr. Vazirani? Dr. Rose NEVER mentioned Dr. Vazirani’s name in his blog.

    Err, yes he did:

      As an aside, I was disappointed when I saw what they were proposing. I had heard through the grapevine that Umesh Vazirani was preparing some really cool classical model that described the data referred to above and I was actually pretty excited to see it.

      When I saw how trivially wrong it was it was like opening a Christmas present and getting socks.

    One other thing:

      I think it more likely, that it’s a story of David & Goliath, and you know the rest of it..

    It seems to me that, when it comes to influencing the public perception of quantum computing, D-Wave/Google/NASA is Goliath (they were just on the cover of TIME magazine, for crying out loud), and we academics are David. (And I include the researchers who happen to work at IBM or MSR as “academics.”)

  20. Seung Woo Shin Says:

    Sol Warda #18: I don’t understand your comments about me. I think the point of a scientific debate is not who answers whom, but what are the facts and what are not.

    John and Graeme are scientists, and investigation of D-Wave machines is merely one of the many projects that they are working on simultaneously. D-Wave machines obviously stirred up a lot of excitement in the scientific community last year, and it is only natural that scientists working in the same discipline would take interest and think about them. That is how I started working on this problem myself, and I don’t think there is any good reason to believe that John’s and Graeme’s intent was any different from mine.

  21. Mike Says:


    While I tend to think you are suffering from a little paranoia, let’s assume you’re right and there is some kind of IBM-inspired conspiracy afoot where certain scientists are induced to focus their efforts on discrediting D-Wave for IBM’s commercial purposes. I have a couple of questions:

    1. D-Wave and IBM are both commercial enterprises. Is your sole sociological objection based on the David and Goliath analogy? Or, is either D-Wave or IBM, or one more than the other, intentionally distorting evidence to help achieve their respective commercial goals?

    2. Apart from the possible, but in my opinion highly unlikely, accuracy of your sociological complaints, do you have any comments at all regarding the adequacy of the science? Although they didn’t test all possible benchmarks, did they choose unreasonable ones? Were the tests conducted inadequately? Have the results been intentionally skewed to support the puppet master IBM’s commercial goals?

  22. Scott Says:

    Seung Woo #20: Thanks so much for chiming in! I look forward to meeting you when I visit Berkeley for the Simons semester next week.

  23. Bram Cohen Says:

    Is there any known quantum adiabatic algorithm which reduces to, or at least empirically seems to be able to solve, a problem which is believed to be difficult for classical algorithms? Without that, building an actual adiabatic quantum machine seems rather like building a skyscraper and making the frame out of depleted uranium because you think there might be some revolutionary advantage to that material which you discover at some point in the future.

  24. Sol Warda Says:

    Scott: I stand corrected on Dr. Vazirani’s name being mentioned by Dr. Rose. The reason I asked “who is the leading author” was because it’s my understanding that the “corresponding” author is the one designated to answer any queries about the paper or its contents. Dr. Vazirani has a “stellar” reputation as a scientist, and whenever a paper comes out with his name on it, people take notice. I think Smith & Smolin took advantage of that. They could have easily collaborated with a couple of scientists from the ranks of IBM itself! Last time I looked, IBM had over 430,000 employees! A goodly portion of them are eminent scientists, even though most people don’t know who they are and what they do. Scott, remember that Smith & Smolin’s first attempt didn’t get very far, and NOBODY noticed. Dr. Vazirani’s name appearing on this paper IS the very reason we are discussing it here. Otherwise, in all probability, it would have gone unnoticed.

  25. Mike Says:

    “Dr. Vazirani has a “stellar” reputation as scientist and whenever a paper comes out with his name on it, people take notice. I think Smith & Smolin took advantage of that.”

    Wow, you really are a conspiracy kind of guy aren’t you?

  26. Michael Marthaler Says:


    I can only speak for myself but I found the IBM paper interesting because of its content not because of its author. Until today I did not actually know the name Vazirani.

  27. Rahul Says:

    rrtucci Says:

    Rahul, I think that what Sol is saying is that IBM is suffering from classic Freudian penis envy

    Envy or not, so long as we know D-Wave’s real size, that’s what matters.

  28. Sol Warda Says:

    Mike #21:

    I refer you to this article where Dr. Collin Williams of D-Wave gives a much better answer than I ever could:

  29. Rahul Says:

    Sol Warda Says:

    Remember D-wave scientists, USC scientists & others have put out close to 70+ peer-reviewed papers in the last ten years, and nobody ever heard from IBM. Yet suddenly, in the last year at least three attempts were made by IBM scientists to discredit a couple of papers.

    Well, as far as I know, D-Wave’s claims pre-2011 or so were rather tame and not so sensational. Once they started strutting around claiming they had a useful QC worth $10 million which was so many times faster than CPLEX etc., it’s obvious that people took more notice.

    Why don’t you also ask why TIME took notice of D-Wave only now, whereas D-Wave and USC had been publishing papers for 10 years already?

  30. Sol Warda Says:

    Mike #25, Michael #26:

    If you don’t believe me, just ask the BBC, or even better, Scott!

  31. Vadim Says:

    Sol #27,

    Isn’t Dr. Collin Williams a D-WAVE employee? By your standard, we shouldn’t be taking what he says terribly seriously on account of his obvious conflict of interest. After all, who has even more of a stake in this debate than IBM? That’s right, D-WAVE.

  32. Mike Says:


    The article you cite says little more than “[a] successful theory needs to explain all the existing experimental results, not just a narrowly selected subset of them . . .”

    However, as Umesh says above, there were “. . . a large number of experiments conducted by D-Wave to prove small scale entanglement over 2-8 qubits and [Rose] criticizes our paper for not trying to model those aspects of D-Wave. But such small scale entanglement properties are not directly relevant to prospects for a quantum speedup. Therefore we were specifically interested in claims about the large scale quantum behavior of D-Wave. There was exactly one such claim, which we duly investigated, and it did not stand up to scrutiny.” Do you disagree with Umesh on this point? Why?

  33. Sol Warda Says:

    Rahul #28: It wasn’t because they “started strutting around claiming they had a useful QC worth $10 million”! It was because they SOLD or LEASED one of their DW2 machines to “stellar” names, namely Google, NASA, USRA…etc. That’s what piqued IBM’s interest.

  34. Rahul Says:

    Bram Cohen: Comment #23

    This hunting for the “right” benchmark exercise does seem a little weird to me.

    It’s like DWave has made something, and we have tremendous faith it is going to work great as a hammer, if only we could find the right sort of nail.

  35. Sol Warda Says:

    Vadim #30: It’s true Dr. Williams is a D-Wave employee, but he is also a first-rate physicist who wrote the first book on quantum mechanics in 1999. He is eminently more qualified than I to answer any technical queries you guys may have.

  36. Mike Says:

    Look Sol, the folks who you are accusing of being in cahoots with IBM to crush D-Wave are first-rate, including Umesh, whom even you refer to as ‘stellar’. What do you say about his comment that I quoted in my prior comment regarding Williams’ criticism? Oh, and I somehow don’t think the first book on quantum physics was written in 1999. There were a couple written before that, I think ;)

  37. Sol Warda Says:

    Mike #35: As I said at the beginning of my question to Dr. Vazirani, I’m NOT a scientist or in any way qualified to answer technical questions about the paper or any other matter. I believe Dr. Williams’s book was the first textbook about “Quantum Computations”. You can easily find out the actual title of his book; it’s a trivial matter!

  38. Sol Warda Says:

    OK, guys: That’s enough for me. My first question to Dr. Vazirani was to possibly find out the real motives of IBM in this whole sordid affair, and I got some answers from Scott on behalf of his (Scott’s) former PhD supervisor. All your other questions that do not pertain to that do not interest me! I thank you all, and in particular Scott himself, for allowing this spin on his blog. Bye.

  39. Mike Says:


    Sorry to see you go, but I have to say that I’m not buying that your “. . first question to Dr. Vazirani was to possibly find out the real motives of IBM in this whole sordid affair . .”

    Instead of being interested in discovering true motives, you were intent on trumpeting your own view that the scientific criticisms of D-Wave’s ever-changing claims were part of a grand conspiracy orchestrated by IBM. And, in the process of espousing that view, you had no qualms about denigrating honest folks by implying that they were dupes (or worse, willing participants!) in the “sordid” (as you call it) affair, trying to unfairly discredit D-Wave and pave the way for continued IBM dominance. I think that’s a fair summary.

  40. Scott Says:

    Sol Warda #37: The book by Colin Williams that you’re referring to is Explorations in Quantum Computing. I know the book well, because I bought the first edition of it way back in 1998, when I was a 16-year-old just trying to learn what quantum computing was all about. The book was sort of OK, but it didn’t go very deep, it was missing Grover’s algorithm, and it said all sorts of things that even as a 16-year-old just seemed wrong to me. For example, it claimed that Simon’s algorithm proved that P is a proper subset of QP (by which it meant BQP). I thought, “that can’t possibly be right! that would prove P≠PSPACE!” (And indeed, Williams had confused oracle separations with unrelativized separations.) It also claimed that one of the main benefits of QCs was that, being reversible, they would use less energy than classical computers, and would therefore be environmentally friendlier. I thought, “c’mon, seriously?” (And indeed, any realistic QC proposal—certainly D-Wave’s—uses massively more energy than a standard classical computer, for refrigeration, classical control, etc. etc.) Later I found this review by Ronald de Wolf, which explains some of the book’s weaknesses with Ronald’s trademark precision and clarity. Anyway, I heard almost nothing about Colin Williams for the next 15 years. Then, a couple years ago, I read that he’d become a director at D-Wave.

  41. asdf Says:

    Sol Warda seems to be trying harder to discredit D-Wave than IBM is.

  42. Scott Says:

    Bram #23:

      Is there any known quantum adiabatic algorithm which reduces to, or at least empirically seems to be able to solve, a problem which is believed to be difficult for classical algorithms?

    It’s an excellent question with a complicated answer:

    1. There’s something called the conjoined trees problem, which you can use to produce a black-box problem that an “adiabatic-like” algorithm can provably solve exponentially faster than any classical algorithm. But (a) it’s not quite the adiabatic algorithm, since it requires jumping up from the ground state to the first excited state, (b) no one has any idea how to produce a “conjoined trees” fitness landscape from any actual constraint satisfaction problem, and (c) even if you could do it, the resulting CSPs would presumably be so special and bizarre that they’d have no practical relevance.

    2. There’s another 2002 paper by Farhi, Goldstone, and Gutmann, where they give relatively-natural examples of fitness landscapes for which the adiabatic algorithm provably reaches the global minimum exponentially faster than simulated annealing does, basically because of quantum tunneling. (They don’t mention it in the paper, but there are also natural fitness landscapes for which simulated annealing converges exponentially faster than the adiabatic algorithm does!) However, this is only comparing the adiabatic algorithm against simulated annealing; it says nothing about other classical algorithms like Quantum Monte Carlo. And indeed, for the particular landscapes that Farhi et al. study in that paper, if you were told in advance what the landscape looked like, then it would be trivial to find the global minimum classically.

    3. If you could implement adiabatic quantum computation with an arbitrary final Hamiltonian (not merely a diagonal one), then a 2004 result of Aharonov et al. says that you could simulate a universal quantum computer—so, in particular, Shor’s factoring algorithm. However, because of the funky non-diagonal final Hamiltonian, this requires going outside the framework of adiabatic optimization as I would define it. More relevant for practice, it incurs huge polynomial blowups, and would require way better coherence than anything D-Wave is aiming for. Indeed, if you had such good coherence, then probably you should just implement Shor’s algorithm outright, without encoding it in “adiabatic” form (with the large polynomial blowups that that entails)!

    4. For the “standard” adiabatic optimization algorithm, applied to practically-relevant optimization problems, I’d say we currently have no good evidence that it ever yields any practically-relevant speedup compared to the best classical algorithms. On the other hand, there’s also no compelling theoretical or experimental case right now that it doesn’t yield a speedup, or that there’s no practically-relevant set of instances for which it does. So, as I’ve said many times, I think the experiment is well worth trying! :-) (Of course, the first step in doing such an experiment would be to build a QC that implements the adiabatic algorithm! Alas, the failure of a device that does noisy, semiclassical quantum annealing doesn’t really tell us what the adiabatic algorithm itself would have done.)

      Without that, building an actual adiabatic quantum machine seems rather like building a skyscraper and making the frame out of depleted uranium because you think there might be some revolutionary advantage to that material which you discover at some point in the future.

    LOL, I love that analogy!
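
The simulated-annealing side of point 2 above is easy to see numerically. Here is a hypothetical toy sketch (a one-dimensional landscape with illustrative parameters of my choosing, not the actual Farhi–Goldstone–Gutmann construction): a downhill slope with a thin spike in the middle. A Metropolis walker descends easily when the spike is absent, but at fixed low temperature it essentially never crosses a tall thin barrier, since the thermal hopping cost depends on barrier height. A quantum evolution could instead tunnel through such a barrier, whose amplitude degrades mainly with the barrier's width; showing that side would require simulating the Schrödinger dynamics, which this sketch does not do.

```python
import math
import random

def sa_descend(spike_height, steps=2000, beta=2.0, seed=0):
    """Metropolis walk at fixed inverse temperature beta on the landscape
    V(x) = x, plus a thin spike of the given height at x == 10.
    The walker starts at x = 20; the global minimum is x = 0."""
    rng = random.Random(seed)

    def V(x):
        return x + (spike_height if x == 10 else 0.0)

    x = 20
    for _ in range(steps):
        y = min(40, max(0, x + rng.choice([-1, 1])))  # propose a neighbour
        dV = V(y) - V(x)
        if dV <= 0 or rng.random() < math.exp(-beta * dV):
            x = y  # Metropolis accept/reject
    return x

def success_rate(spike_height, trials=50):
    """Fraction of independent runs that end at the global minimum x = 0."""
    return sum(sa_descend(spike_height, seed=s) == 0 for s in range(trials)) / trials
```

With `spike_height=0` most runs end at x = 0; with a tall spike (say `spike_height=10`) the walker gets stuck on the wrong side, even though the barrier is only one site wide.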

  43. Douglas Knight Says:

    Bram, to answer your precise question, the class of adiabatic quantum algorithms is as powerful as any quantum computer. But you probably mean quantum annealing, in particular applied to NP-complete problems. That’s a tricky question, because it is not expected to accomplish an exponential speedup. There is some hope for a polynomial speedup, which would be very valuable. But it’s a lot harder to detect polynomial speedups, so there isn’t much evidence, and it isn’t very surprising that there isn’t much evidence.

    Your analogy would be a reasonable complaint if D-Wave actually had a quantum annealer. But it is premature: the question is whether they have any depleted uranium, or even whether they have a Geiger counter.

  44. Jay Says:

    Maybe Sol Warda works for IBM?

  45. JAM Says:

    Picking a fight with a Jiujitsu warrior? This discussion is on a martial-arts level (pun intended).

  46. Raoul Ohio Says:

    Hate to see Sol Warda go. Scott will never have half that much fun debating John Sidles.

  47. quax Says:

    Rahul #34, it wouldn’t be the first time a technology or product was introduced when nobody had any idea what it was good for.

    When the first LASER was assembled, it was widely thought of as a scientific oddity without practical applications.

    I remember talking to a veteran career chemist when fullerenes were discovered; he dismissed them as pure basic research and did not expect anything useful to come of them.

    On the other hand, Google seems to be quite happy to use the DW2 for machine learning. And yes, they could do this with conventional hardware, but my point is there may very well be a nail in this haystack. Not looking for it won’t gain anything, so this research makes perfect sense, unless there is mathematical proof that the nail does not exist.

  48. Rahul Says:

    quax says:

    it wouldn’t be the first time a technology or product was introduced and nobody had any idea what they were good for.

    …has there been a product that nobody had any idea what it was good for, and yet it sold for 10 million dollars a pop? :)

  49. Rahul Says:

    Other than D-Wave, are there groups/companies trying to build a QC via the adiabatic algorithm route?

  50. quax Says:

    Rahul #48, the first commercial laser company, Trion, was founded in 1961, early enough that there really wasn’t much of an identified use case for these machines. No idea what they would have cost in 2014 dollars, but I imagine they weren’t cheap.

    The DW2 is obviously a radical departure from how our hardware looks these days. Frankly, I am amazed that anybody finds it at all surprising that a company like Google would spend a pretty penny on investigating such a machine.

  51. Darrell Burgan Says:

    Dumb lay question here, but isn’t the proof about D-Wave in the pudding? If they have a multi-million-dollar machine that is outperformed by a multi-thousand-dollar cluster of commodity equipment, does it really matter whether it is truly quantum or not? Either way it is a commercial failure….

  52. Michael Marthaler Says:

    Scott #43: I thought that Grover’s algorithm can be used on an adiabatic quantum computer? Is that wrong?

  53. Scott Says:

    Michael #52: It turns out that, even to get the Grover speedup from the adiabatic algorithm, you need to be careful with the annealing schedule. Moving from initial to final Hamiltonian at a constant rate doesn’t work; instead (IIRC) you need to slow down at a certain point, and where to slow down might depend on detailed knowledge about the problem instance. And the end result of all this would only be to get a quadratic advantage over exact classical algorithms, like DPLL. If you’re comparing against (say) simulated annealing, then it remains unclear whether the adiabatic algorithm can even give you a Grover-type speedup in practice, let alone an exponential speedup.
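
The schedule subtlety Scott describes can be made precise; the following is the standard Roland–Cerf local adiabatic analysis, reproduced here from memory as a sketch rather than quoted from any paper:

```latex
% Spectral gap of the Grover interpolation H(s) = (1-s)H_0 + s H_1
% over N items:
%   g(s) = \sqrt{1 - 4(1 - 1/N)\, s(1-s)},
% which attains its minimum g(1/2) = 1/\sqrt{N} at s = 1/2.
%
% A uniform sweep must run slowly enough for the *worst* gap, giving
% total time T = O(N) -- no speedup.  The "local" schedule instead
% adapts the sweep rate to the instantaneous gap:
\[
  \frac{ds}{dt} = \varepsilon\, g(s)^2,
  \qquad
  T = \frac{1}{\varepsilon}\int_0^1 \frac{ds}{g(s)^2}
    = \frac{1}{\varepsilon}\,\frac{N}{\sqrt{N-1}}\,
      \arctan\sqrt{N-1}
    \;\approx\; \frac{\pi}{2\varepsilon}\,\sqrt{N},
\]
% recovering the quadratic (Grover-type) speedup -- but only because
% the schedule slows down exactly where the gap is smallest, which is
% the "detailed knowledge" Scott mentions.
```

This is why a constant-rate anneal loses the Grover speedup: it spends almost all its slowness where the gap is already large.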

  54. Scott Says:

    Darrell #51: D-Wave’s supporters would reply that it’s not a “commercial failure” because customers (in this case, Google and Lockheed) are buying it. Like many debates, this one quickly gets mired in boring terminological disputes. But as soon as you specify which thing you’re asking about, I don’t think there’s that much serious disagreement about the current answers:

    1. Does it exhibit quantum entanglement? At a small scale, probably yes; at a large scale, we’re not sure.

    2. Does it yield a speedup over classical computing? Experimentally, it’s looking more and more like the answer is “no,” consistent with what many of us guessed from the beginning on theoretical grounds.

    3. Are people buying it? So far, Lockheed and Google have. Whether there’s a significant further market for “vanity QCs” remains to be seen…

  55. Rahul Says:

    Darrell #51:

    I’m myself a D-Wave skeptic, but I’ll play devil’s advocate and speculate on one difficulty in tasting the pudding in such cases.

    How does one evaluate a technology whose makers admit it is no better than a competitor at present, but who claim it has some inherent potential to perform much better than the status quo, given the chance (i.e., money and time) to refine it?

    I don’t know the generic answer. I usually go with my intuition.

  56. rrtucci Says:

    Rahul said:
    “has there been a product that nobody had any idea what good it was for and yet it sold for 10 million dollars a pop?” :)

    30 sec Superbowl commercial: $4 million
    1 predator drone (without extras like missiles): $4.3 million
    Hollywood movie: $3-300 million
    US black budget: $52 billion


  57. Dániel Marx Says:

    Scott #54:
    > 2. Does it yield a speedup over classical computing?
    > Experimentally, it’s looking more and more like the answer
    > is “no,” consistent with what many of us guessed from the
    > beginning on theoretical grounds.

    I think the discussion of “speedup” is confusing, because there are at least four separate questions here:

    1. Can the current D-wave compute any *artificial* problem faster than a laptop/supercomputer?

    2. Can the current D-wave compute any *practical* problem faster than a laptop/supercomputer?

    3. Is it expected that when a very large DWave is built, it will be faster than a laptop/supercomputer on an *artificial* problem?

    4. Is it expected that when a very large D-wave is built, it will be faster than a laptop/supercomputer on a *practical* problem?

    Ok, that’s actually 8 questions. It is difficult to follow arguments if people are talking about different questions, sometimes on purpose. My understanding is the following:

    1. It seems that the current D-wave is not significantly faster on its native problem than a laptop with a suitable program, let alone a supercomputer.

    2. No, since the answer to (1) is no, and in fact solving a practical problem requires a mapping that slows things down even more. So currently D-wave is just a research experiment, and surely no one is using it for any practical purpose (despite the media claims).

    3. This is what all the experiments with scaling, bimodal distributions, etc. are about. There is a scientific debate about the interpretation of the experiments and what they predict. But at least we can say that there is no convincing evidence for “yes.” Although I think eventually a D-wave computer will be the best available solution for at least one task: namely, simulating a D-wave computer :)

    4. I haven’t heard much discussion about this. For one thing, I would be very interested in how they expect to map problems to the native problem in an approximation-preserving way (or do they expect to be able to solve the native problem optimally?)

    Please let me know if I misunderstand something.

  58. Scott Says:

    Dániel #57: Yes, thanks very much for pointing out those important distinctions! The only reason I didn’t make them in comment #54 is that, right now, it’s looking more and more like the answer is basically “no” to all 8 of your questions!

    I.e., we’re not yet seeing any good evidence for an “artificial” speedup, let alone for a “practical” one; nor for a speedup over a decent laptop, let alone over a supercomputer. And we’re also not seeing better scaling behavior, of the sort that would lead one to predict that such speedups should appear at larger numbers of qubits.

  59. Rahul Says:

    Naive question possibly: if we had to analyze D-Wave strictly on the basis of its hardware design, what component(s) determine whether it will show large-scale entanglement or merely classical annealing behavior?

    Is it the qubit element design, the couplers, the spacing, etc.? I mean, from a design-analysis standpoint, is it possible for a seasoned experimentalist to estimate whether they’d expect to see the sort of behavior D-Wave is claiming?

    Or is this impossible to tell, the only way out being to analyze its performance empirically? In other words, in principle at least, could one reasonably expect the D-Wave approach to work? Or is it just a long shot, or worse?

    Alternatively, let’s assume the current processor is non-quantum beyond doubt: is the idea, at least, salvageable by tweaking some spacing, connections, temperatures, or other parameters? Or is that unlikely?

  60. Sam Hopkins Says:

    Scott: if you don’t consider yourself an experimentalist, why do you get so involved in these debates anyways?* When D-WAVE (or the NSA, or anyone else for that matter) can break RSA, it will be apparent. We clearly are not there yet.

    *I consider the debates you’ve had with Gil Kalai, e.g., to be of a very different nature in that they were really about theory.

  61. Scott Says:

    Sam #60: Firstly, D-Wave says explicitly that they’re not interested in breaking RSA, but only in doing adiabatic optimization, and not necessarily in getting an exponential speedup but just in getting any speedup. That’s basically why they’ve been able to exist in this twilight zone for years, where there’s no clear evidence for any quantum speedup over the best classical algorithms like simulated annealing, but still they can generate just enough confusion and uncertainty about the issue to keep their supporters optimistic.

    Now, as for why I “get so involved in these debates,” let me give you 7 reasons:

    1. Because people keep emailing me or coming up to me and asking for comments about D-Wave—like, several times per week.

    2. Because like it or not, this is the thing that’s driving the public perception of quantum computing right now. The attitude of many of my colleagues—”if we ignore it, it will go away”—has been tested and found to be inaccurate.

    3. Because, if and when D-Wave fails, the same entities (TIME Magazine, etc.) that are now touting them will turn around and declare that quantum computing as a whole has failed. As I’ve said before, while a success would bring credit to D-Wave alone, a failure would inevitably be blamed on the entire QC research community. So it’s pretty damn important to be able to say that a lot of us called this from the beginning.

    4. Because having this blog arguably gives me a special kind of responsibility. If a few thousand people interested in QC read this blog, and if I don’t use it to respond when D-Wave gets cringeworthy claims onto the cover of TIME, etc. etc., then some people would accuse me of being personally complicit in the hype-ification of our field.

    5. Because of a snowball effect: once I’ve gotten even a little involved, people attack me, I have to respond to the attacks, etc., and I get drawn further and further in like Michael Corleone in The Godfather.

    6. Because many of the questions at the core of this debate actually don’t require much experimental background at all. They’re questions like: what do we mean in calling something a “quantum computer”? What sorts of tests are relevant to distinguishing quantum from classical computing? What are quantum computers good for, anyway? What are the strengths and weaknesses of the adiabatic approach to QC, compared to other approaches? How much speedup should we expect the adiabatic approach to give for NP-complete problems, either in principle or in practice? If I can’t comment on questions like those (even just to explain where the answers aren’t yet known), then I don’t deserve my job.

    7. Finally, the most important reason. D-Wave has, in essence, bet $150 million and hundreds of person-years that complexity theory doesn’t matter. In other words, that you can just throw together a bunch of qubits with nanosecond coherence times and hope for an asymptotic speedup, without any good theory explaining why you should get a speedup that way, and despite what theory we do have suggesting that you won’t get a speedup. (Indeed, Geordie Rose has repeatedly denigrated theory; he once claimed in a talk at MIT that the company only started getting anywhere once they “fired all the theorists.”) So, the facts that (a) D-Wave has placed such a massive bet against complexity theory, (b) they got so many others to go along with the bet, and (c) they now appear to be losing the bet, are all naturally of interest to me as a complexity theorist.

    Having said all that, I am sufficiently diffident about my involvement in these debates that I’ve now resigned twice as Chief D-Wave Skeptic! :-)

  62. Alexander Vlasov Says:

    Scott #61, could you elaborate on 7? How could the areas of computational complexity theory that are natural for “traditional” quantum algorithms be applied to the description of adiabatic optimization?

  63. Roielle Hugh Says:


    I am curious about one single point (are they really doing QC).

    Assuming they do, then why the fuss about the speedup? It is QC; it is just not perfect, and there is no advantage yet over classical.

    Of course, if they are not doing QC, there naturally would not, intuitively, be any advantage. (We can reserve the gray area till another day.)

    Now assume they are not doing QC, then it is a bit puzzling to me.

    It is not just claims. They published (peer-reviewed) papers; they have patents; and you also have been to their facility, I suppose. Why can we not know if they are DEFINITELY not doing QC? Why can we not find holes in whatever theories their QC is based on, instead of trying to show whose empirical data is more credible?

    It would be a hoax, wouldn’t it, if they claim to be building a QC but there is no QC? How could that be? I am really puzzled.

  64. Scott Says:

    Roielle #63: We’ve covered a lot of this ground before on this blog, but briefly, I think the fundamental flaw in your comment is the binary, either/or view—that something either “is” a QC (in which case, who even cares about speedup?), or else “is not” a QC (in which case, it must be a hoax). By D-Wave’s own admission, they’re not even trying to build a universal QC, which would be capable of running the prototypical quantum algorithms like Shor’s. Instead, they’re trying to build a special-purpose QC for quantum annealing (a “noisy,” finite-temperature version of the adiabatic algorithm), which could hopefully get some advantage sometime over what you could do with classical simulated annealing.

    Now, everyone agrees that there are some quantum effects in D-Wave’s current device—but that by itself is a very low bar; after all, quantum mechanics is needed even to explain the transistors in your classical computer. The questions are:

    1. What kinds of quantum effects are present in D-Wave’s devices?
    2. Are those effects of a kind that could plausibly lead to any speedup over classical computing, either now or in the future?

    So far, it’s looking like D-Wave’s devices exhibit entanglement at a “local” level (say, 2-8 qubits), and possibly also some entanglement at a “global” level (we’re not sure yet). But it’s also looking like, even if a small amount of global entanglement is present, the decoherence is high enough that the whole thing can be efficiently simulated classically, using Quantum Monte Carlo. If so, then simply scaling up the current design to more qubits should not lead to a speedup over classical. And this matters, because it’s directly counter to what (e.g.) Geordie Rose has been loudly declaring, and what many of D-Wave’s supporters accept as fact.

  65. Scott Says:

    Alexander #62: There’s no denying that our theoretical understanding of the adiabatic algorithm is primitive, compared to our understanding of other quantum algorithms. But there are a few things we do know. Most importantly, we know that NP-complete problems are a fundamentally different animal from problems like factoring, for which polynomial-time quantum algorithms are known. BBBV shows that there’s no efficient “black-box” quantum solution to NP-complete problems, and even in the non-black-box setting, it seems extremely likely that NP⊄BQP. We also know that it’s not hard to construct instances of constraint satisfaction problems for which the adiabatic algorithm (at least with linear interpolation) needs exponential time—that was a result of Farhi et al. and of van Dam, Mosca, and Vazirani from 2002 or so. And we know that we don’t yet know any compelling reason why the adiabatic algorithm should outperform (let’s say) simulated annealing in practice—and that that makes the adiabatic algorithm an extremely different sort of thing from (say) Shor’s factoring algorithm. Finally, we know that for quantum annealing—the “noisy” version of the adiabatic algorithm that D-Wave implements—so long as the temperature is high enough, there “ought” to be a polynomial-time classical simulation using Quantum Monte Carlo (as seems to be borne out by Boixo et al.’s experiments).
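    To give a flavor of what “classical simulation using Quantum Monte Carlo” means, here is a toy path-integral (Suzuki-Trotter) Monte Carlo for a tiny transverse-field Ising ring. It is only an illustration of the technique; the model, sizes, and parameters below are made up and have nothing to do with D-Wave’s actual Hamiltonian:

```python
import math
import random

# Toy path-integral Monte Carlo for a transverse-field Ising ring,
#   H = -J * sum_i Z_i Z_{i+1}  -  G * sum_i X_i,
# mapped by Suzuki-Trotter onto an N x M classical Ising system.
random.seed(1)
N, M = 6, 16                  # spins and imaginary-time slices (toy sizes)
J, G, beta = 1.0, 0.2, 4.0    # couplings and inverse temperature (made up)
Kx = beta * J / M                                    # coupling within a slice
Kt = 0.5 * math.log(1.0 / math.tanh(beta * G / M))   # coupling between slices

s = [[1] * M for _ in range(N)]   # start fully aligned

def delta_E(i, k):
    """Effective-energy change from flipping spin (i, k); the effective
    temperature of the classical system is absorbed into Kx and Kt."""
    space = s[(i - 1) % N][k] + s[(i + 1) % N][k]
    time = s[i][(k - 1) % M] + s[i][(k + 1) % M]
    return 2.0 * s[i][k] * (Kx * space + Kt * time)

mags = []
for sweep in range(2000):
    for i in range(N):
        for k in range(M):
            dE = delta_E(i, k)
            if dE <= 0 or random.random() < math.exp(-dE):
                s[i][k] = -s[i][k]                   # Metropolis acceptance
    if sweep >= 1000:                                # sample after burn-in
        m = sum(map(sum, s)) / (N * M)
        mags.append(abs(m))

avg_abs_m = sum(mags) / len(mags)
print(avg_abs_m)   # large: at these parameters the simulated ring is ordered
```

    Nothing quantum runs here, yet a sampler of this kind reproduces the equilibrium behavior of the quantum model; that is exactly why efficient QMC-simulability undercuts the case for a quantum speedup.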

    D-Wave’s gamble, of course, is that none of the above matters, and that a good approach is just to plunge ahead and see what happens (“where there’s a will there’s a way!”). Which is fine, as long as we’re talking from the beginning about a physics experiment! And as long as you’re prepared for the possibility that the result of your physics experiment will be “no speedup, consistent with theoretical expectations”—as seems to be happening now.

    This is an extremely different approach from most of the QC research community’s, which is to start with the theory telling you where you actually have good reasons to expect a quantum speedup, and then use that theory to guide which architectures you try to build, which decoherence rates and so on you aim for, and which algorithms you try to implement.

  66. Michael Says:

    How can they not know whether or not they’ve constructed a quantum computer? If you construct a machine that performs certain tasks, this doesn’t happen accidentally. You don’t just put some random things together and get a computer. If they knew enough to make a machine that works, surely they would know how it produces its output?

    Another question… you seem to admit that the D-Wave computer does use quantum computing to some degree. Why all the hostility then? Isn’t their machine a major step in quantum computing, even supposing it can’t outperform classical computers (right now), even if it turns out there’s not much going on (right now) on large scales? It’s a new technology, it’s not immediately going to obliterate everything else. I agree there’s hype and hoopla, but it does seem to me they are doing something interesting that wasn’t done before.

  67. Raoul Ohio Says:


    Have you seen any Science Fiction Movies where a mad scientist wires up a contraption with blinking lights and claims that it is a time machine? Do you think it works?

  68. Roielle Hugh Says:

    Scott, (#64)

    I kind of anticipated my errors being pointed out that fast (and thanks); that is why I put in “reserve the gray area till another day”. :)

    Looked up a bit and saw the 8 questions by Daniel. Relevant.

    So there likely are no 8 questions. There may be just one: scalability.

    Now if they did not claim universality, still the same question, why the fuss?

    They cannot have it both ways. Say the speedup comes from the classical side and they admit it; no fuss, right?

    They must have claimed that it (read that as: some of it) comes from the Q side; then the debate may be interesting. But still, the speedup is uninteresting, because the real question is whether it is QC.

    Repetitive: if it (say, any improvement) is from QC (regardless of whether there is a speedup), look at the QC side and verify; if it is from the classical side, …

    My point is/was: why the fuss over empirical data about speedup, which just isn’t conclusive, imho?

    Put it another way: the classical side is never under consideration. Therefore, it is just black and white. I may have missed a previous discussion that already made such a black/white clarification and excluded the classical side.

    Sorry to perhaps be dragging us deeper than we should. Did they say it is a QC for special purposes (“only solve whatever instance I select”), or a general-purpose QC (“only turned on for instances of my interest”), or a box that does partly Q and partly C, or one that does both as it wishes and no one knows how, etc.? Just wondering. Unfair for you to answer perhaps.

    By the way, still the same line: if they knowingly attribute something from the classical side to the quantum side, would people consider that a hoax?

    If they really had a breakthrough (count even baby-step increments in scaling up as such), I think they would be eager to show the world. If not, then the entire thing, imho, may be well beyond the scientific. We should, I humbly assume, reserve it for another day.

    Thanks for the detailed explanation and I apologize for possibly missing some of the discussion.
    Roielle Hugh

  69. Michael Says:

    Raoul, they’re arguing over whether or not it does better than a classical algorithm, and the extent to which quantum computing is going on… even the biggest detractors don’t say it’s some crackpot machine.

  70. Roielle Hugh Says:

    Scott, (followup on mine #68)

    It is not “who cares about speedup”.

    It is “who cares about speedup if it is not QC” (in this debate).

    We do care about QC even if we do not get the full speedup (if that is a possibility). But one can hold that if it is QC, the speedup should be a given.

    Roielle Hugh

  71. quax Says:

    Michael #66/#60: Thank you for stating the obvious. Seems to me point (7) on Scott’s list really must have stung. Certainly makes it clearer to me why we see this level of engagement.

    Investors probably like to hear salt-of-the-earth dismissals of girly-man theorists, but I rather subscribe to Kant’s POV: Nothing is as practical as a good theory.

  72. quax Says:

    Speaking of theory, Scott would you agree that current theoretical understanding makes this a valid statement:

    “Quantum annealing is less prone to decoherence than the gate model is since the former follows the inherently-stable ground state. “

  73. Darrell Burgan Says:

    Scott #54/Rahul #55 – I definitely get the point that there is immense scientific interest in showing a quantum computer that offers any measurable speedup, from a basic research standpoint. And I totally get that based on this research, it might yield computers of astonishing price/performance ratio.

    But DWave seems to be in the business of selling quantum computers, not doing basic research. Until they can sell something that outperforms anything classical, or at least something that has better price/performance ratio than classical, I don’t understand how they can sell anything at all.

    Until then, I’d say their products fit the definition of vaporware. Guess all those computer salespeople I interact with have made me cynical. :-)

  74. Darrell Burgan Says:

    Darrell #73 – I’m replying to myself because I just realized that there is one scenario into which they’d be willing to pour millions of dollars even if they never sell anything practical: if they acquire enough basic patents, and someone else finally makes an actual commercially viable quantum computer product, they could piggy-back on that success through patent licensing fees.

  75. Darrell Burgan Says:

    Michael #69 – One doesn’t have to say it is a crackpot machine; in fact I personally am not qualified to even evaluate their technology. My point is that it doesn’t matter. When one can get 1000 times the power by purchasing 1000 of those laptops instead of one DW2 … well it doesn’t take a quantum physicist to evaluate that equation. :-)

  76. Michael Says:

    Darrell: The point is that it’s a drastically new technology. Even if now it compares unfavorably to your laptop, it could do great things in the future. I remember just a few years ago when it was considered a major achievement for a quantum computer to factor 21. Now they’re debating whether or not it outperforms your laptop. This seems like rapid progress, and something worth pursuing. I just don’t get the level of hate. Of course it might not revolutionize computing… but you can never tell in advance, and given the progress they’ve made shouldn’t one be curious how far they can take their technology, rather than launch on tirades about anything and everything that might be wrong with the D-Wave machine?

  77. Darrell Burgan Says:

    Michael #76 – Don’t get me wrong. I have zero skin in the game and certainly no hate whatsoever for DWave. I’m actually quite fascinated by quantum computing and hope someone turns it into a reality that actually can be used by people.

    But being a lay person QC-wise, all I have to go on is price and performance. If the laptop has 1000 times better price performance, then DWave has a long way to go. If you’re right and they’ve made as much progress as they claim they have, that’s great. But they won’t impress me until they offer something that actually alters the real world of computing in which I make my living.

  78. Alexander Vlasov Says:

    Scott #65, my own outdated opinion about some NP and BBBV issues may be found in quant-ph/0512090, but it may not be relevant, because I am not sure that an attack on NP should be considered the ultimate purpose of the D-Wave team.

    They often wrote about the hope of simulating something like natural intelligence. With extremely rare exceptions, a human being may not even simulate the simplest Turing machine without a piece of paper, and so computational complexity classes may not apply directly to most of us. Yet the current theoretical approach to optimization used by D-Wave is in the mainstream of TCS, and talk of the NP mountain could hardly be avoided along this way.

  79. Joe Fitzsimons Says:

    Alexander, human brains are ultimately finite, so discussing them in terms of computational complexity theory makes little sense. Clearly I can simulate a brain with sufficient classical or quantum computing power, provided I have a sufficiently good understanding of the rules governing the dynamics and a decent initial state. Sure, it may be too much for current computers, but there is some finite amount of RAM and some finite speed for the processors which would allow real-time simulation.

  80. Alexander Vlasov Says:

    Joe #79, I simply mean that the complexity classes convenient for systems like the human brain may have a different structure, and in such a case an advantage for quantum devices might be described without claims about resolving NP.

  81. Scott Says:

    Michael #66: Your question “how can they not know whether or not they’ve constructed a quantum computer?” would be valid if we were talking about a universal QC, capable of (e.g.) Shor’s factoring algorithm. Alas, we’re not. We’re talking about a special-purpose device for quantum annealing. And unfortunately, quantum annealing has the property that it can “work,” even though it’s totally unclear what role (if any) large-scale quantum effects are playing in its working. And compounded with that uncertainty, we don’t know whether quantum annealing is an approach that ever gives you speedups over classical algorithms like simulated annealing, even assuming that there are large-scale quantum effects present. So, those are the issues that make this so difficult, and that have given D-Wave so much “gray area” to hide in.

    Your question “why the hostility?” could just as well be asked of the other side: have you read Geordie’s comments trashing theorists, and the “gate-model” approach to QC? :-) At least on my part, the “hostility” (well, I prefer to call it skepticism) is because I think that D-Wave has been steering our whole field off the path of intellectual honesty—the only path that can ever get QC anywhere worth going.

    To explain what I mean by “going off the path of intellectual honesty,” let me quote your comment #76:

      I remember just a few years ago when it was considered a major achievement for a quantum computer to factor 21. Now they’re debating whether or not it outperforms your laptop. This seems like rapid progress…

    Let me explain to you why that appearance is misleading. However unimpressive it looked to outsiders, the factoring of 21 was using some version of an algorithm (Shor’s algorithm) that we have excellent reason to believe will actually outperform classical computers, if and when it’s scaled up. Conversely, however impressive they look to outsiders (“competitive with your laptop!”), D-Wave’s devices implement an approach (quantum annealing with stoquastic Hamiltonians and nanosecond coherence times) that we have no reason to think can ever outperform classical computers, no matter how much it’s scaled up.

    So it’s as if, at Los Alamos in 1943, some defense contractor went up to General Groves and said: “look General, I can make a way bigger explosion than those eggheaded scientists can make, just by exploding a really big conventional bomb! And unlike them, I can deliver today! And I’ll tell you what: I’ll even stick some non-enriched uranium inside the bomb, so that we can call the bomb ‘nuclear’! “

  82. Michael Says:


    So you are saying it is possible to put the parts together for a computer with the goal of performing quantum annealing, and have the output agree with quantum annealing, when in reality the computer is performing classically? How can the exact same set of components possibly do two such drastically different things?

    You say that D-Wave is “steering our whole field off the path of intellectual honesty”. Wouldn’t this be in contradiction to the above? It sounds like it’s just unknown whether or not their computer is doing what it says. Surely they believe it does, and even if it turns out that they’re wrong, being wrong is not intellectually dishonest.

    You say “we have no reason to think can ever outperform classical computers, no matter how much it’s scaled up.”
    Having no reason to think something is true doesn’t mean there’s not a way to make it happen that you didn’t think of.
    Obviously D-Wave disagrees, and furthermore if they are wrong it will be determined eventually. Shouldn’t the evidence, over time, be the determining factor in whether they are believed? In other words, isn’t it too early to draw conclusions as you have been doing for years?

  83. Scott Says:

    quax #72:

      Speaking of theory, Scott would you agree that current theoretical understanding makes this a valid statement:

      “Quantum annealing is less prone to decoherence than the gate model is since the former follows the inherently-stable ground state.”

    No, I would not agree to that statement without massive qualifications. It all depends on what you’re trying to do: quantum annealing might keep you in the state that you’re “supposed” to be in, but if that state is highly mixed and not useful for getting a speedup, then how much does it matter?

    More relevantly, I don’t agree that quantum annealing is clearly a more promising path than the gate model for getting a quantum speedup. I think that remains to be determined. And I think that either way, you almost certainly can’t escape the need for quantum fault-tolerance if you want a scalable speedup.

  84. Rahul Says:


    Loved that uranium-enriched conventional bomb analogy. That’s so apt.

    I’d say they can’t even make a bigger-than-conventional-bomb explosion. So what you do is make a tiny explosion (charge $10 million for it, in 1943 money), then show people that the blast is radioactive; hence we obviously have a nuclear bomb, before the eggheaded scientists got there.

    So of course, give us more money and we’ll scale it up and it is “obvious” that we will eventually make the most spectacular explosion ever.

  85. Rahul Says:

    Michael says:

    So you are saying it is possible to put the parts together for a computer with the goal of performing quantum annealing, and have the output agree with quantum annealing, when in reality the computer is performing classically? How can the exact same set of components possibly do two such drastically different things?

    I think this may answer your question (stealing Joe Fitzsimons’ words)

    “There is an interesting subtlety as regards noise: If you add noise to the adiabatic algorithm, it degrades gracefully into one of the best classical algorithms for the same problem [classical annealing]. Thus you can obtain the same result either way, and the only difference is in the asymptotics for large systems (which are obviously not observables). Thus even if they produce a valid answer for every problem you throw at such a device, this is not enough information to determine if you are actually performing quantum computation.”

  86. Joe Fitzsimons Says:

    In fairness, those comments don’t apply when you start looking at relative success probabilities.

  87. Rahul Says:

    @Joe Fitzsimons:


    As an aside, are relative success probabilities a good fingerprint for an algorithm?

  88. Joe Fitzsimons Says:

    I suspect it is being used because it is accessible on the DWave devices, rather than any stronger reason.

  89. Rahul Says:

    If you had a black box that either ran Classical Annealing or Quantum Annealing but you didn’t know exactly which, what are the sorts of possible tests that you could run to distinguish?

  90. Scott Says:

    Michael #82: Based on current evidence, my guess is that the D-Wave Two is performing quantum annealing—but quantum annealing of a sort that can be simulated on a classical computer using the Quantum Monte Carlo algorithm, and that’s therefore incapable in principle of giving a speedup over classical computing. And if it’s not doing that, then the likeliest alternative possibility is that it’s performing something closer to classical annealing, which of course is also incapable of giving a speedup.

    And yes, I did say “closer to.” Whether you like it or not, this isn’t either/or: a given annealing process can be “more” or “less” quantum. And yes, it’s entirely possible that you’d set out to do quantum annealing, but because the temperature or the decoherence rate were too high, you’d instead end up doing classical annealing, or a form of quantum annealing that’s efficiently classically simulable. And those would still seem to work pretty well at solving optimization problems—i.e., quantum annealing “degrades gracefully” into them—but there’s no way you’d ever get a speedup over classical computing with them. As I keep trying to explain, this is the situation that we’re in, and it’s what makes it so complicated to assess the “quantumness” or lack thereof of D-Wave’s devices.
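    Since classical simulated annealing is the baseline in this whole debate, it’s worth noting how little machinery it needs. Here is a minimal sketch (my own toy version, with an arbitrary cooling schedule) on a ferromagnetic Ising ring, whose optimum is known to be −N:

```python
import math
import random

# Bare-bones classical simulated annealing on a ferromagnetic Ising ring:
# minimize E = -sum_i s_i * s_{i+1}, whose optimum is -N (all spins aligned).
random.seed(0)
N = 20
spins = [random.choice([-1, 1]) for _ in range(N)]

def energy():
    return -sum(spins[i] * spins[(i + 1) % N] for i in range(N))

T, T_min, cooling = 2.0, 0.01, 0.997   # arbitrary geometric schedule
while T > T_min:
    for _ in range(N):                 # one sweep of attempted flips
        i = random.randrange(N)
        # flipping spin i only affects its two neighbors on the ring
        dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i] = -spins[i]       # Metropolis acceptance
    T *= cooling

E_final = energy()
print(E_final)   # at or near the optimum -20
```

    “Degrading gracefully” means a noisy quantum annealer ends up doing something of roughly this character, which is why matching its outputs alone settles nothing.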

    Regarding intellectual honesty, I can hardly do better than to quote Geordie Rose, from last year:

      When D-Wave was founded in 1999, our objective was to build the world’s first useful quantum computer. The way I thought about it was that we’d have succeeded if: (a) someone bought one for more than $10M; (b) it was clearly using quantum mechanics to do its thing; and (c) it was better at something than any other option available. Now all of these have been accomplished, and the original objectives that we’d set for ourselves have all been met.

    “Better at something than any other option available”: this wasn’t true at the time he said it, it’s not true today, we have no evidence at present that it’s going to be true, and Geordie knows all that perfectly well. And this is completely characteristic of the statements he makes, the media repeats, and the public believes. Can you not see how these sorts of honesty-challenged statements will ultimately damage the reputation of QC as a whole? Why does D-Wave get a free pass to claim such things? I never get such a free pass…

    Finally, you think we skeptics are claiming D-Wave could never get a quantum speedup anytime in the future, whereas D-Wave is merely claiming that they might get one, and need more time. But the truth is the opposite: I don’t know anyone who’s claimed it’s impossible that D-Wave will ever get a quantum speedup. The future is a long time, and a lot could happen between now and the heat death of the universe! For example, D-Wave could improve its technology, or they could co-opt ideas like error-correction that they’ve been pooh-poohing for years. What provokes us is that Geordie is claiming to have quantum speedups right now, and much of the rest of the world now accepts that as fact. In short, we’re not the ones drawing premature conclusions here—they are! But as always, the rules are different for them; the net gets raised only when it’s our turn to serve.

  91. Raoul Ohio Says:

    Michael says “even the biggest detractors don’t say it’s some crackpot machine.”

    When in doubt, I tend to bet on the skeptical position (usually a good bet), and present a possible scenario. Can someone more knowledgeable than me help with the details?

    1. Is it correct that the D-Wave system has a quantum component that produces some signal, and then classical computers process this signal to provide a solution in standard form?

    2. Is it correct that all annealing algorithms have some sort of random input, or computed pseudorandom numbers, as an essential part of the calculation?

    3. Are the classical computing resources used in (1) >= the resources needed for classical annealing?

    4. If the answer to (3) is yes, is it possible that the quantum component is acting only as a random noise generator?

    5. If the answer to (4) is yes, could something cool like a lava lamp (about $20 at your neighborhood hippie outfitter) replace the D-Wave quantum contraption? Of course, the price of lava lamps is probably going up, due to increased demand in Colorado and Washington.
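    To make (2) concrete, here is a minimal classical simulated annealing sketch (toy code; the function name, problem instance, and cooling schedule are all invented for illustration). The pseudorandom generator drives everything — the initial state, the proposed moves, and the Metropolis accept/reject step — so randomness is indeed an essential part of the calculation:

```python
import math
import random

def simulated_annealing(couplings, n, steps=20000, t_start=2.0, t_end=0.05, seed=0):
    """Toy classical simulated annealing for an Ising problem.

    couplings maps pairs (i, j) to J_ij; the energy is -sum J_ij s_i s_j.
    The pseudorandom generator drives the initial state, the proposed
    spin flips, and the Metropolis accept/reject decision.
    """
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n)]

    def energy(s):
        return -sum(j * s[a] * s[b] for (a, b), j in couplings.items())

    e = energy(spins)
    for step in range(steps):
        # geometric cooling schedule from t_start down to t_end
        t = t_start * (t_end / t_start) ** (step / steps)
        i = rng.randrange(n)            # random proposal: flip one spin
        spins[i] *= -1
        e_new = energy(spins)
        # Metropolis rule: always accept downhill, uphill with Boltzmann weight
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new
        else:
            spins[i] *= -1              # reject: undo the flip
    return spins, e

# toy instance: a 4-spin ferromagnetic ring whose ground states have energy -4
J = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0}
spins, e = simulated_annealing(J, 4)
```

    With this schedule the anneal should freeze into one of the two aligned ground states (energy -4); strip out the pseudorandomness and nothing anneals at all.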

  92. Michael Says:


    With regards to “Better at something than any other option available”. It seems like it takes the combined efforts of researchers from places like Berkeley, IBM, and so on to produce classical algorithms that compare. Wouldn’t it make more sense that Geordie Rose really does have faith in what the D-Wave machine is doing? You always see these technology corporate leaders, from Steve Jobs or Elon Musk on down, make bold, and sometimes overconfident, statements about what their products do and have the potential to do, with varying levels of accuracy. I’ve even seen skin care products being touted as revolutionary. I just don’t see people like Geordie Rose as being dishonest. It may turn out that the D-Wave is a lot of hype, or maybe it won’t, but really I don’t think they are being intellectually dishonest. It takes a certain level of self-confidence to try something like this, so yes, you’d expect them on occasion to make bold statements that turn out to be debatable.

  93. Anonymous Says:

    Michael, if you’re claiming that Geordie is not dishonest, then I will claim that Scott is not being hostile. Calling Geordie dishonest is not hostile, since Scott believes that statement :P

  94. Michael Says:

    @Anonymous #93

    He makes numerous hostile statements on the blog, not just that one, ok? ;)

  95. Slipper.Mystery Says:

    It is good that Dr. Scott (a.k.a. Mr. Corleone) has effectively unretired as D-Wave skeptic a second time: the clarifications in this thread regarding the limitations of quantum annealing (not provably providing a speed-up over classical even if fully quantum, and difficult to distinguish from classical, …) have been informatively restated. Better to remain unretired.

    There is a short article from a week ago: arXiv:1402.1201 “Using Simulated Annealing to Factor Numbers”, by Altschuler and Williams, which seems to suggest that quantum annealing could be used to factor numbers. Is the above article simply wrong, or would it be possible to use an adiabatic quantum algorithm to get a quantum speedup for factoring, but only a polynomial speedup?

  96. Michael Marthaler Says:


    I think it is about classical annealing. But it could probably be mapped onto quantum annealing. However, it says nothing whatsoever about speed-up. So we can assume that it has the standard scaling behaviour.

  97. quax Says:

    Scott, my apologies that this comment is a bit harsh, but it’s not like you expect a free pass ;-)

    What is simply astounding to me, is how you can berate D-Wave’s intellectual honesty, and in the very same comment put forward your nuclear bomb analogy, without seeing the irony.

    Your analogy clearly implies that D-Wave tries to trick the world into believing that conventional technology can pass as quantum computing.

    I am sorry, but *that is* intellectually dishonest.

    It is quite clear that D-Wave tries to implement quantum annealing, and frankly, as you well know, this means the DW2 has nothing in common with established hardware. There are no transistors, not even semiconductors, on the chip.

    The only relevant questions are:

    1) Do they actually achieve good quantum annealing?
    2) Can quantum annealing offer a quantum speed-up?

    From a practical standpoint I would also like to add another question:

    3) Regardless of (2) can this technology be scaled up faster than conventional chip technology?

  98. Slipper.Mystery Says:

    @quax#97 “my apologies that this comment is a bit harsh, but it’s not like you expect a free pass”

    The answers are already detailed in the above thread. To paraphrase, it is not dishonest (intellectually or otherwise) to point out that:

    a) there are no theoretical reasons to believe that adiabatic quantum algorithms will provide a speedup over classical algorithms

    b) there are theoretical reasons to believe that quantum gate algorithms can provide such speedups

    c) D-wave currently benefits from the public/journalist/investor inability to distinguish between [a,b]. D-wave does not have any footprint in [b].

    d) If none of the promises of [a] are realized in the next few years, then that will unfairly cast aspersions on [b], due to [c]

    Scott has repeatedly expressed his concern about [d], and while conceding that there may be nothing within his power to avert it, it would be irresponsible not to at least try. This has nothing to do with any bias one way or another regarding D-wave’s commercial or intellectual success.

    (I would overwhelmingly prefer that D-wave succeed, but sometimes technological success depends on identifying and surmounting obstacles, rather than ignoring them.)

  99. Anonymous Says:

    Michael #94, maybe you could give us some examples of when Scott was being unreasonably hostile?

    quax #97, I don’t think a colourful analogy counts as intellectual dishonesty.

  100. Rahul Says:

    quax: I see no dishonesty in that analogy at all.

  101. quax Says:

    Rahul #100, Anon #99 To spell out why I find the analogy dishonest:

    Chemical explosives == established technology

    Established technology in computer hardware means semiconductor based transistors, this is obviously not what D-Wave is doing.

    The nuclear bomb analogy hinges on an intentional fraud scenario, and I am getting tired of people insinuating that D-Wave is fraudulent.

    Scott knows better, and probably didn’t mean to imply this, but it easily comes across as such.

    Slipper.Mystery #98: Of course these questions have been discussed here; I just stressed them because IMHO this is the crux of the matter, and it would be nice to at least have a consensus around that.

  102. anon Says:

    David Poulin’s comment from Rose’s blog was apparently pithy enough to pretty much stop the discussions over there. I repost here for the benefit of those who don’t read Rose’s blog:

    This blog entry makes a fallacious use of Occam’s razor. Science is about good explanations, but the explanation can change depending on the scale at which the experiment is performed. If I zoom in enough on a bicycle, I will start seeing electrons bound to nuclei and will correctly conclude that I need quantum mechanics to explain what I see. Moreover, quantum mechanics can correctly predict the behaviour of the bicycle at a human scale. Should I conclude that the bicycle is quantum mechanical on a human scale because it is the only theory that ‘perfectly describes every single experiment ever done’? The logic used in this blog entry suggests you should.

    Experiments on a few qubits have shown quantum mechanical effects, but the point of Shin et al. is that on large scales (where there is a potential for computational applications) there is an alternative classical explanation.

  103. Scott Says:

    quax #97, #101: In my nuclear bomb analogy, I didn’t mean to suggest that the military contractor was being intentionally fraudulent. Indeed, to make that clearer, let me now expand the analogy, just for you.

    It turns out that the conventional bomb with uranium packed into it does spray radioactive waste all over the place—i.e., it’s what would later be known as a “dirty bomb.” Experiments are done, and they do indeed detect moderate levels of radioactivity after a test explosion. As a result, the contractor really, genuinely believes that he’s built the first practical nuclear weapon. After all, it really is a bomb, it really does explode, and it really does use nuclear effects, not just chemical ones!

    Oppenheimer and Bethe try to explain to General Groves that, no, that’s not what they meant by “nuclear bomb,” while on the other side of the room, the military contractor accuses the academic scientists of meaningless hairsplitting. And besides, where’s the academics’ nuclear weapon that’s so much better? All they’ve built so far are some hollow imploding spheres of conventional explosives, allegedly to be filled at some later time with “plutonium,” an element that doesn’t even exist yet. Thus, wouldn’t a faster route to nuclear weapons be if we all worked together on improving the contractor’s dirty bomb, e.g. by packing more uranium into it?

    Again, the contractor really, genuinely believes all this: he might be dishonest in certain specific statements that he makes to General Groves, but his overall plan is at worst misguided, not “dishonest” (even if adopting his plan would have the effect, as I put it, of steering the Manhattan Project “off the path of intellectual honesty”). And since he’s not dishonest, I claim that this analogy isn’t either.

  104. Michael Says:


    If you want your analogy to be valid, at some point people would have to build a vastly more powerful quantum computer than the D-Wave. Otherwise, comparing yourself and your colleagues to Bethe and Oppenheimer is far more hype than anything they’ve done.

  105. Scott Says:

    Michael, the entire point of my analogy was to show how General Groves would have gone wrong in 1943, if he’d completely ignored the underlying theory, and made the contest about “who can make the most impressive-looking explosion today, which somehow involves some nuclear effects?” By ignoring what we understand about the theory, and looking only at the explosions, you’re making exactly the mistake I was warning against. Today, the situation in quantum computing is that we know the theory; we know what the laws of physics predict is possible; but we’re not even close to being ready for a “Trinity test.” Whether we ever get there depends on a whole bunch of factors, including the evolution of technology, whether someone decides to invest “Manhattan Project”-level funding to make it happen, and whether the world decides to abandon the goal—let’s say, because the military contractor’s dirty bombs are good enough, or (conversely) because their lack of any clear advantage over conventional bombs shows that “nuclear weapons are bunk.”

  106. quax Says:

    Scott #103, sorry still doesn’t work for me.

    If we equate ‘nuclear’ with ‘quantum’ in your parable, then D-Wave clearly set out to build something that is ‘nuclear’ in nature, i.e. if your contractor came up with a bomb that somehow utilizes the weak nuclear force rather than the strong one, there’d be some sort of valid parallel. After all, you don’t contest the point that they try to implement true *quantum* annealing.

    Also D-Wave understands what a gate based QC is, but concluded it is unrealistic to build one that’ll be useful at this point.

    So the company is by far not as befuddled as the military contractor in your analogy.

  107. quax Says:

    Scott, #105 so let’s cut to the chase, what underlying theory are they ignoring?

    They don’t go with gate based QC but rather quantum annealing for pragmatic implementation reasons.

    You wrote several times that there is no theory that indicates if quantum annealing can yield a quantum speed-up.

    You also wrote that for that reason it is essentially an experiment.

    If there is no theory for this particular experiment how can D-Wave at the same time blatantly ignore it?

  108. Scott Says:

    quax #106, #107: For starters, they’re ignoring that with stoquastic Hamiltonians and finite temperature, everything should be efficiently simulable using quantum Monte Carlo. That’s the theory, and Boixo et al. appear to have validated it experimentally.

    And obviously, any analogy whatsoever only works until it doesn’t work. If you like, though, I can flesh things out still further:

    The military contractor understands what a fission bomb is, but concludes that it’s impractical to build one at this point (and maybe they’re even right … suppose this is 1928 instead of 1943). And they point out, correctly, that the radioactive waste spread by their dirty bomb really does use the weak nuclear force. And yes, theory suggests that this bomb won’t produce a nuclear chain reaction, the way the “purists’ ” bomb would, but (say the contractor and its supporters) how will we really know there’s no chain reaction until we try it out? And even if we do try, and it doesn’t produce a chain reaction … well, how do we know that it won’t do so in the future? Maybe we just need to pack in some more low-grade uranium…

    (In which case, I’d say the same that I say about D-Wave: fine, OK, just call it a physics experiment! Don’t pretend that you know any theoretical reason why we should expect this route to produce a chain reaction.)

  109. quax Says:

    Scott #108, so where do these guys go wrong?

    This paper suggests that there are significant differences between the dynamics of quantum annealing and the annealing of a Quantum Monte Carlo simulation.

    These guys suggest a Quantum Speedup by Quantum Annealing.

    From the outset it seems there is plenty of theoretical ambiguity to justify the experimental D-Wave approach.

  110. Roielle Hugh Says:

    At this juncture, it seems that analogy is fading as a fad, and the classical-quantum continuum should become the topic of the day.

    sub-q and/or sup-c, to me, may give people less distraction and something to see eye to eye with. I prefer things that are vague possibilities to things whose relevance I strain my neck to see.

    Scott, thank you so much again for your kind time. I know very well, and I think people realize that, walking around this world, it is not easy to find one so patiently explain things to people in such candid exchanges. Truly, I am grateful.

    I see obviously that a lot of things said are just for a bit of fun, just that the receiving end may have a different type of enjoyment. Arguments well intended, however, may just be slightly sharpened human dialogue.

    Roielle Hugh

  111. Jay Says:

    Questions for Seung Woo Shin, if he still reads.

    As far as I understand your paper, you compared three things: a) a quantum simulation of D-Wave II; b) a classical model of the same machine; c) the actual output of this machine on a set of “random” instances as published last year.
    For sociological reasons all the fuss is about b) versus c). Actually, I find the stratospheric correlation between a) and b) even more interesting. Two (series of) questions please:

    1) I was surprised you were able to compute a) using a classical computer. Isn’t that supposed to be exponentially difficult, and 100+ qubits already too hard? How many qubits could you simulate this way?

    2) Have you tried brute force to identify a set of instances (apart from the published ones) for which a) behaves differently from b)? Could this strategy, if it fails for D-Wave II’s present architecture, allow you to evaluate the potential of upcoming or proposed modifications of the chip?

  112. Scott Says:

    quax #109: I’d have to study the first paper before forming an opinion; in any case, it doesn’t seem directly relevant to speedup claims.

    The second paper is about the conjoined-trees example, which I already discussed in comment #42. If you don’t know, that example involves two complete binary trees whose leaves are connected by a random matching, and which is then embedded into a Boolean hypercube in a random way to produce an oracle separation. I don’t know how heavily D-Wave supporters want to lean on that example, since it’s extremely contrived even by the standards of us complexity theorists! :-) (I don’t mean that as a criticism.) Indeed, to date we have no example of any constraint satisfaction problem that realizes anything like the conjoined-trees behavior: we only know how to realize it using abstract black boxes. One could say: “if this is how much special structure you need to get a speedup using quantum annealing, I guess it’s not too practically relevant, then…”
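    For readers who haven’t seen the conjoined-trees example, the bare graph is at least easy to build; here is a toy sketch (the function name and node layout are my own, and the random hypercube embedding that actually produces the oracle separation is omitted):

```python
import random

def conjoined_trees(depth, seed=0):
    """Two complete binary trees of the given depth whose leaf sets are
    connected by a random perfect matching, returned as an adjacency list.
    Toy sketch only: the oracle-embedding step is omitted."""
    rng = random.Random(seed)
    size = (1 << (depth + 1)) - 1          # nodes per tree (heap layout)
    edges = []
    for offset in (0, size):               # two disjoint complete binary trees
        for v in range(size // 2):         # internal nodes
            edges.append((offset + v, offset + 2 * v + 1))
            edges.append((offset + v, offset + 2 * v + 2))
    leaves_a = list(range(size // 2, size))
    leaves_b = list(range(size + size // 2, 2 * size))
    rng.shuffle(leaves_b)                  # random matching between leaf sets
    edges.extend(zip(leaves_a, leaves_b))

    adj = {v: [] for v in range(2 * size)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return adj
```

    In this version the two roots and the matched leaves have degree 2, and every other vertex degree 3; the hardness comes entirely from the random embedding into the Boolean hypercube, not from the graph itself.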

  113. quax Says:

    Scott, thanks for pointing me to #42; I missed that one. No doubt the practical value of this particular problem is rather limited, but if there’s a pin in our haystack, maybe we can find a nail for the DW2 hammer after all.

    As to the first paper, just wanted to stress that there seems to be enough differences that an experimental exploration appears justified to me.

    Getting back to the subject of the Times article. It got me to contemplate the way blogs are changing the scientific discourse. Frankly, there is one rather big mis-characterization in the Times article: describing you as just “one of the closest observers of the controversy” is not doing you justice.

  114. Raoul Ohio Says:


    Amazingly, Scott finds time to do other stuff than observe the D-Wave controversy.

  115. Seung Woo Shin Says:

    Scott #22: I very much look forward to meeting you, too!

    Jay #111:
    First of all, let me clarify that our results concern D-Wave One, the 128-qubit machine, not D-Wave Two, the 512-qubit machine. At this time we unfortunately do not have access to experimental data regarding D-Wave Two. Now to your questions:

    1) We did not run these simulations ourselves but used the data from the Boixo et al. paper which Matthias Troyer kindly shared with us. The reason that such a simulation is possible is that D-Wave machines implement what are called stoquastic Hamiltonians, for which the efficient simulation methods known as Quantum Monte Carlo generally work well.

    2) This could be an interesting question, but we haven’t really gotten to it yet. At the same time, note that a quantum annealer will be computationally interesting only when it can achieve certain things that no efficient classical algorithm, including Quantum Monte Carlo, can achieve.
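    To unpack the “stoquastic” point a bit: a Hamiltonian is stoquastic when all of its off-diagonal matrix elements in the computational basis are real and non-positive, which is exactly the property that lets path-integral Quantum Monte Carlo avoid the sign problem. A toy check for a transverse-field Ising Hamiltonian (pure Python, invented function names, practical only for a handful of qubits):

```python
def tfim_hamiltonian(n, gamma, couplings):
    """Dense matrix of H = -gamma * sum_i X_i - sum_{(i,j)} J_ij Z_i Z_j
    in the computational basis, for n qubits (toy sizes only)."""
    dim = 1 << n
    H = [[0.0] * dim for _ in range(dim)]
    for s in range(dim):
        # diagonal Ising part: Z_i has eigenvalue +1 on bit 0, -1 on bit 1
        z = [1 - 2 * ((s >> i) & 1) for i in range(n)]
        H[s][s] = -sum(j * z[a] * z[b] for (a, b), j in couplings.items())
        # off-diagonal transverse-field part: X_i flips bit i
        for i in range(n):
            H[s][s ^ (1 << i)] -= gamma
    return H

def is_stoquastic(H):
    """True if every off-diagonal element is <= 0 (no QMC sign problem)."""
    return all(H[r][c] <= 0
               for r in range(len(H)) for c in range(len(H)) if r != c)

# a 3-qubit ring with a positive transverse field is stoquastic
H = tfim_hamiltonian(3, gamma=1.0,
                     couplings={(0, 1): 1.0, (1, 2): 1.0, (2, 0): -0.5})
```

    Flipping the sign of the transverse field makes the off-diagonal elements positive, and the same check fails — in this basis such a Hamiltonian is no longer manifestly sign-problem-free.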

  116. Gil Kalai Says:

    As I mentioned in Seung Woo’s nice talk in Berkeley, Boixo et al.’s statistical evidence for long term quantumness looked very weak to start with. Thinking a little more about it now, here is a way to look at these matters:
    We have here 4 algorithms A, B, C, and D working on some optimization problem depending on a parameter N. We choose 10 random instances for every value of N between 9 and 108, and for each of these 1000 instances we run 1000 experiments and compute the probability of success. So we get 1000 probabilities. Now we draw the histograms of these probabilities and notice that in cases B, C and D we have two bumps, near zero and one, while for A the histogram is flatter, looking a little like a flattened reverse parabola. C and D look especially similar, and their similarity is supported by a correlation test.

    This is regarded as evidence that algorithm D behaves like C and that D manifests long term quantum interaction!!!

    What is going on? The simplest explanation is that for algorithm A the success rate declines roughly linearly with N, while for algorithms B, C and D there is a sharper decline at some specific value of N. Just that. This is very, very weak evidence to start with concerning algorithmic similarity between C and D, not to speak of the fantastic claims about quantumness.

    On top of this, the comparison methodology is bizarre, and grouping all probabilities together while ignoring the values of N is mistaken. Also, a test examining how different algorithms perform on the same instances could add relevant information for a comparison. (I.e., to compare the sequences of 1000 probabilities rather than the sets of 1000 probabilities.)

    The SSSV paper treated the methodology of Boixo et al. as a black box and used their method to show that a classical algorithm E is even “closer” to D than C is, and also that E and C are very close. This nicely demonstrates that Boixo et al.’s conclusions are invalid without an attempt to open the black box. In fact SSSV cautiously suggest that E and C indeed behave similarly (and perhaps also E and D). However, it would be better to examine SSSV’s interesting statement “One way to view our model is as a classical mean field approximation to simulated quantum annealing” using correct methods. It looks like here (as almost always) a 0.99 correlation is primarily an artifact of a wrong methodology.
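    To make the artifact concrete, here is a toy sketch (all names, curves, and numbers invented): two mock “algorithms” whose per-instance success probabilities are different monotone functions of a shared latent instance hardness both produce bimodal histograms and a high correlation, with no mechanistic similarity whatsoever:

```python
import math
import random

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

rng = random.Random(1)
hardness = [rng.random() for _ in range(1000)]    # shared latent instance hardness

def success_probs(steepness):
    """A mock 'algorithm': success probability is a sharp sigmoid in the
    shared hardness, so its histogram has bumps near zero and one."""
    return [1.0 / (1.0 + math.exp(steepness * (h - 0.5))) for h in hardness]

p_c = success_probs(20.0)    # mock "algorithm C"
p_d = success_probs(35.0)    # mock "algorithm D": a different response curve

corr = pearson(p_c, p_d)                                    # high, yet meaningless
mid_frac = sum(1 for p in p_c if 0.1 < p < 0.9) / len(p_c)  # few mid-range values
```

    Both success-probability distributions are bimodal and strongly correlated simply because both track the shared instance hardness — which is the point: histogram shape and correlation alone say nothing about the underlying mechanism.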

  117. Rahul Says:

    @Gil Kalai

    Thanks for clarifying. I’ve always been puzzled that the quantumness-vs-classicality question is being decided by using the bimodality of the success rate distribution as a fingerprint. Intuitively I found that a bit too crude.

    A follow-up question to you: If you had a black box that either ran Classical Annealing or Quantum Annealing but you didn’t know exactly which, what are the sorts of possible tests that you could run to distinguish?

  118. Seung Woo Shin Says:

    Gil #116: Hi Gil! Let me make one clarification as there seems to be some misunderstanding regarding the methodology of Boixo et al. In fact, all of the one thousand instances used in the histogram/correlation analysis of Boixo et al. were of the same size, N=108. Smaller instances were used in the scaling analysis.

  119. Gil Kalai Says:

    Hi Seung Woo! Thanks! This weakens, of course, my specific explanation for the shape of the histograms :), and my critique of Boixo et al.’s method. It still looks as though similarities in the shape of histograms of success probabilities may reflect things with little relevance to the similarity of the algorithms, and certainly no relevance to quantumness. (E.g., linear dependence vs “sharp-threshold” dependence on some combinatorial parameter of the instance.)

  120. Roielle Hugh Says:

    Scott and all,

    I am wondering if it might help, even a tiny bit, to take a slightly different perspective.

    Can we solicit people to give a view from a different angle? Not just a different way of interpreting the statistics?

    Gil Kalai #116: Thanks for the post.

    Roielle Hugh

  121. Michael Says:

    Scott: With regards to #105… So your concern is that if D-Wave fails, then people will not take the field seriously? I think people would realize this is just one company’s approach, and differing approaches might have better success. Many technologies take numerous attempts. For example, if Nissan’s autonomous cars never materialize, other companies would still try for them. It’s only if everyone gets stuck over a long period of time that people start to wonder.

    You make it sound like serious quantum computers are a long way off, and might never happen. Just out of curiosity, in your opinion where is the best progress being made right now towards legitimate quantum computers?

  122. Michael Marthaler Says:

    Seung Woo Shin:”The reason that such a simulation is possible is that D-Wave machines implement what is called stoquastic Hamiltonians, for which efficient simulation methods known as Quantum Monte Carlo are known to generally work well.”

    I hadn’t previously fully appreciated this point. I thought that simulated quantum annealing would give us access to larger quantum systems, but would ultimately (asymptotically) also scale exponentially. But then… it means that even the one experiment showing large-scale quantum effects already shows that there will very probably be no quantum speedup.

    (Yes, Scott pointed this out much earlier. I had not put this in the context of the Boixo et al. work.)

  123. Scott Says:

    Michael #121: I certainly hope you turn out to be right!

    Serious progress on experimental QC is being made all over the world right now. Even if I were totally caught up on what’s happening (which I’m not), I couldn’t possibly give you a full list, but a few things spring to mind. In trapped ions: David Wineland’s group at NIST (which was recently honored with the Nobel Prize in Physics), Rainer Blatt’s group in Innsbruck, and the group centered at the University of Maryland. In superconducting qubits: Robert Schoelkopf’s group at Yale, and John Martinis’s group at UCSB, both of which now have superconducting qubits with about 10,000x the coherence times of D-Wave’s qubits (!!!)—though of course, just a few at a time so far and not with controllable couplings. In general, if you want to understand the real progress toward QC, I’ve learned that you need to stop asking the sorts of questions laypeople always ask:

    “How many qubits do you have? How big a number can you factor?”

    Again, this is like asking people trying to get to the Moon how high they’ve managed to jump so far on a pogo-stick. The better questions are things like, what decoherence rates (“T1 and T2 times”) can you achieve? How far are you from the fault-tolerance threshold? And there, the basic situation is that (a) the experimentalists are still quite far from “going critical” (i.e., getting below the fault-tolerance threshold in a scalable way), but (b) the progress since they started in the 1990s has been by orders of magnitude.

  124. Michael Says:

    Scott, maybe a better nuclear analogy would be with nuclear fusion. There are techniques available which you can use to create nuclear fusion. High school kids have even used them in science fair projects. The thing is, they use highly endothermic reactions, and there’s no real chance they can ever be used to create nuclear weapons. So yes, it’s cool they can do this, but no one would ever confuse it with the real thing.

    But from what you are saying, quantum computing has a disadvantage, which is no one really knows what its true potential is. Maybe I’m biased being from pure math, but in my field people keep trying and trying to solve the big problems, and someone coming up with a half-baked approach hardly slows anyone down. If what you’re saying about the D-Wave project is correct, I think you really don’t have to be worried. As long as the potential exists I think people are going to keep trying to make a serious quantum computer, because the possibilities are so huge.

  125. Scott Says:

    Michael #124: Thank you for the interesting comment! I also spend most of my time doing what many people would call “pure math,” and I think the biggest difference with D-Wave is that there’s actual money involved. I.e., this isn’t some postdoc trying to prove the Riemann Hypothesis with a glint in his eye, but a company that’s spent $150 million, employs maybe 100 people, and has sold devices to Google and Lockheed for millions of dollars. So, when and if it turns out that you indeed don’t get a speedup from quantum annealing, I can easily foresee the next round of TIME magazine headlines: “Quantum Bamboozle” / “The Box Is Open and the Cat Is Dead” / “Quantum Computing: Has It Finally Jumped Schrödinger’s Shark?” :-)

    Indeed, let me go further: I predict that, just as today I’m attacked for being too critical of D-Wave, years later I’ll be attacked for not having done enough to warn people about the impending trainwreck!

    (By analogy, and to return to pure math: three and a half years ago I was heavily attacked, for betting $200,000 against the validity of Vinay Deolalikar’s claimed P≠NP proof without having identified the flaw. Then, after multiple fatal flaws had emerged, I was again attacked, but this time for not having come out against the proof unequivocally enough!)

  126. Rahul Says:


    Yes, but someone coming up with a half-baked approach in pure math does not get $150 million of funding. Big difference.

  127. Rahul Says:

    In hindsight there’s a lot of parallels between the Vinay Deolalikar saga & D-Wave.

  128. Roielle Hugh Says:

    Scott #123 & Michael #122 & #114,

    Scott, sorry! (I think you know what I mean).

    So pogo-stick:
    Are we using a pogo-stick?
    1. Yes we are; then we cannot blame people for asking about heights related to the stick.
    2. No; then tell people what we are using.
    (I am simplifying the situation, but you get the idea.)

    Also, people may not be knowledgeable about the internals of the pogo-stick or whatever instrument that is. If the impression is that it is the stick that matters, then they are not going to ask about ‘decoherence’, etc., which just go with the stick. Is this a reasonable way of looking at the thing?

    Back to the point of letting people give different perspectives. I think Michael is offering something.

    Anything half-baked would be very good, but that I expect to go far beyond, for example, to at least half explain why it is possible (or just apparently plausible) that ‘a person is perfectly honest but not telling the truth’, so to speak.

    So I ask again, can we solicit people to offer different perspectives?

    Michael, I think that the point(s) you made are very good.

    I believe I get your point. However, “no one would ever confuse it with the real thing”, a very crucial pivoting point maybe, depends on what IS ‘real’ (in broader sense).

    Roielle Hugh

  129. Roielle Hugh Says:

    “In hindsight there’s a lot of parallels between the Vinay Deolalikar saga & D-Wave.”

    With that, I, Roielle Hugh, absolutely fail to find any agreement at any level.

  130. Mike Says:

    Is John Sidles now posting under the name Roielle Hugh? :)

  131. rrtucci Says:

    A better analogy than Deo-whatever is
    Loop Gravity, Perimeter \sim Adiabatic QC, D-Wave
    String Theory, Lubos \sim Gate Model, Scott

  132. Scott Says:

    rrtucci: I wonder how one could possibly explain, on your proposed analogy, why Perimeter and IQC are hotbeds of D-Wave skepticism?

  133. Alexander Vlasov Says:

    rrtucci #131, and, finally
    Geant4, CERN \sim TRNSLTR, NSA

  134. quax Says:

    Scott, obviously you should have immediately seen the shortcomings in Vinay Deolalikar’s claimed P≠NP proof, but then again maybe you did, and rather tried to cash in with your $200K bet?

    Anyhow, for $100K hush money I won’t tell anybody that I figured out that ruse :-)

  135. Scott Says:

    quax #134, I offered $200K (i.e., much of my life savings) if I was wrong, while getting zilch if I was right. And even that wasn’t a strong enough stance for some people’s satisfaction. See, in particular, this comment by Matt Hastings:

      You at least came out against [Deolalikar's proof], but personally I think you were way too cautious. I thought it should have been obvious to anyone that there was nothing there.

    My favorite rejoinder was that, instead of offering “merely” $200K, maybe I should’ve offered self-immolation! (Incidentally, it goes without saying that Matt himself said nothing in public while the affair was in progress…)

  136. quax Says:

    Scott #135: well, good thing that at this point in time nobody could have asked for your first-born!

  137. Seung Woo Shin Says:

    Gil #119: Yes. I agree with you that the “difficulty” of an instance could perhaps depend more on the combinatorial properties of the instance than on the properties of the algorithm being used to solve it.

    However, I think the approach of Boixo et al. could still have been interesting. As far as I understand, the question that they were trying to address in that paper is “Is D-Wave performing quantum annealing or classical annealing?”

    They identified that quantum annealing exhibits a behavior that classical thermal annealing cannot: namely, the bimodal signature of the histogram. They then observed that D-Wave also exhibits this bimodal signature. At this point, I believe one can reasonably conclude that D-Wave is not doing classical thermal annealing. Or one could say D-Wave is closer to quantum annealing than thermal annealing.

    The problem is that the conclusion of the paper was that D-Wave is doing quantum annealing. If they somehow knew a priori that quantum annealing and thermal annealing were the only possible alternatives, this argument would have been valid. But since we don’t know that, one would need to rule out all reasonable models of classical annealing to make this claim. Indeed, we show in our paper that there exists a classical model which retains both the bimodal signature and high correlation with the experimental data.
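
    As a concrete illustration of the methodology under debate, here is a toy sketch (my own illustrative setup, not the actual Boixo et al. or Shin et al. models) of how such a success-probability histogram is built: brute-force the ground state of each small random Ising instance, estimate each instance’s success probability under plain thermal (Metropolis) annealing, and examine the distribution of those probabilities across instances.

```python
# Toy sketch of the success-probability-histogram test. All parameters
# (instance sizes, schedule, sweep counts) are hypothetical choices for
# illustration only.
import itertools
import math
import random

def random_ising(n, rng):
    # +/-1 couplings on the complete graph over n spins, no local fields
    return {(i, j): rng.choice([-1, 1])
            for i in range(n) for j in range(i + 1, n)}

def energy(J, s):
    return sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

def ground_energy(J, n):
    # brute force over all 2^n configurations (fine only for tiny n)
    return min(energy(J, s) for s in itertools.product([-1, 1], repeat=n))

def anneal(J, n, rng, sweeps=60):
    # classical thermal (Metropolis) annealing, linear cooling schedule
    s = [rng.choice([-1, 1]) for _ in range(n)]
    for t in range(sweeps):
        T = max(0.05, 3.0 * (1 - t / sweeps))
        for i in range(n):
            # local field at spin i, and the energy change of flipping it
            h = sum(J[min(i, j), max(i, j)] * s[j]
                    for j in range(n) if j != i)
            dE = -2 * s[i] * h
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i] = -s[i]
    return s

def success_probability(J, n, rng, runs=25):
    e0 = ground_energy(J, n)
    hits = sum(energy(J, anneal(J, n, rng)) == e0 for _ in range(runs))
    return hits / runs

rng = random.Random(0)
n = 6
probs = [success_probability(random_ising(n, rng), n, rng)
         for _ in range(20)]
print(sorted(probs))  # the shape of this distribution is the "signature"
```

    At realistic sizes the ground state cannot be brute-forced, so the real studies must certify optima by other means; this sketch only shows the shape of the test, not any particular model of the D-Wave machine.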

    Perhaps the word “quantum” is often much too general in this context. What is a quantum computer? It seems that everyone defines it differently. For instance, there are studies that support the existence of quantum effects using a small number (2-8) of qubits. While these experiments are interesting from a physics perspective, I don’t think such demonstrations give us a quantum computer. If you had a gate model quantum computer with one hundred qubits, only a few of which could be entangled at a time, would you call it a quantum computer?

    To deserve the name “quantum computer”, D-Wave will need to demonstrate an algorithmic behavior that is (a) scalable and (b) distinctly different from that of any model of classical annealing. If they can do this, I’m sure everyone in this thread will jump to their feet in excitement.

    Rahul #117: I hope the above partially answers your question too.

    Michael Marthaler #122: The point is that according to Boixo et al. and more recently Rønnow et al., D-Wave does not seem to be free of this exponential scaling either. It appears that what D-Wave can do in a reasonable amount of time, Quantum Monte Carlo can also do in a reasonable amount of time.

  138. Roielle Hugh Says:

    “Is John Sidles now posting under the name Roielle Hugh?”

    Definitely, no.

    Who is John Sidles? Sounds familiar. :)

    “I offered $200K (i.e., much of my life savings) if I was wrong, while getting zilch if I was right. And even that wasn’t a strong enough stance for some people’s satisfaction.”

    Things are moving in the right direction and out of hand. :)

    Stay tuned.

  139. Roielle Hugh Says:

    Scott and all,

    Sorry for my error in #128.

    “Scott #123 & Michael #122 & #114,” should really be “Scott #123 & Michael #122 & #124,”. Very sorry indeed for any confusion.

    This post is in response to #125.

    I think we are again a bit removed from the real issue. Therefore, I also choose to stay a bit off tangent here.

    The thing gets clouded when other things get mixed in. We can talk about the amount of funding, or even conspiracy theoretic aspects, but such mixing-in, in my opinion, does little to help. One probably can go a full, complete cycle on this, but it will add next to nothing I expect. I am not doing it.

    If, say, we want to draw an analogy or a parallel, we at least need to know the essentials about the target. Do we have that kind of knowledge now about D-Wave? I doubt it. My simple logic is that if we did, we would be more definitive than merely or largely empirical (both ways). Maybe I am wrong.

    Let me take a turn but stay still off tangent. I was very critical of the $200k bet Scott placed (but never uttered a word on it). I still am, but for another, or a slew of other reasons.

    I am sometimes unconventional and ‘paradoxical’, and because of that, I may do things that turn out to be very stupid. And I now perform one unconventional and ‘paradoxical’ act.

    I challenge Scott to bet that $200k again with me on the solution of “P vs NP”.

    I will do slightly better than Vinay, who offered, if I am correct, nothing. I offer $200. (The amounts are all in USD, for which I have to figure out my local currency. :)) I think I can offer more, but money is not the issue here.

    By convention, odds are stacked hugely against me, even though the 1000:1 ratio in amounts gives the appearance of it being in favor of me. But I will improve it, in favor of Scott.

    Scott, you will not be hospitalized. Really, take this as just a little piece of fun. On my side, things will be arranged also. So no worries. My family will be on water, air and meditation for some period. :)

    So what is the bet? I bet that within a few days (fix that to be 30 days) after we finalize the bet and have funds deposited at the escrow we will decide on, I will present the proof of the solution to “P vs NP” here on this blog. By the way, I will send out the check for $200 USD two to three days after we decide on the escrow.

    How do we decide who has won? I trust Scott to be honest and the world to be impartial.

    How long will it take for the world to verify? If it is following my format, it will be instantaneous (due to standard, simple math).

    Now, let me improve the odds for Scott and remove the ‘paradox’ at least partially.

    Scott does not need to deposit the entire $200k. He can do the same as me with $200.

    Further improvement: if I win, Scott can decide how much he pays me (but that must be at least one dollar, payable to “The Winner of the P vs NP Bet”) and I reserve the right to refuse any amount above one US dollar.

    Scott, is that a deal?

    By the way, this bet is open to the entire world, and I may do slightly better than the resolution to this one containment question.

    Roielle Hugh

  140. Jay Says:

    Seung Woo #115,

    Didn’t know about Quantum Monte Carlo efficiency. This changes my perspective, thanks!

    About the running time, how does your algorithm compare to QMC, and how many “qubits” can you simulate this way? (e.g. do both QMC and your spin model still play nicely if you increase the size of the problem/architecture?)

    SWS #137

    How clear and enlightening! ;-)

  141. Scott Says:

    Roielle #139: I accept a deal wherein, if you solve P vs NP by 30 days from now, I pay you $200,000, and if you don’t, you pay me $200. But I won’t place money in escrow, etc. as part of the arrangement — simply because it isn’t worth the hassle for me to do so.

  142. Raoul Ohio Says:

    Roielle Hugh has one characteristic of John Sidles, namely enthusiasm.

  143. Roielle Hugh Says:

    Scott #141:

    That format works for me as well; actually, if one thinks about it, even better.

    No, I will absolutely not accept your $200k. I have to reserve the right to refuse any amount above $1.

    Since you agree, things are much simpler. I try to get things ready so the 30-day deadline is met.

    We will follow my format, which is simple. The time required from you will be for answering a few simple questions. We have to confirm and agree on the steps we establish so that we do not get into arguments back and forth.

    Roielle Hugh

  144. Mike Says:

    Raoul Ohio#142,

    Say what you want about John, but he would never have bet $200 on this . . . proof positive that Roielle and John are not one and the same. :)

  145. Scott Says:

    Roielle #143: So if I understand correctly, what you’re proposing is that you pay me $200 if you don’t manage to solve P vs. NP by Wednesday March 12, while I pay you $1 if you do manage to solve it?

    If so, then I accept the deal!

    For logistics, how about the following: by Wednesday March 12, if you believe you’ve solved the problem, then submit the solution to Journal of the ACM, while also sending me a copy of your submission. We’ll consider the problem “solved” if your submission gets accepted, and “not solved” if it gets rejected.

  146. Fred Says:

    Given that the theoretical speed-ups of QCs aren’t even that obvious (save for Shor’s factoring), isn’t the main risk that QC technology will always be sharply behind classical computers in terms of practical performance?
    Because in the meantime classical computers are still progressing at a fast pace and getting cheaper (we’ve hit over-clocking limits for single CPUs, but we’re putting more and more cores in parallel… or brute-force data mining with gigantic server farms, etc.).
    We’re still far from anyone demonstrating a modular way to keep adding qubits – we’re more in a situation where building a practical QC is exponentially complex/expensive in the number of qubits.

  147. Scott Says:

    Fred #146: Well the hope, of course, is that eventually we’ll get past the QC fault-tolerance threshold. After that, one can add more qubits in a scalable way, without the expense increasing exponentially with the number of qubits. And at that point, for the specific things where a QC gives you a special advantage (like factoring and quantum simulation), classical computers will no longer be able to keep up at all. And that’s even assuming that Moore’s Law continues for a while longer—when in reality, the simplest versions of Moore’s Law (e.g., about transistor density and cycle time) have already basically stopped!

    On the other hand, for those problems where QC doesn’t give you a special advantage—i.e., for 99% of what we do with our computers on a day-to-day basis—my guess is that QCs will never be competitive with classical computers. Anyone who claims otherwise is likely to be selling snake oil.

  148. Fred Says:

    Scott #147
    Yes, I guess it will ultimately depend on the patience and faith of investors – chances of a substantial return on investment seem pretty slim short/medium term (hence the snake oil effect?). Unless someone comes up with a really cheap way to achieve scalable fault-tolerance.
    Seems to me that early QCs should really focus on doing pure quantum simulations and show a speed up this way.

  149. Roielle Hugh Says:

    Scott #145:

    I am copying your message below, to be sure there is no ambiguity. My comments follow.

    >>>>> Copy of Scott’s message

    Roielle #143: So if I understand correctly, what you’re proposing is that you pay me $200 if you don’t manage to solve P vs. NP by Wednesday March 12, while I pay you $1 if you do manage to solve it?

    If so, then I accept the deal!

    For logistics, how about the following: by Wednesday March 12, if you believe you’ve solved the problem, then submit the solution to Journal of the ACM, while also sending me a copy of your submission. We’ll consider the problem “solved” if your submission gets accepted, and “not solved” if it gets rejected.

    <<<<< end of Copy

    Effectively, yes.

    Remember, I am unconventional. So we do it my way.

    It is that you bet $200k, keeping your confidence about the matter, but you have the choice of paying as low as one (1) US dollar, AND I have the right to refuse any amount above $1. I think that is clear. Let me know if you understand otherwise.

    Concerning the deadline: today is the tenth of February, so that is right about the March 12 date. I will likely do it earlier, so prepare to give some of your time earlier than that.

    I will give you an update very soon, laying out some details and stating when I want to do it. I will let you pick a convenient date so that this nuisance will not affect your time with work or family.

    If you are proposing a change to the format, to me, that is a show of lack of confidence. I do not mean it in any bad way.

    Now, to alleviate your lack of confidence, if any, I give you all the time you want to check with the leading experts you want for any questions I ask. But that way, I have no control over the time for its completion. I have to put a controllable end to this. Please tell me how much time you think you will need for consultation. Also, if there is really a need for additional time for you to do the math part, I can grant it. The world will know if that request is reasonable. But there has to be a controllable end. If you or anybody think this is not reasonable, tell me why. By the way, as I said, the math will be standard and simple.

    Again, there will be no trick involved. Remember, the world, the entire world is watching. No trick is actually possible.

    Please let me know as soon as possible if you really accept the deal.

    Roielle Hugh

  150. Vadim Says:

    Is this silly hour at Shtetl-Optimized or do we really have people planning on solving P vs. NP in 30 days and spending an inordinate amount of time hashing out the particulars of the bet?

  151. Scott Says:

    Vadim, if Roielle really wants to mail me a check for $200, I feel that accepting it would be a service to humanity. :-)

    And Roielle, your comment is confusing and rambling as usual.

    Do we agree that the bet will be settled by your sending your P vs. NP solution to the Journal of the ACM, like I proposed?

    Do we agree that, if your manuscript is rejected, you will mail me a check for $200?

    And do we agree that under no circumstances are you requesting more than $1 from me?

    If so, then that’s all I need to know.

  152. Seung Woo Shin Says:

    Jay #140: The scaling of D-Wave, QMC, and SA is the main topic of the recent Rønnow et al. preprint, which Scott summarizes here. All three algorithms are shown to yield similar scaling.

    As for our model, I cannot give you a clear answer as we haven’t performed a scaling analysis. But based on the fact that it agreed so well with QMC in our experiments, I conjecture that it would have similar scaling to that of QMC.

    One caveat I should mention is that here we are talking about asymptotic scaling and not the actual running time.
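
    The caveat can be made concrete with a toy calculation (the prefactors and the exponent are hypothetical numbers chosen only for the example, though exp(c·√N) is the general form discussed in the Rønnow et al. comparisons): two solvers with identical asymptotic scaling can still differ by a fixed thousand-fold factor in wall-clock time at every problem size.

```python
# Two hypothetical solvers with the same scaling exp(c*sqrt(N)) but
# different constant prefactors: asymptotically identical, yet one is
# always 1000x slower in absolute running time.
import math

def runtime(prefactor, N, c=0.2):
    # assumed runtime model, for illustration only
    return prefactor * math.exp(c * math.sqrt(N))

for N in (128, 512):
    fast = runtime(1e-6, N)  # e.g. dedicated hardware
    slow = runtime(1e-3, N)  # e.g. a software simulation
    print(N, slow / fast)    # constant 1000x ratio at every size
```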

  153. Roielle Hugh Says:

    Scott #151:

    Sorry, Scott, the confidence seems to be disappearing.

    Your first response mentioned the escrow issue. I immediately removed that.

    I reduced your symbol of confidence from $200k to your choice of $1 as a minimum.

    You are now repetitively asking to confirm about the $1 thing.

    I started with an online proof format (see my first post proposing the bet). You did not object in your first response, then suggested an alternative with “how about”, and now you want to make that mandatory.

    Scott, the world is watching. Where is the confidence?

    Whoever will be the ACM referee can reject and point out the error online.

    Remember, it is just a little bet with $1 involvement. What is the issue? Where is the confidence?

    Roielle Hugh

  154. quax Says:

    Roielle, I’d be happy to round out the number and throw in $199 if that’s what it takes. Don’t want to deprive humanity of a P=NP proof over some petty cash, now do we? :-)

  155. Vadim Says:

    Roielle, I’m just curious, are you intending to prove that P=NP or !=?

  156. Bram Cohen Says:

    A pedantic point on the nuclear weapons analogy: A bomb that sprayed around uranium would technically be a ‘dirty bomb’ in that the stuff it was spraying around was radioactive, but the amount of radioactivity in it would be far from the nastiest thing about it. Uranium compounds, especially the gaseous ones, tend to be extremely toxic, for chemical reasons, and uranium can easily build up in the body and not get expelled well, because it’s a heavy metal, in fact the heaviest of all metals, at least on a per-atom basis. The effects of uranium poisoning aren’t well understood, because it hardly ever happens, but a bomb which sprayed around uranium compounds would be an extremely nasty *chemical* weapon.

  157. Roielle Hugh Says:

    Vadim #155:

    That I am not revealing at this point.

    You are clever! By asking this question, I think you already have the answer. But I reveal a tiny bit more.

    The direction should be obvious due to online proof and standard, simple math.

    Thanks for the interest, which I at this point can just assume is sincere.

    Best regards,
    Roielle Hugh

  158. quax Says:

    Bram C. #156: Indeed rather off-topic, but this “extremely nasty *chemical* weapon” has been used extensively in Iraq, as depleted uranium is included in armor-piercing ammunition, and is indeed blamed for devastating health effects.

  159. Roielle Hugh Says:

    quax #154:

    Thanks so tremendously!

    Money is not the issue, though some might be using that to gain a few meaningless points.

    I do not want your money, and I am ready to lose that $200. In fact, my first version before I actually hit the submit button proposed a much larger amount, but thinking it might give difficulty to Scott, I reduced it to $200.

    I already said more than enough as to the direction of the proof, so your bait is not taken. :)

    Once again, my gratitude. I hope that the world eventually will know the entire story, if Destiny so wills. If I disappear from the earth before revealing the entire thing? I cannot predict.

    If I am not being too annoying, may I ask how it is that you seem to understand my rambling?

    So the most important thing. If you are really sincere, we can carry this out online. We do not have to have money involved.

    But my bet is first proposed to Scott. He sort of started this. We were discussing D-Wave. The quite relevant $200k thing popped up. So let me pursue this the full way. If Scott is shown to have no sincerity, then ….

    Best regards,
    Roielle Hugh

  160. Oi Says:

    Not to be nasty but I vote we all collectively ignore Roielle Hugh. It is entirely clear that this proposed bet is a shenanigan, and I beg Scott not to pay it any more heed!

  161. Gil Kalai Says:

    Scott #147

    “On the other hand, for those problems where QC doesn’t give you a special advantage—i.e., for 99% of what we do with our computers on a day-to-day basis—my guess is that QCs will never be competitive with classical computers. Anyone who claims otherwise is likely to be selling snake oil.”

    This is an interesting belief but I don’t find it particularly reasonable. I see no reason to think that, aside from the things for which QC gives exponential speedup, having quantum probability distributions will give no speedup at all for any other problem. We witness some important speedups using randomness (even if the common complexity-theoretic prediction is that randomness can be simulated deterministically with polynomial overhead). Why is it reasonable to think that the much wider distributions guaranteed by QC cannot give any advantage?

    It is also not clear why QC cannot give improvements in mundane tasks of searching and sorting (or linear programming) or linear algebra, which are in P anyway. Improving algorithms with running time n log n (to linear, or even just in the constants) would revolutionize many real-life applications. Why guess that QC cannot help at all?
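
    For a rough sense of scale on the n log n point (the input sizes below are my own hypothetical examples, not drawn from any particular application), the factor separating n log n from linear time is log2(n), roughly 20-30x at practically relevant sizes:

```python
# The maximum factor an "n log n -> n" improvement can buy is log2(n).
import math

for n in (10**6, 10**9):
    print(n, math.log2(n))  # ~19.9 at a million items, ~29.9 at a billion
```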

    And the firm guess that QC can give absolutely no advantage for NP-hard optimization problems (which seems to motivate in part the anti-D-Wave sentiments) is not particularly clear either.

  162. Jay Says:

    Roielle Hugh, the day your paper is accepted for publication in JACM, I will personally grant you $500, with nothing asked in return, even if your proof later turns out to be wrong, even if it takes you 20 years rather than two months. The only condition I have is that in the meantime you stop talking about it. Thank you very much.

  163. Rahul Says:

    Roielle Hugh, out of pure curiosity, how did you hear about this blog?

  164. Rahul Says:

    “Improving algorithms with running time n log n (to linear, or even just in the constants) would revolutionize many real-life applications.”

    Which ones, for example? What are the real-life applications where such a speedup would lead to revolutionary changes, especially after factoring in the cost of a QC?

    Which practical linear programming or Operations Research applications are currently hugely sub-optimal merely for want of sufficient compute resources?

    Not saying there aren’t any; I just want to know which. In my mind, a QC fits into research programs & large ambitious projects, not on the desktop. Maybe it’s a limit of my imagination.

  165. Anonymous Says:

    Scott, you should be careful not to get into a habit of betting $200k on everything. Check out Eliezer’s comment #77 in this thread:

    In particular, would you also bet 200k that I can’t get a proof of P!=NP into ACM within a year? Are you that confident that I can’t bribe the referees, or come up with another trick?

  166. Rahul Says:

    Seung Woo Shin :

    Thanks. But now that we have both classical & quantum algorithms that can yield a bimodal distribution, that strategy of fingerprinting itself becomes rather useless, right?

    My point is, even before you showed this, using the shape of a success rate distribution as a test for quantumness seems rather weak. Do you agree?

    So far you seem to have shown there’s both classical & quantum ways of ending up with the bimodal output distribution. So the question is essentially unresolved. My question is, what other ways are there of testing D-Wave’s box to differentiate between classical versus quantum annealing behavior?

    A follow-up question: Must every instance of quantum annealing necessarily exhibit this bimodal signature of the histogram? Or not really.

  167. Scott Says:

    Anonymous #165: Yes, I saw Eliezer’s comment.

    For something like Journal of the ACM, I trust the refereeing process sufficiently that I would indeed bet $200K that you can’t get a “proof of P≠NP” into there within the next year. (Where we exclude joke or parody proofs, unrelated results that you then retroactively declare to be “proofs of P≠NP,” and any kind of silly stuff like that.) On the other hand, I might not accept one of these massively lopsided bets where you only pay me $200 if I’m right.

  168. Scott Says:

    Roielle: OK, it’s on.

    Finish your “proof” by March 12. Then, if you indeed can’t get it accepted to Journal of the ACM, and you have any honesty in you, mail a check for $200 to Scott Aaronson, MIT 32-G638, Cambridge MA 02139 USA.

    If your proof is correct and gets accepted, then don’t worry, I’ll hear about it.

    In the meantime, by popular request, you’re suspended from commenting on this blog.

  169. Scott Says:

    Gil #161, an important clarification: my “99%” comment counted Grover’s algorithm, as well as the possibility of modest, “broadly Grover-like” speedups from the adiabatic algorithm. I.e., even after we include those things, combinatorial optimization and the like are still a sufficiently small fraction of what we do with our computers on a day-to-day basis, that it’s hard to see QCs ever becoming competitive with classical ones for home and office use! As I’ve often put it to journalists, you don’t need a QC for checking your email or playing Angry Birds. And on all the other dimensions—price, miniaturization, energy consumption—QCs seem likely to be wildly uncompetitive. So doesn’t it seem more likely that, in the hypothetical future we’re envisioning, the world would have a few dedicated QC servers here and there, and people would connect to them with their classical computers over the Internet if they had one of the special problems (quantum simulation, number theory problems, possibly adiabatic optimization…) for which a QC could be expected to provide some benefit? (This, of course, is also D-Wave’s model… :-) )

  170. rrtucci Says:

    Scott, you are going to get two yellow Monopoly bills mailed to you from McLean Hospital in Belmont.

  171. Scott Says:

    rrtucci: LOL!

  172. Rahul Says:

    Know what’d be a fun game? To guess who buys the next D-Wave machine.

    Any guesses?

  173. Fred Says:

    Rahul #172
    China and Russia, with a special discount subsidized by the US gov.

  174. Vadim Says:

    Rahul, my guess would be they won’t sell another one.

  175. Rahul Says:

    Hmm…I say somewhere in the Middle East. Lots of money to throw around.

    Or some eccentric billionaire.

  176. Mike Says:

    A limited partnership formed by rrtucci and quax? ;)

  177. Rahul Says:

    Incorporation is better. That way, you can buy on 60 day credit, declare bankruptcy & no one’s worse off. :)

    Except DWave I guess.

  178. Raoul Ohio Says:

    Rahul #172:

    Great game!

    My first guesses:

    (1) Someone like that Prince (?) in the Middle East who, at last report, had something like a top end Cray to help him cheat at online chess.

    (2) Roielle Hugh, who is really Britney Spears, and can afford it out of petty cash.

  179. Vitruvius Says:

    Given that alleged quantum computers work better when it’s cold, and that DWave is Canuckian, I think they should sell their next machine to a northern magician; then maybe they really can pull a rabbit out of a hat, as explained by Jim Carrey in this video.

  180. Joshua Zelinsky Says:

    Half-way serious question: Would you take something like Roielle’s claim seriously if he suggested proving that P != PSPACE? What about P != Almost-PSPACE? Or what about P != PSPACE^X where X is some oracle that plausibly sits outside PSPACE (like, say, X accepts a triplet of numbers (m,n,b) if Ackermann(m,n)+b has an even number of distinct prime factors)? Any one of these looks difficult, but not as difficult as P != NP.

    Also, note that if you get two yellow Monopoly bills you are still getting ripped off, since yellows are $10s. $100s are beige, I think (and Wikipedia agrees with me).

  181. matt Says:

    Scott #135 (snidely?) remarks that it “goes without saying” that I said nothing about Deolalikar “while the affair was in progress”. It also goes without saying that the reason is that I don’t run a complexity theory blog and hence don’t publicly comment on most results or claimed results in complexity theory.

  182. Joshua Zelinsky Says:

    Matt #181, you could have easily commented on Scott’s blog entries at the time or many other locations that Deolalikar’s work was being discussed.

  183. Scott Says:

    matt #181: Look, I completely understand if you didn’t want to be one of the many people commenting on my or Dick Lipton’s or Timothy Gowers’s blogs during the Deolalaffair. What I found funny was simply that you did come to my comments section after it was over, to criticize me for not having come down hard enough on Deolalikar’s paper—what with my no-strings-attached $200K offer if the proof turned out to be right. Given that I was still (figuratively) covered with bruises and stitches, from the online beating dozens of enraged readers gave me over my arrogant, elitist bet against the proof of the century, I hope you’ll forgive me if I still remember your criticism as one of the defining comic moments in the history of this blog… (so thanks!)

  184. matt Says:

    Joshua: I could have, but I didn’t. Just like I didn’t comment on the lowering of the matrix multiplication exponent, or on conjoined trees oracle separations, or unentangled multiple provers, or any of a number of similar results.

  185. matt Says:

    Scott: if you want, you can regard my criticism of you as a form of moral support, saying to all those “enraged readers” that some people think you didn’t go far enough. :-)

  186. Scott Says:

    Joshua #180: Thanks for the clarification about monopoly money!

    In our current state of knowledge, I’d be just as incredulous about a P≠PSPACE proof as I’d be about a P≠NP proof. I don’t know about your other questions, though—I’d have to think about them more.

  187. Scott Says:

    matt #185: Thanks, then! Your moral support is eagerly welcomed. :-)

  188. Raoul Ohio Says:

    I thought Scott’s response to Deolalikar’s paper was about right.

    At the time, my guess was that the chances of the claim turning out to be correct were about E-6. And thus, once the press and online yahoos realized it was dross, they would pummel Deolalikar plenty. There is no need to pile on.

    The same thing is likely to happen if (or, when) D-Wave’s machines turn out to be chazere; Time Magazine, etc., will be on them like binary on an ALU. This is why Scott worries that the entire QC field will garner some disrepute.

    I estimate the likelihood of the D-Wave living up to its billing as about E-2, much better than the likelihood of the Deolalikar paper flying.

  189. Gil Kalai Says:

    Dear Seung Woo, many thanks for your good comment (#137). Certainly what you wrote, and also a lovely email exchange yesterday with Daniel Lidar, helped me understand some of the technical and conceptual issues. I think we can consider five questions about the D-Wave system or any other:

    1) Is the system genuinely quantum?

    2) Does the system demonstrate large-scale/long-range entanglement?

    3) Is the system classically simulable?

    4) Does the system involve quantum error correction/fault tolerance?

    5) Does the system demonstrate computational speed-up?

    All of these five points are somewhat vague and we can try to make them clearer.

    The way I see it:

    1) As you mentioned, question 1 is vague, but I don’t see a good reason not to regard D-Wave machines as genuinely quantum.

    2) I don’t see any evidence or claim for long-range/large-scale entanglement of the kind needed for quantum computing. Daniel claims nothing about entanglement beyond the 8-qubit entanglement that they report in some other paper. (Of course, this should also be checked critically.) It is neither clear to me why Umesh regards Boixo et al. as serious evidence for large-scale (=? long-range) entanglement, nor why Umesh claims that this has now been refuted by your paper.

    3) For me, not having the possibility of quantum speed-up is a very strong reason to think that what the system does is classically simulable. I don’t see a great difference in this respect between the quantum evolutions Boixo et al. consider (and classically simulate) and the ones considered by you guys. (But maybe I am missing something here.)

    4) and 5) Both Daniel Lidar and I agree that D-Wave machines cannot lead to speed-up without fault tolerance. (Daniel is working in this direction; my opinion is that quantum fault tolerance is always necessary for speed-up.)

    I must admit, Seung Woo, that when you get into the specifics, this debate on statistical tests for quantumness of D-Wave machines is more interesting than I initially thought, but overall the statistical claims and tests do not look so convincing.

  190. quax Says:

    Mike/Rahul #177/178: Thanks for your entrepreneurial inspiration! Will take your advice under consideration :-)

  191. William Hird Says:

    @Roielle Hugh: I’m betting $200 that Lilly Aaronson solves P vs. NP before you do. Are we on?

  192. Raoul Ohio Says:

    @Roielle Hugh: I’m betting $200 that Miley Cyrus solves P vs. NP before you do. Are we on?

  193. Seung Woo Shin Says:

    Gil #189: Dear Gil,

    Re 2), you may have misunderstood what Daniel Lidar said to you. Here are specific quotes from the Boixo et al. paper. They begin the paper by asking the following question in the introduction,

    In this work we report on computer simulations and experimental tests on a D-Wave One device in order to address central open questions about quantum annealers: is the device actually a quantum annealer, i.e. do the quantum effects observed on 8 and 16 qubits persist when scaling problems up to more than 100 qubits, or do short coherence times turn the device into a classical, thermal annealer?

    and end the paper with the following conclusion:

    Our experiments have demonstrated that quantum annealing with more than one hundred qubits takes place in the D-Wave One device, despite limited qubit coherence times.

    It is quite clear that they were claiming the existence of quantum effects at the scale of more than one hundred qubits.

    I very recently learned from personal communications with Daniel Lidar that they withdrew these claims in the final version of their paper, which is currently in press. However, in 2) you are making it sound like they never made such a claim and we refuted a claim that never existed. I hope you can see that you were misinformed.

    One aspect of our paper that has not been discussed here is that it leads to insights into the nature of the native problem of the D-Wave machine. Our results indicate that in the native problem the 8-qubit clusters tend to act like supernodes, and in a sense the effective problem size is the number of clusters, which is 16. This means that the 108-qubit native problem is relatively easy. As the number of qubits is scaled up to 512 and beyond (while leaving the cluster sizes at 8), the effective problem size will increase to 64 and beyond, and that is when we would expect the exponential search space to kick in, and our algorithm and the D-Wave machine to both perform poorly. Indeed, Matthias Troyer told us that the machine struggles to find the solution when the problem size scales up to 512 qubits.
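    To make the cluster arithmetic above concrete, here is a back-of-the-envelope sketch (illustrative Python, not from the paper; the cluster size of 8 comes from the comment, and the 128- and 512-qubit counts refer to the full chips in which the working qubits sit):

    ```python
    CLUSTER_SIZE = 8  # each unit cell on the chip contains 8 qubits

    def supernodes(physical_qubits, cluster_size=CLUSTER_SIZE):
        """Effective problem size if fixed-size clusters act as supernodes."""
        return physical_qubits // cluster_size

    for chip_qubits in (128, 512):
        k = supernodes(chip_qubits)
        # Each supernode behaves roughly like one binary variable, so the
        # effective search space is about 2**k configurations, not 2**n.
        print(f"{chip_qubits} qubits -> {k} supernodes -> search space ~2^{k}")
    ```

    On this rough picture, the jump from 16 to 64 supernodes is where one would expect both the classical model and the machine to start struggling, consistent with what Matthias Troyer reportedly observed at 512 qubits.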

    Scott, actually Gil’s mention of the small scale entanglement results by Lidar et al. makes me curious how closely you have examined these results. I saw you say in #54 that you think D-Wave probably exhibits small scale quantum entanglement. Do you understand the precise nature of the evidence that they provide?

  194. Scott Says:

    Seung Woo #193: I actually wasn’t thinking so much about Lidar et al.’s results, as about a bunch of other results on small-scale entanglement in the D-Wave device, which were summarized by Mohammad Amin in two talks that I attended. And no, I didn’t fully understand “the precise nature of the evidence”—it wasn’t anything as straightforward as a Bell violation or other entanglement witness. Rather it was that, if you accept some (weak?) auxiliary hypothesis about the functioning of the machine, then the only 2-8 qubit states compatible with their measurement results would be mixed states with some degree of entanglement. It didn’t seem like a smoking gun, but my inclination was to bend over backwards and grant D-Wave the benefit of the doubt on it, given that

    (a) the Schoelkopf group (and other groups) did demonstrate Bell/GHZ violations in 2-3 superconducting qubits, so we know it’s feasible to entangle such qubits with current technology, and

    (b) I hadn’t seen D-Wave’s claims about small-scale entanglement contested by anyone else, including by skeptics who I trust.

    So I’m curious: what do you (or other experts) think about the presence or absence of small-scale entanglement in the devices, and why?

  195. Rahul Says:

    You guys really ought to see the re-incarnation of D-Wave’s website. Some of the stuff on the new website is so dishonest it makes me want to throw stuff at my computer.

    They have this cute info-graphic juxtaposing the power consumption of their oh-so-amazing D-Wave Two chip (13 kW) versus a “traditional supercomputer” (2000 kW).

    The nerve of it all! It’s like making a Toyota look fuel-economical by citing the fuel consumption of an Abrams tank. Here we are debating whether the chip is even as fast as a laptop on one single narrow task, and their PR thinks it’s fair game to compare it against a supercomputer with thousands of cores.

    Seriously, whatever happened to apples-to-apples comparisons and common sense?

    There’s so much half-truthful & dishonest material on D-Wave’s new website that it’s worth a full blog post.

  196. Alexander Vlasov Says:

    Seung Woo #193: Maybe this is an overly optimistic opinion, but if the problem is indeed with the fixed clusters, could it be resolved with a more complex type of connection (so that it would no longer be possible to divide the whole network into such supernodes)?

  197. Fred Says:

    Rahul #195
    “Operates in an extreme environment”… well, if you ever need a computer to work outdoors in Antarctica, you know who to ask!

  198. Fred Says:

    Would it be possible to just model what quantum annealing would look like on a QC that exhibits imperfect entanglement?
    Can this be done by mixing the results of a perfect QC with the results of a purely classical computer?
    (This is probably very naive – my only reference here is the famous Feynman double-slit lesson, where he shows what happens when you shine a very dim light on the slits and end up with a mix of quantum interference and a classical non-interference pattern.)

  199. Raoul Ohio Says:


    Thanks for pointing out the D-Wave website. It is astounding that glossy crap marketing might actually sell some $10M contraptions.

    Here is a variation on the “who will buy the next one” game: How should D-Wave advertise? Maybe:

    (1) Pay Google to have that web site pop up whenever anyone searches on either “buy computer”, “cure cancer” or “Lamborghini”.

    (2) An ad in Scientific American, right after the ones for the “null physics” and “how water was created” books.

    (3) (re: Apple using a Charlie Chaplin avatar dancing around with a cane in a famous Superbowl ad) Have a D-Wave II AT the next Superbowl, predicting the outcome at halftime. The operator could be a Steve Jobs avatar, shooting black turtlenecks into the crowd with a “T-shirt cannon”.

  200. quax Says:

    R&R #195/199:

    When I saw the relaunched web site, I certainly thought it was almost purposefully designed to make some heads explode. As for truth in advertising: certainly much blog space has been wasted on it, and there is no end in sight.

    But hey, a Toyota really is much more fuel-economical than an Abrams tank. Better for the environment and world peace! :-)

  201. Douglas Knight Says:

    Scott, I’m not going to wait years; I’m going to blame you right now for being a shameless D-Wave promoter. Indeed, I am ashamed for not having criticized you over the course of your many years of absurd promotion. Though I have to admit that you seem to have reached about the best possible outcome in your purchase advice.

  202. quax Says:

    Douglas #201, nothing brings as much publicity as a good controversy. Could Scott have been a D-Wave plant all along?

    (Am new at this but I think it’s high time I get onto the conspiracy bandwagon.)


  203. Raoul Ohio Says:


    I can only give you a C- on the first draft.

    Before you turn in the final version, see if you can work in connections to some of the big 10:

    the WTC,
    the NWO,
    the Illuminati,
    Area 51,
    the Vatican bank,
    Lubos Motl placed third in the X-Games on skateboard,
    the suppression of Roielle Hugh’s work on P vs NP,
    and the plot against A-Rod.

  204. Zelah Says:

    Hi Everyone,
    On this blog it has been mentioned that one can simulate Stoquastic Hamiltonians

    Scott Says:
    Comment #108 February 9th, 2014 at 7:54 pm
    “For starters, they’re ignoring that with stoquastic Hamiltonians and finite temperature, everything should be efficiently simulable using quantum Monte Carlo. “

    However, I believe that this quote is contradicted by this paper:
    Monte Carlo simulation of stoquastic Hamiltonians

    I was wondering if there was a reference for efficiently simulating Stoquastic Hamiltonians!

    Thanks in advance


  205. Scott Says:

    Zelah #204: The MA-completeness result in that paper is for learning properties of the ground state of a stoquastic Hamiltonian. So it’s not relevant to discussing D-Wave, given that the D-Wave machine (or even an ideal quantum computer, for that matter) won’t in general reach its ground state either.

    (Related to that, it’s already an NP-complete problem to learn properties of the ground state of a completely classical, diagonal Hamiltonian! Which doesn’t imply that simulated annealing can’t simulate itself. :-) )

    Someone else can correct me if I’m wrong, but AFAIK there’s not a theoretical result showing that QMC will always work to simulate stoquastic Hamiltonians. However, it very often works in practice, and the very fact that Boixo et al. were able to simulate the D-Wave machine efficiently using QMC to get their correlation results, suggests that at the temperature and with the decoherence rate that they’re operating at, QMC indeed works to simulate what they’re doing.
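    For concreteness, the kind of quantum Monte Carlo being discussed can be sketched for the simplest stoquastic Hamiltonian, a 1D transverse-field Ising chain. This is a toy illustration of the standard Suzuki-Trotter path-integral method, not Boixo et al.’s actual code; all parameters are illustrative:

    ```python
    import math
    import random

    def qmc_tfim(n=8, m=16, beta=2.0, J=1.0, gamma=1.0, sweeps=200, seed=1):
        """Toy path-integral QMC for the 1D transverse-field Ising model
        H = -J sum_i Z_i Z_{i+1} - gamma sum_i X_i  (a stoquastic Hamiltonian).

        Suzuki-Trotter maps the quantum chain of n spins at inverse
        temperature beta onto a classical n x m lattice of +/-1 spins."""
        rng = random.Random(seed)
        k_sp = beta * J / m                                   # in-slice coupling
        k_tau = -0.5 * math.log(math.tanh(beta * gamma / m))  # between-slice coupling
        spins = [[rng.choice((-1, 1)) for _ in range(m)] for _ in range(n)]
        for _ in range(sweeps):
            for i in range(n):
                for t in range(m):
                    s = spins[i][t]
                    # local field from spatial and imaginary-time neighbors
                    h = (k_sp * (spins[(i - 1) % n][t] + spins[(i + 1) % n][t])
                         + k_tau * (spins[i][(t - 1) % m] + spins[i][(t + 1) % m]))
                    # Metropolis acceptance for flipping spin s
                    if rng.random() < math.exp(min(0.0, -2.0 * s * h)):
                        spins[i][t] = -s
        # magnetization per site, averaged over imaginary time
        return sum(sum(row) for row in spins) / (n * m)
    ```

    The point of the Hastings-Freedman examples mentioned below is precisely that local updates like the single-spin flips here can equilibrate exponentially slowly even when the spectral gap is only polynomially small.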

  206. aram Says:

    There is at least one example where QMC fails to simulate stoquastic Hamiltonians even where the gap is large:

  207. Bill Kaminsky Says:

    Scott #205 wrote:

    Someone else can correct me if I’m wrong, but AFAIK there’s not a theoretical result showing that QMC will always work to simulate stoquastic Hamiltonians.

    You’re right, Scott. No such theoretical result exists.

    And for good reason too! A theoretical result exists showing QMC won’t always work to simulate stoquastic Hamiltonians, even if you’re promised an only-polynomially shrinking gap! Namely:

    Obstructions To Classically Simulating The Quantum Adiabatic Algorithm

    M. B. Hastings, M. H. Freedman

    (Submitted on 22 Feb 2013 (v1), last revised 26 Feb 2013 (this version, v2))

    ABSTRACT: We consider the adiabatic quantum algorithm for systems with “no sign problem”, such as the transverse field Ising model, and analyze the equilibration time for quantum Monte Carlo (QMC) on these systems. We ask: if the spectral gap is only inverse polynomially small, will equilibration methods based on slowly changing the Hamiltonian parameters in the QMC simulation succeed in a polynomial time? We show that this is not true, by constructing counter-examples. Some examples are Hamiltonians where the space of configurations where the wavefunction has non-negligible amplitude has a nontrivial fundamental group, causing the space of trajectories in imaginary time to break into disconnected components, with only negligible probability outside these components… [a bunch of stuff about using topological methods to construct such examples]. We present some analytic results on equilibration times which may be of some independent interest in the theory of equilibration of Markov chains. Conversely, we show that a small spectral gap implies slow equilibration at low temperature for some initial conditions and for a natural choice of local QMC updates.

    Now let me note one of my many, many shamefully not-yet-written-up results. Namely, another thing quantum coherence buys you that the classical algorithm of “quantum” Monte Carlo can’t simulate is an automatic Grover-like square-root speedup of tunneling from one minimum to another by amplifying the “tunneling cross-section” of one minimum seen by another.

    Put as a catchy, rhyming slogan: “Quantum tunneling particles sorta know which way to go!”

    If the above made no sense, let me briefly elaborate… and as elaborating is much easier to derive quasi-exactly with a particle-in-a-potential problem than a spin system problem, please permit me to do so:

    Setup: Consider two spheres of unit radius in n-dimensions. The centers of the spheres are separated by an amount s > 1. The potential is 0 within the spheres and V > 0 outside them.

    Problem: You start in one of the spheres. You’re told the separation s, and then asked to find the other sphere.

    Classical Solution: The best (actual or simulated) classical annealing can do (including “quantum” Monte Carlo) is to move your particle to the spherical surface of radius $$s$$ centered at the center of your initial sphere. This surface has area $$O(s^{n-1}).$$ Also, there’s a $$e^{-V/kT}$$ Boltzmann factor dissuading the initial jump out of the potential well.

    Quantum Solution: Start the particle in the ground state of the sphere you’re in. Wait for quantum mechanics to do its magic and tunnel to the other sphere. If you calculate the tunneling rate using spherical Bessel functions, you will find that the dependence on the separation between the wells is no longer $$O(s^{n-1}),$$ but rather $$O(s^{(n-1)/2})!$$ This is the Grover-like square-root speedup that I mentioned above, and my reason for saying “Quantum particles sorta know which way to go!”

    (Yes, yes, there’s another s-dependent factor in the quantum case, specifically a $$e^{-2 \kappa s}$$ factor where $$\kappa = \frac{\sqrt{2mV - \hbar^2 n^2 / e^2}}{\hbar}$$… but tuning V appropriately makes it irrelevant… or, if you insist that V isn’t under your control, you could change the problem to be fair to blindly quantum walking in accord with Schrodinger evolution by removing the fact that you know the separation s. Once you do this and compare to blindly classically random walking, you’ll see an even bigger quantum-over-classical win despite the exponential factor above, because the classical random walker—once excited to the infinite plateau of finite V—might stumble off way, way out beyond s from the starting point. The quantum walker—at sufficiently low temperature of course—instead is energetically confined to the wells.)
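    Ignoring the exponential factor that Bill sets aside, the polynomial part of his claimed speedup is easy to tabulate (an illustrative sketch of the stated scalings only, not a simulation of the tunneling itself):

    ```python
    # Scalings from the two-sphere example: classical annealing must search a
    # shell of area ~ s**(n-1), while the claimed quantum tunneling time goes
    # as ~ s**((n-1)/2), a Grover-like square-root improvement.
    def classical_time(s, n):
        return s ** (n - 1)

    def quantum_time(s, n):
        return s ** ((n - 1) / 2)

    for n in (3, 11, 31):
        s = 4.0
        speedup = classical_time(s, n) / quantum_time(s, n)
        print(f"dimension n={n}: quantum advantage ~{speedup:.0f}x at s={s}")
    ```

    The ratio itself grows as $$s^{(n-1)/2}$$, which is why the advantage becomes dramatic in high dimension.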

  208. Bill Kaminsky Says:


    1) [Minor] Aram replied with the reference to Hastings & Freedman’s paper while I was typing up my comment. Sorry for redundancy.

    2) [Slightly more major] The sentence

    If you calculate the tunneling rate using spherical Bessel functions, you will find that the dependence on the separation between the wells is no longer $$O(s^{n-1}),$$ but rather $$O(s^{(n-1)/2})$$

    should have had the following addendum:

    If you calculate the tunneling rate using spherical Bessel functions, you will find that the dependence of the expected first passage time to the target well upon the separation between the wells is no longer $$O(s^{n-1}),$$ but rather $$O(s^{(n-1)/2})$$

    The tunneling rate (or classical search rate) of course goes as the reciprocal of those quantities.

    Sorry for any confusion.

  209. Scott Says:

    aram #206, Bill #207: Thanks!! I knew of the Freedman-Hastings result, of course, but hadn’t realized that it worked for stoquastic Hamiltonians and not just for arbitrary ones.

  210. Seung Woo Shin Says:

    Scott #194: Actually I should have said “e.g. Lidar et al.” there. The reason I asked you that question was simply that I myself haven’t had a chance to study these results in sufficient detail. I am trusting those results for the same reason as you, but I was just curious if you knew more.

  211. Greg Kuperberg Says:

    Gil – “Is the system genuinely quantum”

    The short version of the emerging answer is: Yes, D-Wave’s qubits are quantum, just not very quantum. Not nearly as quantum as the world’s best qubits, made by Wineland et al; and not nearly as quantum as the world’s best superconducting qubits, made by Schoelkopf, Martinis, et al. Your other questions could be worth some exploration. But it looks strange that this fairly ordinary effort should be the best funded experimental QC in the world, and the best documented in the popular press.

    It looks equally strange that — according to their own account — they would bypass BQP, the complexity class that was invented for QC, and go straight to NP.

  212. Gil Kalai Says:

    1) Dear Seung Woo Shin (#193; also #194, #210): Indeed, as I suggested, I think that it is quite important to look critically at the papers giving evidence for few-qubit entanglement in D-Wave systems. Of course, primarily D-Wave people themselves and researchers optimistic about D-Wave should look and double-look at the evidence skeptically, but also others.

    If D-Wave systems can get close to the entanglement demonstrated by the Schoelkopf group, this would be very, very remarkable! Therefore one should look carefully and suspiciously at the quality of the evidence. To understand what is going on, it can be more useful to look at statements which are already very surprising and very remarkable, and not necessarily at the most fantastic ones (where the claims themselves are rather vague, and the methodology for testing them rather dubious).

  213. Greg Kuperberg Says:

    Gil – “If D-wave systems can get close to entanglement demonstrated by the Schoelkopf group this would be very very remarkable”

    What they actually say about it is that the standard experimental groups with NSF funding — not Schoelkopf specifically but rather the entire cadre that includes his group — are setting themselves up to fail by solving the wrong problem. They clearly imply that long coherence times are not very important; in their technical presentations they hope to win without it.

    I think that at the technical level, they have been very forthright, and that their evidence does not need any special dose of skepticism. The issue is not double-checking evidence, it’s that their interpretation of the evidence sounds wrong to a lot of people.

  214. Gil Kalai Says:

    Dear Greg, we made similar comments at the same time. (Except that, as you know, I don’t like to mix the scientific questions with the funding questions.)

    I realize that question 1 has various subquestions:

    1.1 Are the qubits genuinely quantum and of what quality

    1.2 Are entanglements for a small number of qubits demonstrated

    1.3 Is the system evolution close (as D-Wave proponents think), in some sense, to a “genuinely quantum” evolution which is “in the neighborhood” of quantum annealing? (Or, in any case, is it best described by a genuinely quantum “master equation”?)

    I think that the situation for 1.1 is probably widely known, and was often described here in rough terms. A good place for looking skeptically is 1.2. Not because it is the most fantastic claim (quite the opposite), but because the claims are already quite implausible and yet close enough to reality that there can be standard ways to judge them.

    “It looks equally strange that — according to their own account — they would bypass BQP, the complexity class that was invented for QC, and go straight to NP.”

    Again, you prefer to move to the super-fantastic regime and to a side of the debate which looks like a political debate without much scientific content. Do they really say that they can efficiently solve NP-complete problems?

    In my opinion, talking about “D-Wave bypassing BQP” when you do not know if they can demonstrate entanglement between a few qubits is not a good D-wave skeptical avenue.

  215. Gil Kalai Says:

    Greg #213: OK, here we disagree. The point you raise, about what some people at D-Wave think of some efforts by others toward QC, is of utterly minor importance.

    “Their evidence does not need any special dose of skepticism.” **Every** claim about systems which demonstrate multi-qubit entanglement should be considered and studied very carefully and critically. But if (as Scott suggests in #210) D-Wave can achieve with low-quality qubits something previously achieved only with much higher-quality qubits, then yes, this calls for a special dose of skepticism.

  216. Greg Kuperberg Says:

    Gil – The issue is not funding in isolation. The issue is why such a huge number of questions are directed at this D-Wave work and not other groups that, by standard criteria, are doing superior work. I see reasons that this is happening, but not good reasons. Although, if anyone wants to think of them as underdogs, then for that question, yes, funding matters. As measured in dollars, they are overdogs, not underdogs.

    In 2007 Geordie Rose was more blunt about the NP-hard issue than now. He said that instead of finding the best solutions to NP-hard optimization problems, which (unfortunately in his view) was probably impossible, they would find approximately best solutions. He contradicted the PCP theorem in his sales pitch.

    Now they are more careful with their promises. Nonetheless, they have at times used the phrase “NP-hard” in their publicity, to the effect that they are a tough company that works on hard problems. They are not quite saying that they can solve NP-hard problems in polynomial time. But what they have done is turned NP-hardness upside down, from a reason to be skeptical to a reason to be impressed. They have also disparaged examples of problems in BQP, for instance Shor’s algorithm, as arcane questions that are of lesser practical value compared to the optimization problems that interest them. So, without quite saying that they plan to conquer NP, they favor it over BQP as the organizing complexity class of their algorithms.

    As for the quality of their qubits: If for instance you go by this new article by SSSV, or the older work by Troyer, it doesn’t look like they have achieved anything with second-string qubits other than what you might have expected them to achieve. (Second string as measured by coherence time.) The only unusual achievements of their work are their tremendous optimism and their very high level of funding.

  217. rrtucci Says:

    Greg Kuperberg said
    “But it looks strange that this fairly ordinary effort should be the best funded experimental QC in the world, and the best documented in the popular press.”

    Small quibble: they have the highest level of private funding, but not the highest level of private and state funding combined.

    Well, as Scott no doubt could have anticipated, I pin all the blame on him (and those rotten complexity theorists). Stop procrastinating, Scott! By now you should have started your own quantum computing company, “Anti-D-Wave”.

  218. Douglas Knight Says:

    rrtucci, who has the largest total funding?

  219. Greg Kuperberg Says:

    rrtucci – I don’t know whom you have in mind as having more combined funding than D-Wave; maybe IQC in Waterloo? Regardless, it’s not a parallel comparison, because IQC is much more diversified.

  220. rrtucci Says:

    Yes, Greg, after re-reading my statement, it sounds pretty ambiguous and half-baked. It’s hard to compare how much funding each group has received and from whom because money often goes to several groups at once and some funding (like NSA funding) is not reported publicly.

  221. Paul Says:

    @Greg (#216 and 211 above), to quote:

    They have also disparaged examples of problems in BQP, for instance Shor’s algorithm

    May I ask you something? How is a private for-profit company supposed to make money, both *legally* AND *morally*, by breaking most of currently employed encryption? “Private key recovery services”? Are you… serious?

    Of course, Government-affiliated entities (the University you work for?) can bypass at least the first half of that question (by definition), and we might disagree about the moral part as well…

    So, how was D-Wave supposed to build a company based on breaking things (like privacy) that most of us cherish?


    Paul B.

    Not speaking for anyone but myself!

  222. rrtucci Says:

    BQP = British Quantum Petroleum
    How about it Scott?

  223. Alexander Vlasov Says:

    Greg #216: What is the problem with D-Wave and PCP (probabilistically checkable proofs)?

  224. Douglas Knight Says:

    Paul, what are you suggesting? That D-Wave is really building a QC, but secretly, for the NSA, and is only putting on a show of being a fraud to convince people that QC is not possible at this time?

  225. Greg Kuperberg Says:

    Vlasov – “What is the problem with D-Wave and PCP (probabilistically checkable proofs)?”

    The PCP theorem implies that in many cases, an approximate solution to an NP-hard optimization problem is still NP-hard. (This is especially true of max clique, the problem solved by a D-Wave device.) In the old days, Geordie Rose said that NP-hard optimization problems look impossible, but only because theoretical computer scientists are perfectionists; he planned to find approximate solutions instead. In making this distinction, he thus contradicted the PCP theorem.

    In those days they also contradicted the Holevo-Nayak theorem by claiming to solve a Sudoku with only 16 qubits. (The theorem says that N qubits cannot store more than N classical bits.) They eventually stepped back from the contradiction by intimating that the 16-qubit device only solves little pieces of the Sudoku.
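    The Holevo-Nayak counting behind that contradiction is easy to check (a back-of-the-envelope sketch, not from the original comment):

    ```python
    import math

    # A full 9x9 Sudoku solution has 81 cells, each holding one of 9 digits.
    cells = 81
    bits_per_cell = math.log2(9)           # ~3.17 classical bits per cell
    bits_needed = cells * bits_per_cell    # ~257 classical bits in total
    print(round(bits_needed))

    # Holevo-Nayak: n qubits cannot reliably store more than n classical
    # bits, so 16 qubits fall short of a full Sudoku instance by a factor
    # of roughly 16.
    ```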

    More recently they have not contradicted the PCP theorem. Rather they have said that they are a tough company that works on very hard problems, including NP-hard problems.

  226. Greg Kuperberg Says:

    (Sorry I meant to say Alex, not Vlasov. Antiquated WordPress does not let you edit posts.)

  227. Alexander Vlasov Says:

    Greg, thank you, interesting. PCP was proved just at the time all that QC stuff was getting started.
    PS. It was a “compiled” Sudoku with 16 qubits (cf. “compiled” Shor’s algorithm, so D-Wave again has priority with some cool ideas; yet they did not think then to do Sudoku with two qubits).

  228. wolfgang Says:

    @Scott et al.

    I think you guys are simply missing the point.
    As Geordie Rose has explained on his blog, D-Wave is all about Love: “If people love what we build … we have succeeded and I sleep well”.
    Apparently you do not feel him, but this is your problem not his 8-)

  229. quax Says:

    Douglas Knight #224 here we go again: “putting on a show of being a fraud.”

    Really getting tired of people here insinuating that D-Wave is fraudulent.

    Criminal law is not rocket science. I would expect you to understand what constitutes fraud.

  230. Raoul Ohio Says:


    Most people use the word “fraud” as a descriptive word, not in some legal sense, and actually do not give a rat’s behind about what lawyers argue about.

  231. Rahul Says:


    So when D-Wave claims its device “operates in an extreme environment,” or compares its power consumption to a behemoth supercomputer, what, in your vocabulary, is the preferred word to characterize this situation, if not fraud?

  232. Alexander Vlasov Says:

    Raoul Ohio #230: the word “fraud” does not look justified in any context. I myself may disagree with some things about D-Wave, but that is not a reason to conclude that they are wrong, and especially not that they are intentionally wrong.

  233. rrtucci Says:

    “what’s the preferred word to characterize this situation, if not fraud?”

    Rahul-like exaggeration?

  234. Alexander Vlasov Says:

    rrtucci #233 – BTW, I remember your standard complaint that some people “forget” to cite your earlier results? I found my earlier results cited in a dozen D-Wave patents (of course, together with many other papers), even though I did not even know the authors. If they are honest on this particular point, why should I be biased against them?

  235. rrtucci Says:

    I agree with you Alex. I would not use the word fraud to characterize D-Wave either. Sometimes they are funny though. Like Geordie with his Gold Medal.

  236. quax Says:

    Raoul #230, hint: ‘Fraud’ both in the legal as well as ordinary sense implies *intentional* deception.

    When throwing around terms that imply criminal intent, some care should be practiced; e.g., when did you stop beating your wife?

  237. Scott Says:

    quax, rrtucci, Raoul Ohio: In his book On Bullshit, the philosopher Harry Frankfurt drew a sharp distinction between deliberate lying (i.e., fraud) and bullshitting, as follows.

      [Frankfurt] argues that bullshitters misrepresent themselves to their audience not as liars do, that is, by deliberately making false claims about what is true. In fact, bullshit need not be untrue at all.

      Rather, bullshitters seek to convey a certain impression of themselves without being concerned about whether anything at all is true. They quietly change the rules governing their end of the conversation so that claims about truth and falsity are irrelevant. Frankfurt concludes that although bullshit can take many innocent forms, excessive indulgence in it can eventually undermine the practitioner’s capacity to tell the truth in a way that lying does not. Liars at least acknowledge that it matters what is true. By virtue of this, Frankfurt writes, bullshit is a greater enemy of the truth than lies are.

    I don’t know why looking at D-Wave’s new homepage reminded me of that little book…

  238. Rahul Says:


    Good points. The problem with bullshitting is that to do it convincingly you must start believing in it to some extent.

    Keep doing it often enough and you may lose track of what’s bullshit.

  239. Rahul Says:

    @Alexander Vlasov:

    They may not be intentionally wrong. But aren’t they intentionally misleading? Or was all that hype & false advertising only accidentally misleading?

  240. Scott Says:

    Rahul #238: Yes—on Frankfurt’s account, those are more-or-less the defining properties of bullshit.

  241. Alexander Vlasov Says:

    Rahul #239, I am not a good source for information about D-Wave PR. But let’s use common sense: if they are indeed intentionally misleading, why have we still not seen complaints from the people who invested a lot of money in them?

  242. quax Says:

    Alexander Vlasov #241, to take it a step further: why do their customers happily go on record to push what Scott deems BS? (And before anybody chimes in to dismiss this as pointy-haired bosses, I think every person in this video has a PhD.)

  243. Greg Kuperberg Says:

    quax – Large, wealthy companies very often do invest in research as much for public relations as for any direct chance of success or profit. Or in some cases for internal intellectual stimulation. There is nothing scandalous about that. For example, Benoit Mandelbrot worked for IBM for most of his career. It was not because fractals were ever fundamental to IBM’s business model.

    Lockheed and Google have made clear that they each bought a D-Wave device largely or entirely for public relations. If they had actually needed it, they wouldn’t have given it away to academia, respectively to USC and to NASA Ames. The problem is that D-Wave’s specific style of public relations alienates many academics, even as it impresses many outsiders. Again, both Lockheed and Google are very large companies with fairly independent units; neither one is a monolith. (Just like universities.) I would not be surprised if at either company, the left hand has been wondering what the right hand is doing.

  244. quax Says:

    Greg #243, there is certainly marketing value there, but if you think everyone in this video is simply acting out a script, one designed to make Lockheed look cool, then they obviously failed their calling when going into science.

    Rather Oscar worthy performances, IMHO they really come across as believing in the potential of the machine.

  245. Greg Kuperberg Says:

    quax – Your first sentence is exactly right. Everyone in the linked video is acting out a script designed to make Lockheed and USC look cool. Or the script was imposed on some of them with post-editing. I certainly would not say that the people interviewed failed in their calling. On the contrary, they are all accomplished scientists and engineers. If they had wanted to, they could have made a much better video. But yes, the video as it stands is a failure of science and engineering.

  246. quax Says:

    Greg, this is of course about marketing, customer references are the gold standard whenever you want to sell a new IT product. It’ll only be a failure if it fails in this regard, and I doubt it will.

    I am quite certain that none of these accomplished scientists and engineers had their arms twisted to appear in this video, nor that words were put into their mouths. But feel free to contact them and procure statements that’ll prove me wrong.

  247. Sol Warda Says:

    Greg #245: Please read these two short paragraphs about why Lockheed bought one of their “useless” machines, and give us a brief opinion on their “story”. Thanks.

  248. Raoul Ohio Says:

    Alexander Vlasov #232: I do not accuse anyone (at D-Wave) of fraud. I do not understand the details well enough to form an informed opinion. My guess is that at worst they are making some wild marketing claims, much like those made every day in the diet-supplement industry.

    I was responding to quax going ALS (All Legal Scholar) and expecting everyone to “understand what constitutes fraud”.

  249. Greg Kuperberg Says:

    Sol Warda – Okay, a snap evaluation:

    “Ned Allen sent D-Wave a sample problem to run on its system. It was a 30-year-old chunk of code from an F-16 aircraft with an error that took Lockheed Martin’s best engineers several months to find. Just six weeks after sending it to D-Wave, the software error was identified.”

    I don’t know what to make of this at all. Daniel Lidar has said that he proposes to use D-Wave devices for code verification and validation. I have never heard him or anyone else say that it has actually happened. I haven’t seen anything like this in the technical literature. It would look very different from what I have seen, things like computing trivial Ramsey numbers. Then again, this is a quote from the D-Wave publicity department, the same crew that claimed to solve a Sudoku with only 16 qubits. It could be stone soup.

    “Lockheed Martin plans to expand research into its potential for solving challenges ranging from designing lifesaving new drugs to instantaneously debugging millions of lines of software code.”

    Lockheed Martin is an airplane company. If they also plan to cure cancer with the D-Wave device, then it looks just as I said: an experimental computer. I haven’t said “useless”. I have said experimental and I have said second-string (by standard measures of quality of qubits). That does not necessarily mean that it can’t be used for anything.

  250. fred Says:

    Right, Lockheed Martin, manufacturer of the infamous F-35 Joint Strike Fighter… 7 years behind schedule and at a cost of $400B to the US taxpayers (yes, 400 billion).

  251. Scott Says:

    quax #242, 244, 246:

    Of course D-Wave’s customers would give glowing testimonials! I’m sure Rolex’s customers also give glowing testimonials about how Rolex watches are more durable, reliable, etc. than cheaper watches, even if it’s utterly false. Whenever you have a luxury status good, it sets up a situation of collusion between buyer and seller—since after all, the reason the buyer bought the good in the first place was to look sophisticated, cutting-edge, etc., not to look naïve and gullible. (And since the buyer was never interested in the first place in what the product did, but only in what it was, there’s no reason for the buyer to have been surprised or disappointed by what they got.)

    Also, if I’ve learned anything in life, it’s that having a PhD does not in any way, shape, or form preclude being a pointy-haired boss. (I’ve actually seen both cases: those who grew pointy tufts at some point after finishing their doctorates, and those who managed to obtain PhDs despite already sporting nascent tufts at the time.)

    Finally, I have to say, your going along with the charade that “customer testimonials” have anything to do with the intrinsic quality of the D-Wave device does nothing to help your credibility on any other issue. You presumably understand perfectly well that, if it’s true that D-Wave’s engineers found an “F-16 code error” faster than Lockheed’s did (and of course, we have no idea of the details), then the reason is that D-Wave recognized the publicity value, spent manpower on it, etc., and/or that Lockheed had already simplified matters by boiling things down to a well-defined optimization problem. All the real scientific studies we have lead to the clear prediction that whatever they did, it would’ve been equally effective if a classical simulated annealing code had been blindly substituted for the D-Wave device. This sort of kindergarten recklessness about cause and effect—this stone-soupery, this bullshit in Frankfurt’s technical sense—might work on journalists, investors, potential customers, and parts of the public. But why are you even bothering with it on this blog?

  252. Sol Warda Says:

    Greg #249: Thanks for the reply. In a recent Google “Hangout”, Dr. T. Ronnow held a Q & A session about their recent joint paper finding no speedup in the DW2. A reporter asked him about this particular story, and Dr. Ronnow said he knew nothing about it, even though he had heard of it. The reporter informed him that he had actually talked to Dr. Daniel Lidar himself, over lunch, and that Dr. Lidar had confirmed it personally. In addition, I myself saw an interview, about two years ago, with Dr. Ned Allen of Lockheed, in one of those “online” journals, and Dr. Allen also confirmed the story. I suppose, short of writing these two men and asking for written confirmation, that is the closest we can come to believing it. Thanks again.

  253. quax Says:

    Greg #249, it’s rather myopic to think that Lockheed Martin is just about planes (safe to assume you don’t peruse the business sections much?). They’ve been evolving their business model for quite some time, diversifying into IT and investing in a broad portfolio of emerging technologies, including the life sciences.

  254. quax Says:

    Scott #251: “But why are you even bothering with it on this blog?”

    Because it seems to me that many posters on this blog, and maybe you as well, have a hard time accepting that others may define the ‘intrinsic value’ of the D-Wave along dimensions other than quantum speed-up. Although the latter remains the most crucial.

    Also, the question of who may be the next customer of D-Wave has been brought up in this thread. And whoever that’s going to be, the Lockheed Martin testimonial will be key to the next sale.

    It seems to me that you are arrested in a binary state on the matter. Assuming I am hearing you correctly, you essentially argue that if there’s no quantum speed-up *now*, then the only other purchasing reason is of the ‘vanity, prestige’ variety.

    And frankly, this is also BS.

    But I am also getting tired of the pointless back and forth; this thread is already pretty long and the arguments are stale. So this post will be my last on this matter for now.

  255. Darrell Burgan Says:

    Sorry, another layperson post here, but I’ve reread the Time article several times and read several other interesting links (thanks Scott), and I still have a question.

    Say the D-Wave comes up with an answer to an optimization problem. How does one verify that it actually is the global optimum?

  256. Scott Says:

    Darrell #255: One doesn’t, in general. NP-complete problems are not believed to have “short certificates of global optimality”—which is one of many ways in which they differ from, say, factoring, where as soon as you’ve found an answer you know you’ve found the unique correct one.

    For large, randomly-generated instances of D-Wave’s “QUBO” problem, all you can do (and all that is done) is basically to compare the best answer found by the D-Wave machine against the best answers found by classical algorithms like simulated annealing. And if you discover that they both eventually find solutions of roughly equal quality, then you can compare how much time they needed to find them. (The Boixo et al. papers have much more detail about the actual metrics that they used for comparison.)
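    To make the comparison protocol concrete, here is a minimal toy sketch of the classical baseline Scott describes: simulated annealing on a random Ising/QUBO instance. (Illustrative only: the instance here is a small complete graph with ±1 couplings rather than a Chimera-graph instance, and the actual benchmarks used heavily tuned codes and careful time-to-solution metrics.)

```python
import math
import random

def random_ising(n, seed=0):
    """Random +/-1 couplings on the complete graph K_n (a toy stand-in
    for the Chimera-graph instances used in the actual benchmarks)."""
    rng = random.Random(seed)
    return {(i, j): rng.choice([-1, 1]) for i in range(n) for j in range(i + 1, n)}

def energy(J, s):
    """Ising energy E(s) = sum_{i<j} J_ij s_i s_j."""
    return sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

def simulated_annealing(J, n, sweeps=2000, beta0=0.1, beta1=3.0, seed=1):
    """Metropolis single-spin-flip annealing with a linear inverse-temperature ramp."""
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    e = energy(J, s)
    best_e, best = e, list(s)
    for t in range(sweeps):
        beta = beta0 + (beta1 - beta0) * t / (sweeps - 1)
        for i in range(n):
            # Energy change from flipping spin i: delta E = -2 s_i sum_j J_ij s_j
            local = sum(J[(min(i, j), max(i, j))] * s[j] for j in range(n) if j != i)
            de = -2 * s[i] * local
            if de <= 0 or rng.random() < math.exp(-beta * de):
                s[i] = -s[i]
                e += de
                if e < best_e:
                    best_e, best = e, list(s)
    return best_e, best

J = random_ising(16)
e, s = simulated_annealing(J, 16)
print(e)  # lowest energy found; the benchmark compares this value, and the time to reach it, across solvers
```

    The point is only the shape of the protocol: run each heuristic, record the best energy found, and compare both solution quality and running time, as in the Boixo et al. comparisons.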

  257. Darrell Burgan Says:

    Scott #256: interesting. If so, how would one know if a quantum computer that really has a large number of actual qubits is working properly? I guess you’d have to have a quantum computer of a different make/model run the same problem and see if it comes up with the same answer?

    Also, does this have a relationship to the famous unsolved “P != NP” problem?

  258. Sam Hopkins Says:

    [ NP-complete problems are not believed to have “short certificates of global optimality” ] – probably this should say NP-hard, right?

  259. Scott Says:

    Darrell #257: If you go purely by input/output behavior (and not by the physical internals), then probably the clearest, most obvious, and most famous test for QC is whether you can quickly factor enormous numbers. As I mentioned in my last comment, factoring is free of most of the conceptual problems that adiabatic optimization suffers from:

    (1) There’s a unique right answer.

    (2) The right answer can be recognized efficiently.

    (3) The quantum factoring algorithm (Shor’s algorithm) looks totally different from known classical factoring algorithms, so there’s no serious possibility of setting out to implement Shor’s algorithm but actually implementing one of the classical algorithms.

    (4) We know that Shor’s algorithm runs in polynomial time—always—and we know that this represents an exponential speedup over any currently-known classical algorithm.

    (5) It would be theoretically unsurprising if it represented an exponential speedup over any possible classical algorithm. (Though proving that would require proving P≠NP, as a first step.)

    (6) We know exactly how to generate random instances of factoring, which are hard enough for cryptographic applications: namely, pick two huge random primes uniformly at random, then multiply them!
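    Point (6) can be sketched in a few lines of code. This is a hedged illustration, not cryptographic-grade key generation: it draws two random 128-bit primes using a Miller–Rabin probabilistic primality test (round counts and seeds are arbitrary choices here) and multiplies them.

```python
import random

def is_probable_prime(n, rounds=20, rng=random.Random(0)):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

def random_prime(bits, rng):
    """Sample random odd numbers of the given size until one is (probably) prime."""
    while True:
        p = rng.getrandbits(bits) | (1 << (bits - 1)) | 1  # force size and oddness
        if is_probable_prime(p):
            return p

rng = random.Random(42)
p, q = random_prime(128, rng), random_prime(128, rng)
N = p * q  # the hard instance: recovering p and q from N is the challenge
assert p * q == N and 1 < p < N  # recognizing a claimed factorization is easy (point 2)
print(N.bit_length())
```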

    So, you could think of Shor’s factoring algorithm as the ‘gold standard’ in terms of verifiability of QC performance. But OK, what if someone claims to have a working QC, but a non-universal one that can’t run Shor’s algorithm or anything like it? In that case, the issues become much more complicated—and understanding that is key to understanding why people are still arguing about D-Wave.

    With D-Wave’s adiabatic / quantum annealing approach, none of the six properties I listed above hold. There are many possible solutions, you don’t know when you’ve found the best one, the adiabatic algorithm “degrades gracefully” into something like classical simulated annealing (so that you could easily shoot for the first but achieve the second), we don’t know theoretically or empirically if the adiabatic algorithm ever gets a “real-world” speedup over simulated annealing, and if it does, then we don’t know for which family of instances or how to generate those instances. And every point in the last sentence has contributed to the confusion and debate!

    Thus, to make an honest case for speedup from a quantum annealing device, ideally you would

    (1) understand the physics of your device well enough to show that large-scale quantum effects are present,

    (2) make a convincing case that you’re outperforming what could be done with state-of-the-art classical codes (including simulated annealing), and

    (3) give physics arguments that connect (1) and (2), showing that the speedup is because of the quantum effects.

    So far, I’d say that D-Wave has made partial, disputed progress on (1), and no honest progress on (2) or (3).

    For other non-universal QC proposals, such as BosonSampling, there are other issues that arise with verifying a claimed device—in fact I’ll be giving a talk about exactly that next week, at the Quantum Games and Protocols workshop at UC Berkeley.

    As for the relation to P≠NP: well, that question is sort of in the background of all of theoretical computer science! In this discussion, the place where it makes an appearance is when we ask how certain we can be that there’s no efficient classical algorithm to simulate such-and-such quantum algorithm. For most known quantum algorithms (such as Shor’s), we couldn’t be completely certain without proving P≠NP as a first step. Still, if one could use a QC to clearly, decisively outperform any classical algorithm that’s been discovered for the past half-century, most people would say that’s good enough!

  260. Scott Says:

    Sam #258: Yeah, technically I should have said “NP optimization problems.” Of course once you convert your optimization problem into a decision problem, the very definition of NP implies that you can efficiently tell whether or not the decision criterion is satisfied. However, if you could efficiently test whether a solution to the optimization problem was globally optimal, then NP would equal coNP.
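    A toy illustration of this asymmetry, using max-cut as the NP optimization problem: checking that a proposed cut achieves value at least k is an easy NP-style verification, while certifying that no better cut exists has no known short certificate, so the sketch below (a made-up 4-node example) has to fall back on brute force over all cuts.

```python
from itertools import product

# Toy weighted graph, edges as {(u, v): weight}
G = {(0, 1): 3, (0, 2): 1, (1, 2): 2, (1, 3): 4, (2, 3): 1}

def cut_value(G, side):
    """Total weight of edges crossing the cut given by side: node -> 0/1."""
    return sum(w for (u, v), w in G.items() if side[u] != side[v])

def verify_decision(G, side, k):
    """NP-style check: the cut itself certifies 'max cut >= k' in poly time."""
    return cut_value(G, side) >= k

def verify_optimal(G, side):
    """Certifying global optimality: no short certificate is known,
    so here we resort to exhaustive search over all 2^n cuts."""
    n = 1 + max(max(u, v) for u, v in G)
    best = max(cut_value(G, dict(enumerate(bits)))
               for bits in product([0, 1], repeat=n))
    return cut_value(G, side) == best

side = {0: 0, 1: 1, 2: 0, 3: 0}   # cut value 3 + 2 + 4 = 9
print(verify_decision(G, side, 7))  # True, checked in polynomial time
print(verify_optimal(G, side))      # True, but only via exponential search
```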

  261. Greg Kuperberg Says:

    Sol Warda – “I suppose, short of writing these two men and asking them for a written confirmation, that is the closest we can come to believing it.”

    Yes, it would be nice if Ned Allen or Daniel Lidar or anyone else wrote an arXiv paper explaining how a D-Wave device was used to find a bug in Lockheed-Martin’s software. If indeed it was the device, and not merely people at the company, that found this bug. Because it is a huge leap from computing that R(n,2) = n for n ≤ 8 (*) to debugging code.

    Or as you say, a clear written account posted to this blog (say) would be a great start. When people post comments on the Web, then at least there is a direct record of what is actually said.

    (*) Theorem: If there are n people at a party, then either all n are friends, or at least 2 are strangers.
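    For what it’s worth, the triviality of (*) can even be checked mechanically: the claim is a tautology over all red/blue ("friend"/"stranger") colorings of the edges of K_n, which is exactly why computing R(n,2) is such a long way from debugging code. A throwaway brute-force check:

```python
from itertools import combinations, product

def ramsey_n_2_holds(n):
    """Check R(n,2) <= n directly: every 2-coloring of K_n's edges contains
    an all-red K_n or a blue edge -- i.e. either every edge is red or some
    edge is blue, which is a tautology."""
    edges = list(combinations(range(n), 2))
    for coloring in product(["red", "blue"], repeat=len(edges)):
        all_red = all(c == "red" for c in coloring)
        some_blue = any(c == "blue" for c in coloring)
        if not (all_red or some_blue):
            return False  # never reached
    return True

print(all(ramsey_n_2_holds(n) for n in range(2, 6)))  # True
```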

  262. Rahul Says:

    Scott @259:

    Among non-universal QC proposals, there’s a key difference between BosonSampling and the rest, say adiabatic / quantum annealing: we so far have no practical use for BosonSampling, as opposed to the others.

    Right? Or are there practical uses to which Boson Sampling could be put to use?

  263. Scott Says:

    Rahul #262: That’s right, there are currently no known practical uses of BosonSampling, other than to demonstrate quantum speedup. On the other hand, if you added just a single resource to a BosonSampling device—namely, feedforward measurements—then the 2001 result of Knill, Laflamme, and Milburn (KLM) shows that it would get “upgraded” to a universal QC, which of course would have practical uses. So, when someone asks for applications of BosonSampling, the main one I can give is that it serves as an obvious stepping-stone to a KLM QC.

  264. Rahul Says:

    Scott #263:

    Thanks! I had no clue about a KLM QC. That’s good to know. Is that (i.e. adding feedforward measurements) a trivial step if & when you have a scalable Boson Sampling device ready? Or not so easy?

    Can you elaborate more on the promises / challenges of attaining universal QC via the BS + KLM-QC route?

  265. Rahul Says:

    My suspicion is that the part about the “D-Wave device was used to find a bug in Lockheed-Martin’s software” is entirely bullshit.

    And that’s a reason we’ve never heard many details about it, nor will we. Some PR fellow went overboard & the tech guys now refuse to stand behind the most outrageous parts of it.

  266. Alexander Vlasov Says:

    Greg #261, my guess: it is nothing special. Finding some bugs in old FORTRAN-style programs with lots of conditional GOTOs was rewritten as SAT.

  267. LK2 Says:

    Hi Scott, as a physicist deeply interested in theoretical computational complexity, I’d like to ask you, as an expert in the field, what you think about the statistical-physics results obtained for some classical computer-science problems, like kSAT.
    For example:

    Sorry for going completely off-topic.
    Thanks for any comment.

    P.S.: One month ago I attended a talk by a D-Wave experimental physicist who presented the “Two” model. I tried to bring the discussion around to the latest no-speedup results, but the simple answer was: we are just at the beginning and we already do quite well; give us time, like people gave time to silicon computers.
    No comment about local/global entanglement.

  268. fred Says:


    That thing is such a load of crap.

    1) Spend years and millions on some vague promise of building the very first functional QC.

    2) When it’s clear it’s getting nowhere, turn the whole thing into an inscrutable black box and spend what’s left of the investment on making it look as cool as possible and hyping the shit out of it (lots of hype potential since it’s all uncharted territory).

    3) hope to recoup some of the investment, even if it means discrediting QC forever and wasting everybody’s time in the process.

    The research community will just have to move on ASAP and stop wasting time on that D-Wave BS, it’s getting ridiculous.

  269. Scott Says:

    LK2 #267: Phase transitions in random kSAT is a huge field. What do I think about it? I think it’s fascinating—in fact, phase transitions were one of the first topics I got interested in, back when I was an undergrad at Cornell in the late 90s working with Bart Selman (one of the first people to discover the kSAT phase transitions). I still think there might be clues to mine there about computational complexity in general.

    As you might know, though, it’s been challenging to link the study of phase transitions to the rest of theoretical computer science, for several reasons: e.g., the reliance on non-rigorous statistical physics methods; and the fact that random kSAT might simply be much, much easier than worst-case SAT, even right at the phase transition (the dramatic success of survey propagation provided some indications for that).

    On the other hand, phase transitions have been a bridge for two decades now for highly-nontrivial connections between statistical physics and theoretical computer science. A short summary is that, whenever computer scientists and mathematicians have been able to prove something rigorous (e.g., about the structure of the solution space near the phase transition), it’s matched the conjectures that the physicists made using their non-rigorous methods—like, every single time! One could ask: why even bother with rigor, then? Well, one reason is simply that the physics methods don’t give you conjectures about everything you’d like: e.g., they can’t address the possibility that some amazing new algorithm will be discovered that outperforms the known algorithms (as happened with survey propagation). In computer science, the main tools we have right now to address that possibility are completeness and reductions—but alas, completeness results have been stubbornly elusive for random kSAT.

    Anyway, that’s probably enough uninformed rambling for now—a real expert would be able to go deeper / correct any mistakes I made.

  270. LK2 Says:

    Scott, thank you very much!
    I did not know about survey propagation: I’ll look into it.
    Sorry if I do not disclose the name of the d-wave physicist that gave the talk.

  271. Greg Kuperberg Says:

    LK2 – I’d like to expand on a couple of aspects of Scott’s answer.

    First, the original motivation of 3-SAT or k-SAT was its worst cases, not its random cases. Worst-case 3-SAT is NP-complete. Random-case 3-SAT is not thought to be NP-hard under any reasonable distribution; it may or may not be hard by other standards, depending on the distribution chosen. (After all, you could encode random-case discrete log or factoring into k-SAT and call that distribution “random” k-SAT.) So, while random-case k-SAT is a great topic in statistical physics or probability theory or what have you, it is as far as anyone knows simply a change of topic from NP-hardness. Indeed, to the extent that anyone finds structure in random k-SAT such as phases, that is solid evidence that it isn’t computational hardness, because that is always about lack of structure or indecipherable structure. You might only be able to hope for computational hardness at a phase transition point; even then it is not clear that it really is hard.

    A contrasting case is the random-case hardness of the permanent or certain questions related to it. It is enough to look at the permanent of a random matrix over Z/3 (i.e., the permanent mod 3) or Z/4. There is no statistical physics model for this problem, precisely because people have proven that the random case is as hard as the worst case.

    The idea that every theorem in mathematics confirms a physics conjecture reminds me of the joke that economists have predicted 9 of the 5 last recessions. The cream of the crop of semi-rigorous physics is indeed great stuff. But there are also a lot of erroneous conjectures in theoretical physics, and a lot of jumping to conclusions. Precisely because those conjectures are less reliable, you don’t hear about them as much. The reputation of theoretical physics as black magic has truth to it, but it is also buttressed by selection bias. It’s the same business in number theory (and sometimes TCS itself), except without the gloss that it’s black-magic physics. If someone one day proves that the density of twin primes is Theta(1/log(n)^2), then no one will say, “Wow, the arithmetic physicists already knew that without bothering with rigorous proof”, even though it could have been said.

  272. Scott Says:


    I’ve been reflecting on what quax did—pivoting seamlessly from touting a customer testimonial about the usefulness of the D-Wave device, to (when challenged about it) saying that, well, of course the device isn’t going to be “useful” yet in the vulgar sense of solving anything faster, but there are lots of other good reasons to buy it.

    It reminds me of nothing more than the attitude of many religious believers. Namely: it’s vulgar, naïve, and simplistic to ask for “evidence” of God’s existence, in the conventional sense. God is not some bearded man in the sky granting wishes; rather, He’s the metaphysical precondition of the existence of reality, the very ground of Being. God is not Himself seen; rather, He is that through which we see everything else. God is the … wait a sec, am I about to get a promotion at work? Hallelujah, God answered my prayer!! God is real!!! Take that, skeptics!!! No, wait … they’re giving the promotion to Jones instead?? OK then, back to the metaphysical precondition of existence.

  273. rrtucci Says:

    Scott, awesome comment about the connection of phase transitions and complexity theory. If Complexity Theory and Physics were a courting couple, all that Complexity Theory would need to do to make Physics fall head over heels for it would be to mention that it’s very interested in phase transitions.

  274. Scott Says:

    rrtucci #273: LOL, that’s all it would’ve taken? In that case, complexity theory and physics could’ve started dating decades ago… no, wait, they did…

  275. quax Says:

    Scott #272, really meant to stay away but you’ve proven me wrong that there are no new arguments to be heard.

    Kudos :-)

    Comparing the search of evidence for D-Wave quantum speed-up to trying to prove the existence of god is one I haven’t heard before.

    I can see it now, the Church of D-Wave. This could really work, after all Scientology is totally lame these days, shedding followers left and right, people that’ll probably be really attracted to something as edgy and Sci-Fi as quantum computing (speed-up be damned, as a matter of faith).

  276. Nick Read Says:

    Hey Scott and Greg,

    I’m rather interested in hearing more about what’s known about the average-case complexity of NP-hard optimization problems. (I’ll ignore the slights to my physicist’s pride!)

    1) Are there any such problems in which the average case is known (for some distribution, or for some “reasonable” distribution) to be as hard as the worst case? (My interest is in the ground state of an Ising spin glass, which is the same as the maximum weighted cut of a weighted graph, with weights allowed to be both positive and negative. One’s impression is that in the typical instance, for any iid distribution of weights (with at least some fraction of them positive) on non-planar graphs, this is always hard.)

    2) Greg, you didn’t provide a reference on the average-case complexity of the mod-3 permanent being the same as the worst case!

    3) Also, what do you think of work by theorists such as Boris Altshuler and computational physicists such as Peter Young and others whose work suggests (non-rigorously) that the quantum adiabatic algorithm will not be able to give speed-up on NP-hard optimization problems?


  277. LK2 Says:

    Greg #271, thanks for expanding the topic: very clear.
    The permanent example vs kSAT really clarified the issue and the limits of the statistical approach. Very interesting.

  278. Gil Kalai Says:

    Are there reasons to believe that approximate BosonSampling (aka approximate permanent sampling) for a random Gaussian matrix is as hard as it is for an arbitrary matrix, or that a classical computer’s ability to solve this problem would have drastic complexity-theoretic consequences? I ask here because it “feels” like an average-case problem rather than a worst-case one (but maybe there is some self-reducibility I am not aware of).

  279. Greg Kuperberg Says:

    Nick – Answering out of order:

    (2) The first version of the result is due to Lipton. The basic idea is that if you have an oracle that can compute most permanents (say mod p for some prime), then you can exploit it to learn all permanents. The reason is that the permanent is a polynomial. In the simplest version, assume that p is a prime larger than the size of the matrix and you want the permanent mod p of a matrix M. Then you can randomly choose a line passing through M in the space of matrices, and ask the oracle for all permanents for matrices on the line. Let us say that the oracle at least knows when it does not know the answer. Then if you have enough permanents, you can use Lagrange interpolation to find the permanent of M.

    Since then the method has been refined using curves through M rather than lines, etc, and using error-correction rather than just interpolation, to allow weaker oracles that are correct less often or that do not know when they are wrong.
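    A small self-contained sketch of the basic line version of this argument, with a naive permanent computation standing in for the oracle (the prime, matrix, and seed are all illustrative choices):

```python
import random
from itertools import permutations

P = 101  # a prime well above the matrix dimension, as the argument requires

def perm_mod(M, p=P):
    """Naive permanent mod p (fine for tiny n); plays the role of the oracle."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod = prod * M[i][sigma[i]] % p
        total = (total + prod) % p
    return total

def lagrange_at_zero(points, p=P):
    """Lagrange-interpolate the unique low-degree polynomial through the
    (t, y) pairs and evaluate it at t = 0, all mod p."""
    result = 0
    for i, (ti, yi) in enumerate(points):
        num, den = 1, 1
        for j, (tj, _) in enumerate(points):
            if i != j:
                num = num * (-tj) % p
                den = den * (ti - tj) % p
        result = (result + yi * num * pow(den, p - 2, p)) % p
    return result

def perm_via_random_line(M, p=P, rng=random.Random(0)):
    """Reduce perm(M) to permanents of matrices on a random line through M:
    A(t) = M + t*D has a permanent that is a degree-n polynomial in t, so
    n+1 oracle answers at t = 1..n+1 determine perm(M) = poly(0)."""
    n = len(M)
    D = [[rng.randrange(p) for _ in range(n)] for _ in range(n)]
    points = []
    for t in range(1, n + 2):
        A = [[(M[i][j] + t * D[i][j]) % p for j in range(n)] for i in range(n)]
        points.append((t, perm_mod(A, p)))  # each individual A is uniformly random
    return lagrange_at_zero(points, p)

M = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(perm_via_random_line(M), perm_mod(M))  # the two agree
```

    Each matrix queried on the line is individually uniformly random, which is exactly why an oracle that handles most matrices suffices to recover the permanent of the worst-case M.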

    Sorry, I do not have a great reference at my fingertips right now. Scott is better than I am at that.

    (1) I read somewhere, I am no longer sure where, that people do not expect random self-reducibility for NP-hard problems, by contrast with #P as demonstrated by Lipton et al.

    (3) There are different refinements of NP-hardness, which in general allows polynomial fudge factors. The very hardest version is the conjecture that circuit satisfiability (for a polynomial-sized circuit or even quasi-linear or maybe even linear) on n inputs should take Omega(2^n) classical computation time, basically that you can’t do better than exhaustive search. The quantum version would be Omega(2^{n/2}) in light of Grover’s algorithm. For these hard cases, I see no pressing need for a separate and more heuristic analysis of adiabatic algorithms. After all, an adiabatic algorithm can be simulated by a sequential algorithm.

    But, a more rigorous derivation that certain routine types of adiabatic algorithms can’t work for these “very NP-hard” problems, not just that you don’t expect them to work, could be a good result. I do not know much about Altshuler-Young specifically.

  280. Scott Says:

    Gil #278: Alas, no. We don’t have any result currently showing that approximate BS with Haar-random unitaries is as hard as general approximate BS. Haar-randomness was a particular choice that we made, for several good reasons: it’s easy to achieve using random beamsplitter networks, the submatrices have a clean, simple form (they’re iid Gaussian), and most importantly, it’s easy to “smuggle in” a random submatrix so that an adversary can’t tell where you put it. But we can’t rule out the possibility that there’s some other distribution over unitaries that would give rise to a harder approximate BS problem — or at any rate, a problem that it’s easier to *argue* is hard. That’s a wonderful research problem!

  281. Greg Kuperberg Says:

    Gil – The message that I take from Scott’s and Alex’s work on BosonSampling is that they hope that random self-reducibility of the permanent can be pushed further for Gaussian-random matrices. Of course I am speaking for Scott here.

  282. Scott Says:

    Nick #276:

    (1) No, it’s not yet known if there’s any NP-hard problem with worst-case/average-case equivalence, where “average-case” means with respect to some efficiently-samplable distribution. This has been a major open problem for 30+ years. (On the other hand, if you allow a non-efficiently-samplable distribution, then every problem is worst-case/average-case equivalent with respect to the so-called universal prior.) For much, much more about this topic, see the excellent survey by Bogdanov and Trevisan.

    (2) I don’t actually think it’s known that the permanent mod 3 is worst-case/average-case equivalent. On the other hand, permanent mod p is certainly worst-case/average-case equivalent, where p is any prime sufficiently greater than the matrix dimension n. For details, see these lecture notes of Luca Trevisan (or my BosonSampling lecture notes from Rio).

    (3) I don’t really understand Altshuler’s and Young’s work well enough to comment on it. I’d say that the situation with the adiabatic algorithm remains extremely murky: we do have theoretical examples where it outperforms simulated annealing, but we also have theoretical examples where simulated annealing outperforms it, and we have no idea whether either such separation is ever particularly important in practice.

  283. Scott Says:

    Greg #281: Yes, for the Haar-random/Gaussian case, we can at least give a simple, plausible, “purely-classical” hardness conjecture under which everything works out the way we want.

    For other matrix ensembles, there might actually be ways for an adversary to identify the submatrix whose permanent you care about (in which case, our reduction argument would completely break down). Or even if not, you’d probably get some weird, complicated collection of submatrices about which it’s hard even to make a conjecture. At least, that’s what we found when we tried.

  284. Joshua Zelinsky Says:

    Question: Can you use the Chinese Remainder Theorem to get that for most primes p the permanent mod p is worst-case/average-case equivalent? The rough idea here is that you can build up a large effective size by working mod p_1p_2…p_k, so that it ends up being much larger than n. I’m not sure this works at all, but it suggests, at least naively, that for a random prime p we’d expect worst-case/average-case equivalence to be likely.

  285. Gil Kalai Says:

    Scott, Greg (#280, #281), on the positive side: perhaps we can deduce, from a classical computer achieving approximate (or exact) BosonSampling for Gaussian input, some average-case ability of classical computers which, while not of hierarchy-collapse quality, is still of some quality. (Something like average-case 3-SAT, but higher up in the hierarchy.)

  286. Rahul Says:

    Say we got a BosonSampling device working with large enough n, say n = 20.

    Is that an achievement because it’d be a quantum device that solves at least one specific problem faster than we can simulate it classically? Or is there a more nuanced interpretation?

    And are there other known quantum processes / devices for which no faster classical simulation exists?

  287. Scott Says:

    Rahul #286: The argument I’d make is two-part.

    (1) If you got it working with 20 photons (so that any known classical simulation would need millions of operations), then it would become very hard to imagine a fundamental physical principle that would prevent scaling to 200, 2000, or any larger number of photons. (Arguably, even harder to imagine than it is now…)

    (2) If you can scale it to any number of photons, then modulo our complexity conjectures, the Extended Church-Turing Thesis is false.

  288. Greg Kuperberg Says:

I see that I made a mistake concerning the self-reducibility of the permanent mod 3. However, as best I understand it, and as Luca Trevisan’s notes suggest, the self-reducibility argument only depends on the cardinality of the finite field of coefficients, and not necessarily on the use of prime numbers. So we can look at the permanents of n x n matrices defined over F_{3^k}, where both the choice of the finite field and the size of the matrix are part of the input. You can say then that if you are smart enough to compute most permanents over F_{3^k} for k large enough compared to n, then you are smart enough to compute all permanents over F_3 (and F_{3^k} to boot).

  289. Greg Kuperberg Says:

    I would also expect that the permanent of a random matrix over Z/3 is as bleak as if it were reducible to the hardest case, in the sense that no one has any statistical model or algorithm to suggest that the random case is any easier than the hardest case. Heck, even if the distribution on the coefficients in Z/3 is biased, as long as the matrix coefficients are i.i.d., then probably for large matrices that’s already about as hard as the uniformly random case for linearly smaller matrices.
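    As an illustrative toy experiment in this spirit (entirely a sketch of my own; no claim is made about what the true distribution looks like), one can tally the permanent mod 3 of small random matrices with i.i.d. entries, computing each permanent by Ryser’s formula with the arithmetic carried out mod 3:

```python
import random

def permanent_mod3(A):
    """Ryser's formula for the permanent, with arithmetic done mod 3."""
    n = len(A)
    total = 0
    for mask in range(1, 1 << n):  # nonempty column subsets S
        rowprod = 1
        for i in range(n):
            rowprod = rowprod * sum(A[i][j] for j in range(n) if mask >> j & 1) % 3
        sign = -1 if bin(mask).count("1") % 2 else 1
        total += sign * rowprod
    return (-1) ** n * total % 3

random.seed(0)
n, trials = 5, 2000
counts = [0, 0, 0]
for _ in range(trials):
    A = [[random.randrange(3) for _ in range(n)] for _ in range(n)]
    counts[permanent_mod3(A)] += 1
print(counts)  # tallies of permanent ≡ 0, 1, 2 (mod 3)
```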

  290. Rahul Says:

    Scott #287:

Naive question: Re: (2), why isn’t something like, say, measuring the electron density of a complex molecule similar to Boson Sampling? You could select the initial conditions (i.e. molecule identity) & then measure the electron density fairly accurately, but predicting the same measurement via a full classical computation is impossibly slow. The bigger the molecule the worse the problem.

    Or say setting up a complex fluid flow problem & trying to simulate it classically versus treating it as a device & measuring the pressure & velocity fields using probes. Measurement seems faster. At some intermediate scale you have a problem that’s computationally verifiable yet faster via measurement than classical computation.

Are these situations different from a Boson Sampling device? Why does Boson Sampling disprove the Extended Church-Turing Thesis but not these ideas?

  291. Alexander Vlasov Says:

Scott #272, I was aware of a similar and IMHO more justified example: a universal quantum computer.

  292. Bill Kaminsky Says:

    Short Version:

I have a question on this topic of the average-case hardness of NP-complete problems. Would it be surprising in any complexity-theoretic way if it turned out that:

    1) on the one hand, no polynomial-time reduction exists from an arbitrary instance of k-SAT to a (possibly polynomially-sized set of) Uniform Random k-SAT instance(s) [i.e., Uniform Random k-SAT isn't average-case NP-complete], but

    2) on the other hand, solving almost any instance of Uniform Random k-SAT at certain clause-to-bits ratios nonetheless takes exponential time?

    Longer Version:

    I disagree with Scott’s intuition expressed in Comment #272, specifically:

    Indeed, to the extent that anyone finds structure in random k-SAT such as phases, that is solid evidence that it isn’t computational hardness, because that is always about lack of structure or indecipherable structure. You might only be able to hope for computational hardness at a phase transition point; even then it is not clear that it really is hard.

    I disagree because it seems to me that at least one “phase” that non-rigorous statistical physicists and rigorous probabilist mathematicians have discovered in Uniform Random k-SAT is pretty much the epitome of Scott’s phrase “indecipherable structure.” I speak of the so-called “shattered” or “frozen” phase of Uniform Random k-SAT. It’s now rigorously established that there exists a *finite* interval \( [\alpha_{shatter}, \alpha_{UNSAT}) \) in the clauses-to-bits ratio \(\alpha\) right before the SAT-UNSAT phase transition in which there exist positive, non-infinitesimal constants \(\gamma, \delta, \epsilon, \zeta\) characterizing how:

    1) On the one hand, there are still an exponential number \(O(2^{\gamma N})\) of satisfying assignments, where \(N\) is the number of bits.

2) But on the other hand, with respect to Hamming distance, these satisfying assignments are split into an exponential number of “clusters” that each contain a mere \(O(2^{-\delta N})\) fraction of the satisfying assignments, hence the name “shattered.”

    3) Moreover, speaking of Hamming distance, these clusters are mutually separated by distances of \(O(\epsilon N)\), meaning you have to flip a finite fraction of all the bits to get from one cluster to another.

    4) Finally, if “energy” is defined as the number of violated clauses, there exist energy barriers of height \(O(\zeta N)\), certainly preventing any thermal annealing-style algorithm from efficiently traveling from one cluster to another.

    Combined with the fact that even before the shattered phase, each added clause eliminates a finite fraction of all the satisfying assignments, we have a picture where any algorithm that tries to grow partial solutions based on some subset of the clauses is bound to take exponential time. This is because any such build-up-partial-solutions algorithm will be forced to travel between ever-shrinking clusters, and again travel between clusters involves surmounting barriers that are both wide in Hamming distance and high in energy.
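    To make the “clusters” terminology concrete at toy scale (a purely illustrative sketch of my own; at N = 12 none of the asymptotic phenomena above are visible, the code merely instantiates the definitions): generate a uniform random 3-SAT instance, enumerate its satisfying assignments by brute force, and count the connected components of the solution set under single-bit flips.

```python
import random
from collections import deque

def random_ksat(n, m, k, rng):
    """Uniform random k-SAT: m clauses, each over k distinct variables with random signs."""
    return [[(v, rng.random() < 0.5) for v in rng.sample(range(n), k)]
            for _ in range(m)]

def satisfies(x, inst):
    """x packs an assignment into an int; literal (v, neg) is true iff bit v != neg."""
    return all(any((x >> v & 1) != neg for v, neg in clause) for clause in inst)

def clusters(solutions, n):
    """Connected components of the solution set under Hamming-distance-1 moves."""
    sols, seen, comps = set(solutions), set(), 0
    for s in solutions:
        if s in seen:
            continue
        comps += 1
        seen.add(s)
        queue = deque([s])
        while queue:
            cur = queue.popleft()
            for b in range(n):  # try every single-bit flip
                nb = cur ^ (1 << b)
                if nb in sols and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
    return comps

n, k, alpha = 12, 3, 3.5
rng = random.Random(1)
inst = random_ksat(n, int(alpha * n), k, rng)
sols = [x for x in range(1 << n) if satisfies(x, inst)]
print(len(sols), clusters(sols, n))
```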


1) For a non-rigorous statistical physicist’s proof of this final point that, even before the shattered phase, each added clause eliminates a finite fraction of satisfying assignments:

Monasson and Zecchina, “Entropy of the K-Satisfiability Problem,” Phys. Rev. Lett. 76(21) 3881–3885 (20 May 1996), freely available as a preprint.

    2) For the first rigorous proof of the existence and some properties of the shattered phase:

Achlioptas and Ricci-Tersenghi, “On the Solution-Space Geometry of Random Constraint Satisfaction Problems,” In Proc. of 38th STOC, 130–139, ACM: New York, NY, USA (2006), freely available as a preprint.

  293. Scott Says:

    Bill #292: The remark you disagreed with was from Greg Kuperberg (comment #271), not me. Despite what some people think, Greg and I are not entirely indistinguishable. :-)

    No, it would not be surprising in any complexity-theoretic way if random k-SAT at the phase transition turned out to be hard, but not NP-hard. Indeed, at least for k≥4, that would be exactly my guess, with maybe 60% probability. (When k=3, as I mentioned before, the survey propagation algorithm calls the hardness into serious question.)

    And personally, I’d say that the structure revealed by the statistical physics analyses is relevant to the computational complexity of random k-SAT; it’s just not the only thing that’s relevant. The most likely possibility, I’d say, is that certain statistical properties of the solution space (the “freeze-out” you mentioned is a good candidate) are necessary for a random CSP to be hard, but not sufficient for hardness. We know that the grossest statistical properties of the solution space can’t be sufficient for hardness, because random XORSAT has almost the same properties as random k-SAT, but of course is easy.

    (Incidentally, that’s precisely one of the main observations that killed Deolalikar’s claimed P≠NP proof! He was relying on certain statistical properties of random 3SAT, and people quickly pointed out that random XORSAT had the same properties.)

  294. Scott Says:

    Rahul #290:

      why isn’t something like, say, measuring the electron density of a complex molecule similar to Boson Sampling?

    Because for the electron density, we don’t have a reduction-based argument for hardness. Or better: if someone gives such an argument—specifying what the input is, how to measure the input size, etc., and relating the hardness of the resulting well-defined computational problem to some more “generic” problem, like a #P-complete one—then measuring electron density would become similar to BosonSampling.

    As Alex and I said in the introduction to our paper, our basic goal was to strike some middle ground between
    (1) the chemists and condensed-matter physicists who believe some particular quantum simulation problem is “hard,” but have no idea whether the hardness comes from asymptotics or merely from people not having been clever enough yet (and in many cases, the truth has turned out to be the latter), and
    (2) the theoretical computer scientists who have a formal tool for arguing that something is asymptotically hard (namely, reductions), but haven’t generally applied that tool to quantum systems like the ones that arise in present-day experiments.

  295. Greg Kuperberg Says:

    Bill – I see your point. I guess you’re right, there could be a computationally difficult phase of random k-SAT. In particular, if random k-SAT in this shattered phase produces instances that look hard in the sense that no one can solve them, then for all I know they really are hard. Still, absent any theory of random self-reducibility, it’s a change of topic from NP-hardness.

  296. Nick Read Says:

    Greg and Scott,

    Thanks to you both for your answers and for references.

    Greg, Bill Kaminsky #292, who evidently got up earlier than me, asked a question on your response similar to one I had.
    But another way to look at average (and also typical-)case hardness is in terms of distNP-completeness and whether such problems can be typically solved in polynomial time. The conjecture is that they cannot. Is it correct to say that the natural problems known to be distNP-complete do not have very natural distributions of randomness? Is that why random k-SAT near the transition is not known to be hard in this sense?

    (Now I have an 80-page survey article to read.)

  297. Greg Kuperberg Says:

    Nick – Bill established that I have gotten beyond my expertise horizon. Re your question about distNP — it was news to me that it has complete problems — I just found this quote from Luca Trevisan: “It is still an open question how to apply this theory to the study of standard NP problems and natural distributions of inputs.”

  298. Bill Kaminsky Says:

To Scott #293 — Oops. Sorry for confusing you with Greg. ;)
    And, more importantly:

1) Oh right, k-XORSAT! I forgot that there is something quite well known that “shatters” in a way that stymies local-search and iteratively-improve-partial-solutions algorithms but nevertheless has “global structure” that’s efficiently exploited by something as “simple” as Gaussian elimination (or whatever your favorite systems-of-linear-equations-solving algorithm is, since XORSAT clauses are linear mod 2). And indeed, as your reference to poor ol’ Vinay Deolalikar reminds us, forgetting about this XORSAT example can lead to major embarrassment.
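    For concreteness, here is the kind of mod-2 Gaussian elimination that makes XORSAT easy despite its “shattered” solution geometry (a minimal sketch; the bitmask representation and the function name are mine):

```python
def solve_xorsat(equations, n):
    """Solve a XORSAT system by Gaussian elimination over GF(2).
    Each equation is (mask, parity): the XOR of the variables selected by mask
    must equal parity. Returns an assignment as a list of n bits, or None if
    UNSAT. Free variables are set to 0."""
    pivots = {}  # leading-bit index -> reduced row (mask, parity)
    for m, p in equations:
        # Reduce the new row against existing pivots, highest bit first, so
        # any bits reintroduced lower down are handled later in the same pass.
        for b in sorted(pivots, reverse=True):
            if m >> b & 1:
                pm, pp = pivots[b]
                m, p = m ^ pm, p ^ pp
        if m == 0:
            if p:
                return None  # reduced to 0 = 1: inconsistent
            continue  # redundant equation
        pivots[m.bit_length() - 1] = (m, p)
    # Back-substitute in ascending pivot order: bits below b are already final.
    x = [0] * n
    for b in sorted(pivots):
        m, p = pivots[b]
        for j in range(b):
            if m >> j & 1:
                p ^= x[j]
        x[b] = p
    return x

# x0 ^ x1 = 1,  x1 ^ x2 = 1,  x0 ^ x2 = 0
print(solve_xorsat([(0b011, 1), (0b110, 1), (0b101, 0)], 3))  # → [0, 1, 0]
```

    Each clause contributes one linear equation, so an instance with n variables and m clauses is solved in polynomial time no matter how the solution set is clustered.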


    To Greg #295 – Oh, I wholly agree. Elucidating why large families of nontrivial algorithms (in this case, local-search algorithms, iteratively-improving-partial-solutions algorithms including the vaunted survey propagation, and intelligently backtracking complete solvers like DPLL) are inefficient on random k-SAT is a change of topic from NP-completeness. I just think it’s worth mentioning that on both…

    a) theoretical grounds like these Uniform Random k-SAT insights, and

    b) empirical grounds from SAT-solving competitions (and DIMACS-graph-theory challenges, etc.)

…it’s quite plausible that NP-complete problems readily permit the random generation of instances that are almost always hard. Thus, the stuff about the non-computability of all the distributions rigorously known to generate average-case-hard instances shouldn’t necessarily lead to the intuition that it’s nigh-impossible to find truly hard instances of NP-complete problems.


    To everyone — just so nobody’s confused — survey propagation does fail endemically in the “shattered” phase of Uniform Random k-SAT. It was just the fact that the shattered phase turns out to be very thin for Uniform Random 3-SAT (i.e., the difference \( \alpha_{UNSAT} – \alpha_{shatter} \) is maybe 13 if I recall correctly for the scaling of the shattering transition with k … I’d have to dig up the reference)

  299. Nick Read Says:

    Greg #297, the 80-page survey by Bogdanov and Trevisan that Scott mentioned looks good but does not include N. Livne’s paper “All natural NPC problems have average-case complete versions” (2006), which makes nice reading.

  300. Bill Kaminsky Says:

    Not to be neurotic, but I just noticed the end of my last comment was rendered incorrectly due to me not closing some HTML tag or MathJax parentheses.

    Thus, its final paragraph

    To everyone — just so nobody’s confused — survey propagation does fail endemically in the “shattered” phase of Uniform Random k-SAT. It was just the fact that the shattered phase turns out to be very thin for Uniform Random 3-SAT (i.e., the difference \(\alpha_{UNSAT}–\alpha_{shatter}\) is maybe 13 if I recall correctly for the scaling of the shattering transition with k … I’d have to dig up the reference)

    is missing a couple lines. It should’ve read (additions in italics):

To everyone — just so nobody’s confused — survey propagation does fail endemically in the “shattered” phase of Uniform Random k-SAT. It was just the fact that the shattered phase turns out to be very thin for Uniform Random 3-SAT (i.e., the difference \(\alpha_{UNSAT} - \alpha_{shatter}\) is a mere 0.1 clauses-per-bit or so for k = 3) that led to now-dashed hopes that survey propagation would efficiently solve all random SAT instances right up to the SAT-UNSAT threshold. However, it’s now clear that the width of the shattering transition grows as k grows. Empirically, this is clear from SAT competitions where k = 7 readily suffices to generate uniform random instances with just a few hundred bits that stymie all entrants. Moreover, Achlioptas and/or Coja-Oghlan have a rigorous theoretical argument for k greater than maybe 13 (if I recall correctly for the scaling of the shattering transition with k … I’d have to dig up the reference)

  301. Darrell Burgan Says:

Scott #259 – that made perfect sense – thanks so much for taking the time to explain all this. I’m sure this is elementary stuff from your perspective but, for a layperson who understands classical computing very intuitively, it’s fascinating.

  302. Rahul Says:

    Scott #294:

    Thanks for trying to explain it to me.

    So, say, for a problem like calculating the ground state wave function for a molecule with n electrons, is the asymptotic computational complexity currently unknown?

Is this even a well-posed question on my part? If not the asymptotic complexity, what’s the running time of the best known algorithms for calculating the exact wave functions?

  303. Scott Says:

    Rahul #302: Calculating the ground state energy (forget about the rest of the wavefunction), for an arbitrary molecule with n electrons, is actually a QMA-complete problem, which is even harder than NP-complete!

    You might ask: if the problem is that hard, then how does Nature solve it? And the answer is that Nature doesn’t, in general! For a complicated enough molecule, Nature could easily get stuck in a metastable state other than the ground state—as indeed seems to happen with prions, the agent of mad cow disease.

    So, I hope that illustrates some of the difficulties in even formalizing what problem we’re talking about! BosonSampling, at least, can certainly be solved by a system of noninteracting bosons, essentially by definition. Are there other physical systems that can also solve BosonSampling-like problems? Yes, of course there are! Most of those systems are so powerful that, in principle, they could do universal QC (so Shor’s algorithm, etc.) as well. But others seem kind of like BosonSampling: they don’t appear to be capable of universal QC, but they can nevertheless perform some sampling task that we don’t know how to efficiently simulate classically (examples include the IQP model of Bremner, Jozsa, and Shepherd; the constant-depth quantum circuits of Terhal and DiVincenzo; and Stephen Jordan’s permutational QC).

    Now, within that class of physical systems, what makes BosonSampling special is really its connection to the permanent function, and the fact that the permanent is so special in theoretical computer science (being random-self-reducible over finite fields, and having other remarkable properties). That’s what at least enabled me and Alex to make plausible hardness conjectures for approximate BosonSampling, whose analogues we don’t yet have for the other systems (but maybe something analogous can be done for them, and I regard that as an excellent open problem).

  304. Rahul Says:


    Thanks! I guess I should read more about this. I guess I understand the experimental protocol for Boson Sampling OKish but it’s the motivation that I always struggle with.

    My naive picture of what you are trying to achieve with Boson Sampling is stuck with something like this:

You are trying to eventually get to a device that’s per se useless, but maps n inputs to outputs uniquely & in a manner that can be correctly calculated by classical simulation, only at a speed that’s much slower than simply measuring the outputs of the device itself (say n=20). If that worked, it’d disprove the Extended Church-Turing Thesis, because by scaling up you could soon attain a scale where the outputs are still measurable but not simulable in any reasonable time frame (say n=200).

There are already existing devices (rather, systems) that do this (i.e. produce outputs that are faster to measure than to simulate classically, e.g. measuring the electron density of a given molecule), but those systems don’t count for what you are trying to do, because we don’t know enough to frame the abstraction that would let us formalize how those systems work.

Hence getting your device to work at scales faster than classical simulation would disprove the Extended Church-Turing Thesis, whereas just pointing to current systems that are already much, much faster than classical simulation would not.

    Perhaps my summary is bullshit. :)

  305. Gil Kalai Says:

Here is the way I see the different views and matters now. (Some of the views of the different players may have changed somewhat over time, which is quite a welcome phenomenon; I certainly didn’t track all of that.) Notation: [B&] the group of Boixo et al.; [SmSm] the paper by Smith and Smolin; [SSSV] the paper by Shin et al.

1) When [B&] talk about “quantum annealing” they do not refer to a single model/algorithm but to a whole class of models. (As they told me, “quantum annealing” refers to a “quantum adiabatic algorithm subject to two additional conditions: (1) the initial Hamiltonian is a transverse field and the final Hamiltonian is an Ising model, (2) the system is coupled to a thermal bath at a temperature and coupling strength that allows for a description via a master equation with decoherence in the energy eigenbasis.”) The research hypothesis is that the D-Wave device exhibits quantum behavior captured by a model of an open quantum system, or more precisely that D-Wave runs “quantum annealing” in the above sense. (They considered two carefully chosen models: Q1, SQA, and Q2, the master equation.)

In their main paper they claim to demonstrate that D-Wave manifests quantum annealing (in this wider sense) by showing the proximity of D-Wave’s input/output behavior to Q1 and Q2, while rejecting a classical algorithm C1 (classical annealing) based on a statistical test T1 that captures bimodal behavior.

    2) [B&] do not claim large-scale or long-range entanglement, and, in fact, do not talk about entanglement.

3) [SmSm] exhibit another classical model C2 which also has bimodal histogram behavior. [B&] rejected C2 based on another test T2, based on the correlation of success probabilities on individual instances. [B&] regard this as a refutation of [SmSm].

4) [SSSV] demonstrate a classical model C3 that passes both tests T2 and T3. [O] reject the interpretation of SSSV. [B&] plan to address this new model C3 in the future. Meanwhile they note that their paper set out to show that quantum annealing is consistent with the behavior of the device – the test did not and could never address the question of whether large-scale entanglement is present. Therefore they regard the main criticism that SSSV make of their paper as actually vacant.

5) [B&] claim that the “opposing” papers (Smolin et al.) are trying to make the case that there are no relevant quantum effects whatsoever. [B&] think this is refuted by the experimental evidence they have presented. However, it seems that what the opposing papers are really claiming is that [B&]’s evidence for quantum effects is nonexistent.

6) Regarding entanglement, Greg explained to me that the entanglement claimed for D-Wave is weaker than what other groups with higher-quality qubits can achieve, and is therefore not as remarkable, nor as much in need of double-checking, as I thought. I stand corrected on this matter. (But I still think this is a very nice result that needs to be checked carefully.)

  306. Jeremy Stanson Says:

    Gil # 305
    That is a nice summary. Re: item 6, there is a pretty crucial element that theorists like Greg often overlook (or at least they never comment on it). This is: fine tuning and perfecting the entanglement of a handful of qubits (like 2, or in any case fewer than 10) is an almost purely academic exercise. There is this misplaced belief that once you perfect 2 qubits, all you have to do is mass produce them and you’ll see that same perfect behavior on a larger scale. This is so very wrong! Once you scale up the system, you unavoidably bring in new sources of noise and everything gets messier. The high-quality qubits that Greg is referring to are high quality because there are so few of them! D-Wave is encountering and overcoming engineering issues that most of the academic world hasn’t even theorized about yet.

    I also feel I need to point out the self-indulgence of some of the commenters here. For all the poopoo-ing of D-Wave that goes on, you guys received a comment from an actual real-life lead chip designer at D-Wave (Paul # 221) and nobody even responded to him!

  307. Alexander Vlasov Says:

Jeremy #306, I also found it strange that the question in #221 was not discussed (in fact, at the moment I read it, I was about to ask almost the same thing).

  308. Rahul Says:

    @Jeremy Stanson:

    For all the poopoo-ing of D-Wave that goes on, you guys received a comment from an actual real-life lead chip designer at D-Wave (Paul # 221) and nobody even responded to him!

    To be fair, Paul’s comment had precisely nothing to do with chip design, so the credentials you cite are mostly irrelevant.

    But yes, I agree it is an interesting ethical question, and I’d love to hear some reflection on it too.

  309. Greg Kuperberg Says:

    Jeremy – “There is this misplaced belief that once you perfect 2 qubits, all you have to do is mass produce them and you’ll see that same perfect behavior on a larger scale.”

    Maybe some people have that misplaced belief, but others don’t. Informed people entirely understand that no qubits are perfect and that in order for QC to scale, it needs quantum error correction. It’s also no secret that it’s difficult to scale up the number of qubits while maintaining their fidelity. These are indeed among the main engineering challenges of QC.

    “The high-quality qubits that Greg is referring to are high quality because there are so few of them! D-Wave is encountering and overcoming engineering issues that most of the academic world hasn’t even theorized about yet.”

    Actually, even when D-Wave only had 16 qubits, they were the same second-string kind (by standard figures of merit such as coherence time) as they have now with 500 qubits. They are free to amass more and more unremarkable qubits, but you can expect the results to be unremarkable, as they have been.

    “For all the poopoo-ing of D-Wave that goes on, you guys received a comment from an actual real-life lead chip designer at D-Wave (Paul # 221) and nobody even responded to him!”

    If I had known that “Paul B.” is a leading engineer at D-Wave, I might have devoted more attention to his strange comment about Shor’s algorithm. There is a Paul Bunyk who works or worked for D-Wave. There are also many other Paul B.’s in the world. Paul B. did say that he was only speaking for himself and not for anyone else such as I guess his employer, whoever that may be.

    I had never heard the argument from D-Wave or anyone else that Shor’s algorithm is an evil algorithm that should not be implemented because it would violate people’s privacy. I had thought the argument was that Shor’s algorithm is arcane, and just not very important compared to NP-hard problems.

    The standard position is that beyond Grover’s algorithm, or some adiabatic version of Grover’s algorithm, NP-hardness makes a problem less relevant to quantum computing, not more relevant.

  310. Jeremy Stanson Says:

    Rahul # 308

    Little to none of the discussion at Shtetl Optimized is about chip design (it’s almost as if the majority of the people who throw around opinions on this blog don’t know much about superconducting circuits or real physical QC systems…). The point is that if you didn’t all have your blinders on, you might have seized the opportunity to get real insights from within rather than staying trapped in some skeptical auto-pilot mode.

  311. Jeremy Stanson Says:

    Greg # 309

I think you’re still slightly off on how you regard D-Wave’s qubits. The 16 qubits from way back when were designed at the time with all the infrastructure in place to scale to more qubits. So again, of course they were noisier than qubits that are built without a view towards scalability. I think it’s amazing that the noise apparently didn’t increase much in going from 16 to 512.

    I admit to being a D-Wave supporter, but I have my beefs with the hype and all that as well. The thing is, I think we should consider that D-Wave doesn’t WANT noisier qubits than everybody else. D-Wave WANTS the best qubits and the MOST qubits. D-Wave is devoting more resources towards real QC systems than anybody else and what we’re seeing from them is that systems with lots of qubits have pretty much gotta be noisy.

    Now, whether or not error correction is required in quantum annealing / AQC is, I think, still a bit of an open question.

  312. Alexander Vlasov Says:

Greg #309, I don’t think the argument in #221 is strange, and it seems this is not the first time it has come up, even on this blog. Nobody says factoring is evil (the first electronic computers were also created for breaking codes) – it seems it was only suggested that it can hardly be realized as a business plan for a private enterprise.

  313. Jay Says:

    Gil #305,

One thing you might wish to include in your summary is the fact that [B&]’s quantum model is not really quantum in the first place.

As explained by Seung Woo Shin in #115, what they rely on is polynomially-scaling Quantum Monte Carlo. As indicated on Wikipedia, QMC is a family of algorithms that are either numerically exact and exponentially slow, or approximate and polynomially-scaling (but of course not both exact and polynomially-scaling at the same time).

Like Michael Marthaler (#122), I hadn’t fully appreciated this point. But that changes everything! If QA (proper) were superior to any classical model, then the interesting part of its behavior could not be captured by polynomially-scaling QMC. By definition!

PS: I don’t get who and what [O] and T3 are.

  314. Gil Says:

Jay, oops, sorry: T2 and T3 should be T1 and T2, and [O] is [B&]. I didn’t get into the issue of computational speed-up at all in my last comment. What [B&] are trying to test (as they describe it) is whether the D-Wave evolution is genuinely quantum (more precisely, “quantum annealing” in their wide sense) or classical. So they regard C1, which they study, and C2 and C3 from the opponents’ papers, as classical, and Q1 and Q2 as quantum. (While these last models Q1/Q2 are classically simulable, at least in the range they study.)

  315. Douglas Knight Says:

Jeremy, I did respond to Paul. I couldn’t figure out what he was saying, so I suggested an interpretation. If I’d known he was employed by D-Wave, then I’d have known that he couldn’t address my interpretation. It never occurred to me that he was an employee, because he was accusing the company of lying. If you have a different interpretation, feel free to share it in your own words.

It would be very difficult to convince me that D-Wave chose quantum annealing for any reason other than because it gracefully degrades into classical annealing.

    Rahul, Yes, I think you get it, but let me put it in my words. As you say, chemistry is a problem that we don’t know how to solve with classical computers. Our abstraction of chemistry is BQP complete, but it might include instances that we can’t actually reach; maybe there is systematic order in actually occurring chemistry. The point of Boson Sampling is that it is a small class of problems that seem uniformly easy or hard to reach.

  316. Jeremy Stanson Says:

    Jeremy (myself) # 311
    I think in blog comments it is best to be explicit and not leave some points to be inferred by the reader. To continue my last comment:

    D-Wave is not in the business of making sub-par, noisy QCs. They don’t want a shitty, non-performant system. D-Wave is in the business of making QCs period. What we are learning from them is that there appears to be a tradeoff between qubit QUALITY and qubit QUANTITY. I think we can all agree that there is a real physical limit to what you can compute with a limited quantity of qubits, even if they’re of impeccable quality. To increase performance, you’ve got to increase quantity. D-Wave does this, quality degrades as a result, and theoreticians scoff. But I think it is a very important result that completely undermines a lot of other QC work out there. Nobody will ever compute anything interesting with a low quantity of high quality qubits; interesting results will need to come from a high quantity of low(er) quality qubits. So it really is an exercise in futility to tout the quality of “competitive” small QC systems when all of the real work will be in how to accommodate the degradation in quality that results from scaling to a higher quantity.

    People attack D-Wave for jumping ahead in quantity without perfecting the quality, when really they’ve just skipped a red herring and advanced to the real issues that will inevitably need to be addressed by all other QC efforts (one day).

  317. Sniffnoy Says:

    It’s not clear that anyone will ever compute anything useful with either a small number of high-quality qubits or a large number of stinky qubits. The question then becomes, which is the better intermediate step towards getting a large number of high-quality qubits?

    As I understand it, the answer is more likely to be “a small number of high-quality qubits”. Because if you get a small number of high-quality qubits, then you “just” need to solve the scaling problem — which is going to be hard, and may even be impossible for the qubits you’ve constructed, in which case you have to go back to the drawing board; but at least it’s an approachable problem which has a chance of being solved. By contrast, if you get a large number of low-quality qubits, you know you’re going to have to start over with a new model in order to get high-quality qubits; it’s useless as an intermediate step.

    To put it another way, you can start with a qubit model of known quality as an intermediate step, and attempt to increase the quantity; but you can’t start with a qubit model of known quantity and then attempt to increase the quality, because increasing the quality requires changing the model, making your previous work useless.

    Do I have that right?

  318. Nick Read Says:

    I’m a little disappointed :) that I didn’t get a response to my post about Noam Livne’s paper (btw the updated published version is Comp. Complexity 19, 477 (2010)).

    It seems the result should change the discussion of average-case (and typical-case) complexity, yet it is mentioned only in one or two places (including the books of Goldreich and of Arora and Barak), almost without comment. Is there really so little progress in the field since 2006, when the first version of Livne’s paper appeared? (The same year as the excellent long survey article.)

    Scott? Anyone?

  319. Jeremy Stanson Says:

    Douglas # 315

    “It would be very difficult to convince me that dwave chosen quantum annealing for any reason other than because it gracefully degrades into classical annealing.”

    a) There’s no need! People who adopt such a firm, contrary position will be immovable on the subject. That’s fine. The Earth is flat and the sun revolves around us. What do I gain by trying to convince you otherwise?

    b) But since I can’t resist… play devil’s advocate and pretend like you have a limited amount of resources and you want to use them to build a QC fast. You look at the state of the art and see gate-model proposals. Man, those have really strict requirements for coherence times, etc. Nobody is anywhere near meeting those even with just a few qubits, so in order to implement the required error correction we’ll need, what, like millions of qubits just to begin to run marginally interesting cases of Shor’s algorithm? How do you build, program, and control so many qubits? But then you come across this new, alternative approach called AQC. One that doesn’t appear to fight nature at every turn but instead rides along with what nature does. You can build this. You can build this now. There’s a lot of uncertainty in what it will be able to do, but that’s better than the certainty that you won’t build a useful gate model system with the resources you have now… so you set out to build an AQC.

    D-Wave is not evil. When people start accepting that, they can evaluate what is going on more clearly.

  320. Jeremy Stanson Says:

    Sniffnoy # 317

    I think it is great to see people thinking about this question.

    You are suggesting that increasing quantity while maintaining high quality is easier than increasing/accommodating lower quality at high quantities. D-Wave is betting otherwise. And I agree with them, because you can go to the drawing board all you want when you’re designing your low quantity of qubits, but you’ll never even see the issues encountered at high quantities until you get there.

  321. Sam Hopkins Says:

    Jeremy #320:

    Your charge that someone interested in quantum computing cannot “see the issues” involved in a certain scientific construction without actually building it is emblematic of the anti-theory attitude that riles so many people up on this blog. The whole field of quantum computing was born because people “looked into the issues” from their armchairs and not from the lab! The idea that you can build some black-box and worry about justifying how it works after the fact flies in the face of all the theoretical work that has been done up until this point.

  322. Jeremy Stanson Says:

    Sam # 321

    There is no antagonism here. Why do people get so worked up about this?

    Theory is essential. There would be no QC without theory. There would be no D-Wave without theory. D-Wave does actually work on theory. They are not enemies of theory.

    You can see and theorize about TONS of issues involved in scaling up a QC from your armchair, but do you really think you can see all of them?

    D-Wave is trying to work out the theory as they go. You gripe about “the idea that you can build some black-box and worry about justifying how it works after the fact,” but D-Wave is continually evolving based on BOTH evolving theory and real results. Theory informs design, and RESULTS INFORM THEORY! There is no “after the fact.”

  323. Douglas Knight Says:

    Jeremy, how does your explanation of why they chose annealing differ from my explanation? Does “doesn’t appear to fight nature at every turn” mean anything other than “gracefully degrades into classical annealing”? In what way is my suggestion an “evil” attribution of motive?

    What evidence have we received from dwave about a tradeoff between quality and quantity? Everything they’ve produced has been uniformly low quality, so that is not evidence of a tradeoff. Maybe you assume that they can do everything academic labs can, but I don’t. Maybe they are solving real scaling problems, but presumably the problems and their solutions would be trade secrets. As a black box, they have failed to demonstrate any scaling at all. The one plausible claim they have made about their accomplishments is that they commissioned a good pulse fridge.

  324. Jeremy Stanson Says:

    Douglas # 323

    My example motives for choosing QA/AQC involved a lot more than the statement about not fighting nature, which referred to “not fighting nature” within the quantum regime, not to degrading into CA.

    Re: evil – I read malicious undertones in your statement about QA -> CA. If such tones weren’t there (?) then my comment about D-Wave not being evil was not appropriately addressed to you (though it would do quite a few people here good to keep it in mind).

    In order to provide evidence about the tradeoff between qubit quality and quantity, I have to do a lot of clicking and cutting and pasting, etc. that really seems unnecessary. Do you honestly contest that scaling a low number of qubits to a high number of qubits (and so adding more couplings between qubits; probably making the qubits bigger as a result; adding more control mechanisms to program, evolve, and readout the qubits; distributing the whole system over a larger area; all built within real-world fabrication tolerances and subject to variations; and on and on) will adversely affect the quality of those qubits? Really?

    “Presumably the problems and their solutions would be trade secrets”

    I wonder how much skepticism and negativity towards D-Wave is based on false presumption. D-Wave publishes a huge body of work. Have you looked for it? Most recently, you could look at the paper by none other than commenter # 221 on this thread for some interesting problems and solutions. Not trade secrets.

    “The one plausible claim…”

    Such nonsense! Yeah, they have wicked fridges. They’ve also produced the most sophisticated superconducting integrated circuits ever built, have demonstrated entanglement in the largest solid-state system to date, and are responsible, both directly and indirectly, for considerable and on-going progress in understanding the AQC/QA approach to computation, which could yet prove to be vastly more attainable (and therefore more powerful) than the gate-model alternative.

  325. Greg Kuperberg Says:

    Jeremy Stanson – “Theory informs design, and RESULTS INFORM THEORY!”

    Results do indeed inform theory, Jeremy. They inform the theory that the D-Wave device is a not-very-quantum, sort-of computer. That’s what this new SSSV paper is saying. It’s also not very surprising.

  326. Jeremy Stanson Says:

    Greg # 325

    It adds fuel to the fire, I agree. We shall see.

    A lot of the controversy now stems from the fact that AQC/QA is difficult to evaluate (which, BTW, is significant progress from where D-Wave’s controversies were just a few short years ago…). D-Wave wants the right experiments to be identified and carried out as much as anybody.

  327. Douglas Knight Says:

    Jeremy, you said that we are learning from dwave of a tradeoff. I asked you specifically what we learned from them. Now you claim that of course I should expect a tradeoff from first principles. Notice a difference?

    I skimmed the first paper you cited. I didn’t see anything I hadn’t heard years ago. It seems like exactly the “armchair theorizing” you condemned, not that there’s room for a lot of detail in a 10-page paper. In particular, it doesn’t mention any problems that they were surprised to discover, let alone how they solved them. Also, it mentions no trade-offs between quality and quantity.

  328. Sol Warda Says:

    Greg#325: Dr. Vazirani’s response above says this:

    “The task of finding an accurate model for the D-Wave machine (classical, quantum or otherwise), would be better pursued with direct access, not only to programming the D-Wave machine, but also to its actual hardware.”

    Your response, please. Thanks.

  329. Jeremy Stanson Says:

    Douglas # 327

    Lots of comments from me today. Idle hands afforded by travel.

    Oh Doug! It is really a good thing for people to doubt and question, but you don’t have to be such a grump about it. Now we can descend into the pedantry of “you said, I said” blog commentary:

    As you rightfully point out, I said “we have learned from D-Wave.” I didn’t say “they have explicitly taught us.” Notice the difference? ;) They are a clear example of this tradeoff even if they don’t (which they might, I don’t know. I’m not going to go sifting through everything) come right out and use the same terms that I did. We are capable of learning by interpretation and understanding. It’s not all memorization and regurgitation like undergrad. The tradeoff is apparent even in the one architecture that I cited, even if you can’t use ctrl + f to find it.

    I say again, D-Wave WANTS the best qubits. If their qubits are noisy, they are that way for a reason. That reason is that there are a lot of them, and having a lot of them is *undeniably* absolutely essential for useful computations, whereas having perfect ones is not (possibly not “yet”) undeniably so essential.

    Now, in turn, I must ask you for your evidence of having heard years ago everything that D-Wave reported in that paper. :)

  330. Sam Hopkins Says:

    Jeremy 329:

    “That reason is that there are a lot of them, and having a lot of them is *undeniably* absolutely essential for useful computations, whereas having perfect ones is not (possibly not “yet”) undeniably so essential.”

    This argument is nonsensical. Going back to Scott’s nuclear bomb scenario, this is like the general saying “look, we all know that a nuke needs to be at least 4000 kilos; so let’s quit messing around with small scale uranium experiments and just slap a bunch of uranium on TNT, get something at least the right SIZE of what we’re looking for, and work the kinks out from there!”

  331. Jeremy Stanson Says:

    Sam # 330

    No no no. You cannot argue by analogy. This is NOT the same thing as the bomb scenario.

    We know we need a lot of qubits to do useful computations. We don’t know how good they need to be to be useful.

  332. Vitruvius Says:

    You seem to be arguing in #326 and elsewhere, Jeremy, that unlike the theorists D-Wave is actually running an experiment based on which they hope to be able to do something useful in the future. To the degree that is the case, I don’t see how your position differs from Scott’s note in #108 supra to the effect that “Fine, OK, just call it a physics experiment!”

    I think you would find that were D-Wave to be claiming that they are running a physics experiment, rather than appearing to argue (at least in their public marketing) that they are building god’s gift to quantum computing, then there would be a lot less concern in these parts about potential dangers arising from any degree of bunko, shill, and mark interaction that might emerge (were that to happen), especially as magnified by the thrill-seeking and fear-mongering public media.

    The problem is that, even granting that their physics and engineering may still be trustworthy when one can’t reasonably trust their marketing, it becomes more important to raise the level of skepticism, because one level of trust has already been lost. Fool me once, shame on you; fool me twice, shame on me. Thus it remains important for the skeptics to balance the cheerleaders, so that however it turns out the bases will have been diligently covered, in defense against any opportunists who may arise on either side of the scale after the fact.

  333. Greg Kuperberg Says:

    Sol Warda – There is a certain language of skepticism in which all doubt is phrased in terms of missing information. Everything can be cast as “They haven’t proven that…” or “We would know more if…”, etc. This style can be defended as both guarded and courteous. It makes it easier to avoid overstatements, and to leave the door open for the other side to do better or try again.

    But it can also be misleading. The fact is that I have seen much more public discussion of D-Wave’s ordinary-looking qubits, than I have seen discussion of really good qubits made at academic laboratories. So is it really essential to keep digging for more information? Only to allow D-Wave more chances. Of course you can always ask more questions, but D-Wave isn’t so utterly mysterious; it merely doesn’t look very good.

  334. quax Says:

    Greg #333 ” The fact is that I have seen much more public discussion of D-Wave’s ordinary-looking qubits, than I have seen discussion of really good qubits made at academic laboratories.”

    It would be delightful if somebody were to take these qubits out of the lab and productize them akin to what D-Wave did; public interest would be sure to follow.

  335. Scott Says:

    Nick Read #318: At your request, I took a quick look at Noam Livne’s paper “All natural NPC problems have average-case complete versions,” and it looked interesting and nice! I don’t know the answer to your question of why it hasn’t inspired more followup work. Maybe you’d get more knowledgeable responses if you asked over on CS Theory StackExchange?

  336. Scott Says:

    Jeremy Stanson:

    Vitruvius #332 hit the nail on the head. As I’ve said many times on this blog, I have no objection whatsoever to someone doing a physics experiment where, rather than trying to perfect one or two qubits, they throw together lots of “highly imperfect” qubits and see what happens. In fact, I strongly support someone’s doing that. If the question came up on an NSF panel, and no one else were already doing it, I’d strongly advocate allocating some money to it.

    But suppose, hypothetically, that the results of such an experiment were to be egregiously misrepresented to the public? Suppose it were to be sold—ludicrously—not as an ongoing physics experiment with at-best mixed results, but as an already commercially useful quantum computer that already gets practically-important speedups over classical computers? Suppose that much of the press, the business world, etc. were to accept that view as fact? Suppose that that wrong perception were to distort the entire field of quantum computing—with other QC labs around the world unable to compete with this one experiment’s astronomically-inflated claims, crippled by the burden of reporting their results more-or-less honestly? Suppose that, in the popular mind, QC came basically to be identified with this single experiment, so that a failure of the experiment would be seen as the failure of QC itself? In that (admittedly strange and unlikely) scenario, isn’t it obvious that, despite the positive value of the physics experiment, I’d feel an ethical obligation to use my blog to speak out against the surrounding hype?

  337. Rahul Says:

    Douglas Knight #315:

    Can you elaborate more on this bit you wrote:

    Our abstraction of chemistry is BQP complete, but it might include instances that we can’t actually reach; maybe there is systematic order in actually occurring chemistry.

    It seems intriguing but I don’t fully understand it.

  338. Darrell Burgan Says:

    Jeremy #316 – if your argument is that D-Wave is producing engineering results while everyone else is still in basic research, to me that is a great argument to make. However, speaking as an engineer (I am not a scientist), I think it is hard to point to D-Wave’s engineering results as being impressive when a $1K classical computer outperforms the $10M D-Wave.

    Yes, I know that at present having a quantum computer do anything at all is impressive, from a theoretical standpoint. But we’re not talking about theory – we’re talking about engineering. If you want to compare engineering results to engineering results, you have to show a $10M quantum computer that significantly outperforms a $10M classical computer. In the realm of engineering, real world outcomes are what matter – not theory, and not potential. Right now a $10M classical computer would blow a $10M D-Wave 2 out of the water performance-wise.

    Maybe not the D-Wave 3? Time will tell.

    I, for one, am rooting for D-Wave, or indeed anyone who can bring a real competitive quantum computer *product* to market. I want to program it. :-)

  339. Rahul Says:

    @Jeremy Stanson:

    What’s your opinion about D-Wave comparing its power consumption against supercomputers or advertising its machine as “operates under extreme environments” or related researchers announcing a ~300x speedup over competition (CPLEX etc.)?

    Or calling Umesh Vazirani’s serious work “getting socks for a Christmas present”?

  340. Scott Says:

    Jeremy Stanson #306: I had no idea that the comment of Paul #221 came from a chip designer at D-Wave. However, looking purely at the comment’s content, I found it so strange that I didn’t know how to respond.

    First of all, Shor’s factoring algorithm is not something anyone would need to build a company around. Rather, it’s something that would inevitably become possible, as an unavoidable byproduct of building a scalable universal QC.

    Secondly, once you realize the above, it seems to me that the “ethical” question basically evaporates. For if you believe that universal QC is on the horizon, then you should accept that RSA’s days are numbered regardless of what you do, and start preparing for a post-RSA world. It’s hubris to think that your personal decision not to build a universal QC could prevent everyone else from building it—particularly given its benign applications (like quantum simulation). And in any case, it’s not like a universal QC is some Faustian horror that would kill millions of people! All it would do is force them to upgrade their encryption software. :-) (This, of course, is one place where my analogy between QC and nuclear weapons fails.)

    Thirdly, even if you’re a commercial QC venture, it seems to me that implementing Shor’s algorithm would have a real application: namely, it would immediately silence all those pesky academic skeptics, whose endless yapping about evidence for quantum behavior and speedup would crumble against your cold hard prime factors! Sure, from a business standpoint those skeptics are just a minor nuisance; you have the world’s ear and they don’t. But if you could shut them up for good with a single experiment, why wouldn’t you?

  341. Scott Says:

    Rahul #304: Your summary is almost right, but you make it sound like the advantage of BosonSampling over things like estimating electron density is just a matter of dotting some i’s that only an abstruse theorist could ever understand or care about. I think it’s more than that.

    Tellingly, you casually refer to BosonSampling and electron density both being “classically hard”—but how do you know they’re classically hard? This isn’t a hypothetical question. The chemists have wonderful techniques, like density functional theory, that very often can estimate electron density efficiently using a classical computer. Sure, those techniques don’t always work—but even for the molecules where they fail, how do you know some other clever technique won’t be discovered next month?

    In computer science, the main way we know to increase our confidence about such questions is to give reduction arguments—showing that, if you did find a clever technique for this one special problem, then it wouldn’t be special to that problem at all, but would instead kill many other problems. That (modulo some caveats) is what we managed to do for BosonSampling, and I think it’s very important to do something similar for other quantum simulation problems, if we want to claim any confidence that they’re classically hard.

  342. Rahul Says:

    Scott #341:

    Thanks again! I just re-read your lovely “Computational Complexity of Linear Optics” paper & I think page 4 sort of convinces me about what you are trying to say. Sort of. :)

    Would it be fair to say this: We do have a lower bound on how fast a classical Boson Sampling (or permanent evaluation?) algorithm could be; but on the other hand we do not have a lower bound on how fast a classical wave function calculation algorithm might be? Or am I wrong?

    If that makes any sense then a further question: Can you imagine a situation where a full wave function calculation can actually be discovered to have a clever faster (asymptotic) algorithm than a permanent computation or Boson Sampling? i.e. Could it conceivably turn out that permanent evaluation is a harder problem than even a full wave function calculation?

  343. Gil Kalai Says:

    Here is my personal opinion on [B&]. The research question [B&] address is whether the D-Wave system is described by a quantum annealing model (Q) (in the wide sense described above). [B&]’s research assumption, that the answer is positive, is quite reasonable. They consider the alternative that the D-Wave system is best described in terms of classical evolutions in a certain class (C). Both (Q) and (C) represent rather wide classes of models, and [B&] examine two quantum examples [Q1] and [Q2] in (Q) and one classical example [C1] in (C). The classes (Q) and (C) are not formally described, and [B&] carefully take into consideration restrictions on the models based on what they know about the D-Wave system.

    What [B&] try to do is a formidable task: to give statistical evidence, based on the input/output behavior of the D-Wave system (regarded as a black box), as to whether it performs quantum annealing (Q) or exhibits classical behavior (C). It is quite possible that this formidable task cannot be achieved at all.

    As I said above, my opinion is that the findings of [B&] cannot be regarded as serious evidence for their research hypothesis. Their approach to the statistical task of telling whether D-Wave is represented by (Q) or (C) is unreasonable, and even if we take their approach for granted, their response to the findings of [SmSm] and [SSSV] is unreasonable.

    This is why I think [B&]’s approach is incorrect and their evidence is weak to start with.

    1) I cannot see how you can make any serious conclusion based on the study of three models [C1], [Q1], and [Q2]. [B&] rejected a single classical model; how can you deduce anything from that?

    2) Generally speaking, statistical tests need to be objective and a priori. Objectivity means here that there should be an objective reason why the test has anything to do with the research hypothesis of quantumness. The correlation of successes [T2] seems objectively reasonable as a measure of proximity. The histogram test [T1] and the bimodal phenomenon seem artificial. (As mentioned above, there can be mundane reasons for models both in (Q) and in (C) to have or not have bimodal behavior of histograms, and this does not seem related to quantumness.)

    3) [B&] assert that they do not claim that the histogram test [T1] is related to quantumness but still regard it as useful in rejecting [C1]. This position is logically unsound. If you think that [T1] is useful in making any inference between (Q) and (C) you do claim that it is related to quantumness.

    [B&] see matters differently, and let me make a direct quote here: “Suppose you have a black box with a promise that it is either Q or C and all you can study is the input and output behavior statistically. Both the Q and C cases are actually (possibly infinite) classes of models: Q={Q_i} and C={C_j}. How would you decide between Q and C? You remember that Popper taught us that experiments can only be used to disprove hypotheses, never prove. Thus you design a series of experiments {E_k} designed to reject either a Q element Q_i or a C element C_j, or neither. For every such experiment E_k you check whether the set Q or the set C contains elements Q*_i and C*_j that disagree with the data from E_k. Every such disagreeing element Q*_i and C*_j is removed. At the same time you check whether there is a model (or models) Q_good or C_good that agrees with the entire set of experiments conducted so far, and you call that model the current model of your black box. That’s our entire strategy.”
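    In code terms, the strategy [B&] describe is a simple model-elimination loop. Here is a minimal sketch; the `Model` and `eliminate` names and their interfaces are hypothetical stand-ins for the actual experiments and agreement tests, not anything from the papers:

```python
# Hypothetical sketch of the model-elimination strategy quoted above.
# "Model" and its predict/agrees_with interface are illustrative only.
class Model:
    def __init__(self, name, predict):
        self.name = name
        self.predict = predict  # maps experiment data -> bool (agrees?)

    def agrees_with(self, data):
        return self.predict(data)

def eliminate(q_models, c_models, experiments):
    """Run each experiment E_k; drop every model (quantum or classical)
    whose predictions disagree with the observed data. Whatever survives
    the whole series is the 'current model' of the black box."""
    for run in experiments:
        data = run()
        q_models = [m for m in q_models if m.agrees_with(data)]
        c_models = [m for m in c_models if m.agrees_with(data)]
    return q_models, c_models
```

    Restated in these terms, Gil’s objection is that surviving one round of this loop against a single classical competitor [C1] tells you little about the whole class (C), since each new surviving classical model just triggers another ad hoc test.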

    Thus, they think that the way to proceed is to find the best model for D-Wave (here their study prefers [Q1] and [Q2] over [C1]) and then, when new models in class (C) or (Q) are proposed, to reexamine their hypothesis in terms of new statistical tests. This is a sort of statistical ping-pong game. So they reject [SmSm]’s [C2] based on the statistical test [T2], and they are now carefully examining the model of [SSSV], which they may reject based on yet another statistical test [T3]. As I said, this approach seems unreasonable and quite opposite to basic statistical principles of hypothesis testing.

    But even with their approach, [B&] need to realize that [SmSm] and [SSSV] greatly weaken their claims. For example, [SmSm] do demonstrate that [T1] is unlikely to be relevant to quantumness. For this to considerably weaken [B&]’s case, you do not need to demonstrate that [C2] is a correct model (which [B&] reject based on [T2]).

  344. Alexander Vlasov Says:

    Gil #343: If I understand correctly, might your point be illustrated with the joke about wrong applications of syllogisms: “all men are mortal, all pigs are mortal, but not all men are pigs”?

  345. Jay Says:

    Gil, their (Q) model “in the wide sense” actually belongs to the classical behavior class (C).

    Sorry for being stubborn, but do you disagree with this very simple point? :-)

  346. Scott Says:

    Rahul #342: Getting there, but not full marks yet. :-)

    The point with BosonSampling is that we can connect our belief about its hardness to our belief about the hardness of calculating the permanent itself—and the permanent of an n×n complex matrix is #P-complete. (Even to approximate, and even on average under the Gaussian distribution, and we think even to approximate on average.) Thus, we don’t have an unconditional proof that BosonSampling is hard, and indeed we can’t have one in the present state of complexity theory (where we can’t prove P≠NP, etc.). But much like with the simpler case of proving a problem is NP-complete, we can do much, much better than just saying that we tried to think of a fast algorithm and couldn’t.
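    To make the cost concrete: the best known exact algorithm for the permanent, Ryser’s inclusion-exclusion formula, still takes time exponential in n. (A toy sketch for illustration only; the BosonSampling hardness argument rests on approximate, average-case versions, not on exact computation.)

```python
from itertools import combinations

def permanent(a):
    """Exact permanent of an n x n matrix via Ryser's formula:
    per(A) = (-1)^n * sum over nonempty column subsets S of
             (-1)^{|S|} * prod_i sum_{j in S} a[i][j].
    This implementation runs in O(2^n * n^2) time, versus O(n^3)
    for the determinant -- the sign flip that makes all the difference."""
    n = len(a)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total
```

    For example, permanent([[1, 2], [3, 4]]) gives 1·4 + 2·3 = 10; but the outer sum already ranges over 2^n column subsets, which is why even modest n is expensive.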

    For “a full wave function calculation,” it depends what you mean. If you mean outputting the full wavefunction, then obviously it takes exponential time just to list the output—so no, it’s not possible that someone will discover a faster way to do it. :-) But let me assume that’s not what you meant.

    Even if you meant, “simulating a measurement of the full wavefunction, given a completely arbitrary time-efficient quantum evolution”—well, that problem contains universal QC (and BosonSampling) as special cases, so obviously it’s not going to be easier. (And if you meant simulating a measurement of the ground state, things would be even worse: that problem is QMA-complete in general, not just BQP-complete!) But let me assume that’s not what you meant either.

    If you meant: “estimating the electron density or other measurable properties, not for completely arbitrary quantum systems, but for proteins or high-Tc superconductors or quark-gluon plasmas or other specific systems that actually occur in the lab”—then yes, it’s entirely possible that people will discover much faster ways to solve those problems with classical computers than we currently know. (Indeed, they already know ways that are a lot better than brute-force.) So yes, this is exactly where you might want a reduction-based hardness argument, like we give for linear optics.

    One last remark: even for the general problem of simulating time-efficient quantum evolutions, the hardness of BosonSampling provides one of the planks of the evidence (along with Shor’s factoring algorithm and other things) that that general task is hard. In fact, when Alex and I first started thinking about BosonSampling, our entire goal was to connect our beliefs about the hardness of simulating quantum systems, to our beliefs about the hardness of the permanent. The fact that we could do so not merely with an arbitrary quantum system (like a fully-functional universal QC), but with a particularly simple system (noninteracting bosons), just came as an unexpected bonus.

  347. Nick Read Says:

    Scott #335: Thanks for taking a look and for your suggestion, which I may try.

    I also wanted to say that I really enjoyed reading your book.


  348. Greg Kuperberg Says:

    Gil – “What [B&] try to do is a formidable task: to give statistical evidence, based on the input/output behavior of the D-Wave system (regarded as a black box), as to whether it performs quantum annealing (Q) or exhibits classical behavior (C). It is quite possible that this formidable task cannot be achieved at all.”

    But this is a simplistic dichotomy. There is quantum and then there is quantum. A system can show indisputably quantum effects, and yet not exploit them computationally and be classically simulable. Indeed, that happens all the time in nature, in many cases with so little noise that the noise is irrelevant. The developing consensus, which I think we could have guessed from the beginning, is that the D-Wave device is a sort-of quantum, sort-of computer.

    Scott once asked how QC skeptics could find common ground with D-Wave, the ultimate ultra-optimists. Well, if you see people running victory laps after making a sort-of quantum, sort-of computer, then that can be consistent with the view that that’s as quantum as it ever gets. D-Wave itself has lent support to this view by badmouthing competing efforts.

  349. Jon Lennox Says:

    Scott @#340:

    The issue I think is raised by Paul’s question @#221 is this: what *would* the business case be, currently, for building a scalable universal QC? I.e., given today’s knowledge, what customers could be expected to pay for things we believe a Universal QC could do better than an equivalent-cost classical supercomputer?

    One potential answer, it seems to me, could be pharmaceutical companies, if quantum simulation could be done well enough to simulate protein folding. There are possibly other customers of quantum simulation as well, e.g. in materials science and the like.

    But Paul’s point is that there is no business case for an implementation of Shor’s algorithm, at least for funding by Silicon Valley VCs. (Funding by the NSA is a different question, and based on the Snowden revelations appears to be happening, at least to some extent.) Even if there were customers for breaking RSA, public success makes the business model self-defeating as everyone stops using it.

    Obviously researchers and academics would be interested in buying a practical QC, to see what could be done with it, but that seems like it would be rather a smaller market than commercial customers would be, and that’s what funders are interested in.

    Is there anything else a QC could do that would have commercial uses?

  350. Jeremy Stanson Says:

    Re: D-Wave as a physics experiment

    I think any serious foray into a new technology space can be identified as an experiment.

    D-Wave is a business that operates based on the anticipated results of a physics experiment. It seems most people support the physics experiment – we all think it is a good thing to do (maybe YOU wouldn’t prioritize it over some other experiments, but it is still worthwhile in its own right) and we can all agree that we are learning a lot as a result of D-Wave’s efforts, both from them and from other people who have been motivated to examine AQC/QA because of the attention garnered by D-Wave.

    The experiment, so far, has cost >$100M. There is no way this experiment was going to get done by anything other than an ambitious commercial enterprise. In order to fund the experiment (which we all think is worthwhile to carry out), D-Wave has to play a game that involves PR and hype. I say, if that’s what it takes, then do it. The hype is sensational. Here @ Shtetl-Optimized, we’re not blinded by sensationalism, and I’m quite confident that none of the big names that have become involved with D-Wave (Lockheed, NASA, Google, Bezos, In-Q-Tel) are blinded by sensationalism either.

    We all know hype is hype. Those of us who are skeptical of hype should, after so many years, be able to move past it as a means to a worthwhile end.

    I don’t share the fear that a failure by D-Wave will damage QC research in general. I saw Batman & Robin (with Clooney and the nipples) and then I still went to go see Batman Begins. It was awesome. A failure by D-Wave would only open the door for new endeavors whose sales pitch is based on exactly how they are different from D-Wave and how they have learned (maybe not by explicit teaching Doug! but by paying attention!) from D-Wave’s experiments.

  351. LK2 Says:

    Jeremy #350:

    I am an experimental physicist, and I do not think the D-Wave experiment is worth doing. If nobody else spent >$100M on a bunch of superconducting rings embellished by a fancy black box (Apple i-stuff style), it is because they trusted solid physics and computer science.

    In fact, they now have 512 qubits without global entanglement, but you do not have to be a rocket scientist, just a graduate student, to understand that they cannot maintain a sufficient level of coherence (even locally) to be useful.

    If it had been possible with SC technology, it would already have been realized many years ago in a simple university lab.

    The hype is simply unjustifiable from the point of view of scientific intellectual honesty.
    I’m happy you are a D-Wave enthusiast, and D-Wave is not robbing money from anybody (it actually creates jobs), but this comes at a high price for scientific credibility.
    I live where they are and I know something about them.


  352. Vitruvius Says:

    “We all know hype is hype”? “Those of us who are skeptical of hype should, after so many years, be able to move past it as a means to a worthwhile end”? Really, Jeremy? I think not. If one can’t trust the hype, how can one trust that the purported end is worthwhile? One of the synonyms for hype is propaganda. Where does one draw the line, given that one level of trust has already been forfeited? Once one goes down that road, one’s on a very slippery slope. Perhaps that’s why the kind of attitude you elucidated there, Jeremy, is one of the greatest enablers of evil in the history of our species.

  353. Rahul Says:

    Jeremy Stanson says:

    In order to fund the experiment (which we all think is worthwhile to carry out), D-Wave has to play a game that involves PR and hype. I say, if that’s what it takes, then do it.

    Ah yes. The old excuse: Ends justify the means.

    Speak for yourself. I don’t think the majority here would like to lie & misrepresent to get the money for this experiment.

  354. Rahul Says:

    Jon Lennox #349:

    Even if there were customers for breaking RSA, public success makes the business model self-defeating as everyone stops using it.

    Wonder if someone could intercept, dump & archive packets now to be decrypted once such a QC is available.

    Even if people stop using RSA, all previously encrypted messages are vulnerable, right?

  355. Rahul Says:


    Let’s say you succeeded, and hence the Extended Church-Turing Thesis was disproven.

    Can you sketch out its implications? What would be the major changes, effects, etc.?

    Even before we get to a rigorous proof or disproof, what’s the intuitive feeling among experts? Do you expect it will fall? Are people split on this, or is there a mild consensus? Or is this something most people expect will be disproven, where it’s only a question of obtaining a rigorous proof (as rigorous as can be in light of the remaining conjectures, etc., in your attempted proof)?

  356. Jon Lennox Says:

    Rahul @#354:

    Wonder if someone could intercept, dump & archive packets now to be decrypted once such a QC is available.

    Even if people stop using RSA, all previously encrypted messages are vulnerable, right?

    Yes, I think so — even for traffic using Perfect Forward Secrecy, I believe variants of Shor’s algorithm can crack all the Diffie-Hellman and RSA variants that are currently in use or proposed.

    It seems reasonable to assume the NSA is doing this for messages of interest, both waiting for a practical implementation of QC, and also (when PFS is not in use) in case they can in the future compromise the encryption keys.

  357. Vadim Says:

    What’s amazing to me about this whole debate is the verbal gymnastics some people are willing to go through to defend this company. If the goalposts stayed where D-Wave first placed them, the whole thing would be an unmitigated, uncontroversial failure. Lying about what you’ve got now – and accepting money as if you’re providing something of value – while actually hoping to create it in the future with the earned income, is commonly called a “scam”. Can anyone say for sure whether or not they’ll create something useful in the future? No, but given the demonstrably false claims they’ve made about the present state of their technology, one would have to be very naive to have anything other than an “I’ll believe it when I see it” attitude.

    This is a situation where words are cheap; people can talk and argue forever, going round and round in circles. But until D-Wave creates something that undeniably works better than a classical computer for *something*, they have no record of success. Zero. What exactly have they done, then, to even earn the benefit of the doubt? Would people defending them believe anyone’s claims that they’ve created X without a shred of evidence?

  358. Joe Fitzsimons Says:

    Regarding the complexity of computing electron density, what specific problem are you considering? If you allow all physically possible configurations of matter, then it seems trivially hard to compute classically. It is straightforward to find Hamiltonians which have ground states which are a universal resource for measurement based computation with local rotations applied and which have no local minima. Once the temperature drops enough for the Gibbs state to have a significant overlap with the ground state, sampling such a system should be complete for IQP, and due to the lack of local minima, it should not take long to thermalise.

  359. Scott Says:

    Rahul #355: If the ECT were convincingly falsified by some BosonSampling-like experiment, I think the main implication would simply be that a major experimental milestone had been achieved, and in a way that confirmed theoretical expectations. So, I’d say that the implications would “merely” be as great as (say) when the Higgs boson was discovered, and no greater.

  360. wolfgang Says:


    >> If the ECT were convincingly falsified by some BosonSampling-like experiment

    But this is unlikely according to the paper by P. Rohde et al.

  361. quax Says:

    Jeremy #350, sorry to hear that your well-reasoned argument had to expose you to this ugly truth about yourself, which Vitruvius realized by peering into your soul:

    Perhaps that’s why the kind of attitude you elucidated there, Jeremy, is one of the greatest enablers of evil in the history of our species.

    Yep, being hopeful for D-Wave is just like that. I bet you also cheer on any genocide you come across.

  362. quax Says:

    Vadim #357, you realize that evaluations of D-Wave’s machine were performed according to Lockheed’s and Google’s specifications?

    But anyhow, I salute you for trying to protect these blue chip companies from this nefarious scam. Clearly they have no idea what they are doing and are run by fools.

    BTW if you don’t already do this, you should start boycotting Amazon. As a major investor Jeff Bezos is in on the scam, so you really should not support him in any way.

  363. Sol Warda Says:

    Vitruvius #352: The last sentence of your railing & rambling entitles you to call yourself “Vituperator!” Are you alright? This is a blog for scientific debate, discussions, arguments, opinions…etc., NOT for sermonizing about good or evil! I think you have a problem, and should seek professional help.

  364. Raoul Ohio Says:

    A concept in the chess literature is elegantly summarized by stating the move in which a famous game has gone “off book”.

    Many excellent generalizations of this idea are available. For example, a historian could pinpoint the comment in which a couple of participants in this discussion have gone “off meds”.

  365. Rahul Says:

    Scott #346:

    yes, it’s entirely possible that people will discover much faster ways to solve those problems [simulating electron density for specific systems that actually occur in the lab] with classical computers than we currently know. (Indeed, they already know ways that are a lot better than brute-force.)

    I’m curious, what’s the order of the best known methods for this today? DFT, in general, is N^4, right? Same as Hartree Fock? Or can we already do better?

  366. Scott Says:

    wolfgang #360: The Rohde et al. paper doesn’t say or do anything that wasn’t well-known earlier, or indeed that Alex and I didn’t point out ourselves. The practical difficulty of scaling up BosonSampling experiments, because of the need to observe an n-photon coincidence (and the danger that the probability of one will fall off exponentially with n), is completely obvious. Nevertheless, the recent idea of Scattershot BosonSampling, combined with better SPDC sources and photodetectors and other engineering improvements, should allow scaling up to 10 or 20 photons. And if that can be done, I think it becomes harder for QC skeptics to maintain that some problem of principle would inevitably prevent scaling to 50 or 100 photons. And thus, the ECT comes under even more strain than it was under before.

    If you read Rohde et al.’s note, you’ll see that they don’t dispute the plausibility of scaling to 10 or 20 photons. The issue they’re stuck on is just the mundane one that no finite experiment can ever decisively confirm or refute an asymptotic statement. But notice that that would STILL be an issue, even if BosonSampling were scaled to a million photons!

  367. Rahul Says:

    Scott #366:

    The issue they’re stuck on is just the mundane one that no finite experiment can ever decisively confirm or refute an asymptotic statement.

    Hmm…I may be wrong, but that’s not how I read what they are saying. They accept that a finite experiment can indeed refute an asymptotic assertion *provided* it is combined with an argument that, in principle, the device is arbitrarily scalable.

    What they seem to assert is that this theoretical scaling argument has been made for, say, a universal QC (because of fault tolerance), but not for BosonSampling.

    So they’d willingly accept a finite-sized conventional QC as refuting an asymptotic argument; it’s just that they cannot accept a finite-sized BS device.

    Again, I might be wrong.

  368. Vitruvius Says:

    My apologies for offending your sensibilities, Quax & Warda. Once the term had been introduced by others in comments 309, 312, 319, 323, and 324, I didn’t think further mention of the concept would be so controversial. If I may attempt to clarify, I’m trying (perhaps a bit too effusively) to highlight that, as Jung said, “the man who promises everything is sure to fulfill nothing, and everyone who promises too much is in danger of using evil means in order to carry out his promises, and is already on the road to perdition”, or if you prefer, Cicero’s observation that: “omne malum nascens facile opprimitur; inveteratum fit pleurumque robustius”, or even Virgil’s admonition to the effect that: Φοβοῦ τοὺς Δαναοὺς καὶ δῶρα φέροντας.

    To be perfectly clear, then, I’m not trying to assert that D-Wave or Jeremy or anyone else is being evil at this time, I’m trying to warn that once one starts excusing hype (or falsitude in general) one is embarking on a potentially dangerous path, and so then that under such circumstances one might be well advised to reconsider one’s route planning, in this case because it appears to me that, as Sir Humphrey noted in The Tangled Web, “the precise correlation between the information communicated and the facts, insofar as they can be determined and demonstrated, is such as to cause epistemological problems of sufficient magnitude as to lay upon the logical and semantic resources of the English language a heavier burden than they can reasonably be expected to bear”.

    On the other hand, I realize that perhaps I’m being unjustifiably desirous of an unreasonably small epsilon in the sense that might be extrapolated from Chaitin’s claim that “absolute truths can only be approached asymptotically”, in which case, once again, sorry about that. But then on the other other-hand, it would appear that I’m not alone in that regard, at least judging from some of the other skeptical positions developed in this discussion. Therefore it may simply be, as Sonny & Cher sang, that “the beat goes on”, and thus one would be better off not trying to repudiate me so strenuously, lest such behaviour tip one’s hand.

  369. wolfgang Says:

    @Rahul #367

    This is exactly how I read it.

    So then the question for QC and D-Wave is also not whether you can use 128 spins or 256, but whether you can demonstrate error correction (and so far they do not, of course).

  370. rrtucci Says:

    quax said
    “BTW if you don’t already do this, you should start boycotting Amazon. As a major investor Jeff Bezos is in on the scam, so you really should not support him in any way.”

    Henning, Is that the same Jeff Bezos whose Kindle operating system doesn’t have folders? That Bezos is a fricking computer authority.

  371. quax Says:

    rrtucci, you missed my point. This is not about Bezos as an expert on anything, but the fact that he is profiting from an enterprise that Vadim unmasked as a scam.

    I only offered my help by pointing out to him how he can strike back against this evil. After all this has now apparently become a fight between the forces of evil and good.

  372. quax Says:

    Vitruvius #368, if this were actually about politics and propaganda I’d follow your line of reasoning, but fortunately this is about something much more banal: an IT business advertising its wares.

    As it is, they are governed by the same truth-in-advertising regulations as any other business. If you feel they overstepped the bounds, you can always file a complaint with the FTC.

  373. Greg Kuperberg Says:

    News flash: While D-Wave threads debate whether snow shoes count as ice skates, the Martinis group at Santa Barbara lands a quadruple axel.

  374. Gil Kalai Says:

    Jay (#345): “Gil, their (Q) model “in the wide sense” actually belong to classical behavior ( C).”

    Jay, Well, (B&) do not define the classes (Q) and (C) precisely. The class (Q) refers to quantum annealing processes in a wide sense, and the most detailed description I read was “quantum adiabatic algorithm subject to two additional conditions: (1) the initial Hamiltonian is a transverse field and the final Hamiltonian is an Ising model, (2) the system is coupled to a thermal bath at a temperature and coupling strength that allows for a description via a master equation with decoherence in the energy eigenbasis.” For (C), I think that what (B&) mean is “classical annealing algorithms in a wide sense,” and I think it is accepted on both sides that [C2] and [C3] are in (C). Certainly (C) does not mean “everything which is classically simulable,” so if this is what you mean, I don’t think it is in agreement with (B&).

    Greg (#348): I agree with you that it is reasonable to assume that the evolution of the D-Wave system is in (Q) (and also that it is classically simulable). I personally do not think that this can be established by looking just at the input/output behavior of the algorithm. Greg also mentioned (privately) that my comment #343 looked to him uncharacteristically strong for my commenting style (and he joked that it fits better with his own style). This was not meant to be, and I did try to stay strictly on the technical issues, and also to emphasize that this is my personal judgement. But maybe I could have chosen the wording better. (B&) is a great team of experts, their paper was well received, and I am also looking forward to reading their forthcoming detailed response to (SSSV) and study of (C3).

  375. Gil Kalai Says:

    (Scott #366) “Nevertheless, the recent idea of Scattershot BosonSampling, combined with better SPDC sources and photodetectors and other engineering improvements, should allow scaling up to 10 or 20 photons. And if that can be done, I think it becomes harder for QC skeptics to maintain that some problem of principle would inevitably prevent scaling to 50 or 100 photons.”

    I certainly agree with the second sentence of Scott’s and certainly disagree with the first. The QC-skeptical case is based on two claims: 1) computational speed-up requires quantum fault-tolerance, 2) quantum fault-tolerance is not possible. The BosonSampling idea is in violation of 1). I think that there are good reasons to think that attempts to scale up to 10 or 20 photons will demonstrate why this idea is not viable. (I even guess that BosonSampling’s failure will come before D-Wave’s failure.) Unfortunately, messing with permanents, like hitting NP-hard problems, will lead to the same thing.

    On the positive side, let me also mention, first, that I always liked BosonSampling, years before I thought about the reasons why it is going to fail; second, that both of D-Wave’s two bold ideas are worth exploring (the first idea is to build and study systems with a large number of superconducting qubits, even if not of the highest possible quality; the second is that quantum computers may exhibit some speed-up even for hard optimization problems); and last, that I agree with Scott’s assessment (#359) of the importance of convincingly falsifying the ECT.

  376. Scott Says:

    Gil #375: So do I have you on record that, if and when Scattershot BosonSampling gets performed with 10 photons, you admit that the general views that led you to predict the impossibility of such a thing were wrong?

    What about 8 photons? :-)

  377. Scott Says:

    Rahul #367 and wolfgang #369: Look, if Rohde et al. had simply said it from the outset the way you did—that the case for scalability of BosonSampling has yet to be made, in the same sense that the case for the scalability of universal QC has been made—then while it still wouldn’t be a novel message, I would completely agree with it.

    I guess what antagonized me is that, in the original version of their paper, Rohde et al. claimed to have “proved” the impossibility of scaling up BosonSampling—when in reality, not only did they have nothing remotely like a proof, but their heuristic argument didn’t even account for the possibility of Scattershot BosonSampling, which was already known when their paper came out. (To their great credit, Rohde et al. later toned this down following correspondence with me.)

    Personally, I’d summarize the situation thus: while the case for scalability of BosonSampling has not yet been made, a serious case against its scalability has not been made either. Indeed “in principle,” we know that you could scale up BosonSampling, by layering conventional quantum fault-tolerance on top of it! Of course, it would be much more useful in practice to have a means of scaling that used only the resources that BosonSampling itself uses (interferometers, single-photon or SPDC sources, nonadaptive number-resolving detectors). But it’s not at all implausible that such a means of scaling exists! In particular, it seems likely that even with a significant number of photon losses, Scattershot BS would remain intractable to simulate classically. At present, I don’t know how to give a reduction argument to show that (I can only handle maybe O(1) lost photons). But we certainly also don’t know how to give an efficient classical simulation of BosonSampling with many lost photons, which is what would be needed to show the opposite.

  378. Sol Warda Says:

    Greg #373: How would you characterize the paper of Martinis et al.? Is it a “breakthrough”? A “giant step forward”? “Incremental progress”?…etc. Your opinion please. Thanks.

  379. Greg Kuperberg Says:

    Gil – both the two bold ideas of DWave

    I don’t think that this is an accurate description.

    The first idea is to build and study systems with a large number of superconducting qubits (even not of the highest possible quality);

    They don’t deserve credit for this idea unless it’s specifically with uncompetitive qubits.

    the second idea is that quantum computers may exhibit some speed-up even for hard optimization problems.

    This is not their idea either, this is Grover’s result. Their idea is to hope for a big speedup, not just some speedup, without a theoretical prediction of one. This is why they associate their work with AI.

  380. Greg Kuperberg Says:

    Sol – I’m very conservative with the word “breakthrough”, and I’m also not a serious expert in experimental QC methods. If you press me to describe this latest Martinis work, I would call it impressive, first-rate progress.

    Historically superconducting qubits have had lower fidelity than ion trap qubits. But they are getting there and I think that they are easier to make and easier to scale.

    My impression is that the real breakthrough is the Martinis-Schoelkopf investigation over the past decade. They traced the decoherence in superconducting qubits, optimized their geometry to reduce it, and invented the “transmon” version of a superconducting qubit. (The transmon geometry and the name first appeared in one of Schoelkopf’s papers, I think.) The new paper can be seen as part of this larger effort. But again, I am not a real expert on this and there may be more people who deserve top credit than these two guys.

  381. Scott Says:

    Greg #380: Thanks for the summary! That’s helpful. And if anyone else (especially experimentalists) wants to comment on the new Martinis paper, that would be greatly appreciated as well.

  382. Rahul Says:

    Scott #377:

    Indeed “in principle,” we know that you could scale up BosonSampling, by layering conventional quantum fault-tolerance on top of it!

    Would that really be of use? What benefit do you get over just layering conventional quantum fault tolerance over conventional qubits? The whole point behind Boson Sampling is to sidestep the difficulties of conventional QC, right?

    If you wanted to go in for conventional fault-tolerance, then might as well stick to disproving ECT by building a conventional QC, the hard way?

    Maybe I am missing something. Is there a useful way to mate conventional quantum fault-tolerance on top of Boson Sampling?

  383. Gil Kalai Says:

    Scott, even better than having statements for the record, you can have a look at the argument for why I think errors will scale up and cause BosonSampling to fail rather quickly. I don’t expect much difference between attempts to scale up BosonSampling to many bosons and attempts to scale up qubit/gate universal quantum computers to many qubits (without fault tolerance), and I don’t think that you have provided any good reasons to see any difference besides (much welcome) optimism, hope, and enthusiasm.

  384. Robert Says:

    Vitruvius #368 Why do you quote Virgil in a modern Greek translation? He was a Roman and wrote in Latin (Timeo Danaos et dona ferentes). Could it be that you thought he was Greek and that you can’t tell classical Greek from modern?

  385. Rahul Says:

    In the context of those asking how is a commercial venture to make money using Shor when the potential end use of breaking cryptography is ethically gray, would pushing Grover’s algorithm be a feasible option?

    Or is the potential speedup & utility for Grover’s not high enough to make a good commercial case?

  386. Greg Kuperberg Says:

    Rahul – If you did a good enough job with Grover’s algorithm, then the resulting quadratic speedup absolutely would be worth it for commercial purposes. After all, two of the most practical algorithms in the world are FFTs and sorting algorithms, and both of those are only quadratic speedups.

    One problem is that state-of-the-art qubits are a thousand times slower and a trillion times more expensive than bits. These are only constant factors, but a quadratic speedup is not interesting until you overcome them. Even if you could make qubits with fidelity better than the fault tolerance threshold, the constant factor overhead would probably have to be improved for a quadratic speedup to become useful in practice.
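    [Editor's note: the crossover Greg describes can be sketched with a toy calculation. This is a back-of-envelope illustration using only the ~1000x per-operation slowdown figure he quotes (the trillion-fold cost gap is ignored); the function name is mine.]

```python
# Toy sketch: classical search over N items takes ~N operations, while
# Grover's algorithm takes ~sqrt(N). If each quantum operation is SLOWDOWN
# times slower than a classical one, Grover only wins in wall-clock time
# once sqrt(N) * SLOWDOWN < N, i.e. once N > SLOWDOWN^2.
import math

SLOWDOWN = 1_000  # qubits ~1000x slower per operation (figure quoted above)

def grover_wins(n_items: int, slowdown: float = SLOWDOWN) -> bool:
    """True if Grover's sqrt(N) scaling beats classical N despite the slowdown."""
    return math.sqrt(n_items) * slowdown < n_items

crossover = SLOWDOWN ** 2  # 1,000,000-item search space

print(grover_wins(crossover // 2))  # below the crossover: quadratic speedup loses
print(grover_wins(crossover * 10))  # well above it: quadratic speedup wins
```

    So even ignoring cost entirely, the search space has to exceed roughly a million items before the quadratic speedup pays off the per-operation slowdown.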

    Another problem, if you happen to be D-Wave, is that promising an exponential speedup is not consistent with only delivering a quadratic speedup. In fact, they have accepted quadratic overhead in the scheme to embed other people’s graphs inside the wiring diagram of their qubits — this may or may not cancel out the quadratic speedup of Grover’s algorithm or an adiabatic equivalent. There is also the fact that they haven’t really achieved any speedup.

  387. quax Says:

    Greg #373/380: agreed, this looks very good. I am by no means an expert, but it seems to me this fidelity should allow for error correction to a degree that’ll make scaling up gate-based QC on this basis much more feasible.

  388. Bill Kaminsky Says:

    Scott #381 wrote:

    Greg #380: Thanks for the summary! That’s helpful. And if anyone else (especially experimentalists) wants to comment on the new Martinis paper [Bill's note: That is, the paper "Logic gates at the surface code threshold: Superconducting qubits poised for fault-tolerant quantum computing"], that would be greatly appreciated as well.

    Hey, I’m “anyone else”! And I like being greatly appreciated!

    Seriously though, I’ve worked closely throughout my MIT years with MIT’s superconducting qubit group headed by Prof. Terry Orlando, and while I’m a theorist who has moreover moved ever more toward pure algorithmic concerns, I meet at least what I’d call the “Adapted Yogi Bear Criterion” — that is, being “smarter more experienced than the average bear MIT quantum computing PhD student” when it comes to the nitty-gritty of superconducting qubits. :)

    So, here are the high points and the caveats of the Martinis group paper.

    First, the HIGH POINTS:

    1) Their big, new, concrete benchmark accomplishments for superconducting qubits are exhaustive experiments to show they can achieve a universal gate set with world-record fidelities, namely \( \pi \) and \( \frac{\pi}{2} \) rotations about the X,Y, and Z axes of the Bloch sphere with >99.9% fidelity and Controlled-Z gates with >99.4% fidelity.

    2) To show off what they can do right now with such fidelities, they unambiguously exhibit 5-qubit entanglement for the first time in any solid-state qubit implementation. (Ion traps have done similarly already.) Specifically, they create an approximation to the 5-qubit GHZ state (i.e., an approximation to \( \frac{1}{\sqrt{2}} (|00000\rangle + |11111\rangle) \)) and perform complete tomography to establish its fidelity of 0.817(5) to the ideal 5-qubit GHZ state. Any fidelity beyond 0.5 with an ideal n-qubit GHZ state is an unambiguous signature of genuine n-partite entanglement.

    3) Their big claim for the future is that since their 5 qubit linear array exhibits a universal gate set with fidelities in excess of 99%, the threshold for surface codes (which, as their name suggests, allow for 2D architectures)*, they have the first proof-of-principle of a solid-state, lithographed setup that can be plausibly scaled up into a fault-tolerant universal quantum computer!

    [* For a review article on surface codes, see Fowler, Mariantoni, Martinis, and Cleland "Surface codes: Towards practical large-scale quantum computation" Phys. Rev. A 86, 032324 (2012), freely available at:]
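    [Editor's note: the >0.5 fidelity witness in point 2 can be illustrated numerically. The following is a toy sketch with a simple depolarized GHZ state, not the paper's actual tomography or noise model; the mixing parameter p is an arbitrary choice.]

```python
# Compute the overlap <GHZ| rho |GHZ> of a toy noisy state with the ideal
# 5-qubit GHZ state, (|00000> + |11111>)/sqrt(2).
from math import sqrt

n = 5
dim = 2 ** n
ghz = [0.0] * dim
ghz[0] = ghz[dim - 1] = 1 / sqrt(2)  # ideal GHZ amplitudes

def fidelity_with_ghz(rho):
    """<GHZ| rho |GHZ> for a density matrix rho given as a nested list."""
    return sum(ghz[i] * rho[i][j] * ghz[j]
               for i in range(dim) for j in range(dim))

# Toy noisy state: depolarized GHZ, rho = p |GHZ><GHZ| + (1-p) I / dim
p = 0.8
rho = [[p * ghz[i] * ghz[j] + ((1 - p) / dim if i == j else 0.0)
        for j in range(dim)] for i in range(dim)]

print(fidelity_with_ghz(rho))  # ~0.80625, above the 0.5 entanglement witness
```

    For this toy state the overlap works out to p + (1-p)/32 ≈ 0.806, comfortably above the 0.5 threshold, which is the same kind of check the tomography in the paper performs on the measured state.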


    Now, the CAVEATS:

    1) First, there’s already nontrivial “overhead” in achieving the results here. Namely, while their qubits naturally show super-duper fidelity vis-a-vis control errors (i.e., your gates over- or under-rotating on a given axis, or rotating on an axis slightly off), they show a few-percent chance of leakage errors (i.e., your actually-many-levels superconducting system ends up no longer being in the two levels you’re using to define your qubit) unless complicated active control is used (i.e., pulse sequences that in essence are doing 10-fold more gates for every Clifford-group gate you’re intending to do).

    2) Of course, overhead blows up quite a bit if you’re imagining making a large-scale quantum computer. While you can sorta say you’re making one surface-code-protected logical qubit with just 13 physical qubits, you’re really going to need 1,000 to 10,000 physical qubits per logical qubit for practical protection when your gate fidelities are only just at the 99% fault-tolerance threshold. In addition to physical-qubits-per-logical-qubit overhead, there’s also a lot of code-related-gates-per-logical-gate overhead, in the form of lots of Toffoli gates to be done per actual logical operation you want to do. Since Martinis and company are always impeccably thorough, you can consult the discussion around Table I in the aforementioned surface-codes review paper for a serious breakdown of various scenarios for running Shor’s algorithm, complete with scenarios trading off “time” in the form of more code-related Toffoli overhead against “space” in the form of more physical-qubits-per-logical-qubit overhead. For example, if you don’t add any physical-qubit overhead beyond the original 1,000-10,000 per logical qubit, you’ll need to perform roughly \( 40N^3 \) code-related Toffolis to implement Shor’s algorithm for N-bit integers. (The horror! I want to break RSA right now! You mean I might need to wait waste a whole day in gate overhead to break a 2,048-bit RSA key, since I need to wait like 100 nsec for each of these extra gates? These quantum computers have clearly been oversold! ;) )
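    [Editor's note: the "whole day" quip above checks out numerically. A back-of-envelope sketch using only the \( 40N^3 \) Toffoli count and ~100 nsec gate time quoted in the text; all other overheads are ignored.]

```python
# Gate-overhead time to break RSA-2048, using only the figures quoted above:
# roughly 40*N^3 code-related Toffolis at ~100 nsec each.
N_BITS = 2048            # RSA-2048
GATE_TIME_S = 100e-9     # ~100 nsec per extra gate

toffolis = 40 * N_BITS ** 3          # ~3.4e11 code-related Toffolis
runtime_s = toffolis * GATE_TIME_S   # total gate-overhead time in seconds
runtime_h = runtime_s / 3600

print(f"{toffolis:.2e} Toffolis -> ~{runtime_h:.1f} hours")
```

    That comes to about 3.4e11 Toffolis and roughly nine and a half hours of pure gate overhead, consistent with the quip about waiting the better part of a day.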

    As this has already gone quite long, I’ll throw the floor open to questions (and/or actual experimentalists) now.

  389. Vadim Says:


    Jeff Bezos is an investor, not a scientist. He doesn’t know whether the D-Wave machine is quantum or useful. For him, it’s a calculated risk. Ditto for Google and Lockheed Martin. Is the gamble likely to pay off? Well, it sure hasn’t yet, and I’m guessing no one at Google or Lockheed will be too heartbroken if it doesn’t. To them, it’s basic research. It’s only a useful device in the eyes of D-Wave themselves.

  390. Bill Kaminsky Says:

    One oopsie in, and a TL;DR (“Too Long; Didn’t Read”) summary of, my long comment above (i.e., #387):

    The oopsie:

    My sentence

    1) Their big, new, concrete benchmark accomplishments for superconducting qubits are exhaustive experiments to show they can achieve a universal gate set with world-record fidelities, namely \( \pi \) and \( \frac{\pi}{2} \) rotations about the X,Y, and Z axes of the Bloch sphere with >99.9% fidelity and Controlled-Z gates with >99.4% fidelity.

    was poorly written. Just having \( \pi \) and \( \frac{\pi}{2} \) rotations about the X, Y, and Z axes of the Bloch sphere, along with Controlled-Z’s, isn’t universal, of course. These particular rotations are simply what Martinis’s group uses to benchmark the fidelity; in principle, though, they can do arbitrary rotations about the X, Y, and Z axes.
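    To see why even the restricted benchmark set already does something distinctly quantum, here’s a minimal numerical sketch (my own illustration, not anything from the paper): the benchmarked \( \pi \) and \( \frac{\pi}{2} \) rotations plus a Controlled-Z suffice to build a Hadamard and a CNOT, and hence to prepare a Bell state.

```python
import numpy as np

# Pauli matrices and the identity
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rot(P, theta):
    """Rotation by angle theta about the Bloch-sphere axis of Pauli P."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * P

# A Hadamard from a pi rotation about X and a pi/2 rotation about Y
# (the leading 1j is a physically irrelevant global phase)
H = 1j * rot(X, np.pi) @ rot(Y, np.pi / 2)

# CZ sandwiched by Hadamards on the target qubit gives CNOT
CZ = np.diag([1, 1, 1, -1]).astype(complex)
CNOT = np.kron(I2, H) @ CZ @ np.kron(I2, H)

# Entangle |00>: Hadamard on qubit 0, then CNOT -> Bell state
ket00 = np.array([1, 0, 0, 0], dtype=complex)
bell = CNOT @ np.kron(H, I2) @ ket00
print(np.round(bell.real, 4))  # amplitudes 1/sqrt(2) on |00> and |11>
```

    Full universality additionally needs finer rotation angles (e.g., a \( \frac{\pi}{4} \) Z-rotation), which is exactly the point of the correction above.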

    The TL;DR:

    Holy $#!t !! I finally can say with a straight face that universal quantum computers might be built in the next 2 decades (given an adequate commitment of resources, of course… but not more than the low tens of billions of dollars, I dare say, which really isn’t more than 2 or 3 generations of new microprocessors cost these days). :)

  391. Sol Warda Says:

    Bill #387: Bravo, great summary! Even as a non-expert, I sort of got it, I think. But it still looks like we have a long way to go. A simple question, Bill: if we were to scale it up to thousands of qubits, how long, in your considered opinion, would it take to break RSA-2048, which is 617 decimal digits long? Dr. John Preskill, in a recent talk at the University of Waterloo, displayed the number on the screen and said a universal gate-model quantum computer could factor it in 2 seconds! Do you disagree with that, and why? Thanks.

  392. Jay Says:

    Gil #374,

    Yeah, I understand (C) as “anything classical”, and D-Wave as arguing their computer does not belong to (C) because of [B&]’s result. Glad if you’re right that [B&] would disagree.

    Bill #387,

    Thank you! Do you know if this achievement was due to a specific trick or “just” a series of incremental small improvements? Also, was your prior (on seeing a >30-logical-qubit QC in the next 10 years) changed by this paper?

  393. Bill Kaminsky Says:

    Sol Warda #389:

    As I referenced above (perhaps too briefly, before devolving into a joke about the “horror” of having to wait a whole day to break 2048-bit RSA), Martinis and his coauthors do perform “realistic” estimates of the resource requirements for Shor’s algorithm on 2000-bit integers in their surface codes review article*, by plugging in parameters derived from their present-day qubits.

    And as my joke implied, they indeed estimate times of roughly one day to factor 2000-bit integers.

    [* Again, that article is: “Surface codes: Towards practical large-scale quantum computation,” Phys. Rev. A 86, 032324 (2012), freely available online. See in particular their discussion around Table I.]

    The reason their estimate is about 50,000 times greater than estimates like John Preskill’s for using Shor’s algorithm to break 2048-bit RSA is that estimates like Preskill’s (1) don’t really take into account any error-correction overhead, and (2) moreover assume giga-operations-per-second, like present-day classical computers. In contrast, Martinis’s present-day qubits take 40 nanosecs to do controlled-Z operations and 100 nanosecs to do measurements, and so run at more like 10 mega-operations-per-second.
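    The factor of ~50,000 is easy to sanity-check. A sketch of the arithmetic (the split between clock speed and error-correction overhead below is my own rough decomposition, not from either estimate):

```python
# ~1 day (surface-code estimate) vs ~2 seconds (overhead-free estimate)
day_estimate_s = 24 * 3600
preskill_estimate_s = 2

gap = day_estimate_s / preskill_estimate_s
print(f"overall gap: ~{gap:,.0f}x")  # ~43,200x, i.e. "about 50,000"

# Factor 1: clock speed. ~100 nsec operations (~10 MHz) vs an assumed
# ~1 GHz classical-style clock: a factor of ~100.
clock_factor = 1e9 / 1e7

# Factor 2: the remainder, attributable to error-correction gate overhead.
ec_factor = gap / clock_factor
print(f"clock speed: ~{clock_factor:.0f}x, error correction: ~{ec_factor:.0f}x")
```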

    That said, there’s no physical reason to believe that superconducting qubit quantum computers can’t eventually achieve speeds of giga-operations-a-second or even tens of giga-operations-a-second.

    So, who knows?! By the time people scale up superconducting qubits to make universal quantum computers capable of doing millions of encoded logical operations on thousands of encoded logical qubits (and thus trillions of actual physical operations on tens of millions of actual physical qubits), maybe it’ll take mere minutes or even seconds to break RSA.

    Having written all that, I’m very, very tempted to go off on a tangent. And like Oscar Wilde, I only know how to deal with temptation by succumbing to it. So, here goes:

    If the Edward Snowden leaks have taught us anything, it’s that brute-force breaking of cryptosystems really is at the bottom of the list of even extremely-well-funded crypto breakers like the NSA. It’s always oh-so-much-cheaper-and-faster to find inadvertent errors in the security protocols or to pay vendors to introduce illicit errors in their security protocols or to simply ignore all the encrypted stuff and just focus on the gigantic amount of readily available metadata that amply suffices for 90+% of whatever an intelligence officer would ever want to know.

    I could now go off on a rant about my speculations on what quantum computers are actually going to turn out to be useful for, but as always, my comment has grown too long already and I should stop… at least for the moment. ;)

  394. Vitruvius Says:

    Not to put too fine a point on it, Quax, nor to overstay my welcome at Shtetl-Optimized, but given that D-Wave’s marketing hype is governed by statutory regulation, as you observe in your comment #372, the discussion thereto denotationally is about politics, in Aristotle’s sense of τα πολιτικά, contrary to your counter-claim in that comment.

    Furthermore, since D-Wave is a private Canadian company incorporated under the laws of Canada, it may well be the case that they are not in the first instance governed by the American FTC, as you claim in that comment; rather, it may be that they are, in principle, governed by the relevant Canadian statutes. Sales, however, are probably governed by contract and local statutes, though, not being a lawyer, that’s not my bailiwick.

    Anyway, be that as it may, yes, I am of the opinion that we would all be better off were truth in advertising regulations (or more specifically, in this instance, anti-hype in advertising regulations, and in reporting, and related propaganda) significantly better across the media and business marketplaces. And yes, I am skeptical of what I’ve just proposed, given my dislike of regulations and speech restrictions in general, though fraud (not that I’m claiming that here, sigh) has a long jurisprudential history of being subject to particular constraints.

    And yes, my position on this matter may be due to my aspergery and perhaps excessive enjoyment of philosophical considerations related to the nature of truth, of the sort typified at Kuhn’s Closer to Truth web site, and Scott’s excellent interviews there, and it may be related to the timeless battles between engineering truth and marketing propaganda of the sort typified by the disagreements my engineering background has led me into over the decades.

    Mea culpa. Yet, metaphorically, if I say my bridge will support 10⁴, and marketing says “our” bridge will support “tens of thousands”, and the bridge fails under overload, people will look askance at me, not at marketing. It’s my name on the bridge. As Chesterton wrote, “I’ve searched all the parks in all the cities and found no statues of committees”. Shall I get down off my noble-engineer romanticism pedestal now? ;-)

    Meanwhile it may well be that the plurality of folks couldn’t care less about truth, in which case, given democracy, I’m out-voted. C’est la vie, as Zeno of Citium might have said had French been invented before c. 262 BC. But that doesn’t mean you can just dismiss my participation by re-routing me to the FTC, at least, not politely.

  395. Jeremy Stanson Says:

    The work by Martinis et al. is excellent and fascinating. Being a D-Wave supporter does not mean I can’t see that and support this work as well.

    I note Martinis et al. write that they anticipate significant engineering challenges in scaling up their system. Right or wrong, D-Wave is building something different because they believe the engineering challenges ahead for Martinis et al. are insurmountable today. And to their credit, they have more experience with such challenges than anybody else.

  396. Bill Kaminsky Says:

    Jay #391

    Thank you! Do you know if this achievement was due to a specific trick or “just” a series of incremental small improvements? Also, was your prior (on seeing a >30-logical-qubit QC in the next 10 years) changed by this paper?

    1) It was a series of many, many improvements, including lots of subtle engineering stuff in terms of materials science (especially using molecular beam epitaxy rather than cheap ol’ sputtering to make good aluminium qubits) and first-rate waveguide & resonant cavity design. To see just how long a road it has been, note that the two big tricks of having the qubits in resonant cavities to control the radiative environment and the transmon design to largely eliminate sensitivity to stray charges date all the way back to 2007 and before (and include the innovations of many groups other than the Martinis Group at UCSB… e.g., the transmon design itself comes from the Devoret and Schoelkopf Groups at Yale as well as Blais’s Group at Université de Sherbrook in Québec.)

    2) The updating of my prior on the chances of universal quantum computers with tens if not hundreds of logical qubits in the next 10-20 years was largely due to the embarrassing fact that looking at this recent Martinis Group experimental paper forced me to look at their 2012 review article on surface codes. I honestly didn’t realize there was a code with a threshold around 99% that only requires nearest-neighbor couplings on a 2D lattice, and that moreover requires overhead of “just” 1,000 to 10,000 physical qubits per logical qubit. For shame! I should’ve known that sometime between 2010 and 2012 if I were paying better attention.

    That said, I’m not exactly sure my prior has been updated all that much.

    A) There really are big engineering challenges… though by “big” I mostly mean time- and dollar-intensive, as opposed to nobody-yet-knows-how-to-do-what’s-needed. To take but one example, as Geordie Rose himself has pointed out, the transmon qubits used by the Martinis Group are very big. The crucial parts of the 5-qubit linear array in their most recent paper take up something like 10 square millimeters. I may just be ignorant, but it seems to me that people will need to design at least two substantially new generations of waveguides and resonators, one and then two orders of magnitude smaller than the present ones, if we’re to imagine fitting tens of thousands, and then tens of millions, of these qubits on something you can fit in a dilution fridge.

    B) And again, let me emphasize the time- and dollar-intensive nature of such engineering. It seems to me that building something with thousands of physical qubits, let alone millions, really is something that requires resources well beyond academia… unless you count things like the LHC and ITER as academia (though even there see my last point in my rant below). And it’s unclear to me whether the resources will be forthcoming from the private and/or government sector in the next 10-20 years to do this absent some clear sense that quantum computers will definitely allow us to do something that we can’t do now.


    Elaboration on “Something We Can’t Do Now”/Personal Rant:

    IMHO, code-breaking doesn’t count, since it’s so much easier for those with the necessary billions in their bank accounts to spend them instead on breaking/circumventing codes without brute forcing them.

    Quantum chemistry doesn’t count IMHO either, since similarly it’s much easier for those with the necessary billions to invest instead in making better empirically-tweaked density functional codes or “just” synthesizing umpteen variants of some hypothetical molecule and measuring the darn things.

    Now, granted, D-Wave has shown that one can get roughly $150 million over a decade for quantum computing, but:

    (1) I think much of this was the long-shot bet that somehow there would be some substantial and relatively easily obtainable speedup on some large swath of instances of NP-complete problems, not only making it seem for all practical purposes as if \( NP \subset BQP \), but also letting us enjoy that amazing world by 2015, and

    (2) the necessary funds for truly building a universal quantum computer of the type we’re talking about are perhaps 100-fold greater (i.e., maybe $15 billion or so over a decade or two).

    Finally, I’d be remiss if I didn’t note that the subatomic physics and fusion physics communities are able to extract billions of dollars from taxpayers every decade or so, but I view this as a sociological oddity that can’t necessarily be duplicated in the quantum computing case.

  397. Rahul Says:

    Scott #366:

    Nevertheless, the recent idea of Scattershot BosonSampling, combined with better SPDC sources and photodetectors and other engineering improvements, should allow scaling up to 10 or 20 photons. And if that can be done, I think it becomes harder for QC skeptics to maintain that some problem of principle would inevitably prevent scaling to 50 or 100 photons.

    Pedantic, but maybe that should read “Boson Sampling Skeptics” rather than “QC skeptics”.

    Even if Boson Sampling were completely successful, is there a strong reason for a QC skeptic to alter his priors about universal QC devices?

    Boson Sampling devices seem so different from a more typical, general-purpose QC (say, one that can do Shor factoring) that I’m not sure whether success or failure at one venture means much for the development plans of the other. Would it be more relevant to view a Boson Sampling device as a specialized, dedicated device designed to attack the ECT, rather than as a QC in the more general connotation?

    Is it a reasonable position to stay a QC skeptic while not remaining a Boson Sampling skeptic? How about vice versa? [I’m using QC in the more general sense of a useful device that can factor integers, fold proteins, optimize spin glasses, etc.]

  398. Joshua Zelinsky Says:

    Rahul, #393,

    The ECT seems like one major reason that people are QC skeptics. So if it is demonstrated to be false, a major cause of skepticism should go away. And at least in some cases, like Gil’s, the other reasons for being skeptical of QC apply also to Boson Sampling.

  399. wolfgang Says:

    I am not an expert on anything, but it seems to me that the crucial point is quantum error correction.

    Without it, one can assume that neither BosonSampling nor QC can scale far enough to violate the ECT.

    So it still comes down to the threshold theorem vs. Gil Kalai, Robert Alicki, et al.

    The argument seems to be that quantum error correction is very different from error correction in a classical computer, because the errors are not ‘local’.
    If a classical bit fails, it does not increase the probability that another classical bit in a distant part of the memory fails.
    However, a QC requires the entanglement of n qubits, and thus the errors will be entangled as well, so the assumptions of the threshold theorem will not hold – this seems to be the argument of Kalai, Alicki, etc., as I see it.

    If this were true, the ECT would be safe even if BosonSampling and/or a QC work for a finite number n of bosons/qubits – because scaling to \( n \to \infty \) would be impossible.
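    A purely classical toy model may help make the crux concrete (my own illustration; it says nothing about whether the correlated-errors picture is actually right for quantum hardware): a repetition code suppresses independent errors dramatically, but gains nothing at all against perfectly correlated ones.

```python
import numpy as np

# Toy CLASSICAL illustration of why error correlations matter: a 5-bit
# repetition code decoded by majority vote. This is not a model of
# quantum noise; it only contrasts independent vs. correlated errors.
rng = np.random.default_rng(0)
p, n_bits, trials = 0.05, 5, 200_000

# Independent noise: each bit flips with probability p on its own.
flips_ind = rng.random((trials, n_bits)) < p
logical_err_ind = (flips_ind.sum(axis=1) > n_bits // 2).mean()

# Perfectly correlated noise: with probability p, ALL bits flip at once.
flips_corr = rng.random(trials) < p
logical_err_corr = flips_corr.mean()

print(f"independent noise: logical error ~ {logical_err_ind:.4f}")
print(f"correlated noise:  logical error ~ {logical_err_corr:.4f}")
```

    The independent case gives a logical error rate of roughly \( \binom{5}{3} p^3 \approx 10^{-3} \), while the correlated case stays pinned at p.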

  400. Scott Says:

    wolfgang #398: No, even if some particular assumption of the Threshold Theorem were to fail, the ECT would still not be “safe,” unless you could explain how an efficient classical simulation of the resulting noisy quantum systems was supposed to work. This is a fundamental point that not nearly enough people understand: given the a-priori implausibility of simulating a whole exponential wavefunction in classical polynomial time, even with some noise, the burden of proof falls at least as much on the skeptics as it does on us!

    And I’d say exactly the same about BosonSampling. If Gil wanted to change my mind about it, he could show me how to simulate BosonSampling in classical polynomial time, when (say) 5% of the photons are randomly lost. It’s not impressive enough to me that our hardness results don’t currently extend to that case.

    In other words, the specific cases where we manage to relate the hardness of simulating our quantum system to more “conventional” apparently-hard computational problems (like factoring or the permanent) are just the sharp point of the spear. Remove it, and you still have the entire rest of the spear hurtling at you! (And this is a spear that could develop additional sharp points in mid-flight, if people work at it, and that can only be stopped by simulating it in classical polynomial time. A strange kind of spear.)

  401. Scott Says:

    Rahul #396: Yes, Josh Zelinsky #397 gets it right. If BosonSampling were completely successful, then any remaining QC skeptics would be in the comical position of arguing that, yes, the Extended Church-Turing Thesis is indeed falsified by quantum mechanics, exactly as quantum computing theory predicted, and exactly contrary to what they (the skeptics) predicted. But Shor’s algorithm?! Now that prediction of quantum computing theory would really be too much. It would be like Lord Kelvin switching after Kitty Hawk from saying that heavier-than-air flight is impossible to saying that, yes, of course it’s possible, but obviously not for more than a few minutes at a time.

    Note that Gil, in this thread, has predicted based on his general views that BosonSampling can’t scale even to 8 or 10 photons. So if it does scale that far, isn’t it obvious that that will call his general views at least somewhat into question?

  402. Scott Says:

    Rahul, two additional points that might be helpful to you:

    (1) If you had a fault-tolerant universal QC, then you could use it to simulate an ideal BosonSampler. So in that specific sense, it’s not possible to be a BosonSampling skeptic without also being a QC skeptic. (Of course, one could believe in fault-tolerant QC, but not believe BosonSampling devices by themselves will ever do anything computationally interesting without fault-tolerance layered on top of them. That’s a perfectly consistent position.)

    (2) Even if you had a fault-tolerant universal QC, if you wanted a particular kind of evidence that your QC was hard to simulate classically—namely, evidence based not on the “specific” problem of factoring, but on a much more “generic” (and likely #P-complete) problem, like approximating Gaussian permanents—then the only way we know how to get that evidence, would be to use your universal QC to simulate a BosonSampler. That was actually the original idea that led us to BosonSampling—the idea of implementing it “natively,” using real physical bosons, only came later! And it’s a reason why BosonSampling would still be of theoretical interest, even after we had universal QCs capable of factoring and other more “useful” tasks.

  403. Rahul Says:

    Scott #400:

    No, that argument seems unsatisfactory to me. Indeed, any skeptic asserting the unlikelihood of universal QC may be consistent with the ECT, but is the ECT necessarily the only good reason to be skeptical of QC?

    e.g. I can be skeptical of “dogs can fly” without really believing in a faith-based “Mammalian Non-Flight Thesis” (MNFT) that asserts “no mammal can fly.” If, at some date, people discover bats as a counterexample and hence soundly refute the MNFT, that doesn’t really weaken my skeptical case against “dogs can fly”, does it?

    Cannot one be skeptical of QC on grounds outside of ECT? (Though sadly, I’m not smart enough to articulate what such grounds might be.)

    Couldn’t it turn out that the ECT is an overly strong (and hence invalid) generalization, so that something like your Boson Sampling device succeeds(*) without really bolstering the case for QC? In practice, are all (respectable) QC skeptics wedded to the ECT as their only reason?

    *Ignoring the theoretical lacunae for the moment e.g. the case for scalability of BosonSampling has yet to be made etc.

  404. Greg Kuperberg Says:

    Bill Kaminsky – I think that you don’t need to get so much into forecasting the future in response to this exciting news from Martinis et al. Yes, it’s interesting that transmon qubits are physically large (relatively speaking). But who is to say what will happen to fidelities as they get better at this; and who is to say what quantum algorithms will be tried with any number N of qubits. I would just say, walk before you can run. (And maybe I’m missing something, but I don’t think that Shor’s algorithm has to take cubic time?)

    Anyway, a question: Why would a large transmon-type qubit have less quantum noise than a small one? I understand perfectly well that quantum mechanics is not strictly a microscopic theory. That is, the relevant parameter to see quantum effects is the number of available states and not the physical size of the object. But generally speaking the number of available states and the number of noise sources go up with size. So how does it happen in this case that size is your friend?

  405. Rahul Says:

    Scott #401 :


    Re: simulating a Boson Sampling device with a full QC. Technically, you’re right, I guess. I just feel it’s a lil’ bit of a trick answer. When I hear “BS skeptic,” I take it to mean someone skeptical of BS working out by mere scaling of its current incarnation or some related modification, not of using a full-blown universal QC to simulate it.

  406. Alexander Vlasov Says:

    Scott #400, I agree that BosonSampling may play against some ideas about classical simulation of QC, but to be more pedantic I’d say, e.g., that although I never considered the ECT necessarily true even for classical physical systems, I still doubt the impossibility of efficient simulation of BosonSampling by a classical stochastic process. That’s because the ECT is an abstract conjecture, whereas my doubts about BS are due to concrete troubles and guesses.

  407. fred Says:

    Interesting update
    “Logic gates at the surface code threshold: Superconducting qubits poised for fault-tolerant quantum computing”

  408. Scott Says:

    fred #405: Yes, many people have been discussing that paper above!

    And Rahul: I can think of almost no QC skeptics who don’t believe the ECT, either explicitly (like Levin, Wolfram, ‘t Hooft) or implicitly, as an unacknowledged consequence of their other beliefs. Maybe you can be one of the first! :-) But keep in mind that, if the ECT is false, then (by definition) SOME form of post-classical computation is possible.

  409. Rahul Says:

    Scott #406:

    I wouldn’t be surprised if I’m the first. :) Probably reflects more on my ignorance than anything else.

  410. Jay Says:

    Fred #405,

    Interesting paper indeed. You might enjoy reading Bill Kaminsky #387 and beyond :-)

    Bill Kaminsky #395,

    Thanks again, this is the kind of information one can’t have but by talking to a “more experienced than the average bear”. And please forgive the nitpick: Sherbrooke. :-)

    I think you strongly underestimate how costly is synthesizing and analysing umpteen variants of some molecules for quantum chemistry/material science, and the interest security agencies would have to brute force breaking a series of stored messages and transactions. We’ll see!

  411. Nick Read Says:

    Wolfgang #398: a couple of comments on locality, though I am aware I may get into deep water and be forced to comment on my friend Gil Kalai’s ideas, which I’m not ready to do at the moment!

    Your comment sounds rather close to saying that entanglement is non-local and so enables fast transmission of information. It is true that (according to quantum mechanics) entanglement can occur over long distances, for example as in the EPR thought experiment, in which e.g. the spins of two well-separated particles can be entangled. This is possible because the particles became correlated (entangled) back in the past, when they were close together.

    But it is also known that such entangled states cannot be used to transmit information faster than the speed of light over long distances, because in fact they cannot be used to transmit information at all. Alice, making measurements on one of the qubits (particles), cannot use it to send a message to Bob, who is making measurements on the second, which was entangled with the first.

    It seems to follow that the fact that an error process, which flips or effectively makes a measurement locally on the first qubit, has occurred cannot be transmitted to the second qubit like a message. The errors don’t propagate nonlocally, at least not simply because there was entanglement. I’m not saying that errors can’t be propagated over long distances; I’m just saying that the existence of entanglement does not on its own mean long-range propagation of errors.
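    This no-signaling point is a standard textbook calculation, and it can be checked numerically in a few lines (a minimal NumPy sketch of my own): however Alice’s half of a Bell pair gets measured, Bob’s reduced density matrix remains exactly \( I/2 \).

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2); qubit 0 is Alice's, 1 is Bob's
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def bob_state(rho):
    """Bob's reduced density matrix: partial trace over Alice's qubit."""
    return np.einsum('abac->bc', rho.reshape(2, 2, 2, 2))

P0, P1 = np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)
I2 = np.eye(2, dtype=complex)

before = bob_state(rho)

# Alice measures her qubit in the Z basis; averaged over her outcomes
# (which Bob doesn't know), the joint state is sum_k (P_k x I) rho (P_k x I)
rho_after = sum(np.kron(P, I2) @ rho @ np.kron(P, I2) for P in (P0, P1))
after = bob_state(rho_after)

print(np.allclose(before, after))  # True: nothing reaches Bob
print(np.round(before.real, 3))    # the maximally mixed state I/2
```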

    Notice that part of the analysis of errors here involves the assumption that the couplings between degrees of freedom in quantum systems are local in space and time (as they are in classical systems). Effects can be propagated in long-range ways, say by particles (such as photons), but the coupling of those to your qubits, where they are emitted or absorbed, is local.

    Of course, errors are still bad, and will mess up your computation at the end of the day when you come back and want to use the state, including the first qubit where the error event occurred.

    Again, I have no idea if these remarks have any bearing on Gil’s ideas.

  412. Greg Kuperberg Says:

    Scott – ‘t Hooft gave a talk here at UC Davis over the summer. His talk and comments afterwards made clear that what really bothers him is quantum probability itself and not quantum computation. Even though he understands quantum probability as well as anyone, he doesn’t like it. He doesn’t believe quantum computation either, but I almost got the sense that he would be happiest if that were what proves him wrong.

    Nick – The way that I understand quantum “mechanics” is through the lens of quantum probability, which is nothing other than non-commutative probability. In my view, physicists should consider it significant that mathematicians, specifically probabilists, use the formalism of an algebra of observables purely for their own reasons. Quantum probability is then the mathematics of non-commutative algebras of observables, specifically, von Neumann algebras. A (quantum) state is then nothing other than a non-commutative probability distribution.

    From this viewpoint, entanglement is classically non-local but quantumly local. So, since quantum probability is true, it’s local. :-) Entanglement is a special case of inseparable states, and inseparable states in general are nothing other than a quantum or non-commutative extension of statistical correlation. No one worries whether statistical correlation is non-local or action at a distance or what have you. Nor should they.

  413. Nick Read Says:


    That’s all fine with me—and I guess you are agreeing with me—except that there are some people who DO seem to worry or make arguments based on precisely those concerns (see e.g. Wolfgang’s post), and it was those people (straw men, perhaps?) that I was trying to answer. Of course, those people are not me, and not quantum physicists.

    Btw I’d like to emphasize that your account of quantum mechanics as non-commutative probability omitted all mention of dynamics—time dependence—and also that locality of the Hamiltonian, which controls those dynamics, is an essential principle in the analysis of quantum systems.
    That is where the actual work of error analysis begins.

  414. Greg Kuperberg Says:

    Nick – I have nothing against dynamics, and of course dynamics can be defined within quantum probability. But the type of dynamics that is most relevant to studying errors is a quantum operation (or TPCP map) rather than a unitary operator. Or, in the continuous version, I would say a Lindblad equation rather than a Schrödinger equation. I find these easier to motivate from the viewpoint of quantum probability, even if the math is no simpler than if you start with vector states and unitaries.

  415. Nick Read Says:

    TPCP?

  416. Greg Kuperberg Says:

    “Trace preserving, completely positive map”. This is the quantum generalization of a stochastic matrix, i.e., a potentially realistic linear map on density matrices. In the classical special case of quantum probability (since a not-necessarily-commutative algebra of observables could, after all, be commutative), it is exactly a stochastic matrix.
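    The correspondence can be seen concretely in a few lines (a minimal sketch of my own, using a bit-flip channel as the example): restricted to diagonal density matrices, the quantum operation acts on the diagonal exactly as a stochastic matrix.

```python
import numpy as np

p = 0.2  # flip probability
X = np.array([[0, 1], [1, 0]], dtype=complex)

# Kraus operators of a bit-flip channel (a simple TPCP map)
K = [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * X]

def channel(rho):
    return sum(k @ rho @ k.conj().T for k in K)

# Trace preservation: sum_k K_k^dagger K_k = I
assert np.allclose(sum(k.conj().T @ k for k in K), np.eye(2))

# On a diagonal (i.e., classical) state, the channel acts on the
# probability vector exactly as the stochastic matrix [[1-p, p], [p, 1-p]]
probs = np.array([0.9, 0.1])
rho_out = channel(np.diag(probs).astype(complex))
S = np.array([[1 - p, p], [p, 1 - p]])
print(np.allclose(np.diag(rho_out).real, S @ probs))  # True
```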

  417. Vitruvius Says:

    I was re-reading this excellent page and realized that I had failed to notice and reply to a direct question from Robert to me in his comment #384. Sorry about that, Robert. For the record, and since Robert left no link via which I can reply by mail: no, I didn’t think Virgil was Greek (I’m dumb, but not that dumb), and I did quote him in Ancient Greek (not in modern Greek, as Robert thought, and assuming my reference is correct) in an attempt to be cute.

  418. Gil Kalai Says:

    Dear Nick, It is you indeed. Wow! Warm greetings from Jerusalem (also literally), great seeing you over here. –Gil

  419. Nick Read Says:

    Gil: Greetings! We still have a foot of snow, slowly melting.

    Greg: Thanks for the clarification. When using such an approach (Lindblad, or TPCPs), in which errors/dissipation are represented by such operators, one should be especially careful to ensure that the TPCPs used in the effective description have a form that can be derived, or at least justified, in terms of an underlying LOCAL Hamiltonian dynamics of the form I tried to sketch in my comment, perhaps after “integrating out” some degrees of freedom. It would be all too easy to sneak in unphysical properties by writing down such operators directly.

  420. Bill Kaminsky Says:

    To Greg, who in comment #404 wrote:

    Anyway, a question: Why would a large transmon-type qubit have less quantum noise than a small one? I understand perfectly well that quantum mechanics is not strictly a microscopic theory. That is, the relevant parameter to see quantum effects is the number of available states and not the physical size of the object. But generally speaking the number of available states and the number of noise sources go up with size. So how does it happen in this case that size is your friend?

    I’m pretty sure that the shunt capacitor is the primary reason for the size of a present-day “transmon” qubit (which, again, for those just joining the discussion now, stretches to hundreds of microns and thus isn’t anywhere near true microscale, let alone nanoscale).

    To elaborate, the transmon’s main claim to fame is that it’s much more immune to decoherence caused by “charge noise”, which is what happens when a thermally-excited normal quasiparticle tunnels through your Josephson junction rather than a superconducting Cooper pair. Such a quasiparticle causes decoherence by perturbing your qubit’s energy levels, essentially electrostatically, by some nontrivial fraction of \( e^2 / 2C_J \), where \( e \) here is the charge of an electron rather than 2.718281828…, and \( C_J \) is the capacitance of your qubit’s Josephson junction. To suppress this perturbation, you add a big ol’ honking capacitor in parallel with your Josephson junction (i.e., a “shunt” capacitor).
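    To put rough numbers on the \( e^2 / 2C_J \) scale, here’s a back-of-envelope sketch; the capacitance values are my own illustrative guesses, not measurements of any particular device:

```python
# Charging energy E_C = e^2 / (2C), expressed as a frequency E_C / h.
# Capacitances below are illustrative guesses, not device data.
e = 1.602176634e-19   # electron charge, in coulombs
h = 6.62607015e-34    # Planck constant, in J*s

def charging_energy_GHz(C_farads):
    return e**2 / (2 * C_farads) / h / 1e9

C_bare = 2e-15        # femtofarad-scale bare Josephson junction (assumed)
C_shunted = 65e-15    # junction plus a big shunt capacitor (assumed)

print(f"bare junction:  E_C/h ~ {charging_energy_GHz(C_bare):.1f} GHz")
print(f"with shunt cap: E_C/h ~ {charging_energy_GHz(C_shunted):.2f} GHz")
```

    With these (assumed) numbers, the shunt pushes \( E_C/h \) from the ~10 GHz scale down to a few hundred MHz, which is the sense in which the big capacitor buys charge-noise immunity at the cost of physical size.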

    But we’re not done yet! I’m pretty sure there’s then a significant secondary reason for the transmon qubit’s size. Namely, since your qubit’s energy levels have gaps corresponding to microwave frequencies, you of course want to shield your qubit so that—to the maximum extent possible—only the microwaves you purposely use to drive quantum gates ever hit the qubit. And this task just got harder now that you have a big ol’ honking shunt capacitor potentially (no pun intended) acting as a big ol’ honking microwave antenna. Thus, you place the qubit-plus-shunt-capacitor in a resonant cavity created within their plane by a coplanar waveguide and above-and-below by “ground planes” (which are layers of uninterrupted superconductor that act as perfect EM shields).

    Actual superconducting qubit circuit designers are welcome to correct me if I’m wrong. (People from other vocations are able to correct me too, of course… but that’ll be much more embarrassing for me! ;) )

  421. Sam Hopkins Says:

    I think there is a way to believe in the ECT thesis and also accept BosonSampling as a possibility: namely, to claim that since BosonSampling is not a decision problem, it is not something we should have expected a computer to do anyway (it is closer to “making toast”, to use an example that Scott often gives in talks).

    I’m not sure I buy into that, but I at least feel that ECT does not need to be the same as the claim that a computer can efficiently simulate any physical system.

  422. Jeremy Stanson Says:

    Bill # 420

    In addition to the miniaturization of the qubits, Martinis et al. also have to figure out how to scale their input/output system. In Figure S3 of their paper, you’ll see that they have at least 2 control lines per qubit. Even 1 external control line per qubit is unscalable in a cryogenic environment. You simply cannot continue to route more and more wires from external sources into a dilution fridge.

    Following up on my comment # 395, D-Wave has identified and solved this problem. See the D-Wave architecture paper from earlier in this thread (1401.5504) as well as the wealth of other papers they have published on the subject. D-Wave implements on-chip programming circuitry (e.g., the DACs, etc. that Martinis et al. show outside of their fridge in Fig S3) that requires a fixed number (or at least, a number that scales significantly less than 1:1) of external control lines regardless of the number of qubits in the processor. Bringing these control elements onto the processor itself is essential to scale a superconducting device and also significantly impacts the design of the qubits themselves.

    Martinis et al will also start to see significant fabrication discrepancies between qubits as they scale down qubit size and scale up the number of qubits. D-Wave implements on-chip devices to compensate for such fabrication variations. These devices also require control signals, which even further necessitate the on-chip programming circuitry.

    And as Douglas Knight pointed out at # 323, D-Wave has developed the fridge technology that Martinis et al. are going to need to use. Conventional DRs are suitable for single-shot experiments, not reliable QC operation.

    This doesn’t even get into the magnetic shielding issues and large-scale superconducting IC fab issues that D-Wave has also already solved.

    All of the above issues, and many more, still need to be addressed by Martinis et al. This is why I say that impressive results with very few qubits, though fascinating in their own right, are not at all representative of progress towards a large scale QC.

    I maintain that the Martinis et al. work is excellent. It is different progress towards large-scale QC than the progress of D-Wave, but it is by no means better or more significant than the progress of D-Wave.

  423. Greg Kuperberg Says:

    Nick – Yes you are right of course. I guess at heart, I approach quantum computing as an information theorist or computer scientist, rather than a physicist. In physics, locality is mostly taken to be geometric locality. In information theory, instead, it is causal locality. Causal locality is in principle more correct, more directly the point. For example, you can talk about a state which is local to a neutrino, even as the neutrino passes through the Earth. In practice, certainly in condensed matter physics, geometric locality is the dominant reason for causal locality. (Obviously by special relativity, geometric locality implies causal locality, but the converse is not true.)

    Likewise in physics, dynamics means response to forces. That’s correct of course and it is the practical issue for constructing quantum devices (or any devices). But in information theory, dynamics simply means any time evolution. The time evolution could be a computer program. Whether or not it is one, you could have a model for it without knowing the forces. Of course, any physical implementation of time evolution depends on forces (and on Newton’s laws and Planck’s constant and so on).

  424. Bill Kaminsky Says:

    To Greg’s parenthetical question at #404

    (And maybe I’m missing something, but I don’t think that Shor’s algorithm has to take cubic time?)

    Oh, good point!

    Up until your comment (and despite seeing the several different surface-code-related gate overhead estimates for Shor’s algorithm in Martinis’s surface code review), I only knew Shor’s algorithm from its exposition in Chapter 5 of Nielsen and Chuang’s textbook. As such, I thought generating a nontrivial factor of an \(N\)-bit composite integer always takes \( O(N^3) \) time and slightly over \( 2N \) logical qubits (i.e., there are logarithmic corrections in Nielsen and Chuang’s exposition).

    But having finally read some papers, I now understand one can do substantial parts of Shor’s algorithm concurrently (i.e., you always need \(O(N^3)\) gates, but that doesn’t mean \(O(N^3)\) time; indeed, by using \(O(N^3)\) logical qubits, you can reduce the running time to \(O[(\log_2 N)^3]\)).
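    A quick arithmetic sketch of the gates-versus-depth tradeoff described above (the function names and the decision to drop all constants and log factors are mine, purely for illustration):

    ```python
    import math

    # Illustrative scaling only: Shor's algorithm on an N-bit integer needs on
    # the order of N^3 gates. Run serially with ~2N logical qubits, that's
    # ~N^3 time steps; run with massive parallelism (~N^3 logical qubits),
    # the circuit depth can drop to ~(log2 N)^3. Constants are swept under the rug.

    def serial_time_steps(n_bits):
        return n_bits**3               # ~N^3 sequential gate layers

    def parallel_depth(n_bits):
        return math.log2(n_bits)**3    # ~(log2 N)^3 layers, given enough qubits

    n = 2048  # an RSA-2048-sized modulus
    print(f"serial:   ~{serial_time_steps(n):.3e} layers with ~{2*n} logical qubits")
    print(f"parallel: ~{parallel_depth(n):.0f} layers with ~{serial_time_steps(n):.3e} logical qubits")
    ```

    For \(N = 2048\) the depth estimate falls from ~\(10^{10}\) layers to ~\(11^3 = 1331\), at the cost of a correspondingly enormous qubit count.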

    More embarrassingly, I see now I was misreading Table I of Martinis’s surface code review article. It merely lists different ways to run Shor’s algorithm, and the Toffoli gate counts it gives have to do with modular exponentiation in Shor’s algorithm, not with code overhead.

    So, also, I was perhaps wrong to besmirch John Preskill’s estimate of mere seconds to factor RSA-2048 as neglecting error-correcting-code-overhead. Potentially he was including it, and assuming a hugely parallel Shor’s algorithm.

  425. Joshua Zelinsky Says:

    Sam, you can likely, without too much work, turn BosonSampling into a class of decision problems by asking yes-or-no questions about how accurate a given approximation is for part of the distribution, in a way similar to how one turns factoring into a decision problem.

  426. Scott Says:

    Joshua #423: Unfortunately, no one has figured out yet how to define a decision problem based on BosonSampling, that doesn’t seem much, MUCH harder than BosonSampling itself. The basic difficulty is that, unlike factoring (which has a unique right answer), BosonSampling is a sampling problem (where the goal is to sample a distribution with exponentially-large support). But I very much hope for a breakthrough on this problem.

  427. Nick Read Says:

    Greg: Great, that is interesting.

    What you call “geometric” locality is also a basic principle in relativistic quantum field theory (though perhaps it gets modified a little in string theory, AdS/CFT, and so on), and so of physics generally. Trying to “disentangle” the roles of “geometric” locality from what you consider local in your information theory point of view of entanglement is part of what makes quantum information theory in CM/QFT a nice subject. (Perhaps you recall my talk on real-space entanglement spectra from the Simons Center conference; locality was a key motivation, developed further since then.)

    Is there anything fairly succinct I can read on what you call “causal” locality in information theory?

  428. Raoul Ohio Says:

    I am having a little trouble with the notion of “believing the ECT thesis”.

    I can say that:

    (1) “it sure sounds plausible”,

    (2) “to discuss computation and algorithms, you have to start somewhere, and this is as good a place as has been suggested”,

    (3) “etc.”.

    I can’t say that I “believe it”. If tomorrow someone publishes something outside of ECT, or (less unlikely) some kind of diagonal argument showing a flaw, I will think “how about that? I’ll be darned.”.

    This situation has an obvious analogy with basing mathematics on ZF, ZF+AC, ZF+MA, or whatever brand you prefer. I recall looking into the five axioms of ZF decades ago and thinking they were for the convenience of the theorem builders as opposed to being something obviously correct (unfortunately, WTF? had not been invented yet).

  429. quax Says:

    Vitruvius #394 wrote:

    “… it may be related to the timeless battles between engineering truth and marketing propaganda of the sort typified by the disagreements my engineering background has lead me into over the decades.”

    Your point is quite valid for large organizations where marketing and engineering are separated by miles. D-Wave, on the other hand, is rather small; when I interviewed Geordie Rose last fall he told me that the company is just now getting to the point where he doesn’t know everybody’s name anymore, and that marketing is done by just one person who only spends 50% of his time on it.

    I think the contention is simply a consequence of two different playing fields with very different rules. Scientific publications have to adhere to a high standard of factuality backed by ironclad evidence, and I am sure that the D-Wave papers published in Nature had to live up to this standard to pass peer review.

    Marketing hype, on the other hand, only has to toe the line of truth-in-advertising regulations. Hype is not only allowed but expected, as you are not only selling a product but a vision. From my vantage point D-Wave is behaving just as any other IT company, while carefully trying not to overstep the hype into indefensible territory. Hence Geordie at one point corrected me with regards to their claims on taking on NP-hard problems.

  430. Greg Kuperberg Says:

    Henning – Geordie Rose on your blog in 2013: “We never claimed that our systems could efficiently solve NP-hard optimization problems. Don’t take my word for it — it would be great if you could search for a counterexample. You won’t find one!”

    Geordie Rose in an interview with Michael Feldman in 2005: “The system we are currently deploying, which we call Trinity, is a capability-class supercomputer specifically designed to provide extremely rapid and accurate approximate answers to arbitrarily large NP-complete problems.”

    How about that. In that period, as some of us still remember, Rose contradicted the PCP theorem. The theorem says that approximate answers to an NP-complete problem can be made as hard as the exact answer.

  431. Douglas Knight Says:

    Quax, if it bothers you so much when I use the word “fraud,” would you rather I use the word “quacks”? or…

  432. Sol Warda Says:

    Douglas #429: What do you make of this? The vaunted NSF collaborating with a bunch of “fraudsters”? Oh my, the times we live in…!

  433. Joshua Zelinsky Says:

    Scott #424, ah, is the problem then the exponential support? If so, it seems like there should be two distinct versions of the ECT, ECT-general and ECT-decision (where the second is the same claim but only for decision problems). Is there some way perhaps to show that the same issue results not from approximating BosonSampling but by solving some related decision problem involving the permanent?

  434. Nick Read Says:

    Re approximation of NP-hard optimization problems: it’s good to remember that while some NP-hard opt problems are also hard to approximate, some are relatively easy.
    The classic NP-hard optimization problem is the traveling salesman in the plane, for which an approximate tour of n cities, with length within a factor of \(1+\epsilon\) of the minimum, can be computed in time of order \(n(\ln n)^{1/\epsilon}\), and you don’t even need a quantum computer.
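    Just to see how the \(1/\epsilon\) in the exponent bites, here is a small sketch evaluating Nick’s quoted running-time bound for a few accuracy targets (illustrative values of my choosing; this merely evaluates the bound, it does not implement the approximation scheme itself):

    ```python
    import math

    # Evaluate the quoted bound n * (ln n)^(1/eps) for a (1+eps)-approximate
    # planar TSP tour, for a fixed instance size and shrinking eps. The point:
    # the bound stays polynomial in n for fixed eps, but the exponent on the
    # log factor grows quickly as the approximation gets tighter.

    def tour_time_bound(n, eps):
        return n * math.log(n) ** (1.0 / eps)

    n = 10**6  # a million cities (illustrative)
    for eps in (1.0, 0.5, 0.1):
        print(f"eps = {eps}: bound ~ {tour_time_bound(n, eps):.3e}")
    ```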

  435. Greg Kuperberg Says:

    Nick – But the target problem for D-Wave was then and often still is maximum independent set, which is equivalent to max clique. (And not just for planar graphs.) As it happens, that NP-complete problem is especially ripe for PCP hardness of approximation.

  436. Scott Says:

    Joshua #433: Actually, you can have at least three different versions of the ECT! Namely ECT-sampling-problem, ECT-decision-problem, and ECT-promise-problem. You might’ve worried that a fourth version, ECT-search-problem, would also be needed, but my paper The Equivalence of Sampling and Searching shows that it’s equivalent to ECT-sampling-problem.

    I confess that, until right now, I hadn’t seen the comment of Sam that you were responding to! Anyway, personally, I don’t see why search and sampling problems (which again are interchangeable by my result) shouldn’t count as perfectly legitimate computational problems. They show up all the time in modern theoretical computer science: e.g., find a Nash equilibrium. Sample a random matching. And in many of those cases, we don’t know how to define an equivalent decision problem: “equivalence of search and decision” really is special to the NP-complete problems and a few other problems (like graph isomorphism).

  437. quax Says:

    Douglas Knight #431: much better. Quackery is, to my knowledge, not established as a criminal term, and conjures up such things as homeopathy, where the practitioners dearly believe in the validity of their medicine. Oftentimes they also try to find some scientific evidence to support their claims. So, all around, a much better expression.

    Still don’t think it’s fair, but I’ll take it over fraud any day.

  438. quax Says:

    Greg #430: and there you have it, he didn’t say “efficiently solving an NP-hard problem.”

    The beauty of a term like “accurate approximation” is that it is perfectly malleable in everyday English (regardless of whether it has a precise definition in complexity theory).

    Seriously, if this is the best you’ve got, it’s not much to go on. It’s such a business fluff interview, it has forward looking statement disclaimer written all over it.

  439. Rahul Says:

    quax #438:

    Can you explain how D-Wave’s machine can approximately solve an NP-hard problem, even by everyday-English standards?

    As an aside, isn’t “designed to provide extremely rapid and accurate approximate answers” an oxymoron? Can an answer ever be accurately approximate?

  440. quax Says:

    Rahul, you are making my point; this was obviously an aspirational statement. You know, one of those forward-looking ones.

    BTW, back then I paid no attention to them at all as this didn’t pass my BS detector.

    That only changed when I realized they actually got to the point of being able to sell something, and that they managed to increase integration density quite consistently.

  441. Rahul Says:

    There’s a difference between a forward looking statement & outright bullshit & I think it is useful to maintain that distinction.

    Claiming “D-Wave’s machine can approximately solve an NP-hard problem” falls into the bullshit category, I think.

    If you, at some point, upgraded them from BS to promising, I’d say it’s your mistake, but that’s ok: these are subjective opinions. We may disagree.

    OTOH, specific BS statements like the one Greg pointed out are hard to explain around.

  442. quax Says:

    Rahul, in my experience many forward looking statements fall into the BS category. Especially when it comes to IT.

  443. quax Says:

    “OTOH, specific BS statements like the one Greg pointed out are hard to explain around.”

    No, don’t think so, easy exercise in spin. But it’s getting late and I already took up way too much bandwidth here.

  444. Greg Kuperberg Says:

    quax – “and there you have it he didn’t say efficiently solving a NP hard problem.”

    *Forehead palm* Why didn’t I think of that!? Rose said extremely rapid, but he didn’t say efficiently.

    That settles it. This subthread is an ex-debate. You have nailed it to the perch.

    “The beauty of such a term as accurate approximation is that it it is perfectly malleable in every day English (regardless if it has a precise definition in complexity theory).”

    If anyone else here wants to learn the material: What Hastad showed about max clique (= max independent set) is that it can be no easier to find a molehill than to find Mount Everest. This is using PCP inapproximability methods.

  445. rrtucci Says:

    Scott and Gregg (and our big Lubos too), what do you think of Susskind’s recent paean to complexity theory?
    Only a physicist could write a 44-page paper about complexity theory without ever mentioning a single complexity class :)

  446. rrtucci Says:

    Sorry, I meant Greg. I got carried away with the doubling of the last letter, thanks to Scott’s pernicious influence

  447. Gil Kalai Says:

    Locality: Wolfgang #399 Nick #411

    Indeed my proposal for realistic modeling of quantum noise asserts that any physical process that creates entanglement between two qubits leads to positive correlation of errors for them as well. “Locality” in general and the issues that Nick raised in #411 are very relevant (and they occupied large parts of the discussion we had with Aram Harrow and many others). As I see it, my point of view is very consistent with locality. However, some of the “local” modeling of errors in the context of fault-tolerance are, in my opinion, too strict to be realistic for interacting quantum systems.

    One important thing to remember is that universal quantum computers are hypothetical devices and my proposals are aimed to show that they cannot be realized. When some of my proposals lead to the conclusion that a large universal quantum computer exhibits certain “non-local” or other “non-physical” behavior, this is not a violation of locality but rather it gives an explanation (based on my proposals) for why such a quantum device is not viable. Similarly, if you start from the assumption that a non-abelian simple group of odd order exists and conclude (unconditionally or based on some conjectures) that 0=1, this is not a violation of the laws of mathematics but rather a violation of your assumption.

    One place we recently looked, regarding the validity of the overly strict local Hamiltonian noise models, is the modeling of a single superconducting qubit. These models predict a certain robustness of the stability of such qubits which, in my opinion, is incorrect for how experimentally created superconducting qubits really behave.

  448. Greg Kuperberg Says:

    rrtucci – It’s papers like this one by Susskind that leave me thinking that I don’t know how to read theoretical physics papers. When I read a math paper, I would ideally see at least one rigorous statement of a theorem; or if not that then a conjecture; or if not that then a definition. But many physics theory papers come across as “What I’ve been thinking about lately”, i.e., 99% or even 100% review and discussion, so that I can’t tell what actual research progress has been made.

    I know that my impression cannot be fair in all cases. I know for a fact that Susskind is responsible for brilliant, fundamental interpretations in physics. In particular, he and Nambu independently found the string interpretation of what was then known as “dual resonance models” and is now known as string theory. So, I guess he is reaching for a complexity theory interpretation of black hole firewalls. In fact, interpretations mean a great deal to me not only in physics but also in pure mathematics (and CS if you like). They are a large part of how I ever understand anything.

    The problem is that I don’t do interpretations for the sake of interpretations. (Certainly I don’t do interpretations for the sake of “paeans”, although if a good interpretation happens to be a paean, no problem.) I do interpretations for the sake of more concrete results. Susskind must have in mind some more concrete use of his complexity interpretation of black hole horizons. He might even explain that; but as I said, I don’t really know how to read this type of paper. I would need more practice with the recent theoretical physics literature. Or, it could be only that I don’t know much about the firewall topic and the motivating entanglement paradox for black hole horizons. (Indeed, I only just learned from Wikipedia that there is an entanglement paradox.)

  449. Scott Says:

    rrtucci #445: I was actually at Stanford last week, where I had a long conversation with Lenny about his new ideas. I came away quite excited about the unusual use he wants to make of quantum circuit complexity (basically, as an “intrinsic clock,” to measure how long a particular quantum pure state has been evolving for, even after it looks to most observers like it thermalized). As Greg points out, Lenny didn’t have much in the way of rigorous results—that’s just not how he operates :-) —but if nothing else, I was able to extract some perfectly well-defined, interesting technical open problems from our discussion.

    Anyway, this probably deserves a blog post of its own, which I’ll write as soon as I have time (but now I’m at a conference at the Simons center in Berkeley).

  450. rrtucci Says:

    Greg and Scott, didn’t mean to denigrate Lenny. He is highly unusual, very original, in his style of doing physics. I am looking forward to Scott’s post.

  451. Vitruvius Says:

    In your comment #429 you wrote, Quax, that “scientific publications have to adhere to a high standard of factuality backed by ironclad evidence, and I am sure that D-Wave papers published in Nature had to live up to this standard to pass peer review”. If you really believe that’s sufficient then I think you need to do some library work.

    You can start with this “Resolving Irreproducibility in Empirical and Computational Research” survey article in the Bulletin of the Institute of Mathematical Statistics last November. It provides a good overview of the problem, and c. 30 useful references for further study, including links to the following.

    To begin with there’s the “Unreliable Research” essay in The Economist October last, which I mention here because it quotes Kahneman’s statement that “I see a train wreck looming” regarding irreproducibility in his field. It also includes references to papers like “Why Most Published Research Findings Are False” by Ioannidis in PLoS Medicine in 2005, and the “Making Data Maximally Available” article from Science Magazine in 2011, which goes into their efforts to require authors to remit code and data to reduce irreproducibility in their pages.

    It also covers the now infamous cases this decade of Bayer’s inability to validate the published findings on which 67 of their projects were based, and Amgen’s ability to only replicate the results of 6 of 53 articles they studied.

    Particularly important to your Nature reference, Quax, it includes an introduction and link to Nature’s “Reducing our Irreproducibility” editorial from April last (which itself contains a link to Nature’s own collection of c. 20 articles and commentaries on the problem from their pages), and a link to their “We Must Try Harder” editorial, which goes into more detail about why even non-fraudulent results often do not stand up in practice.

    Returning to the IMS Bulletin article, it notes several reasons which have been postulated for the reported lack of reproducibility in empirical research, beyond mistakes or misconduct such as outright fraud or falsification, including small study size, inherently small effect sizes, early or novel research without previously established evidence, poorly designed protocols that permit flexibility during the study, conflicts of interest, or the trendiness of the research topic. All of those apply or potentially apply to D-Wave.

    To be clear, I am a proponent of science. I think that without good science there is no hope for the future. I agree with Popper’s often referring to Kant’s comment that “optimism is a moral duty”, while noting that Karl was himself a philosopher of science.

    However, I think that unless and until D-Wave’s results are reproduced, and in an open fashion, we can conclude nothing concrete about whether they are selling a product and a vision, as you allude to in #429, or whether they are hyping only a vision that only true believers can see, along the lines of your reference to homeopathy in your comment #437 (aka your Quax prefers quacks comment ~ um, sorry, low-hanging joke, I couldn’t resist ;-)

  452. Rahul Says:

    Meanwhile, MIT’s Tech. Review Magazine seems to have named D-Wave on its Top 50 Smartest Companies list.

    Well, the irony that this had to come out of MIT. :)

  453. Gil Kalai Says:

    Some comments on BosonSampling: Approximate BosonSampling is very interesting theoretically, but it is quite peculiar as an experimental way to falsify the effective Church-Turing thesis. On the theoretical side, let me draw a quick analogy with FourierSampling. FourierSampling is behind many successes of quantum computation, and the ability to Fourier-sample leads to similar hierarchy-collapse consequences as those for BosonSampling.

    Now suppose somebody comes with the idea that you can build a machine which gives you approximate FourierSampling (Without quantum fault tolerance) and you wish to study this possibility, similar to the study of approximate BosonSampling. There are two things to notice:

    a) Getting such an approximation is ultra optimistic – we do not know any mechanism beside quantum fault-tolerance to achieve such a goal.

    (Again you can hope like Scott does for BosonSampling that somehow when the number of qubits is not too large, say 20-30, we can achieve it.)

    b) Assuming that the approximation is arbitrary is overly pessimistic.

    This is the flip side. If we can achieve good approximations, we also have usually good reasons to think that the fluctuation/errors will be rather local and not be of arbitrary nature.

    So something that is special in the approximate AA’s BosonSampling idea (as in the analogous approximate FourierSampling) is a combination of ultra optimistic assumption on achieving approximations plus an overly pessimistic assumption on the nature of the approximation. This is theoretically interesting but otherwise not very reasonable.

    To the best of my memory (from thinking about it a few months ago), for FourierSampling, if the approximation (e.g., the error pattern) is local, then an approximate FourierSampling machine can give (with linear overhead) precise FourierSampling (with both the algorithmic advantages and the hierarchy-collapse consequences) just with classical error correction! Before getting overly excited about this, note that the reasonable interpretation is that approximate FourierSampling is as difficult as quantum fault-tolerance.

    It is interesting to examine whether assuming approximate BosonSampling plus some natural assumption on “local” errors would allow one to simulate precise BosonSampling on approximate BosonSampling machines (by some sort of classical error correction). I remember trying to do such a thing (and some discussions with Scott) using compound matrices to achieve the necessary redundancy. Namely, if you wish to sample the minors of an n by 2n complex matrix (according to |Permanent|^2), sample approximately instead the minors of its second compound matrix. In any case, this is an interesting research project.

  454. Sol Warda Says:

    Rahul #452: If being named by MIT Technology Review to its Top 50 innovative companies bothers you, then being named by no less than the NSF itself should drive you really mad!!!

  455. Magic Numbers Says:

    Sol Warda #452

    Are you actually a DWave employee or principal?

    -Magic Numbers

  456. Sol Warda Says:

    Magic Numbers #455: Yea, don’t you know I’m the CEO??!!

  457. Magic Numbers Says:

    Sol #456

    When you are trolling around the comment sections with every appearance of being a shill, are you surprised that someone asks?


  458. Sol Warda Says:

    Magic Numbers #457: A shill?? Did you believe what I wrote above? Very naïve, young man!

  459. rrtucci Says:

    Magic Numbers, Sol is very courteous. You, on the other hand, are a nasty thug

  460. Magic Numbers Says:

    rrtucci #459

    I’m sorry you feel that way, but I do think it was legitimate to ask. “Sol” has generated a significant amount of comment area noise (including attacking the motives of the researchers of the Shin et al paper that this thread is about) and has posted on other blog comment areas in various tones, ranging from quantum-annealing otaku, to seemingly affiliated researcher.

    I hardly call “Sol”, “courteous”. Additionally, he never answered my question (unless you count the sarcastic attempt to occlude the issue in #456.)


  461. Mike Says:


    I agree with you completely. Sol has no compunction about insinuating that others have evil motives (rrtucci sometimes adopts this approach as well, but he’s smart enough to try and dress it up in humorous garb), and then, at least in Sol’s case, when asked to back up his pronouncements with facts and analysis, he claims ignorance about the science and the details. However, given his level of discourse, I seriously doubt that he works for D-Wave or anyone else that would require at least some level of thoughtfulness.

  462. fred Says:

    Magic #460

  463. quax Says:

    Vitruvius #451, completely agree that the gold standard to establish scientific factuality is the successful and independent reproduction.

    Yet, when I referenced D-Wave’s Nature publications it was not to imply that these results are above questioning, rather that D-Wave has to adopt a very different standard of communication when publishing scientific papers versus giving interviews to business or IT magazines.

    I expect them to be very careful in what they disclose and report in their scientific papers. If there were even the impression of misconduct in this area, it would translate into very costly PR damage.

  464. Rahul Says:

    Stupid question: How does one go about finding an asymptotic lower bound for a problem? In general. Upper bounds I can understand: just find a specific algorithm that’s O(n^3), say.

    Specifically, I was wondering, people seem confident that faster classical algorithms for the permanent problem won’t be found. Wondering how that sort of thing is proven.

  465. Greg Kuperberg Says:

    Rahul – It’s not proven! What is proven (by Valiant and Toda) is that if the matrix permanent has a polynomial-time algorithm, then an ocean of other problems do too. That’s because all of those other problems can be re-encoded as permanents to calculate. The permanent is #P-hard, which is even broader than if it were merely NP-hard.
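    For concreteness: the best known exact algorithm for the permanent, Ryser’s inclusion-exclusion formula, still takes time exponential in n, which is why even modest matrix sizes are out of reach classically. A minimal sketch (a straightforward, unoptimized transcription of the formula):

    ```python
    from itertools import combinations

    def permanent_ryser(matrix):
        """Exact permanent via Ryser's inclusion-exclusion formula.
        This loops over all 2^n - 1 nonempty column subsets, so the running
        time blows up exponentially with n -- illustrating why nobody computes
        large permanents exactly."""
        n = len(matrix)
        total = 0.0
        cols = range(n)
        # Sum over all nonempty column subsets S, with sign (-1)^(n - |S|),
        # of the product over rows of the row-sum restricted to S.
        for k in range(1, n + 1):
            for S in combinations(cols, k):
                prod = 1.0
                for row in matrix:
                    prod *= sum(row[j] for j in S)
                total += (-1) ** (n - k) * prod
        return total

    # Sanity check: the permanent of the all-ones 3x3 matrix is 3! = 6.
    A = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
    print(permanent_ryser(A))
    ```

    Unlike the determinant, there is no sign cancellation to exploit, and no known way to bring this below exponential time.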

  466. Rahul Says:

    @Greg Kuperberg:

    Thanks! In general, what’s a good problem for which a lower bound is proven? I was wondering how one goes about it, in general. I was curious how one proves that a faster algorithm can never be found.

    I mean, I can think of something like, “sort an n-element vector” needs at least \(\Omega(n)\) time (I think), so that each element is at least visited. But that seems kinda trivial.

  467. Evan Says:

    Rahul — Lower bounds are hard in general. A common class of lower bound proofs uses decision trees. Consider sorting a list of N elements. The way to think of this is that you are trying to find the permutation that turns the list you have into the one you want. There are N! permutations. Every time you compare two elements, you can at best reject half of the remaining permutations. Therefore, the lower bound for sorting is log_2(N!) comparisons. We can use Stirling’s formula to simplify this, and we get the bound \(\Omega(N \log N)\). That, plus an algorithm that is \(O(N \log N)\) (such as merge sort), makes this a tight bound.
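    Evan’s counting argument can be checked numerically. The sketch below (helper names are mine; the merge-sort worst-case count is the standard formula \(n\lceil\log_2 n\rceil - 2^{\lceil\log_2 n\rceil} + 1\)) compares the information-theoretic bound \(\lceil\log_2 N!\rceil\) against merge sort’s worst-case comparison count:

    ```python
    import math

    # Decision-tree lower bound for comparison sorting: any comparison sort
    # needs at least ceil(log2(N!)) comparisons, which by Stirling grows like
    # N*log2(N). Merge sort's worst case sits just above that bound.

    def comparisons_lower_bound(n):
        """Information-theoretic minimum comparisons to sort n elements."""
        return math.ceil(math.log2(math.factorial(n)))

    def merge_sort_worst_case(n):
        """Worst-case comparisons for merge sort: n*ceil(lg n) - 2^ceil(lg n) + 1."""
        if n <= 1:
            return 0
        k = math.ceil(math.log2(n))
        return n * k - 2**k + 1

    for n in (8, 64, 1024):
        print(f"n={n}: lower bound {comparisons_lower_bound(n)}, "
              f"merge sort worst case {merge_sort_worst_case(n)}")
    ```

    For n = 8 the bound is 16 comparisons while merge sort needs at most 17, showing how tight the decision-tree argument already is at small sizes.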

  468. Scott Says:

    Rahul #466: There’s a huge amount to say about your question—indeed, a “full” answer to it would basically consist of a few graduate courses in complexity theory! :-) (Though short of that, have you tried, say, my Quantum Computing Since Democritus book?)

    The easiest examples of problems for which lower bounds are proven, are problems that are EXP-complete or harder. For any such problems, it follows from the Time Hierarchy Theorem that they require exponential time to solve (at least deterministically). Two examples are: deciding whether White has a win in a given chess position (where we’ve generalized chess to an nxn board, and put no upper bound on the length of the game), and deciding whether a given mathematical statement involving only quantifiers, variables, 0, 1, and addition is a true theorem about the natural numbers.

    Beyond that, we have many examples of problems that are proved to be hard, but only with respect to “weak” models of computation (i.e., not a full universal Turing machine). So for example, since the 1980s we’ve known lots of problems (PARITY, MAJORITY…) that require exponential-size Boolean circuits, if the circuits are restricted to having constant depth. And we’ve also known monotone functions (CLIQUE, MATCHING…) that require exponential-size circuits, if the circuits can consist only of AND and OR gates, and no NOT gates. And we also know time/space tradeoffs (for example, any n^o(1)-space algorithm for SAT must use at least ~n^1.7 time). And much more.

    Having said that, the largest body of knowledge in complexity theory consists of reductions and completeness theorems—relating tens of thousands of problems to each other, and showing that if your favorite problem can be solved in polynomial time, then P=NP or something similarly dramatic would happen. That’s essentially what Alex and I managed to do for BosonSampling, although the details are more complicated (owing to BosonSampling being a sampling problem, as well as the approximation issue).

    Finally, if you find the present state of knowledge on lower bounds unsatisfactory … well then, now you know why P vs. NP is a Clay prize problem. :-)

  469. Jeremy Stanson Says:

    Scott # 468

    Not the point, I know, but how do you generalize chess to an nxn board? What pieces do you have when n != 8?

  470. Greg Kuperberg Says:

    Jeremy – I just glanced at the original paper on EXPTIME-hardness of generalized chess. They look at an n x n board with chess pieces with standard behavior. They only use pawns, queens, and rooks, but they need many of them all over the board. Also each side only has one king. Their trick is to encode a clocked computer circuit into the board using a highly artificial mid-game position.

    On the face of it, this would only be PSPACE-complete, since it only requires polynomial space to encode such a chess position. There is every reason to believe that PSPACE is smaller than EXPTIME; in any case no one can prove that PSPACE is larger than P (although there is every reason to believe that as well). However, the fact that the moves in chess alternate between two players changes the complexity to EXPTIME instead.

  471. Scott Says:

    Greg #470: Note that even with alternating moves between two players, chess would still be in PSPACE, if we had a poly(n) upper bound on the total number of moves (e.g., if this were timed tournament play and the players weren’t arbitrarily fast :-) ). So it’s really only the combination of alternation with exponential game length that pushes you up to EXP.

  472. Greg Kuperberg Says:

    Rahul – Most hardness results in computer science come down to algorithm theft. This applies both to relative concepts such as NP-hardness, and absolute hardness, such as that generalized chess takes exponential time, though the type of algorithm theft is different in the two cases.

    Why do people conjecture that there is no fast algorithm to compute the permanent of a matrix? If there were one, you could expropriate it to solve a lot of other problems quickly: You could break nearly any encryption scheme, solve nearly any optimization problem, etc. (This is officially called reduction.)

    Why do people know that there is a problem that requires exponential time? It is the output of a highly artificial algorithm that makes itself difficult by stealing every other algorithm that runs faster. For instance, it can run the nth algorithm on the nth input for 2^n steps; if that simulated algorithm finishes in that time, it contradicts it by giving the opposite answer. (This is officially called diagonalization.)

    Why is generalized chess EXPTIME-hard? Because, basically, you can construct a computer with chess pieces that can run any algorithm for exponential time. (Here I am skirting over the extra role of two players who alternate moves. For comparison, solitaire chess or cooperative chess is PSPACE-hard directly by this construction.)

    Unfortunately there are good reasons to believe that algorithm theft is not enough to show that NP-hard or #P-hard problems are hard. But by diagonalization exponential time is more than polynomial time, which is enough for some natural problems such as many generalized two-player games.

  473. Scott Says:

    Greg #472: That’s a superb summary; thanks!

  474. Jay Says:

    Greg #472

    I don’t get the diagonalization one. What if the algorithm is not computable? Does that count as EXPTIME-hard?

  475. Greg Kuperberg Says:

    Jay – The diagonalization (in the proof of the Hartmanis-Stearns time hierarchy theorem) is done carefully so that the answer *is* computable. You define an algorithm A that, for simplicity, only accepts unary input. For the unique input of length n, it simulates the nth Turing machine T_n for time 2^n. If T_n finishes in that time, A contradicts it by giving the opposite output. If T_n does not finish in that time, A gives the output 0 by default. The algorithm A is thus guaranteed to finish in time slightly more than 2^n and output *something*.
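    To make the construction concrete, here is a toy rendition of that diagonal algorithm (my own illustrative sketch, not from the thread; real proofs enumerate actual Turing machines, and the “machines” below are invented stand-ins modeled as pairs of running time and output):

```python
# Toy machines: (steps needed on input n, output on input n).
# These are invented stand-ins for "the nth Turing machine".
machines = [
    (lambda n: 1,      lambda n: 0),      # machine 0: fast, always outputs 0
    (lambda n: 2,      lambda n: 1),      # machine 1: fast, always outputs 1
    (lambda n: 4 ** n, lambda n: n % 2),  # machine 2: too slow to finish in budget
]

def diagonal(n):
    """On the unary input of length n, simulate machine n for 2**n steps."""
    budget = 2 ** n
    steps, output = machines[n]
    if steps(n) <= budget:
        return 1 - output(n)   # finished in time: give the opposite answer
    return 0                   # did not finish: default output 0

# diagonal differs from every machine that runs within the 2**n budget,
# so no such fast machine computes the same function as diagonal.
for n, (steps, output) in enumerate(machines):
    if steps(n) <= 2 ** n:
        assert diagonal(n) != output(n)
```

The real theorem then observes that `diagonal` itself runs in time only slightly more than 2^n, so it lives in exponential time yet disagrees with every faster algorithm.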

  476. Alexander Vlasov Says:

    Calculating the permanent and sampling are not the same thing. For an nxn matrix, consider the random variable defined as the product of n random elements taken from distinct rows and columns. The expectation of that variable is given by the permanent, so there is a vague idea: why couldn’t we find some sampling process based on similar ideas?
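    Alexander’s estimator can be made concrete (a sketch of my own, assuming “n random elements with different rows and columns” means the entries picked out by a uniformly random permutation). The sample mean, scaled by n!, is an unbiased estimator of the permanent; the catch, and the reason this doesn’t conflict with #P-hardness, is that the estimator’s variance blows up quickly with n.

```python
import math
import random
from itertools import permutations

def permanent(A):
    """Exact permanent by brute force (fine for tiny matrices)."""
    n = len(A)
    return sum(math.prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def estimate_permanent(A, samples=100_000, rng=random):
    """Monte Carlo estimate: for a uniformly random permutation sigma,
    E[prod_i A[i][sigma(i)]] = perm(A)/n!, so n! times the sample mean
    is an unbiased estimator of the permanent."""
    n = len(A)
    total = 0.0
    for _ in range(samples):
        sigma = list(range(n))
        rng.shuffle(sigma)                 # uniform random permutation
        total += math.prod(A[i][sigma[i]] for i in range(n))
    return math.factorial(n) * total / samples

A = [[1, 2], [3, 4]]
print(permanent(A))           # 1*4 + 2*3 = 10
print(estimate_permanent(A))  # hovers around 10
```

For 0/1 matrices the per-sample value is almost always 0, so exponentially many samples are needed for a useful relative-error estimate, which is exactly where the vague idea runs aground.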

  477. Gil Kalai Says:

    A few more words on BosonSampling: you can separate the problem into two parts.

    The first is what is the fidelity you can achieve for a single boson (say in a Gaussian state) in a system where each boson has 2n modes.

    The second is given the fidelity for a single boson what will be the fidelity of a bosonic state (of the kind AA consider) with n bosons.

    My expectation is that for each of these components, the noise will scale up linearly (so all together it will scale up quadratically), which would make scaling BosonSampling quite similar in difficulty to scaling universal quantum computers. This is why I don’t expect the 10-20 boson range to be realistic. (I didn’t look specifically at the newer Scattershot proposal, but I would not expect it to make much difference.) Right now I am engaged in a joint research project relating the second part to BKS noise-sensitivity.

    Many proposals to shortcut gate/qubit quantum fault-tolerance (or even just part of it), including BosonSampling, D-Wave, topological quantum computing via nonabelions, etc., look like the famous Sidney Harris cartoon about a mathematical proof containing the step “…and then a miracle occurs.” If you make a thought experiment where this miracle is simulated on a noisy quantum computer that does not enact quantum fault-tolerance, you realize that this miracle cannot happen.

    Let me add that I am skeptical also about the “mantra” that BosonSampling is useless for anything except falsifying the ECT. If you go to somewhat more complicated gadgets than bosons you get BQP-completeness, and for FourierSampling there are glorious applications, so it is reasonable to expect nice algorithmic applications for BosonSampling as well.

  478. Jay Says:

    Greg – ok, default output of course. Thx!

  479. fred Says:

    About boson sampling, I read that
    “When two photons reach a beam splitter at exactly the same time, they will always follow the same path afterwards – both going either left or right – and it is that behaviour that is so hard to model classically.”

    That’s interesting – I (naively) thought of using flow networks to solve tripartite (3-dim matching) as a max-flow problem, but it just can’t work because any standard efficient max-flow implementation relies on incremental steps involving flows of one unit, so a 2-flow can end up being split into two separate 1-flows. Any attempt to preserve 2-flows involves exponential algorithms. It’s also equivalent to saying that you just can’t implement a NOT gate on such flow networks (but you can realize AND and OR gates).
    I wonder if the fact that “simultaneous” photons always stick together through alternate paths can’t be used to model this somehow…

  480. Jay Says:

    Greg – Is there any restriction* on the kind of separation we can prove this way? For example, could we define as EXPLOGTIME-hard the set of algorithms that take at least 2^n·log(n) steps, LINCONST3-hard those that take n+3 steps, etc.?

    *other than “probably useless” or “provably boring” ;-)

  481. Greg Kuperberg Says:

    Jay – Your questions are good, but at this point you’re just asking Scott and me for a course in complexity theory. You should enroll!

    In the space hierarchy theorem, you can split hairs as much as you want as long as ratios go to zero. Suppose that the value of f(n) is computable in space f(n) and g(n) is computable in space g(n), and suppose that the ratio f(n)/g(n) converges to zero. Then diagonalization gives you a problem computable in space O(g(n)) but not O(f(n)).

    In the time hierarchy theorem, the simulation has logarithmic overhead, so the usual statement is that f(n)log(f(n))/g(n) should converge to zero.

    The problem with an expression such as n+3 is that in the usual formulation of complexity, you freely give up constant factors. O(n^2) means “eventually less than n^2 times a constant factor”. Constant factors are considered boring because one computer could be faster than another by a constant factor. So n, n+3, and 3n are all taken as equivalent. (Except when they are not considered boring! But the point is, that is a change of topic from the most standard form of complexity theory.)
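    Greg’s hair-splitting condition is easy to see numerically (my own illustration): with f(n) = 2^n and g(n) = 4^n, the time-hierarchy ratio f(n)·log f(n) / g(n) visibly tends to zero, so the theorem separates TIME(2^n) from TIME(4^n).

```python
import math

f = lambda n: 2.0 ** n    # the "smaller" time bound
g = lambda n: 4.0 ** n    # the "larger" time bound

# Time hierarchy condition: f(n) * log(f(n)) / g(n) must tend to zero,
# the extra log factor being the overhead of simulating one machine on another.
ratios = [f(n) * math.log(f(n)) / g(n) for n in (5, 10, 20, 40)]
print(ratios)  # strictly decreasing toward zero
```

By contrast, with f(n) = n and g(n) = 3n the ratio does not vanish, matching Greg’s point that constant factors buy no separation.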

  482. Joshua Zelinsky Says:

    Minor technical addendum to Greg’s last comment: to get the diagonalization to work, you also need your functions to be time-constructible, a technical condition that holds for most non-pathological functions.

  483. Scott Says:

    Gil #477: Actually, Scattershot makes a lot of difference. It establishes that everything Alex and I showed you could do with hypothetical single-photon sources, you can also do with SPDC (Spontaneous Parametric Downconversion) sources, of the sort that already exist today. In other words, the unavailability of deterministic single-photon sources, and the need to use postselection to simulate such sources, is no longer an obstacle to scaling.

    The remaining issues, of course, are unheralded photon losses, detector inefficiencies, beamsplitter miscalibrations, and other problems of that kind. But if we let ε be the amount by which those things affect any one photon, then there doesn’t appear to be any obstacle to scaling to ~1/ε photons. And from talking to experimentalists, it sounds like ε=0.1 or maybe even ε=0.05 should be feasible with current technology—if you had enough money to afford a large array of state-of-the-art photodetectors, which maybe none of the experimentalists do at present. Hence my prediction that n=10 or maybe even n=20 should be feasible with current technology.

  484. Raoul Ohio Says:

    From market-driven hype to totally fake papers: you would think this would be easier in Philosophy than in CS:

  485. Darrell Burgan Says:

    And SCIgen is an MIT product, no less:

    The stuff it generates is hilarious.

  486. Nick Read Says:

    Gil #477: I’m rather disappointed by your comparison of some of the ways of circumventing the error issue with the Harris cartoon “here a miracle occurs”. I won’t defend D-wave, and Scott can speak for himself (and has). But I (and others) have spent some effort over many years to show that/how quasiparticles with nonabelian statistics can theoretically arise in a physical system—thus no miracle is required.

    It’s also hard to believe that you could write something as casual as “if you make a thought experiment where this miracle is simulated on a noisy quantum computer that does not enact quantum fault tolerance you realize that this miracle cannot happen.”

    On the other hand, it would seem to me, and probably to many readers of this blog, that your schemes in which subtle correlated errors mysteriously arise whenever anyone wants to build a quantum computer, but are not otherwise observed or expected based on all other experimental and theoretical work in physics—just so that you don’t have to give up the extended Church-Turing hypothesis—surely THOSE are examples of “…and then a miracle occurs”?

    I also note that in one of your anti-QC papers, after I had told you about topological QC using nonabelions as a possible way to obviate the need for error correction, your response was to add a conjecture “Nonabelions do not exist”, without any physical explanation of why, just to sidestep the question. That looks like another example of the same Harris principle.

  487. Ian Finn Says:

    Nick #486: I too have been disappointed recently with the quality of Gil’s arguments… And he sounds more and more like a broken record these days.

  488. Gil Kalai Says:

    Dear Nick, thanks for your comment!

    The argument against topological quantum computing. Indeed the case “against” topological quantum computing is considerably stronger than the case against quantum fault-tolerance that is built bottom-up from a gate/qubit architecture. A good place to read about my argument is my presentation from a talk last year at MIT, which is linked in this post. The weakness is not in the idea that nonabelian anyons can be created, but in the proposals to create very stable qubits using those quasiparticles. The argument is an application of the very basic idea of reductions from the theory of computing. Here it is:

    1) The experimental process for building the qubits can be simulated by a (hypothetical) noisy quantum computer on some microscopic level.

    2) It is unlikely that the experimental process for creating the anyonic system invokes some quantum fault-tolerance.

    3) On the level of the hypothetical noisy quantum computer that simulates the process, no quantum fault-tolerance is activated, and this does not make it possible to create stable qubits from noisy (microscopic) components.

    This argument is not a mathematical-quality proof but it is still fairly strong, especially taking into account the fact that the present papers on creating stable qubits based on anyons do not take the noisy process of preparing the anyons/qubits into account.

    Do nonabelions exist? Topological quantum computing has certainly been on my mind since the beginning (and my first 2005 paper), but I never posed a conjecture that nonabelions do not exist; I only discussed stable nonabelions, which (theoretically) allow one to achieve very stable quantum information.

    Of course, if I see a clear path from my point of view toward the nonexistence of nonabelions, you can be sure that I will be quite happy and will not be shy about proposing it (and you will be among the first people I try it on).

    It is a fascinating open problem whether nonabelions can be constructed. When you consider low-temperature nonabelions, it may well be the case that their creation requires some (non-trivial) quantum computing, which in turn requires quantum error-correction on some level. (So you get some sort of a vicious triangle..) When you allow high temperature/entropy, there could be some ambiguity regarding these new exotic phases of matter (due to the non-uniqueness of expressing mixed states as combinations of pure states).

    Let me come back to the other points you have raised in a separate comment.

  489. Daniel Sank Says:

    Just want to point out that we (Martinis group) recently achieved state measurement at the fault tolerant threshold for the surface code

    which makes a nice combination with the high fidelity gates. For the surface code it’s critical that the gates and measurement can both be done with high accuracy.

  490. Scott Says:

    Gil #488: I’m quite familiar with “the very basic idea of reductions from the theory of computing,” but still found myself totally unable to follow the bizarre, Alice-in-Wonderland logic of your “impossibility proof” for stable nonabelian anyons. The whole reason why people got interested in nonabelian anyons in the first place is that, as shown by Kitaev and others, they could naturally implement quantum fault-tolerance. [Addendum, thanks to Nick: this, at least, is why computer scientists got interested! Physicists had already been interested for other reasons.] So your step (2) (“It is unlikely that the experimental process for creating the anyonic system invoke some quantum fault-tolerance”) is question-begging and invalid. When you concede, “This argument is not a mathematical-quality proof,” that strikes me as an understatement: I find these sorts of arguments unworthy of an excellent mathematician like yourself.

  491. Nick Read Says:

    Gil: a quick response, leaving other points for later.

    1) It was early in 2005 that I pointed out to you (in an email) the idea of topological quantum computing, after you kindly showed me a draft of a paper about QC that focused on the need for error correction. Clearly that draft did not mention it.

    2) In your later paper arXiv:0904.3265 (page 41) you have “Conjecture F: Stable non-Abelian anyons do not exist in nature and cannot be created.” Apparently the insertion of the word “stable” is meant to refute my statement. I’m not sure what exactly you meant by “stable” then or now (it does not seem to be explained in that paper), but I suspect that “stable” nonabelions are the kind I always talk about.

    In any case, even if your conjecture should have been phrased as “nonabelions may exist but don’t provide topologically-protected (`stable’) quantum information storage and manipulation because of errors due to an unknown and non-local mechanism”, the latter claim still sounds a lot like “and here a miracle occurs”.

  492. Jeremy Stanson Says:

    Daniel # 489

    Would be very interested to know your thoughts on my comment @ 422, if you’re willing to indulge us.

  493. Gil Kalai Says:

    I think that you are wrong here, Scott. When you have a noisy quantum computer (and you do not implement quantum fault-tolerance), and you assume a noise level low enough to allow you to create a certain quantum code, this will lead to a substantial number of logical errors, or in other words, to a mixture over a cloud of codewords rather than a delta function on a single point of the Bloch sphere. This is what you can expect from an experimental process that implements qubits based on nonabelian anyons or Kitaev’s surface code.

    In order to have a process that leads to much more stable encoded qubits based on less stable “physical qubits” (or other microscopic components), you need to activate quantum fault-tolerance in the process of preparing those stable encoded qubits.

  494. Gil Kalai Says:


    The precise conjecture is:

    “Nonabelions don’t provide topologically-protected (`stable’) quantum information storage and manipulation because of accumulation of errors in the experimental process which create them”.

  495. Nick Read Says:

    Scott #490: your comment appeared while I was typing, and now perhaps I won’t need to respond to that argument!

    I’d also like to remark gently that some of us (including Preskill, btw) were interested in nonabelian anyons long before Kitaev’s work, for reasons having nothing to do with QC (at least in my case—this may have been before I even heard of QC). No doubt you meant “people in computer science” . . . :)

  496. Nick Read Says:

    Gil #494: based on several of your remarks, I interpret “creating” the nonabelions to mean the initialization process before computation starts. That is, one begins with some physical low-temperature state of matter, then with some external controls creates the initial state (in the topologically-protected subspace, which incidentally is not well described as “qubits”) of some number of nonabelions. This may indeed involve creating the nonabelions from the ground state (or vacuum, if you like) of the system.

    I actually AGREE with you that there is a possibility of errors in that creation process. That is because the nonabelions can be created from the ground state only by creating them, perhaps in pairs, or else in other small groups, and then the members of the group are necessarily close together, and local perturbations or coupling to the environment might sometimes cause an error. It is only when already-existing nonabelions are far apart that their shared internal state is topologically protected, due to the requirement that terms in the Hamiltonian be short-range.

    My response to this issue is two-fold. 1) The error can occur in initialization only, not when running the computation afterward, which substantially mitigates the problem—one does not need to use auxiliary qubits to correct for on-going errors. 2) There are probably other ways to initialize the state without bringing the nonabelions together. For example, using measurement.

  497. Gil Kalai Says:

    Nick, OK, so maybe we are in some agreement. Actually I was interested in the claim that using certain anyons you can create very stable qubits. One such proposal is based on a pair of anyons describing together a 2-dimensional Hilbert space (hence a qubit) that becomes more and more stable as you move these anyons apart.

    The way I see it is:

    1) If, when you start, you do not have superb-quality qubits but just ordinary-quality qubits (namely, with a small but non-negligible probability of logical errors), then this will spoil your computation afterwards.

    2) Regarding the other probable ways: ahh, this is precisely where my argument comes into play. Any experimental process for initialization can be simulated by a noisy quantum computer (on some microscopic level). If this hypothetical computer does not invoke fault-tolerance, your initialization will continue to have the same problem.

  498. Nick Read Says:

    Gil #488, #493,

    I have the strong feeling that your argument is circular, or begs the question. You say that the experimental implementation (or sometimes you say simulation) of the system with nonabelions is (or, is on) a noisy quantum computer, and therefore it cannot be fault tolerant. But this just seems to beg the question, because the claim is we can use nonabelions as an intrinsically fault tolerant QC.

    (I also think I’m saying just the same as Scott #490, in different words.)

  499. Bill Kaminsky Says:

    Daniel Sank #489,

    Congratulations on the great experimental results with the Martinis Group! :)

    I second Jeremy Stanson’s request at #492 for your thoughts on how feasible it’ll be for the Martinis Group to scale up the number of transmon qubits it can put on a chip while still being able to act upon all of them with a universal gate set achieving >99% fidelities.

  500. Nick Read Says:

    Gil # 497,

    I agree with much of what you say until near the end of 1). In the example (which was our original one, in fact) there are 2 states for each pair of nonabelions (modulo certain conservation laws we can ignore), so that is a qubit, though the qubits are not uniquely defined because given n nonabelions, the partition into n/2 pairs is not unique.

    So anyway there can be an error in initialization if you prepare it by separating the nonabelions. That error will propagate through your computation, but there will not be “new” errors that occur later, unlike with ordinary qubits.

    For 2), see previous comment.

  501. Gil Kalai Says:

    Nick, no, I don’t think so. We are talking about the initialization process. In this process we create (say) a single qubit on a Hilbert space based on two anyons. There are two questions: a) what is the initial state of this encoded qubit? b) how stable is this state?

    The intrinsic FT property may tell you that the initial state is very stable to local noise. But it does not tell you what will be the initial state of the qubit. The initial qubit state depends on the experimental process that led to the creation of your qubit. There is no intrinsic fault-tolerance for this process. This means that the initial state will be a mixture rather than a sharp delta function, just as for an ordinary qubit.

    (I should add that with such initial qubits, even if the computation itself is noiseless, you cannot run a quantum computation.)

  502. Nick Read Says:

    Gil #501,

    Yes, we agree here, except I’d put your parenthetical remark at the end differently: you can RUN a quantum computation algorithm with such initial qubits, you just can’t RELY on the result.

  503. Scott Says:

    Gil: So if I understand correctly, your view is that it’s not even physically possible to produce the initial state of a nonabelian-anyon QC, let alone evolve that state to effect a desired quantum computation? I.e., that we can’t even create the anyonic analogue of the all-0 state for qubits?

    If so, then do you agree that your view would be refuted by an experimental demonstration of nonabelian anyons (even if the anyons weren’t then used for computation)?

  504. Rahul Says:

    Greg, Scott, Gil, Evan:

    Thanks for trying to explain lower bounds to me!


    I will give your book a shot. So far I was too cheap to buy it. :) I did read some of the transcribed lecture PDFs you had posted on here, though.

  505. Rahul Says:

    Scott #483

    And from talking to experimentalists, it sounds like ε=0.1 or maybe even ε=0.05 should be feasible with current technology—if you had enough money to afford a large array of state-of-the-art photodetectors, which maybe none of the experimentalists do at present. Hence my prediction that n=10 or maybe even n=20 should be feasible with current technology.

    How much money approximately are we talking about here? How large an array? Can a lot of the bits and pieces of equipment from the n=3 boson samplers they already constructed a year ago be reused or extended? (n=3 is indeed the current limit, right? I faintly remember reading about n=4, but I’m not sure.)

    When you asked the experimentalists about ε, did you ask them about the “if / maybe” above? I.e., is it really money that’s stopping them from building the scattershot sampler, or something else? Are they all as sanguine about the prospects as you are?

    Also, would n=10 be enough for your goals? How long does it take to simulate n=10 on a conventional computer today?

  506. Gil Kalai Says:

    Hi Scott (#503),

    My view is that you cannot achieve a highly stable qubit based on anyons, just as you cannot achieve a highly stable logical qubit based on quantum error-correction applied to much less stable physical qubits. For a “qubit” you need to be able to create not just a stable ground state but also stable superpositions. (As we require for ordinary qubits.)

    As I wrote above, an experimental demonstration of nonabelian anyons will not be in contradiction with my conjectures. (I did hope to find a path from my conjectures to the central question of achieving stable nonabelion anyons.)

    Nick is absolutely correct in his description of our early correspondence on the matter. (I simply forgot a few things.) In response to an early draft of my first paper that I sent him, Nick emailed me and told me about the issue of topological quantum computing via nonabelions. They have certainly been on my mind since then, and naturally other people also mentioned them. (I did not remember, and found it surprising, that I had a draft of a paper so early after I started working on the matter, but I was young and reckless at that time.) Overall, as I wrote Nick in 2012, my point of view is too crude to make a direct distinction between abelian anyons and nonabelian anyons. If you can show, in general or even just for a restricted class of quantum systems, that creating nonabelions (in a sufficiently pure state) requires (deep) quantum computation, this may give an indirect way to tackle the central question of their existence.

  507. fred Says:

    Back in the early 1900s, one could probably have made all sorts of strong arguments around the Navier-Stokes equations that you’ll never be able to go from

    (D-Wave’s attempt is more like claiming that is a scalable solution for heavier-than-air flight?)

  508. Scott Says:

    Rahul #505: I wish I knew better answers to your questions. I don’t know how much money—let’s say millions or tens of millions, but not billions. :-) I know that the best photodetectors currently available (e.g., those from NIST) require cooling to near absolute zero, which substantially adds to the cost. To do a “clean” BosonSampling experiment with 10 photons, you’d want maybe 100 photodetectors, but you could also do a “dirty” experiment (one with many multi-photon collisions) with, say, 20 detectors, and that would already be of interest. Yes, experimentalists reuse equipment all the time from one experiment to the next, and I’m sure that could be done here to some extent. But if you wanted to use an integrated optical chip, you’d have to fabricate a new chip. Some of the experimentalists (e.g., the O’Brien group in Bristol) seemed extremely optimistic about scaling up, others a bit less so. In any case, none of them think there’s any fundamental obstacle, only technological ones. n=10 would be enough to force Gil Kalai to admit that he was wrong, which, to my mind, would be worth the entire cost of the experiment. When n=10, a classical computer could calculate the output probabilities using ~20,000 arithmetic operations. So, it’s still perfectly efficient classically, but depending on how you chose to quantify resources, the optical chip (involving only a few hundred optical elements) would arguably be “more” efficient.
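    For the curious: the ~20,000-operation figure presumably comes from computing the permanents that give BosonSampling output probabilities via Ryser’s inclusion-exclusion formula rather than the n!-term definition; the Gray-code variant of Ryser runs in O(2^n·n) arithmetic operations, which for n = 10 is on the order of 20,000. A plain (non-Gray-code, O(2^n·n^2)) version is short enough to write down (my own sketch):

```python
def ryser_permanent(A):
    """Permanent via Ryser's formula:
    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^|S| * prod_i (sum of row i restricted to S).
    This straightforward version costs O(2^n * n^2) operations; the
    Gray-code refinement brings it down to O(2^n * n)."""
    n = len(A)
    total = 0
    for mask in range(1, 1 << n):            # nonempty subsets of columns
        prod = 1
        for i in range(n):
            prod *= sum(A[i][j] for j in range(n) if mask >> j & 1)
        total += (-1) ** bin(mask).count("1") * prod
    return (-1) ** n * total

print(ryser_permanent([[1, 2], [3, 4]]))                    # 10
print(ryser_permanent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))   # identity: 1
```

The exponential 2^n factor is exactly why going from n = 10 to n = 40 takes the classical cost from thousands of operations to trillions.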

  509. Scott Says:

    fred #507: LOL!

  510. Sol Warda Says:

    Bill Kaminsky: Here is a video of Dr. Martinis talking at Google about their latest paper & the factoring of RSA-2048!

  511. Rahul Says:

    Scott #508:

    When n=10, a classical computer could calculate the output probabilities using ~20,000 arithmetic operations. So, it’s still perfectly efficient classically

    Thanks! So if absolutely everything else works out, what n do you think you need to demonstrate, if a convincing disproof of the ECT is the goal?

    Also, in a classical simulation, is the BosonSampling problem parallelizable, and if it is, does that lift the bar a lot higher (i.e., the BosonSampling scale needed to disprove the ECT)?

  512. Scott Says:

    Rahul #511: You can never definitively “disprove” the ECT; all you can do is make it less and less tenable by more and more convincing experiments. Personally, I see no reason to scale beyond about n=50: at that point, I’d say that whatever point can be made with BosonSampling experiments has been made, and indeed classical computers start to lose the ability even to verify the results.

    Short of that, though, any increase in n makes me all the happier: at n=10 Gil Kalai is disproved, at n=20 classical simulation takes millions of steps, at n=30 it takes billions of steps, and at n=40 it takes trillions of steps.

    And yes, classical simulation is easily parallelizable. But I don’t think that changes the outlook: I’d say the right measure of cost is the simulation time times the number of parallel processors (since the whole point is to show that, to achieve a scalable classical simulation, one or the other of those would have to grow exponentially with n).

  513. Jay Says:

    Wow it’s cool there were exactly 2^7 responses :-)

    Oh shit :-(

    Wow it’s cool there were exactly 1+2^(3×2+1) responses :-)

  514. Scott Says:

    Err, you meant 2^9?

  515. Jay Says:

    Greg #481,

    Thank you, your answer was food for thought. If I read between the lines well, what you’re saying is that, because we can simulate any sort of computation with logarithmic overhead in time and no spatial overhead, separations obtained by diagonalisation generalise well iff the separations are larger than that.

    Which led me to be able to identify, for the first time and to my full satisfaction, my discomfort with diagonalisation: the conclusions we can reach are limited because of at least one hidden assumption, namely that the model of computation we are using is equivalent to any other.

    For the halting problem, the conclusion seems “limited” by the validity of the Church-Turing thesis (the real one). If one could switch from digital to perfect analog then maybe all algorithms could halt (as far as I understand we couldn’t apply diagonalisation because the number of analog machines is itself uncountable—of course, as far as we know, it’s an impossible task to construct a perfect analog computer).

    For the existence of algorithms with exponential running time (or whatever similar properties), the hidden assumption (which is probably not hidden at all to your eyes) is the running-time equivalence (up to a logarithmic overhead). But maybe there are some QCs that could do the job ordinary TMs would take exponential time to do (again we can’t number all QCs, or at least we would need to restrict the quantum states that feed them). But maybe another model of computation (such as analog with noise) could be faster even if still Turing equivalent (is the analog + noise case already known?).

    Finally, I wonder what is the equivalent hidden assumption for the real>integer “out of diagonalisation” demonstration.

    PS: as for the idea that I should enroll: sure! As soon as I can consistently sort out 2^7 and 2^9.

    Scott #514,

    #515 is a palindrome. I will pretend reaching it was my evil plan all along.

  516. Scott Says:

    Jay #515: That the machine (or whatever) that you rule out by a diagonalization argument, has to be of the same kind as the class of machines that you’re diagonalizing against, is not a “hidden assumption” of diagonalization. It’s a stated assumption.

  517. Jay Says:

    As I said, it’s probably not hidden at all to [experts’] eyes. I just never saw it explicitly stated (I have the Kindle edition of your book, if you want to point me to the statement I’ve not seen).

    The point is, when Greg says “Why do people know that there is a problem that requires exponential time?”, what we laypersons should read is “Why do people know that there is a problem that requires exponential time for a digital computer (but maybe not for a quantum computer, or a noisy analog computer, or something else we don’t know yet)?”.

    Or do you think this distinction is unimportant? Look, I would understand if we were talking about Turing-equivalence, but do you think the case is as strong for complexity?

  518. Scott Says:

    Jay #517: Every impossibility theorem in computer science is relative to some particular computational model. You can always violate the conclusions of a theorem by violating its hypotheses. (“…and hence we see that the one-time pad has perfect information-theoretic secrecy.” “Nuh-uh! Not if I smash down your door with an ax and steal your encryption key it doesn’t!”)

    However, an important point to make about diagonalization is that it’s extremely generic. If you want a problem that’s exponentially hard for quantum Turing machines, well then, just diagonalize over the set of all quantum Turing machines, and you’ll get such a problem. The same can be done for nondeterministic TMs, or even TMs with oracles for the halting problem … really, any model of computation whatsoever with at most a countable infinity of machines. (And even for models with uncountable infinities of machines, like infinite sets of polynomial-size circuits, diagonalization-style arguments still often work.)
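
    The genericity Scott describes can be seen in miniature (a toy sketch of my own that elides all the real content—simulating machine i on input i—but shows the shape of the argument): given any enumeration of total functions, one can mechanically build a function outside the enumeration.

```python
def diagonalize(enumeration):
    """Given an enumeration i -> f_i of total functions on the naturals,
    return a function g with g(i) != f_i(i) for every i, so that g is
    computed by no function in the enumeration."""
    def g(i):
        return enumeration(i)(i) + 1
    return g

# Toy check against the constant functions f_i(n) = i:
# here g(i) = f_i(i) + 1 = i + 1, which differs from f_i at input i.
g = diagonalize(lambda i: (lambda n: i))
assert all(g(i) != i for i in range(100))
```

The same recipe works for any countable class of machines; the content of a real lower bound lies in how expensively `enumeration(i)(i)` can be evaluated within the model being diagonalized against.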

  519. Jay Says:

    LOL :-)

    What is the best reason to think quantum computers (which we can feed with some unknown quantum state), noisy analog computers (which could be dictated by some unknown distribution of noise), or even Navier-Stokes computers (in case Tao’s blow-up can happen for, say, a stringy version of a fluid) should be treated as countable?

  520. Scott Says:

    Jay #519: For quantum computers, we can prove that you can diagonalize over them to produce hard problems for them—even if the quantum computers are fed polynomial-size “magic quantum advice states” that depend on the input length and can otherwise be arbitrary (e.g., could take exponential time to prepare). This follows from the BQP/qpoly ⊆ PP/poly theorem, which I proved in 2004. The key idea is that, even if the quantum state encoded an exponential amount of information in its wavefunction, you’re only going to see a polynomial amount of information when you make a measurement—and that severely limits the number of possible languages that you could decide, even as you range over all possible quantum states.

    For the other models you mentioned, a general issue is this: if you have an uncountable infinity of possible computers, then there’s no way even to write down which initial state you want the computer prepared in using a finite amount of information! Thus, such computers couldn’t be prepared reliably in the same state twice, and it’s debatable in what sense they’d even be “computers.” Of course, if your analog / Navier-Stokes computer behaved more-or-less reliably (i.e., was protected against chaotic amplification of noise), then you could get away with specifying the initial state only to finitely many bits of precision. But in that case, we’re right back to the countable world, where diagonalization applies!

  521. Jay Says:

    Thank you, very clear and interesting!

    Ok I will try to make this my last one for a while, so in advance thanks again and again for the time and the fun.

    Suppose we have three problems such that P1 is harder than P2 and P2 is harder than P3 under a classical model of computation. Is it known whether we could have, under some other model of computation, P1 harder than P3 harder than P2?

  522. Jay Says:

    Let P1, P2, P3 be such that:
    P2 == multiplication (of integers, the basic one)

    Using the usual model of computation, it is strongly believed that P1>P2>P3.

    Now consider a slightly modified model of computation where numbers are not coded using the usual unary notation, but using the primorial number system.

    As we can always switch from one model to the other, there is no change in computability: both are Turing complete.

    However, the complexity now seems deeply affected: we would have P3>P1>P2.

    Unless I screwed something up, I find it troubling that the complexity classes are not “naturally well ordered”, and that they may depend on something that is not even explicitly defined, such as how a number should be represented.

    But maybe that’s again the kind of thing that looks stated to experts’ eyes. ;-)
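
    Jay’s representation-dependence point can be illustrated concretely (my own toy, using a plain prime-factorization encoding in the spirit of his primorial system, not an exact implementation of it): multiplication becomes cheap—just merge exponents—while operations that were cheap in positional notation, like addition or comparison, now require converting back first.

```python
from collections import Counter

def to_factored(n):
    """Positional -> factored representation (trial division; illustration only)."""
    f, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            f[d] += 1
            n //= d
        d += 1
    if n > 1:
        f[n] += 1
    return f

def multiply(f1, f2):
    """Multiplication in the factored representation: just add exponents."""
    return f1 + f2

def to_int(f):
    """Factored -> positional; needed before e.g. addition or comparison."""
    out = 1
    for p, e in f.items():
        out *= p ** e
    return out

# 12 * 35 = 420, computed without any positional multiplication.
assert to_int(multiply(to_factored(12), to_factored(35))) == 420
```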

  523. Gil Kalai Says:

    Scott: “If Gil wanted to change my mind about it, he could show me how to simulate BosonSampling in classical polynomial time, when (say) 5% of the photons are randomly lost.”

    This is a very fair demand, Scott. Just so we talk about the same thing: the bosonic state described by an n-row, 2n-column Gaussian matrix lives in a Hilbert space with basis elements indexed by choosing (with repetitions) n columns from the matrix. Now, you delete every row with probability t (so you aim at t=0.05) and you take the mixture of all the bosonic states which correspond to this matrix with (1-t)n rows (roughly) and 2n columns. This is what you want to simulate classically, right?

    And also “(I can only handle maybe O(1) lost photons)”

    (Again, if I understand you correctly,) I would expect that one cannot handle O(n^a) lost photons for a >0.

  524. Gil Kalai Says:

    Scott, let me explain why I expect that, in the above setting starting with a Gaussian random matrix, the 5% lost photons will have a devastating effect and will lead to a flat distribution that “does not remember” the input matrix at all. If you think about a setting where the Gaussian input matrix is mixed with Gaussian noise, then indeed it takes a noise level above 1/n to lead to a flat distribution. This is a rather simple noise-sensitivity result that Guy Kindler and I figured out recently.

    The setting is analogous to similar notions about Boolean functions nicely described in terms of voting rules. Adding Gaussian noise is analogous to errors in counting people’s votes, and lost bosons are analogous to random abstentions or random votes which are lost. In the Boolean case it is not hard to move from noise sensitivity for noisy counting to noise sensitivity for lost votes, so I expect this too in our case. The analytic study may give a pretty good description of the number of bosons for which a 5% photon loss will be devastating, but I suppose that this can also be checked numerically.

    It is interesting that my result with Guy also leads to “flat distributions” for noisy BosonSampling, which is the same conclusion that Gogolin, Kliesch, Aolita, and Eisert reach, but our point of view is different.

    It will be interesting to understand specific bosonic states which exhibit noise stability. This is indeed part of our project but it does not look easy.

  525. Scott Says:

    Gil #524: Yes, what you describe in your comment is one intuition that I also had. But then the intuition on the other side is that, when only 5% of photons are lost, the permanents that you’re summing will share almost all of their rows, so might be very well-correlated with each other!

    I share your intuition that (as I’d put it), “as totally wrong as Gogolin et al. were, if you lose enough photons then they eventually become right.” But I don’t have a good intuition about how many photons have to be lost before you converge to near-uniformity, and at what rate. Fortunately, this is something for which numerical simulations should be able to tell us the answer soon.
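
    In that spirit, here is a minimal sketch of the kind of numerical simulation in question (all names and parameter choices are mine; a real experiment would use larger n and average over many matrices and loss patterns): compute the collision-free output distribution from the permanents, model a lost photon by deleting a row, and measure the total-variation distance to the uniform distribution.

```python
import numpy as np
from itertools import combinations

def permanent(A):
    """Permanent via Ryser-style subset expansion; fine for the tiny n used here."""
    n = A.shape[0]
    total = 0.0 + 0.0j
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            total += (-1) ** size * np.prod(A[:, list(S)].sum(axis=1))
    return (-1) ** n * total

def tv_to_uniform(M):
    """Total-variation distance between the collision-free BosonSampling
    distribution induced by M (one row per surviving photon) and uniform."""
    n, m = M.shape
    probs = np.array([abs(permanent(M[:, list(S)])) ** 2
                      for S in combinations(range(m), n)])
    probs /= probs.sum()
    return 0.5 * np.abs(probs - 1.0 / probs.size).sum()

rng = np.random.default_rng(1)
n, m = 4, 8
A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
d_full = tv_to_uniform(A)        # no photons lost
d_lost = tv_to_uniform(A[:-1])   # one photon lost: delete a row
print(f"TV distance to uniform: full={d_full:.3f}, one photon lost={d_lost:.3f}")
```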

  526. Gil Kalai Says:

    Thanks for your comment, Scott. Indeed, the permanent being very noise-sensitive suggests that the permanents you mentioned will be very, very uncorrelated. I think that you may get some intuition about it without referring to noise sensitivity by thinking about plus/minus 1 matrices. For example, the permanent of the full n by n matrix is the signed sum (according to the first-row entries) of the permanents of the n-1 by n-1 matrices with the first row deleted. This suggests almost no correlation between the full-matrix permanent and the (n-1) by (n-1) permanents.

    Anyway, simulations will be interesting. I would expect the “almost flat” conclusion to hold for various types of noise, and also that the permanent will behave just like the determinant, so after this is verified for small values you can get a pretty good impression of larger cases by simply replacing permanents with the much easier determinants. (Moreover, even the low-order corrections to flatness may represent parameters of the matrices which are very easy to compute.)

    As I said, when your noise is “a small Gaussian noise added to the rows” rather than “every row deleted with a small probability,” we can prove noise sensitivity (and hence a flat distribution).

  527. Scott Says:

    Gil #526: Except, I have a counterargument showing that it can’t possibly be as simple as that. Alex and I proved in our response paper to Gogolin et al. that the probabilities in a Haar-random BosonSampling experiment have a significant amount of fluctuation, purely because of the scaling of the matrix’s rows (and nothing further about the permanent itself). Now, if k<<n photons are lost, the output probabilities will still almost scale with the rows, so we expect that source of fluctuation to remain. Of course, this argument doesn’t show that the output distribution will be hard to sample classically—indeed, if the scaling of the rows was the only thing you had to worry about, then it wouldn’t be hard to sample! But it does strongly suggest that the output distribution can’t be uniform when k<<n.

  528. Gil Kalai Says:

    Hmm, interesting. I will think about it, as this issue is not relevant to the +1/-1 example but should be relevant to our added-Gaussian-noise example. It may just be a matter of different normalizations.

  529. Gil Kalai Says:

    Actually, Scott, on more thought, it looks like the distribution based on multiplying the row norms will be noise-sensitive, just as the permanent itself is (but maybe I am missing something here). So one easy baby experiment you can run is simply to look at your estimator itself and see what happens to it (in expectation) when k bosons are lost.

  530. Gil Kalai Says:

    Regarding BosonSampling: I think that in spite of the scaling issue BosonSampling is noise sensitive. The version Guy and I considered is that instead of your Gaussian random matrix A you have a mixture sqrt(1-t^2)A + tB, where B is also Gaussian random. In such a case I think we can prove that we get noise sensitivity: for every fixed t, as n goes to infinity the distribution will become flat. (In fact quite quickly, since this is the case also if t = 1/n^{1-delta}.) In my opinion, ours is a more appropriate noise model than a random loss of k bosons, but I do not expect the outcomes to be different. Still, there is also the strong contrary intuition of Scott (which perhaps follows from results in AA’s second paper), so I suppose we will need to check the matter carefully. I expect this will be something that can be checked analytically (and simulations can also be useful).

    Now about the interpretation: if we indeed converge to a flat distribution after noise, this of course means that asymptotically noisy BosonSampling is not computationally hard at all. My interpretation would be that this makes the BosonSampling idea for demonstrating quantum speedup not viable. I realize that there may still be hope that some sort of speedup will be demonstrated in the 10-30 boson range, but I find this hard to believe. (Do we have natural examples where computational hardness is manifested in intermediate ranges of parameters and then vanishes asymptotically?) Of course, simulations can make matters clearer, especially in the 5-30 boson range. As I said, we may well run the simulations also for determinants and for estimators based on row weights. I would not expect much difference between the three variants.

  531. Curious Programmer Says:

    Guess Rose was right again: SSSV ruled out as a model of the D-Wave device. Scott you seem to be wrong a lot about D-Wave. Anyone else notice this?

  532. Scott Says:

    Incurious Programmer #531: YAWN…

    (1) Neither I, nor Umesh, nor the SSSV paper ever claimed that the SSSV model had been shown to account for all features of the D-Wave device. (Don’t believe me? Reread what we said, concentrating on the actual words!) The claim of SSSV, rather, was that the Lidar group’s evidence for quantum behavior was unsatisfactory, since a classical model can produce the same pattern. And that claim was correct, as Lidar et al. themselves agree: if it weren’t, then they wouldn’t have needed to do new experiments and publish a new paper!

    (2) I recently attended a conference in Aspen, where both the SSSV group and the Lidar group (and D-Wave itself) gave talks and had a public discussion about the latest data. So I can tell you, if you care (which you don’t), that the situation is way more complicated than the SSSV model being “right” or “wrong”! Specifically, it’s looking like the SSSV model works just fine at explaining the machine’s behavior on random Ising spin instances—and Lidar et al. agree about that. The new result of Lidar et al. is that they can construct weird, special instances where they can see quantum annealing signatures that can’t be accounted for with the SSSV model. This is scientifically interesting, of course, but keep in mind that (a) no one knows whether the behavior in those special instances will ever arise in “real-world” instances, and (b) even if it does, it still almost certainly won’t yield a speedup! That’s because, ‘quantum behavior’ or not, you can still simulate the D-Wave machine efficiently on a classical computer using path-integral Quantum Monte Carlo. Indeed, almost the entire case for quantum behavior in the D-Wave machine relies ironically on the ability to simulate the machine classically using QMC!

    (3) Daniel Lidar—lead author of the new paper—agrees with SSSV and every other serious scientist about the lack of any practical speedup from the current devices. In fact Lidar was a coauthor on two important papers (which he stands behind) demonstrating the lack of speedup on all the sets of instances they tested. In other words, there’s no support here—zero—for Geordie Rose’s bombastic claims about the practical utility of the current device.

    (4) As it happens, Daniel Lidar has also been outspoken in talks about his belief that D-Wave should incorporate quantum error-correction, since they have no serious hope of getting a speedup until they do. Once again, this is diametrically opposed to the line from Rose.

    But other than that, yeah, “Rose was right again,” and I “seem to be wrong a lot about D-Wave.”

  533. Jeremy Stanson Says:

    Everybody, both Scott and D-Wave alike, shuffles and tweaks their position to accommodate new information. It seems the question of whether or not the D-Wave machine exhibits quantum effects is slowly being put to rest. Time and time again, we see that it does.

    Two key points of contention remain: i) does it offer any computational advantage? and ii) if it does, can that advantage be attributed to the quantum nature of the device?

    512 qubits is an amazing feat of engineering, but in a real-world noisy environment it is perhaps not computationally sufficient to address these outstanding points. I have observed that all of the experts keep being surprised by how well the machine exhibits quantum behavior despite the noise and short coherence times. Given this, I don’t know if error correction is really necessary, but regardless I don’t think you can simultaneously argue that a) D-Wave needs error-correction and b) the effort is underwhelming because they haven’t shown any speed-up with 512 qubits. If you truly believe they need error correction, would you EXPECT any sign of a speed-up with merely 512 qubits? How many error-corrected qubits do you need to prove a speed-up for any gate-model quantum algorithm? A lot more than 512, I think.

    Now is the time to support the effort and encourage the development of these fascinating machines to 1000s of qubits and beyond.

  534. Scott Says:

    Jeremy #533: Two quick clarifications.

    (1) Suppose D-Wave falsely claims to have achieved milestone X, I point out there’s no evidence that they have, and then years later they do achieve X. Does that mean I was wrong? If it does, then we’re using words in ways I don’t understand, and playing a game that I don’t recognize as science! People constantly want me to prognosticate, but I’ve learned that probably the most useful thing I can do is limit myself to the gap between what D-Wave claims about its current machines and the reality of where they are. If you reread my archives, you’ll notice I never claim that D-Wave can “never” do something. All I say is that they badly misrepresent the current state of the field, and that (despite publishing in academic journals, speaking at academic conferences, etc.) they fail to play by the usual rules of science.

    (2) If you think I found it “surprising” that they failed to see a speedup at 512 qubits, then you’ve totally misunderstood me! The reason the lack of speedup was worth pointing out is not that anyone serious expected a speedup, but rather that before the studies by Troyer et al., D-Wave and its supporters had been loudly trumpeting “speedup” claims—for example, those of McGeoch and Wang! Is the fact that they were doing that now erased from history (despite being googlable)? Do we all now need to pretend, along with Geordie, that random 512-qubit Ising spin instances were just “obviously” a bad place to look for a speedup, so no one on the D-Wave side was even the slightest bit disappointed by not seeing any?

    As for “supporting the effort,” I’d say they have plenty of support as it is. If people like superconducting QC, I’d encourage them to support the efforts of the Martinis and Schoelkopf groups, which have achieved ~10,000x longer coherence times on a way smaller budget, are (unlike D-Wave) actually nearing the fault-tolerance threshold, and are now ready to think about scaling up.

  535. Jeremy Stanson Says:

    Scott # 534

    I think I misdirected you a bit with my previous comment. My point is not about comparing actual speed-ups to claims by D-Wave. Let us completely isolate the D-Wave quantum machine from the D-Wave PR machine. My point is that I don’t think anybody who advocates error correction should expect to see a speed-up at 512 qubits. So, the fact that there is no speed-up demonstrated at such a scale should not be a point of criticism from one who advocates error correction.

    Since a) the quantum nature of the device is, I think, pretty much established, b) the lack of a speed-up remains the only real scientific point of contention, and c) you appear to be an advocate of error correction who should not expect to see a speed-up at the current scale, it follows that you should no longer have any genuine scientific point of contention, at least, not about the CURRENT state of things! ;) All you are left with is criticizing the PR, which is such a boring way to spend your time!

    But you should be happy! The epic D-Wave debate is perhaps now truly progressing into your realm of quantum complexity.

    Re: Martinis et al. scaling up. Yes, they have achieved exactly what D-Wave deliberately set out to avoid: they have produced an impressive and elegant device that bears no resemblance whatsoever to what the scaled-up version will need to become. See, at least, my comments 395 and 422. The works of D-Wave and the Martinis group are at opposite ends of a spectrum, and neither approach is inherently better than the other.

  536. Scott Says:

    Jeremy #535:

    (1) You write, “Let us completely isolate the D-Wave quantum machine from the D-Wave PR machine.” But it’s my contention that the two can’t be isolated. Throughout its history, D-Wave has made many of its decisions—such as scaling up right away rather than improving the underlying technology, ignoring error-correction, sticking with stoquastic Hamiltonians, not doing experiments to directly measure entanglement or to see what happens to the machine at higher temperatures—essentially because PR considerations overrode scientific ones. And so far, their PR-focused approach has been working! Much of the world has simply bought into the PR, without even asking the questions any scientist would ask. And the result has already been to suck a lot of the oxygen out of QC efforts that try to play by the rules of scientific honesty.

    Now, I’m someone who believes those rules are there for a reason, that they’re not just formalities like the one about the fork going left of the plate and the knife to the right (or is it vice versa?). I’m someone who thinks that science and technology, stripped of the bending-over-backward honesty that Feynman discussed in Cargo Cult Science, can maybe stagger a few steps forward like a headless animal, but eventually it’s going to stumble and crush a lot of innocent bystanders when it does. So tell me again why I shouldn’t care about “criticizing the PR”?

    (2) Your phrase “the quantum nature of the device” packs together a lot of different things that really need to be separated in this discussion. At a small enough scale, everything (including ordinary transistors) has “quantum nature”! And no one ever disputed that D-Wave’s qubits behaved like qubits when you examine them individually. And for several years, pretty much everyone has agreed that there’s collective quantum behavior at the 2-8 qubit level—I accepted that as soon as they published the evidence. So I think the real questions right now are:

    (a) Is there ever “global” quantum behavior, involving dozens or hundreds of qubits?

    (b) If so, is the global behavior of a kind that could ever lead to a speedup?

    (c) Is the global quantum behavior “generic,” or does it only show up for extremely special instances (for example, those for which the spectral gap is not too much smaller than the temperature)?

    Initially, the USC group claimed evidence for a “yes” answer for (a). SSSV pointed out that their evidence was unsatisfactory. Now Lidar et al. have returned with stronger evidence. This is how science is supposed to work—if the whole discussion were at this level, I’d have no reason to blog about D-Wave! :-)

    On the other hand, while D-Wave could be on its way to putting a clear check-mark in front of (a), almost everyone seems to ignore questions (b) and (c). But the D-Wave session in Aspen really underscored their centrality for me! From the data I saw, it’s now looking like, while the SSSV model can be falsified on special, easy instances, it works quite well on the hard instances! And it’s also looking like, even when you do get collective quantum behavior, it’s behavior that’s extremely well-modeled by Quantum Monte Carlo. And as long as that remains true, there’s no real hope for an asymptotic speedup.

    I agree that this is all scientifically quite interesting, and yes, even if there’s no speedup, I’m happy D-Wave has reached the point where people can do actual experiments that test the sorts of questions above. Again, if the hype level could be turned down, I’d be happy to just sit back and let the scientific discussion unfold the way I saw in Aspen. But the decision to turn down the hype level rests with D-Wave (and Google/NASA), not with me.

  537. Jeremy Stanson Says:

    Scott # 536

    Seems it’s just you and me left, but I explicitly address your last comment just in case an entry squeezes in between us :)

    I want to encourage the following perspective:

    D-Wave makes rational decisions in order to achieve its objectives.

    I think we see evidence for this in their willingness to relax qubit quality in order to maximize qubit quantity. Of course they want the highest-quality qubits, but the qubits in a large-scale quantum processor will always be lower quality than the qubits in a small scale proof of concept. They have embraced this and accepted the inevitability of having to manage lower-quality qubits at larger scales.

    If we consider that they are behaving rationally, it follows that D-Wave does not want to piss anybody off with their PR. However, they may be willing to piss off a few individuals in order to secure funding and big coverage for the masses. If you’re a start-up company and Time Magazine comes to your door asking to put you on their cover, you don’t say no and you don’t say “only if every statement in the article passes our rigorous scientific review.”

    You state that you are happy D-Wave has reached this point, but perhaps this may not have been possible without the work of their PR engine. I argue that the field of QC would not be better off had D-Wave relied solely on scientific discourse to further their efforts, because had that been the case then D-Wave would be dead today.

    People talk a lot about “holding D-Wave to the standards of scientific honesty” and “they fail to play by the usual rules of science” (your comment # 534), and so on. But D-Wave does play by the usual rules of science when they play in the usual scientific arenas (papers, conferences, etc.). The usual rules of science do not apply to PR and media coverage. There is no peer-review of the news. That’s not D-Wave’s fault. D-Wave, rationally, has to play in both of these fields in order to succeed. You should limit your application of “the usual rules of science” to the usual arenas of science, because that is the only place where such rules have any meaning or authority. This discrepancy is not unique to the D-Wave situation. Criticisms of D-Wave’s PR really should be broadened as criticisms of PR practices in general and discussion of the issues that arise when science is turned into a business.

  538. Jeremy Stanson Says:

    Also, sorry I must add that your statement:

    “Throughout its history, D-Wave has made many of its decisions—such as scaling up right away rather than improving the underlying technology, ignoring error-correction, sticking with stoquastic Hamiltonians, not doing experiments to directly measure entanglement or to see what happens to the machine at higher temperatures—essentially because PR considerations overrode scientific ones.”

    Is very speculative and, I think, quite wrong. I think all of these decisions can and were based on scientific considerations:

    i) I think I have given many good arguments for the scientific merits of focusing on scaling up from the get go;

    ii) “ignoring error-correction” is meaningless, as it is not yet known exactly how error correction could/should be implemented in D-Wave’s hardware, and in all likelihood several different schemes could be implemented in any given architecture. “Not yet openly discussing implementations of” does not equate with “ignoring.”

    iii) the form of Hamiltonian is, as I’m sure you know, directly a result of the physical hardware configuration. I don’t see this issue having much PR attention at all… but attainability would be a big consideration from the design side.

    iv) In recent years, the PR leverage that D-Wave has gotten out of doing precisely the entanglement and temperature experiments you note would clearly indicate that PR considerations were not what delayed these experiments.

  539. Scott Says:

    Jeremy #537:

    (1) OK then, I only ask that the same indulgences granted to D-Wave also be granted to me! I, too, like to think that I pursue my interests rationally (at least most of the time…). They do what they have to do to stay in business, which is their job; I do what I have to do to fight misunderstandings about quantum computing, which is my job. They don’t consider themselves bound by the standards of science when speaking to the public or investors, but instead issue bombastic statements misrepresenting where they are and trashing academic approaches to QC. So, I write blog posts where I also express myself very differently than I would in research papers. Or should the skeptics just unilaterally disarm and cede the whole public arena to them?

    (2) I have no idea whether QC would be better or worse off without D-Wave. That depends on all sorts of unpredictable events that are still to our future (e.g., maybe they trigger an anti-QC backlash, or slow others down with patent litigation; on the other hand, maybe they decide to incorporate error-correction, or maybe some of their technology becomes crucial to Martinis or someone else). I do, however, feel strongly that we’d be worse off with a D-Wave whose public statements were totally unchecked by skepticism.

    (3) I have often used this blog to criticize hype unrelated to D-Wave! If I give D-Wave an extra amount of love, it’s only because (a) their hype encroaches on stuff I actually know something about, (b) they keep generating more and more and more of it, and (c) people keep asking me about it.

  540. Scott Says:

    Jeremy #538: The decisions I’m talking about are ones that were made early on. E.g. if you say: “they haven’t done such-and-such obvious experiments, or implemented non-stoquastic Hamiltonians, because their hardware doesn’t allow it,” that just pushes the question back to: why wouldn’t they have designed their hardware to allow it? (Regarding error-correction, one could similarly ask: if they don’t yet know how to incorporate it in order to have a serious shot at a practically-important speedup, then why wouldn’t they slow down and think and experiment more until they did know?) When I asked the question about hardware point-blank to D-Wave people when I visited them, I was told the answer was: because investors and public don’t care about the things a scientist would care about; they only want to know how many qubits you have right now. So, this is not speculation on my part.

    D-Wave had a choice: it could have tried to educate investors and the public about how, sure, you could integrate hundreds of superconducting qubits right now, but if you do that then you’re not going to see any quantum speedup, so if you want a speedup then you’re much better off thinking harder and doing more basic research first. Instead, they decided to give people the “MORE! QUBITS! NOW!” that they thought they wanted because of incomplete understanding, and to worry later about how to spin the more-or-less inevitable lack of speedup that’s resulted.

  541. Douglas Knight Says:

    The fork goes on the left and the knife goes on the right because “fork” has the same number of letters as “left” and “knife” as “right.” Of course, “napkin” has yet a third number of letters, but its length has the same parity as “left,” which is where it goes.

  542. Nick Read Says:

    Gil #501, Nick #502 (I also posted this comment over at Gil’s blog):

    I’d like to clarify/follow up on my earlier remarks about topological quantum computation (TQC) with nonabelian anyons.

    The concern was that, while it seems we both agree that time evolution of well-separated anyons should be inherently robust against on-going errors (because coupling to the environment is through local operators only), the initialization step for the anyons would involve them not being well-separated for a short time, and during that time they would be potentially vulnerable to errors. Then time evolution of such faulty initial states would of course give poor results. Then it may seem that TQC would not have intrinsic fault tolerance, and would be no better in principle than ordinary QC methods using qubits and gates. I’ll admit to worrying over this briefly.

    But as I said back in #496, there are also methods for preparing the initial state of nonabelions that are themselves protected against errors in principle. Usually, these involve measurements. Suppose for example that we use the popular model, and so a pair of nonabelions has two states which I will call 0 and 1. For computation we want all pairs to be 0 initially. After creating pairs of these anyons, we make the members of each pair well-separated, so now we agree that, whatever their state is (a superposition of 0 and 1), it is robust against errors. Then we make a measurement on each pair, with result 0 or 1 for each. This can be done by passing further nonabelions around a pair, far away, and using interferometry, and is again robust against errors (indeed, the measurement can be done repeatedly as a check.) This projects each pair to either 0 or 1. Then even if there is some chance that during preparation of a pair we get 1 instead of 0 with some frequency, we can use more pairs than we need for computation, and discard the ones in the 1 state so that we have enough 0s for our computation. This can be done with error as small as you wish and only polynomial time overhead. (Thanks to Steve Simon for help with this argument.)
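The measure-and-discard step of this argument can be sketched as a toy simulation. This is purely illustrative: the function name, the independent-outcome assumption, and the error probability `p_err` are my own modeling choices, not part of the physics argument above.

```python
import random

def prepare_zero_pairs(n_needed, p_err=0.1, seed=0):
    """Toy model of measure-and-discard initialization for anyon pairs.

    Each created pair, once its members are well-separated, is measured
    (robustly, by interferometry in the real protocol) and found in state
    1 with probability p_err.  Pairs measured as 1 are discarded; we keep
    creating pairs until n_needed pairs in state 0 are collected.
    Returns the total number of pairs created."""
    rng = random.Random(seed)
    kept, created = 0, 0
    while kept < n_needed:
        created += 1
        if rng.random() >= p_err:  # measurement outcome 0: keep this pair
            kept += 1
    # Expected overhead factor is ~ 1/(1 - p_err): polynomial, as claimed.
    return created

print(prepare_zero_pairs(1000, p_err=0.2))
```

Under these assumptions the overhead is a constant factor of roughly 1/(1 − p_err), which is the "polynomial time overhead" in the argument.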

    So you see that there is much more here that you will need to explain away in order to refute the mere possibility of TQC!


  543. Slipper.Mystery Says:

    Here is more surreality for your reading pleasure:

    “In the fight to prove it really has developed a quantum computer, D-Wave Systems may have won a big round.”

    referring to

    quotes Vazirani, “after talking with them I feel a lot better about them. They are working hard to prove quantum computing.”

    also refers to

  544. Scott Says:

    Slipper.Mystery #543: I just talked to Umesh, and he says that he was badly misquoted.

  545. Gil Kalai Says:

    I had another round of interesting exchange with Daniel and Matthias regarding the old and new papers that we discussed in this thread. It gave me interesting perspective on their views regarding hypothesis testing (using statistics) in experimental physics in general and in this instance, in particular. Since it is lengthy let me just give a link.

    Nick and I also had another round of discussion over my blog and I was happy to host Nick. Very brief summary of the way I see things:  My main objective is not to refute the mere possibility of QC or TQC, but to offer the absence of quantum fault-tolerance as a powerful tool to explain our quantum reality. I agree that there is much more to do regarding TQC. My argument regarding the nature of anyonic qubits applies not just to the initialization stage but to every intermediate stage as well.  I think that the argument extends from qubits to gates also. Nick and Steve’s argument on how perfect gates plus measurements can be used to “purify” qubits is cool!

  546. Jeremy Stanson Says:

    Scott # 544

    With every positive statement made, you are there with a contrary. There is a big difference between healthy skepticism and ugly antagonism.

  547. Scott Says:

    Jeremy: WTF?!? If Umesh was misquoted, isn’t that a relevant fact that deserves to be known?

    The quote was almost comically out of character for Umesh, who’s usually much more skeptical of D-Wave claims than even I am, and who also would never use a sloppy phrase like “prove quantum computing.” (And it’s also opposite in tenor from another Umesh quote in the same article: “One should have a little more respect with the truth.”) So I called Umesh to ask him about it—if he’d really changed his mind, then I certainly wanted to know his reasons, so that I could consider whether to change my mind as well! :-)

    Instead, Umesh told me that the writer mangled what he said almost beyond recognition. He also said to go ahead and convey that in my comments section, while he figures out how else to respond—so that’s exactly what I did.

    More broadly, I reject your view that any “pro-D-Wave” claims are “positive,” while “anti-D-Wave” claims are “negative” (or even “ugly”). Why do they get a monopoly on “positive”? I’m actually a pretty “positive” person: positive about theoretical computer science, positive about the fault-tolerance threshold, positive about the amazing experimental work aimed at getting to that threshold, positive about the deep insights into the natures of specific problems (e.g. factoring) that have been needed to design efficient quantum algorithms for those problems. And I don’t see how you can be positive about the things I’m positive about without being negative about a lot of what D-Wave says.

  548. Jeremy Stanson Says:

    Scott # 547

    Ah, but surely if Umesh could be so readily misquoted with so little time in the limelight, then at least some of the comments for which you so readily attack D-Wave could be similarly misquoted? This is why I continue to overlook the PR myself, and I encourage those who are irked by it to keep their focus on the real discussion. Anybody who has been involved in anything newsworthy knows that media coverage is very imprecise.

  549. Scott Says:

    Jeremy, D-Wave people comment on this blog (Geordie himself did until a few years ago), so if my criticisms of them ever rely on misquotes by the press, then they have plenty of opportunity to correct me. But the truth is that, with D-Wave, Google, and NASA themselves producing PR like this, misquoting could hardly make things worse.

  550. Jeremy Stanson Says:

    I’m trying to watch the video as objectively as possible, and apart from the fact that it’s fluff and a bit silly I can’t find any real issue with it. There’s the part where the news reporter refers to a “quantum leap” in computing power (which technically means a very, very tiny increase but which she intends to mean a large jump) and there’s the reference to humans evolving from monkeys (to which I have no objection, though I’d drop it if you said you did), but otherwise what’s the problem?

    I think the problem is, sadly, yours. D-Wave pissed you off many years ago and now your impressions are biased. This is what I’m referring to when I point out that you find a contrary to every step of progress. Greg Kuperberg makes many smart and interesting comments, but then he STILL gripes about the Sudoku demo! I think, if pressed, D-Wave would admit their initial demos in 2007 were not handled very well (I think I’ve read this from them somewhere, and they did replace their CEO…). They have moved on. They don’t handle PR in the same way anymore. You should move on too.

    If I’m wrong, then please give me just a few simple examples of what is wrong with this video, and you can’t just say the “overall vibe” which I doubt you would criticize in the same way if this was your first encounter with D-Wave.

  551. Scott Says:

    Jeremy, on the contrary, my views about D-Wave have not been uniformly negative, but have varied in direct response to what D-Wave was doing. In the beginning, when they did the Sudoku demo, which inspired the Economist and others to run stories about QC that were cringingly wrong from start to finish, I was pissed—how could I not be? But then I talked to the D-Wave people (both when they visited MIT and when I visited them), and they published a nice paper in Nature, and they seemed serious about actually understanding what was going on with their devices, rather than making premature and bombastic claims of practicality. So I decided that they’d changed, and I moderated my tone, even announcing my “retirement” as skeptic. I thought that the ordinary scientific process would take care of everything from that point forward. But then came the sales to Lockheed and Google, and the bombastic claims started up again even worse than in 2007—this time, accompanied by claims that Lockheed and Google were promptly going to use their new machines to “crack otherwise impossible problems.” And on top of that, D-Wave’s supporters (like Brian Wang) were now gloating to the skies that the sales proved that everything I ever said about D-Wave was wrong. So I got re-pissed … again, how couldn’t I? And then I met Matthias Troyer, who spent hours explaining to me how his group’s recent experiments with the machine had indeed found no speedup, and how D-Wave had initially collaborated with him on his experiments but broke off contact once they saw that things weren’t going their way. And my pissedness turned into resolve to tell people what I knew about what was going on.

    And sorry, but I’m not watching that video again. Once was enough for a hundred lifetimes.

  552. Slipper.Mystery Says:

    The video is extremely informative.
    The part where they illustrate the superposition of a pizza with a bagel to produce a pizza bagel is culinarily precise, edifying and moreover edible.
    They explain that a quantum computer will be used to find the least expensive trip through 20 cities, observe that we won’t be able to ask it the meaning of life, but do clearly state that it will be able to answer the question “are we alone?”.

  553. Gil Kalai Says:

    A quick update about BosonSampling and noise sensitivity (#530, #527): Taking the intuition from permanents, I thought that the square of the permanent would be very noise-sensitive, leading to a flat distribution for noisy BosonSampling. However, exploring this with Guy Kindler, it now looks (after several ups and downs) like this is incorrect (in agreement with what Scott wrote in #527). The squared permanent also has substantial weight on low-degree Fourier (Hermite) coefficients. It is still true that the noisy distribution will be far away from the original permanent-squared distribution, perhaps even for a very small amount of noise.
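For context on the object under discussion: BosonSampling output probabilities are proportional to squared permanents of submatrices of a unitary. A minimal sketch of computing a permanent exactly via Ryser's formula follows; it is illustrative background only, not part of the noise-sensitivity argument.

```python
from itertools import combinations

def permanent(a):
    """Permanent of an n x n matrix via Ryser's inclusion-exclusion
    formula, in O(2^n * n^2) time (versus O(n! * n) for the naive sum).

        perm(A) = (-1)^n * sum_{S nonempty} (-1)^|S| prod_i sum_{j in S} a_ij
    """
    n = len(a)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

print(permanent([[1, 2], [3, 4]]))  # → 10  (= 1*4 + 2*3)
```

Unlike the determinant, no polynomial-time algorithm for the permanent is known (it is #P-hard), which is what gives BosonSampling its hardness.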

    (And being in the quantum-movie business myself, my own view of the Google/NASA/DWave video is that it is very good.)

  554. Jeremy Stanson Says:

    Scott # 551

    I understand you not wanting to watch the video again – why devote the time just to further a discussion deep down in the comments section of an old blog post? But you are the one who introduced the video as an example of what is wrong with D-Wave’s PR, and I find that the video in no way supports your claims. Quite the opposite, in fact.

    Furthermore, after digging around a bit, I can’t really find any D-Wave-controlled PR from recent years that I would consider “bombastic” or scientifically dishonest. I do think they jumped the gun a bit in PRing the McGeoch et al. results, but true to your position that we can only make comments about the current state of things, they didn’t then have all the further data that we have now.

  555. Nick Read Says:

    BBC News has a story about D-Wave (you’re quoted Scott,
    so you may know about it).

    Usual mis-statements (e.g. conflating quantum adiabatic computation with quantum annealing, asserting that Lockheed Martin actually uses it, and so on).

Leave a Reply