D-Wave: Still propagating

Last night Jason Pontin, the Editor-in-Chief of MIT’s Technology Review, published a hard-hitting article about D-Wave, the Vancouver startup that claims to have built “the world’s first commercial quantum computer.” A condensed (and, alas, considerably mangled and dumbed-down) version of his article will appear in Sunday’s New York Times.

Jason wrote to me a couple weeks ago and said that, while he knew that most journalists had gotten the D-Wave story wrong, he was determined to get it right, and wanted my help in understanding the computer-science issues. He didn’t have to ask me twice.

Since I come across as pretty harsh on D-Wave in the article, I think it’s worth recalling how we got to this point. When the D-Wave story broke, my first reaction was to give them some benefit of the doubt (as you can see from the mild tone of my “Orion Anti-Hype FAQ”). So what changed? Well, four things:

  1. I asked Geordie Rose (the founder of D-Wave and sometime commenter on this blog) twice for actual information about the speedup D-Wave claimed to be seeing over classical algorithms, and he never provided any.
  2. Instead of answering my and others’ technical questions, D-Wave decided to attack academic science itself. Here’s what Herb Martin, the CEO of D-Wave, told an interviewer from ITworld: “Businesses aren’t too fascinated about the details of quantum mechanics, but academics have their own axes to grind. I can assure you that our VCs look at us a lot closer than the government looks at the academics who win research grants.” The track record of companies that engage in this sort of behavior is not good.
  3. I read article after article claiming that quantum computers can solve NP-complete problems in polynomial time. I reflected on the fact that a single one of these articles will probably reach more people than every quantum complexity paper ever written.
  4. It became clear, both from published interviews and from talking to Jason, that D-Wave was doing essentially nothing to correct journalists’ misconceptions about what sorts of problems are believed to be efficiently solvable by quantum computers.

Update (4/6): Geordie Rose has responded to me. A few responses to his response:

  • I apologize for getting the CEO’s name wrong. I’d originally written Herb Martin, but then noticed that the ITworld article referred to him as “Ed Martin” and therefore changed it. This seems like another case where D-Wave has to work harder to correct journalists’ misconceptions!
  • In a discussion with Geordie on my blog, following my Orion Anti-Hype FAQ, I asked:

    You say you’re seeing a speedup in your experiments — but (1) how big of a speedup, and (2) do you mean compared to the best-known classical algorithms (like simulated annealing), or compared to brute-force search?

    Then, in another discussion with Geordie following my Speaking truth to parallelism post, I asked him again:

    I’m certainly glad that you’re not claiming an exponential speedup. But at least based on the evidence I know about, whether you’re seeing a quadratic speedup — or indeed any asymptotic speedup — is very much open to question. Hence the question I asked you earlier: have you compared the empirical scaling for your adiabatic algorithm to the empirical scaling for the best classical algorithms like simulated annealing? If so, what were the results?

  • Geordie now says that “the only way scaling can be extracted is empirically, and we can’t build big enough systems (yet) to answer scaling questions.” Thanks; that’s actually extremely helpful to me. I must have gotten a wrong impression from some of D-Wave’s previous statements. For example, here’s an excerpt from D-Wave’s recently-released “Introduction to Orion” document (which now seems to be available for “premium subscribers” only):

    Q: Are D-Wave processors quantum computers?
    A: Yes. We have determined that quantum effects are being harnessed to accelerate computation in our processors.

    And here’s an excerpt from a comment on Dave Bacon’s blog (blog comments seem to be D-Wave’s preferred venue for announcing scientific results):

    While the jury is still not in, our studies of these systems seem to indicate that AQCs, in the presence of thermal and spin bath environments, can still provide O(sqrt(N)) scaling even though the central QC system is definitely NOT a “system that’s globally phase coherent over the entire calculation”.

  • The core of Geordie’s response is the following paragraph:

    This is worth emphasizing, because I thought it was obvious, but it turns out alot of people don’t get this. Most of the poorly thought out comments related to what we’re trying to do have come from theoretical computer scientists, who assume that the things they hold dear are likewise treasured by everyone else. Because they worship efficiency, they have assumed that’s the objective of our projects, when I have repeatedly said it’s not.

    When I read this to Umesh Vazirani over the phone, he sardonically replied, “it will be interesting to find out what’s left of this field after you’ve removed the notion of efficiency…”
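Since simulated annealing comes up repeatedly in the questions above as the classical baseline that D-Wave's adiabatic approach would have to beat, here is a minimal sketch of it for a toy Ising-style energy function, the kind of optimization problem Orion is said to target. The energy function, cooling schedule, and all parameters below are generic illustrations, not anything from D-Wave:

```python
import math
import random

def ising_energy(J, h, s):
    """Energy of spin configuration s (entries +/-1) for couplings J
    (upper triangular: J[i][j] used for i < j) and local fields h."""
    n = len(s)
    e = sum(h[i] * s[i] for i in range(n))
    e += sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
    return e

def simulated_annealing(J, h, n, steps=20000, t_start=2.0, t_end=0.01, seed=0):
    """Minimize the Ising energy by simulated annealing with a geometric
    cooling schedule; returns the best energy seen and the final spins."""
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    e = ising_energy(J, h, s)
    best = e
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # cool geometrically
        i = rng.randrange(n)
        # Energy change from flipping spin i (only terms touching s[i] change).
        delta = -2 * s[i] * (h[i] + sum(J[min(i, j)][max(i, j)] * s[j]
                                        for j in range(n) if j != i))
        # Metropolis rule: always accept downhill moves; accept uphill
        # moves with probability exp(-delta / t).
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            s[i] = -s[i]
            e += delta
            best = min(best, e)
    return best, s
```

On a real benchmark, the empirical scaling comparison requested above would plot how the time to reach a given solution quality grows with problem size for a classical baseline like this, versus for the adiabatic hardware.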

27 Responses to “D-Wave: Still propagating”

  1. Ben Toner Says:

    Actually, I think that the article is quite favourable to D-Wave. Geordie Rose comes across quite well in the interview. Pontin (the interviewer) starts out quite surly, but he doesn’t challenge any of Rose’s misleading assertions: for example, that computer scientists have some fancy definition of what “approximate” means, his definition of fault-tolerance, that his proof of concept actually proved anything, that there are “lots of applications in financial engineering,” and so on. I think the article would be improved if Rose’s interview were used as a source of quotations in the main text, with more comparison between Rose and his critics, rather than ‘one side says this, the other that, you decide,’ especially when Rose comes across as more reasonable than you do, Scott.

  2. rrtucci Says:

    For your next debate, I want you to take on Laughlin. MIT Tech Review, are you listening?

  3. Scott Says:

    Hey, the QC hypers, the QC skeptics, the computer scientists who wish QC would go away … I take all comers.

  4. (Instrumental) « we don’t need no “sticking” room 408 Says:

    [...] (Instrumental) Apparently this is a hard hitting article about D-Wave, though I much prefer journalism to be done in the death zone. [...]

  5. John Sidles Says:

    Our QSE Group tracks D-Wave’s story with great interest, not because we regard it as an Armageddon-type battle between the forces of light and darkness, but instead because we think the fortunes of D-Wave exemplify a natural and very desirable stage in the evolution of quantum information theory from an academic discipline to a tool for enterprise.

    Being huge fans of history, we foresee that the evolution of quantum information technology will occur by a path similar to the evolution of computational fluid mechanics. Without belaboring the point, the graphic that we use to depict this evolutionary path (png here) comes straight from a review article on computational fluid dynamics.

    From this historical point of view, Scott and his cohort represent “Stage I” on the CFD time-line: “Scientific Papers and Know-How.” D-Wave is among the first enterprises to attempt “Stage II: Gee Whiz, Look What Can Be Done!” and “Limited Pioneering Applications”. Quantum key distribution is another obvious example of Stage II quantum information technology.

    Arguably, quantum information technology has not yet reached “Stage III: Putting It All Together”, “Stage IV: Learning How To Use It Effectively”, or “Stage V: Mature Applications.” However, most people agree that these later stages are approaching inexorably.

    It is hugely reassuring that none of Stages I-V ever concludes; instead they form a stable mathematics, science, and technology ecosystem within which young people’s careers can prosper. In particular, the Stage I of fundamental research depends utterly for its support and long-term vitality upon the Stage V of mature applications, and vice versa, of course.

    These individual stages have goals and values and resource needs that sometimes conflict; historically these conflicts tend to be settled in a typically human fashion: by squabbling and politics masquerading as reason and statesmanship!

    Oddly enough, at the end of the day, public commitments to reason and statesmanship often turn out to be real. Isaac Bashevis Singer wrote a wonderful short story on this theme, entitled A Piece of Advice, that can be found in his collection The Spinoza of Market Street.

    So hoorah for Shtetl Optimized and the blogosphere … the agora of the science and technology community! :)

  6. Scott Says:

    Ben, the Tech Review article looks favorable to D-Wave by our standards, but by media standards it’s very critical. I now have some experience with how these popular-science sausages get made, and I can tell you that the following paragraph (regardless of what else was there to ‘balance’ it) wouldn’t make it past an editor at many newspapers:

      But computer scientists who specialize in quantum computing have been profoundly skeptical of D-Wave’s demonstration. D-Wave provided no evidence to back up its claims: it has released only the sketchiest details about the inner workings of Orion. What computer scientists do know does not impress them.

    You’re just not supposed to say things like that in an article on new technology — it’s too close to reality for comfort.

  7. rrtucci Says:

    Good job Scott. From now on I’m going to call you The Anti-Lubos

  8. Jud Says:

    Ben, as a layperson with no knowledge of quantum computing or complexity theory, GR did not come off as reasonable to me. Particularly when his answer to “How are you going to prevent decoherence?” is essentially “Try until we get it right:”

    “[G]iven that we can build it and send information in and out of it, will it in fact continue to operate as a quantum computer? That’s a point that we simply cannot answer at the present time because no one has been able to model systems at that level with any predictive capability whatsoever. It’s too complicated. That’s a question that can only be answered empirically. So our philosophy is, do a new processor every month. Say we have 12 generations per year, something doesn’t appear to be working; we can fix it through iterative redesign.”

    At 12 generations per year (can anyone with fabrication experience say whether this is remotely reasonable?), one wonders whether the “fix…through iterative redesign” can be achieved in polynomial time, “polynomial time” being defined for present purposes as “before the venture capital bucks run out.”

  9. John Sidles Says:

    I forgot to mention that the CFD community openly discloses its dynamical equations and numerical techniques in the peer-reviewed literature, and many of the most-used CFD codes are open-source (like OVERFLOW). Companies like Boeing and Airbus have come to rely utterly upon this Stage I and Stage II transparency to build investor confidence.

    D-Wave seems to be departing pretty significantly from this paradigm in keeping their algorithms secret. Scott is absolutely correct in noting that “The track record of companies that engage in this sort of behavior is not good.”

    One of the rare partial exceptions (that I can think of anyway) is Celera’s reliance upon proprietary algorithms in their private sector project to shotgun-sequence the human genome.

    The NIH in particular was very skeptical that Celera’s algorithms would work when applied to large genomes. However, events proved that Celera’s confidence in their algorithms was justified.

    Nowadays many investigators apply shotgun-sequencing methods with great success, and the happy result is that there is now an enormous peer-reviewed literature on these methods, which has created great confidence in their efficacy.

    As far as I know, D-Wave has never disclosed their quantum computation algorithms, even to the limited qualitative extent that Celera disclosed their shotgun-sequencing algorithms.

    So AFAICT, there are not yet strong grounds to place substantial confidence in D-Wave’s claims. Peer review, replication, and open-sourcing have not yet been given a chance to do their work. I have to say that I fully agree with Scott in this regard.

  10. Nagesh Adluru Says:

    You are the truest computer scientist I have met so far. You not only have authority but also the skill to make good use of it! Academia (in fact, the whole world) needs more people like you.

  11. More on the TR interview « rose.blog Says:

    [...] OK in this second installment of Geordie’s Easter Weekend Vent-a-thon, I’d like to offer some food for thought related to Scott Aaronson’s latest blog post. Specifically I’d like to take him to task on his rationalization of why he is so crusty, the most recent examples of which are: 1. I asked Geordie Rose (the founder of D-Wave and sometime commenter on this blog) twice for actual information about the speedup D-Wave claimed to be seeing over classical algorithms, and he never provided any. [...]

  12. Andy D Says:

    This type of shenanigans has made Robin Hanson-style prediction markets for science seem more and more attractive to me, despite some reservations. You’ve made your case on the level of ideas, now lay down a grand challenge for D-Wave (there must be some very-hard distributions over SAT available online) and offer a bet with odds way in its favor; with the gauntlet thrown, any attempt by D-Wave to alter the terms of the challenge will seem like mere weaseling, as will further hype on paper.

    Complexity theorists could sign on to the bet, but you’d need more capital to make the scale dramatic. Luckily, it should be far easier to win over a few very rich individuals than the general public. I’m also curious to what extent this has been attempted with global warming.

  13. Ghaith Says:

    Is there any reason why D-Wave doesn’t talk about factoring in their anticipated larger model (1024 qubits)?

  14. Scott Says:

    Andy: Yeah, this does seem like a decent case for prediction markets!

    However, I think a bet has already implicitly been made, even if there’s no money at stake (on our side). If a couple years from now, D-Wave starts solving huge 3SAT instances faster than anyone can solve them with a classical computer, then Umesh and I and everyone else who’s been criticizing them will look a bit silly. The best we’ll be able to say — and we’ll be right — is, “our criticisms were reasonable ones, based on the evidence that D-Wave chose to make publicly available at the time.”

  15. Dave Bacon Says:

    This is almost as fun as criticizing string theory!

  16. cody Says:

    I was just thinking: if D-Wave’s claims were correct, why don’t they factor RSA’s challenge numbers to raise some money? Have they stated any reason why they cannot do such a thing? Not that I really need another reason to doubt them; I am way too trusting of Scott’s opinions (in general).
    It’s funny that D-Wave is so private about their work when they have such a readily available way to prove their claims without giving away their methods (i.e., cracking current encryption).

  17. cody Says:

    Oops, I should have read all the comments first. Also, I like your last comment Dave.

  18. Scott Says:

    Ghaith and Cody: Yeah, that’s an obvious question. I can’t answer for D-Wave; if you want Geordie’s thoughts on the factoring problem, see this blog post.

  19. Greg Kuperberg Says:

    My feeling is ever stronger that Rose doesn’t get the PCP theorem. Once again, he said in the interview with Jason Pontin:

    What computer scientists tend to mean by “approximate” in these cases is something very specific about how great the approximation is, and they tend to mean something that is very close to exact.

    Emphasis mine. Essentially Rose has been saying all along that the only reason that NP-hard optimization problems might be out of reach for quantum computers is that computer scientists split hairs. They have only shown that the single very best solution is NP-hard, or in the case of the PCP theorem, they only show that solutions a tiny fraction off from the very best are NP-hard. After all, Rose described their computer-to-be this way:

    The system we are currently deploying, which we call Trinity, is a capability-class supercomputer specifically designed to provide extremely rapid and accurate approximate answers to arbitrarily large NP-complete problems

    Now the PCP theorem is a non-trivial business, because the gap in the PCP is not stable under general polynomial-time reductions. Some NP-hard problems have a sizable PCP gap, others only a small gap, and others don’t have a gap at all. Since I am new to this great topic, I had trouble finding a really juicy example.

    But Rose found it for me, when he specifically cited the maximum independent set problem as one that users can solve on his computers. In a striking paper, Johan Håstad showed that finding a maximum independent set (equivalently a max clique) on a graph with N vertices is NP-hard even within a factor of N^{1-epsilon}. In other words, any computer, short of one that can fully and exactly solve NP-hard problems in polynomial time, will sometimes miss the elephant in the room when looking for a maximum independent set.

    For instance, according to Håstad (if I have read the paper correctly), if the graph has N vertices, the largest clique may have N^{4/5} vertices, but the largest findable clique may only have N^{1/5} vertices. Again, “findable” means any talent short of being able to fully and exactly solve NP-complete problems in polynomial time. I do not think that being off by a factor of N^{3/5} is either “very close to exact” or “extremely accurate”.
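The scale of this gap is worth making concrete. A quick arithmetic sketch, using the illustrative exponents from the paragraph above (the values 4/5 and 1/5 describe this example scenario, not figures taken from Håstad's paper):

```python
# Illustrative arithmetic for the max-clique approximation gap described above:
# on a graph with a million vertices, the largest clique in this scenario has
# about N^(4/5) vertices, while the largest "findable" clique has about N^(1/5).
N = 10 ** 6
largest = N ** (4 / 5)    # ~63,096 vertices: the elephant in the room
findable = N ** (1 / 5)   # ~16 vertices: what an efficient algorithm can promise
gap = largest / findable  # off by a factor of N^(3/5), ~3,981 here
```

Missing the largest clique by a factor of nearly four thousand on a million-vertex graph is the sense in which "approximate" here is nowhere near "very close to exact."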

  20. Cheshire Cat Says:

    Greg, you’re missing Rose’s point, which is that the complexity theorist’s notion of worst-case complexity is not very useful in the real world. He seems to be saying that quantum computers may have an advantage when finding approximate answers to typical instances, i.e., instances that arise in practice. Since he doesn’t quantify what “typical” is, this is hard to refute. Certainly there is not a strong enough theoretical justification for why “typical” instances are hard to solve when you take a natural notion of “typical,” such as choosing uniformly at random from some easily described distribution.

  21. Niel Says:

    Cheshire Cat said: Greg, you’re missing Rose’s point, which is that the complexity theorist’s notion of worst-case complexity is not very useful in the real world.

    This sounds like an empirical claim to me. There are hoary old arguments about large multiplicative constants and large polynomial degrees, but these are reasons why the complexity theorists’ conception of “efficient” is controversial (but essentially only for non-specialists). This is different from it being wrong, which is equivalent to saying that there are problems which are deemed to be “difficult” (and always will be), but which are easy in practice — an empirical claim.

    In his 3:10 post above, Scott basically says that he’s ready to be refuted on empirical grounds. Even if he only says this because he expects Quantum Computers to violate the Strong Church-Turing thesis, it seems that he’s being quite fair by your standards, whatever your skepticism about the usefulness of the complexity theorists’ definition of “efficient”.

    In any case, the whole trouble is that D-Wave has yet to demonstrate very convincingly that its product “can easily solve difficult problems,” whatever consequences you would take such a demonstration to have.

  22. Greg Kuperberg Says:

    Greg, you’re missing Rose’s point, which is that the complexity theorist’s notion of worst-case complexity is not very useful in the real world. He seems to be saying that quantum computers may have an advantage when finding approximate answers to typical instances, i.e, instances that arise in practice.

    No, I have not seen him say “typical”, although I would not be surprised if he changed his message to now say it that way. I just searched the company site, his blog, and his interview with Pontin, and I do not see any relevant occurrences of the word “typical”.

    His line the entire time has not been that computer scientists argue with artificial instances of problems, it’s that they split hairs by demanding either exactitude or too much accuracy. But that is just not true. In one dramatic case, max clique or max independent set, an adapted PCP theorem says that you can’t see the elephant in the room.

    Besides, there isn’t any general theorem that assures you that inapproximability is always atypical. I haven’t heard of strong results on the matter in either direction. How much patience is due to someone who demonstrates the trivial, promises the impossible, and then whittles the promise so that it may or may not be impossible?

  23. Greg Kuperberg Says:

    Now that I ponder it, I remember an earlier example from D-Wave that contradicts Rose’s explanations even more seriously. One of the major attractions of the Orion demonstration was that it solved Sudoku puzzles. Indeed, the title of the Scientific American article was, “First ‘Commercial’ Quantum Computer Solves Sudoku Puzzles”. Rose has repeatedly argued that hardness results are irrelevant to his game because computer scientists split hairs. But this argument doesn’t apply to Sudoku, because a Sudoku solution is either exactly right or it’s wrong. Although Rose didn’t say it, he could have helped himself to the argument that computer scientists are wrong because they look at artificial instances. But that argument also doesn’t apply to Sudoku, because it’s a challenge puzzle that can be made hard on purpose. Generalized Sudoku is hard precisely because it is NP-complete.

    But there is a more immediate problem with the Sudoku demonstration, apart from what Rose is promising in the future. How can a 16-qubit computer solve Sudokus? According to a basic incompressibility theorem, 16 qubits cannot store any more classical information than 16 bits. Even a single free row of a Sudoku requires 19 bits of storage, because 9! > 2^18. How in the world can a 16-bit computer solve Sudokus? Moreover, this does not even count any overhead that you would need to translate a Sudoku to the Ising model.
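The counting in this paragraph is easy to verify directly; the 19-bit figure is the ceiling of log2(9!), and the "basic incompressibility theorem" is presumably the Holevo bound:

```python
import math

# One free row of a Sudoku is a permutation of the digits 1..9: 9! possibilities.
# Storing an arbitrary one of them needs ceil(log2(9!)) classical bits, while
# (by the Holevo bound) 16 qubits can yield at most 16 classical bits.
row_configs = math.factorial(9)                  # 9! = 362,880
bits_needed = math.ceil(math.log2(row_configs))  # 19, since 9! > 2^18
```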

    Apropos of this issue, the Scientific American article asked, “And how exactly would users know that it was the quantum computer rather than a human or ordinary computer answering their queries?” Rose’s answer was that there is no way to convince a committed skeptic. But that is not the right version of the question. My question now is, how could the Sudoku demonstration have been anything other than fake?

    Scott, do you have any ideas on that?

  24. Scott Says:

    Scott, do you have any ideas on that?

    Well, they could be calling the Orion only when they get far enough down in the search tree. Who knows? It’s really a question for Geordie.

  25. Greg Kuperberg Says:

    Well, they could be calling the Orion only when they get far enough down in the search tree.

    If it is far enough down the search tree, then it does not shake the appellation “fake”. No one at the demo had in mind a 100-yard relay race in which the qubits only run the last four feet. Remember, they don’t even have 16 fungible qubits, they only have 16 Ising qubits. Is there room to encode more than the four last cells of a Sudoku in so little space?

    Besides, you can’t solve a full-board Sudoku with just a search tree, because the search is impossibly large. Most of the work is pruning by early contradiction. That’s what makes Sudoku fun: It’s a combinatorial search that looks a trillion times harder than it really is. Branching in the search is a last resort, and moreover it has to be encoded through the pruning. I do not know how to distill any interesting Sudoku question down to 16 bits or 16 qubits, much less 16 Ising qubits.
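The role of pruning is easy to see in a minimal textbook backtracking solver, where rejecting a digit the moment it contradicts its row, column, or box does almost all the work, and branching really is the last resort. This is a generic sketch for illustration only; D-Wave has not disclosed how (or whether) Orion encodes anything like it:

```python
def solve(board):
    """Backtracking Sudoku solver: board is a 9x9 list of lists, 0 = empty.
    Pruning = the `ok` check, which rejects a digit as soon as it clashes
    with its row, column, or 3x3 box; branching happens only at empty cells."""
    def ok(r, c, d):
        if d in board[r]:                             # row clash
            return False
        if any(board[i][c] == d for i in range(9)):   # column clash
            return False
        br, bc = 3 * (r // 3), 3 * (c // 3)           # top-left of the 3x3 box
        return all(board[br + i][bc + j] != d
                   for i in range(3) for j in range(3))

    for r in range(9):
        for c in range(9):
            if board[r][c] == 0:
                for d in range(1, 10):
                    if ok(r, c, d):                   # prune contradictions early
                        board[r][c] = d
                        if solve(board):
                            return True
                        board[r][c] = 0               # undo and try the next digit
                return False                          # dead end: backtrack
    return True                                       # no empty cells left: solved
```

Even starting from an empty board, the contradiction check keeps the search tiny compared to naive enumeration over all 9^81 cell assignments.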

    Who knows? It’s really a question for Geordie.

    He has already thrown up his hands with the answer that no one can convince a committed skeptic. I see an opening to do the work for him. If we can know ourselves that solving a Sudoku with 16 Ising qubits is a travesty, then in particular it was so in the Orion demonstration.

  26. cody Says:

    Scott, thanks for that link to Geordie’s blog. Much like the interview, it sounds more like a businessman with technical jargon who can convince laymen than a technical person with legitimate know-how who can convince peers. But I’m pretty much a layman myself, so I can’t place much confidence in my criticism.

  27. anon Says:

    /…He seems to be saying that quantum computers may have an advantage when finding approximate answers to typical instances…/

    I’m no expert, but aren’t there problems in the theory of lattices that are NP-hard, with worst case as hard as average case, and hard to find approximate solutions for, all at once? Presumably the smug thing would be to ask GR for any approximate solution to any such problem of his choosing, properly generated from a seed…?