Raise a martini glass for Google and Martinis!

We’ve already been discussing this in the comments section of my previous post, but a few people emailed me to ask when I’d devote a separate blog post to the news.

OK, so for those who haven’t yet heard: this week Google’s Quantum AI Lab announced that it’s teaming up with John Martinis, of the University of California, Santa Barbara, to accelerate the Martinis group’s already-amazing efforts in superconducting quantum computing.  (See here for the MIT Tech’s article, here for Wired’s, and here for the WSJ’s.)  Besides building some of the best (if not the best) superconducting qubits in the world, Martinis, along with Matthias Troyer, was also one of the coauthors of two important papers that found no evidence for any speedup in the D-Wave machines.  (However, in addition to working with the Martinis group, Google says it will also continue its partnership with D-Wave, in an apparent effort to keep reality more soap-operatically interesting than any hypothetical scenario one could make up on a blog.)

I have the great honor of knowing John Martinis, even once sharing the stage with him at a “Physics Cafe” in Aspen.  Like everyone else in our field, I profoundly admire the accomplishments of his group: they’ve achieved coherence times in the tens of microseconds, demonstrated some of the building blocks of quantum error-correction, and gotten tantalizingly close to the fault-tolerance threshold for the surface code.  (When, in D-Wave threads, people have challenged me: “OK Scott, so then which experimental quantum computing groups should be supported more?,” my answer has always been some variant of: “groups like John Martinis’s.”)

So I’m excited about this partnership, and I wish it the very best.

But I know people will ask: apart from the support and well-wishes, do I have any predictions?  Alright, here’s one.  I predict that, regardless of what happens, commenters here will somehow make it out that I was wrong.  So for example, if the Martinis group, supported by Google, ultimately succeeds in building a useful, scalable quantum computer—by emphasizing error-correction, long coherence times (measured in the conventional way), “gate-model” quantum algorithms, universality, and all the other things that D-Wave founder Geordie Rose has pooh-poohed from the beginning—commenters will claim that still most of the credit belongs to D-Wave, for whetting Google’s appetite, and for getting it involved in superconducting QC in the first place.  (The unstated implication being that, even if there were little or no evidence that D-Wave’s approach would ever lead to a genuine speedup, we skeptics still would’ve been wrong to state that truth in public.)  Conversely, if this venture doesn’t live up to the initial hopes, commenters will claim that that just proves Google’s mistake: rather than “selling out to appease the ivory-tower skeptics,” they should’ve doubled down on D-Wave.  Even if something completely different happens—let’s say, Google, UCSB, and D-Wave jointly abandon their quantum computing ambitions, and instead partner with ISIS to establish the world’s first “Qualiphate,” ruling with a niobium fist over California and parts of Oregon—I would’ve been wrong for having failed to foresee that.  (Even if I did sort of foresee it in the last sentence…)

Yet, while I’ll never live to see the blog-commentariat acknowledge the fundamental reasonableness of my views, I might live to see scalable quantum computers become a reality, and that would surely be some consolation.  For that reason, even if for no others, I once again wish the Martinis group and Google’s Quantum AI Lab the best in their new partnership.

Unrelated Announcement: Check out a lovely (very basic) introductory video on quantum computing and information, narrated by John Preskill and Spiros Michalakis, and illustrated by Jorge Cham of PhD Comics.

51 Responses to “Raise a martini glass for Google and Martinis!”

  1. quax Says:

    To the extent that I contribute to the “blog-commentariat,” I don’t really have an issue with the “fundamental reasonableness” of your views, but more with how you chose to express them.

    Of course, in the bigger scheme of things, it wasn’t only your prerogative to be as over the top as you liked; it also made for good reading, an exciting blog, and overall more attention to D-Wave and the push for QC in general.

    At any rate, I am very happy that you no longer have to be worried about the prospect of any D-Wave under-performance tarnishing the QC field as a whole.

  2. Phil Miller Says:

    When you say “long coherence times (measured in the conventional way)” I assume you mean wall-clock seconds, rather than something more exotic? I agree that absolute times are important to prevent misleading conclusions.

    However, from a computer architecture perspective, it’d also be really useful to measure coherence duration in terms of operation cycles. I assume that measures taken to improve coherence may also make operations slower. The real test for a scalable quantum computer is something like “can it complete enough operations before it loses coherence?”
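    A crude version of that figure of merit can be sketched in a few lines (all numbers below are illustrative placeholders, not measurements of any real device):

```python
# Toy figure of merit: how many sequential gate operations fit inside one
# coherence window.  All numbers here are illustrative placeholders, not
# measurements of any real device.

def ops_per_coherence_window(coherence_time_s, gate_time_s):
    """Rough count of operations completable before coherence is lost."""
    return coherence_time_s / gate_time_s

# e.g. ~40 microseconds of coherence with ~20 nanosecond gates:
print(ops_per_coherence_window(40e-6, 20e-9))  # roughly 2000 operations
```

    The point of the ratio is exactly the one raised above: improving the numerator is worthless if it comes at the cost of an equally large denominator.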

  3. Scott Says:

    Phil #2: A reasonable guess, but no, that’s not what I was talking about. (Which implies, in particular, that I should have been clearer.)

    For years, D-Wave has argued that, even though its qubits decohere in nanoseconds when you define decoherence using the so-called T1 and T2 times (which, very roughly, measure the vanishing of the off-diagonal density matrix elements), nevertheless they think their system maintains its coherence for a much larger time in the energy eigenbasis. In addition, they argue that, if (like them) you’re just trying to do adiabatic optimization, then coherence in the energy eigenbasis is all that’s needed.

    This view of D-Wave’s has generated much discussion and debate among error-correction experts (among whom I don’t count myself), the reason being that we still don’t have nearly as good an understanding of fault-tolerance for adiabatic QC as we do for gate-model QC. However, I think a reasonable summary of what’s currently understood is this (experts please correct me if I’m wrong):

    Sure, if you’re trying to do adiabatic optimization, then T1 and T2 times are no longer directly relevant: perfect coherence in the energy eigenbasis would theoretically suffice to keep you in the ground state throughout, which is what you want. The problem is that, in practice, you don’t have perfect coherence in the energy eigenbasis, because your system isn’t at absolute zero. It has some nonzero temperature, and that temperature could severely increase the risk of level-crossings (even beyond what the risk would be for the perfect, zero-temperature adiabatic algorithm). Roughly speaking, the increased risk of a level-crossing depends on the ratio of the temperature to the minimum spectral gap. In D-Wave’s case, the temperature is constant (~20 millikelvin), whereas for most or all hard optimization problems, the spectral gap is expected to decrease exponentially with the problem size. And that means big trouble when you try to scale up.
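    A back-of-the-envelope version of that scaling argument (the constants below are purely illustrative; only the fixed-temperature-versus-shrinking-gap structure matters):

```python
# Toy version of the scaling argument: the device temperature is fixed,
# while the minimum spectral gap is assumed to close exponentially with
# problem size.  All constants here are illustrative, not measured values.

K_B = 1.380649e-23   # Boltzmann constant, J/K
TEMP_K = 0.020       # ~20 millikelvin operating temperature (held fixed)

def min_gap_joules(n, gap0=1e-22):
    """Hypothetical minimum spectral gap for an n-variable hard instance."""
    return gap0 * 2.0 ** (-n)

def thermal_to_gap_ratio(n):
    """k_B*T / gap: once this is >> 1, thermal excitation dominates."""
    return (K_B * TEMP_K) / min_gap_joules(n)

# The ratio doubles with every added variable, so at scale the thermal
# energy swamps the gap no matter what the constants are:
for n in (10, 20, 30):
    print(n, thermal_to_gap_ratio(n))
```

    Whatever the actual prefactors, a fixed temperature divided by an exponentially shrinking gap eventually blows up, which is the "big trouble" at scale.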

    So, while it’s true that you don’t directly care about T1 and T2 times with this approach, you’ve traded them for a different form of decoherence that you don’t really understand how to error-correct. With the gate-model approach, at least there’s a very good theoretical understanding of how you would error-correct, once your T1 and T2 times became large enough compared to your operation time (and remained large enough even as you added more qubits).

  4. Rahul Says:

    “I might live to see scalable quantum computers become a reality…”

    Probability please. 🙂

    Also, did this Google decision increase the probability above in your estimation? By how much?

    Speaking for myself, I don’t care who succeeds in building the first scalable, useful, universal QC. But if *anyone* does, then in my book Scott deserves credit.

  5. wolfgang Says:

    >> if the Martinis group, supported by Google, ultimately succeeds in building a useful, scalable quantum computer

    But will it be conscious? What is your prediction for that?

  6. Rahul Says:

    “….nevertheless they [DWave] think their system maintains its coherence for a much larger time in the energy eigenbasis.”

    Naive question: Is there an empirical way to test for coherence times in the energy eigenbasis? How?

  7. Scott Says:

    Rahul #4: I’ve coughed up probabilities for scalable QC in several previous comment threads, and I can’t remember what I said—but I fear that, if I oblige these requests too many times, then someone is going to spot an inconsistency and Dutch-book me. 🙂 So can’t you just dig up one of the earlier threads?

    But yes, whatever my probability distribution was before over the arrival time of scalable QC, the fact that Google is now supporting John Martinis does shift it leftward by a small amount. If it happens, of course the people who’ll deserve credit will be the ones who did the work and contributed the insights that made it possible.

  8. Scott Says:

    wolfgang #5: OK, my probability for the first useful QC to be conscious is approximately 0.001%, or 0.00001% if the QC is built within the next century. 🙂 FWIW, I tend to think of human-level AI—never mind consciousness!—as a vastly harder problem than scalable QC.

  9. Scott Says:

    Rahul #6: In principle, sure, all of these things are empirically testable. In practice, it’s extremely difficult, in part because D-Wave is so limited in how they can measure their system. As best I understood when they explained it to me, they basically just have one measurement basis. They don’t have the ability to prepare the same state over and over and measure it in different bases, which is the sort of thing that would be needed to do state tomography, or get “smoking-gun” proof of entanglement, or understand in detail how the system is decohering.
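    To see why a single fixed measurement basis is so limiting, here is a minimal linear-algebra sketch (textbook quantum mechanics, nothing specific to any particular hardware): two states with identical statistics in the Z basis can be perfectly distinguished in the X basis, so a device locked into one basis simply can’t tell them apart.

```python
import numpy as np

# Two single-qubit states: |+> = (|0>+|1>)/sqrt(2) and |-> = (|0>-|1>)/sqrt(2)
plus  = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)

def probs(state, basis):
    """Measurement probabilities of `state` in the given orthonormal basis."""
    return [abs(np.vdot(b, state)) ** 2 for b in basis]

z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_basis = [plus, minus]

print(probs(plus, z_basis))   # ~[0.5, 0.5]
print(probs(minus, z_basis))  # ~[0.5, 0.5]  -- indistinguishable in Z
print(probs(plus, x_basis))   # ~[1.0, 0.0]
print(probs(minus, x_basis))  # ~[0.0, 1.0]  -- perfectly distinguished in X
```

    State tomography, Bell tests, and detailed decoherence diagnostics all rest on this ability to re-prepare a state and measure it in several different bases.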

    Note that, e.g., Schoelkopf’s group at Yale can measure superconducting qubits in multiple bases; they used that ability to prove Bell and GHZ inequality violation in such qubits. So it’s technologically possible, if hard. But it’s not a direction that D-Wave chose to invest in, because their focus has always been on getting more qubits, rather than on understanding the qubits they have.

  10. vzn Says:

    this is awesome news overall, it will really help the field to have multiple competing groups from different milieus, both academia and industry.
    however, His Eminence emphatically denied, in the previous post’s comments, JG’s assertion that the Martinis group’s research is built on top of D-Wave techniques. that denial seems contradicted directly by the Wired article:

    Martinis is among those questioning D-Wave’s claims. Last June, Science published a paper co-authored by Martinis and several other scientists concluding that D-Wave’s machines aren’t actually faster than normal laptops and desktops. But he’s no D-Wave hater. Martinis has been working with D-Wave’s machines for a few years now and says he has long been impressed with the work the company has done.

    The general consensus now, he says, is that the D-Wave computers do exhibit some quantum behavior. The real question, he explains, is whether this behavior actually speeds up the D-Wave computers. And although his team will be working separately from D-Wave at Google, he thinks their work may eventually help D-Wave take better advantage of that quantum behavior. “We’re taking some of the basic ideas of D-Wave and combining that with what the [Google] Quantum AI team has learned operating the machine,” he says.

    it is understandable & somewhat natural/ expected there is some strong aversion to DWave approaches by those working in academia, and some of this rivalry is to be expected & possibly improves the competition & overall intellectual seriousness/ sharpness/ edge. however, SOME of it (seen in this blog & comments) seems to go overboard & cross the line.

  11. Andrew Says:

    Given the state of Oregon’s economy [1], its longstanding flirtation with utopianism [2], and its recent embrace of one of the more egregious forms [3], they would certainly welcome a Qualiphate.

    [1] https://en.wikipedia.org/wiki/Oregon#Economy
    [2] http://osupress.oregonstate.edu/book/eden-within-eden
    [3] https://www.youtube.com/watch?v=3cDgOf2Om28

  12. Scott Says:

    vzn #10: (sigh) In that other thread, JG was claiming, not that Martinis will work more with D-Wave in the future (which might be true), but rather that his group’s past research—and specifically, their Nature paper “Logic gates at the surface code threshold: Superconducting qubits poised for fault-tolerant quantum computing”—somehow built on D-Wave. And that’s something that’s clearly flat-out, unequivocally false. As I said there:

      Now, it’s possible that his new collaboration with Google’s AI Lab will cause him to deal with D-Wave more than he has in the past. But whether or not it does, it’s certainly not true that the Nature paper the UCSB group already published owes anything to D-Wave.

    Unfortunately, it’s true that worrying about the actual content of what people are saying, and whether it’s true or false, is perceived by many as “going overboard” and “crossing the line.” That’s the problem I’ve had in this discussion since the beginning.

    And it’s also true that John Martinis is a much more diplomatic person than I am. That’s why he’s managing a lot of people and substantial sums of money, whereas I’m a complexity theorist who writes a crass blog. I admire his diplomacy, in much the same way that I admire President Obama’s. At the same time, I can guarantee you that, on the actual substantive questions regarding the truth or falsehood of D-Wave’s speedup claims, there’s very little daylight between John’s views and mine. (If I found out there was daylight, I might adjust my views to make them closer to his.)

  13. Rahul Says:

    Scott says:

    “And it’s also true that John Martinis is a much more diplomatic person than I am. That’s why he’s managing a lot of people and substantial sums of money, whereas I’m a complexity theorist who writes a crass blog.”

    Don’t be so hard on yourself, Scott. 🙂 Isn’t he an experimentalist?

    They always get / need more money than theorists. And larger groups too. Of worker bees.

    Are there theorists managing as many people or money as Martinis in your neck of the woods?

    Anyways, not calling out things exactly as they are might be a good tactic in Politics but I rather dislike that sort of diplomacy in Science. Doesn’t do the field much good. So we rather like you as you are.

    If, for the sake of diplomacy, you’d declared that you have “long been impressed with the work D-Wave has done,” it’d be quite distressing.

  14. joe Says:

    Scott: In your post, you seem to imply that Martinis and his team will be focusing on building a fault-tolerant gate-model universal QC at Google. Of course, as per arXiv:1208.0928 , http://journals.aps.org/prx/abstract/10.1103/PhysRevX.2.031007 and others, it is likely that >10^6 physical qubits will be required to build a “useful” gate-model QC (one that can factorize a 1024-bit number, or compute the ground states of a molecule larger than what is easily possible on a laptop).

    Consequently, the rumor is that Martinis’s team will be focusing primarily on adiabatic QC at Google. Naturally this has some overlap with gate-model QC, in that if Martinis tries to make 100 qubits with long coherence times and high-fidelity control of the system Hamiltonian, then it’s something of an advance for gate-model QC too. Nevertheless, the promise that Google is investing in is that AQC with Martinis-quality qubits will be interesting/useful, since a useful gate-model QC still seems 20-50 years away.

    So, a question: assuming that Martinis is going to focus on building an AQC with his microsecond-coherence-time qubits, and let’s suppose with serious Google funding, he gets to 100 qubits in 5 years, what do you think the prospects are for AQC experimentally showing any promising signs of offering a speedup for any kind of problem? Or are we going to see a paper come out with the same conclusions as the recent Science benchmarking paper from Martinis/Troyer/et al. saying that non-fault-tolerant (but long-coherence-time/high-fidelity) AQC is also a waste of time?

  15. anonymous Says:

    I have designed an experiment for testing whether there is any entanglement between Martinis and quantum annealers. The experiment will cost the exorbitant amount of 10^(-6) billion dollars, but nevertheless it is worth pursuing for the benefit of humanity. It’s a very delicate experiment that may take 10^(-6) years to conduct, but I think Scott has the necessary skill and patience to conduct it. How about if someone, for example a D-Wave investor with some skin in the game, asks Neven or D-Wave to stop being so conveniently ambiguous and state categorically on their websites whether this rumor is true or not?

  16. Scott Says:

    joe #14: If the Martinis group were now to build adiabatic QCs with longer coherence times, error-correction, non-stoquastic Hamiltonians, better ability to measure what was going on, etc., then yes, those still might not be universal QCs, but they would certainly be a step in the right direction. (Indeed, they’d be a step in precisely the direction that many of us were hoping D-Wave would go, but that it chose not to.)

    Now, supposing such QCs were built, would they show any practically-important speedup over simulated annealing in solving optimization problems? I honestly have no idea. As I’ve said many times in regard to D-Wave, with adiabatic optimization, it’s conceivable that you could do everything right and still not see a speedup! Or maybe a speedup but only for contrived instances of no practical relevance, and such that classical algorithms other than simulated annealing do fine even on those instances. But at least one would be a lot closer to testing the right question—i.e., to learning something important about the power of adiabatic optimization.

    Incidentally, I really doubt that a million qubits are needed for a “useful” gate-model QC. Already with a few hundred qubits, I believe one could do something that would be hard to simulate classically, which would have enormous scientific importance. And while current estimates of the number of qubits needed for quantum simulation might seem discouraging, I’m optimistic that they could be brought down—with things like this, every little trick tends to gain you another order of magnitude, and it’s only recently that people have been trying hard to optimize the numbers. Truthfully, though, to get to a few hundred qubits, it’s very likely that you’ll need the entire apparatus of error-correction, and that once you have that, there’s a clear path forward to millions of qubits.
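    On the "few hundred qubits would already be hard to simulate classically" point, the most elementary version of the argument is just a memory count (a sketch, assuming brute-force statevector simulation at 16 bytes per complex amplitude; cleverer classical methods exist for special circuits, but none is known to evade the exponential in general):

```python
# Memory needed just to STORE the state vector of an n-qubit pure state,
# at 16 bytes per complex amplitude (two 64-bit floats).  Brute-force
# simulation assumed; this is a lower bound on that approach, not a claim
# about every possible classical algorithm.

def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Bytes to hold all 2^n amplitudes of an n-qubit state."""
    return (2 ** n_qubits) * bytes_per_amplitude

print(statevector_bytes(30) / 1e9)    # tens of gigabytes: a beefy workstation
print(statevector_bytes(50) / 1e15)   # tens of petabytes: supercomputer scale
print(statevector_bytes(300))         # more bytes than atoms in the
                                      # observable universe
```

    So a few hundred coherent, controllable qubits would put brute-force classical simulation hopelessly out of reach, which is why even that milestone would be scientifically enormous.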

    Finally, even if, counterfactually, Martinis were to pursue exactly the same things that D-Wave is pursuing now, there would still be something that would make me happy, and that’s that I trust Martinis to bend over backwards to tell the truth about what’s going on.

  17. Scott Says:

    quax #1:

      At any rate, I am very happy that you no longer have to be worried about the prospect of any D-Wave under-performance tarnishing the QC field as a whole.

    OK, you raise a fair point. While I’m reasonably good at judging the plausibility of quantum computing claims, I’m not so good at putting myself into the mind of a CEO or an investor. In particular, if I imagine myself as an investor whose only knowledge about quantum computing came from observing the arc of D-Wave, I imagine myself saying: “let me stay as far away from this as I possibly can.” But maybe I just have the wrong mental model: real investors might be used to dealing with levels of hype that are hard for me even to conceive of. 🙂

  18. John Sidles Says:

    Scott avers “I trust Martinis to bend over backwards to tell the truth about what’s going on.”

    Indeed, John Martinis and his colleagues are generally regarded — rightly as it seems to me — as weighty, sober-minded researchers.

    Consideration  To the extent that sardonicism, cynicism, mockery — and even witticism — are obstructions to telling “the truth about what’s going on”, cannot these same practices — when embraced to excess — cumulatively impair the ability even to perceive “the truth about what’s going on”?

    Perhaps this principle motivates two of Mark Twain’s maxims:

    The humorous story is told gravely; the teller does his best to conceal the fact that he even dimly suspects that there is anything funny about it.

      — How to Tell a Story (1897)

    Humor must not professedly teach, and it must not professedly preach, but it must do both if it would live forever. By forever, I mean thirty years.

      — Autobiography of Mark Twain (circa 1910)

    Recommendation  The sustained practices of sardonicism, cynicism, mockery, and even witticism can be deleterious to young researchers; even the aged are well-advised to embrace these practices in strict moderation.

  19. Joshua Zelinsky Says:

    > Now, supposing such QCs were built, would they show any practically-important speedup over simulated annealing in solving optimization problems? I honestly have no idea.

    Is there any reason to suspect that there would be any substantial speedup? This is far from what I think about normally, but at least for most other things we care about, we have a reason to think things will go faster. For example, Grover’s algorithm makes one suspect that, for sufficiently large instances of 3-SAT, a quantum computer will solve them faster than a classical computer, even as we suspect strongly that NP is not contained in BQP (because of BBBV). Is there any similar statement here for, say, adiabatic QCs with really long coherence times?

    And even if there’s some speedup, that’s still well before we get to practically-important speedup given the costs involved.

    Is saying you have no idea here possibly false humility?

  20. vzn Says:

    ok, I reasonably concede that the martinis group’s directions in the past have in general been a much different line of research from D-Wave’s, and purposefully/intentionally so (ie superconducting qubits, more gate-model). however, clearly, as in #15, the whole field is “entangled.” there is huge communication & some cross-pollination between all qm computing teams in the entire world; maybe not always directly, but indirectly, they are all highly aware of and influenced by each other’s work.
    the martinis group has cranked out massive amounts of experiments in a short time & shown very rapid progress. I count 23 publications in 2014 alone ❗ and the year still has ¼ left! moreover, there is apparently some intrinsic use of adiabatic processes in past martinis work, eg fast adiabatic qubit gates using only sigma_z control (2014). it seems it would be a fascinating exercise for someone (maybe more an expert/well-connected insider than me) to try to draw connections between prior DWave/martinis work/labs. it is true, scanning their recent papers, they do apparently assiduously avoid directly citing anything DWave has done, which to me is a bit disappointing, given that DWave now has a decent/respectable scientific publication record and generally deserves at least peripheral acknowledgement from other groups.

  21. Jay Says:

    There’s one thing on your blog that clearly lacks some “fundamental reasonableness”: you spend one hundred times more energy on D-Wave than on Martinis. And I’m quite confident you’d agree. 😉

    Could we hope for a “Shor, I’ll do it” to explain the surface code?

  22. Pot Calling the Kettle Says:

    “even witticism can be deleterious . . . the aged are well-advised to embrace these practices in strict moderation”

    Right John. 😉

  23. quax Says:

    John Sidles #18, sage advice. Humor on blogs can be easily misunderstood and may not be conducive to a young researcher’s career. Much safer to engage in from a tenured position.

    Joe #14 check out these Troyer et al. papers on how to squeeze significant speed-up from a couple hundred qubits for quantum chemistry purposes.

    Although there’s not much theory to go on, I’d still wager that AQC may also realize non-classical speedup for some problems; and the more and better chips we get, the sooner we can settle this experimentally.

    Maybe we should start an online petition demanding that Google clarify what architecture they are going after …

    Scott #17, there is no doubt in my mind that investors tolerate far more hype than most scientists. I’d even go further: they expect and demand this kind of hype, taking it as a proxy for appropriate business aggressiveness. A rather poor substitute for domain expertise, but then again, nobody who knows how real markets work thinks that they are rational. Well, that is, with the exception of the economists of the Chicago School of Business, who poisoned many MBA students’ minds with their efficient market hypothesis BS 🙂

  24. Scott Says:

    Jay #21: Yeah, there’s one thing you have to hand D-Wave—they’re more fun and—crucially—easier to blog about than the many QC researchers who simply go about their top-notch research and report on it cautiously. About those researchers, I can either just say “Awesome!! Keep up the great work!,” or I can try to explain the technical details of what they’re doing, but that would require hours of background reading and a couple days’ writing on my part. And when I have invested that sort of effort in the past, I’ve often been rewarded with very few comments, with the few comments I did get wondering why I was “hyping” some particular technical contribution.

    Despite all that, I think you’re right: I should be doing more popularizations of important research topics, and I’ll try to. Except not the surface code, simply because I don’t understand it well enough. Anyone who does is more than welcome to write a guest post about it: just shoot me an email.

  25. Scott Says:

    Joshua #19:

      Is there any reason to suspect that there would be any substantial speedup? … Is saying you have no idea here possibly false humility?

    No, I think it’s true humility. The question of speedups via adiabatic optimization really is complicated, with the answers (insofar as they’re known) strongly dependent on how the question is asked.

    On the one hand, yes, it’s known in principle that you can at least get the Grover speedup using the adiabatic algorithm.
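    For concreteness, the Grover speedup is just a square-root reduction in query count (a toy tally, ignoring constants and all the annealing-schedule caveats that follow):

```python
import math

# Toy query counts for unstructured search over N items.
# Constants and lower-order terms are deliberately ignored.

def classical_queries(n_items):
    """Brute-force search: ~N queries in the worst case."""
    return n_items

def grover_queries(n_items):
    """Grover-type search: ~sqrt(N) queries."""
    return math.isqrt(n_items)

N = 10 ** 12
print(classical_queries(N))  # 10^12 queries classically
print(grover_queries(N))     # ~10^6 queries with Grover
```

    A square-root saving is real but modest; as the later "hands" note, it may still lose a fair fight against good classical heuristics that exploit problem structure.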

    On the other hand, to get the Grover speedup requires a carefully-chosen annealing schedule, where you race through the parts of the path with large spectral gap, and only spend a lot of time in the tiny part of the path that has an exponentially-small spectral gap. And to choose such a schedule requires a lot of advance knowledge about the problem structure, which might not be realistic.

    Also, on the third hand, even if you did get a Grover speedup, that might only be a speedup compared to brute-force search, and it’s far from obvious that it would be the slightest bit impressive compared to good classical heuristics like simulated annealing.

    On the fourth hand, there’s been some work on how to combine a Grover speedup with better-than-brute-force classical algorithms (mostly backtracking algorithms), in order to get the best of both. So it’s at least conceivable that a Grover-type speedup could survive a fair comparison to classical algorithms.

    On the fifth hand, there’s also the possibility that adiabatic optimization could give you even more than a Grover speedup, at least for special instances. And indeed, Farhi, Goldstone, and Gutmann constructed instances where the adiabatic algorithm reaches the global optimum exponentially faster than simulated annealing does.

    On the sixth hand, those instances involve a wide basin with a tall, narrow “spike” partway to the bottom, which simulated annealing gets stuck at for exponential time but which the adiabatic algorithm can tunnel through. And it’s not clear that that kind of fitness landscape is ever very important in practice—or that, if it is, then one can’t just modify the classical algorithms in some straightforward way to deal with them.

    On the seventh hand, there’s also an example—based on the Childs et al. “glued-trees” construction—where something like the adiabatic algorithm (differing only in that it jumps up from the ground state to the first excited state) can provably reach the global optimum exponentially faster than any classical algorithm.

    But on the eighth hand, that example is a black-box example (not surprisingly, given that we can prove an exponential speedup 🙂 ). And it seems exceedingly unlikely that anything like the glued trees would arise in practice.

    Though on the ninth hand, where there’s a contrived, theoretical example giving an exponential speedup, maybe there’s also a practical example lurking in its vicinity (as with Simon’s algorithm vs. Shor’s algorithm).

    On the tenth hand, Aharonov et al. famously showed a decade ago that adiabatic quantum computation is universal, i.e., capable in principle of simulating Shor’s algorithm and all the rest.

    But on the eleventh hand, their construction requires a final Hamiltonian whose ground state is a “history state,” a superposition over all the time steps of a circuit-based quantum computation—something that goes outside the framework of adiabatic optimization, where the ground state is just a computational basis state (and also introduces new technological difficulties, not to mention huge polynomial blowups in running time).

    Also, on the twelfth hand, AFAIK numerical simulations of the adiabatic algorithm (and experimental results from the D-Wave machine) on “practical” instances have not yet shown any robust speedup over classical algorithms; indeed everything (even the specific pattern of successes and failures) seems modeled quite well using Quantum Monte Carlo, which (despite its name) is a classical algorithm.

    But on the thirteenth hand, there’s the recent work by Freedman and Hastings, which gives at least a theoretical separation between the adiabatic algorithm and Quantum Monte Carlo.

    There are probably some additional hands that I’m forgetting; anyone who knows some is welcome to chime in.

  26. Joshua Zelinsky Says:

    Thanks. That does answer the question, and it does make it seem like it really would be hard to guess which way it would go.

  27. joe Says:

    Scott (or others in-the-know): Regarding AQC requiring knowledge of the Hamiltonian spectrum to construct an annealing schedule that allows for a speedup, what is your opinion on the limitations of the method of arXiv:1407.8183? Naively this seems like it solves the problem, but I’m curious if there are important cases that are not covered, and hence if the paper is just vastly overselling itself? (If it really is a general method that works for problems that are “hard”, it seems like a remarkable advance, which essentially solves the problem of choosing the annealing schedule.)

  28. jonas Says:

    Scott, I agree that it would be nice if you covered interesting research topics. These research topics, however, need not be technical points about the research on building scalable quantum computers. They could be anything about mathematics or theoretical computer science or physics that you decide you feel you understand. You’ve already been doing this to some amount, such as in the recent post about Subhash Khot’s research or the old matrix multiplication post, but maybe you could do more. Some of those other problems are easier to popularize and you will get more comments that way.

  29. quax Says:

    Scott #25, admittedly a trivial observation, but that is an impressive show of hands.

  30. Raoul Ohio Says:


    Plenty of comments on SO are from knowledgeable and smart people who want to debate some details of your views. This is an excellent place to invest your “blogging energy”.

    At times you seem to be offended by the insults from the clueless. Better to ignore them, or if you are in a giving mood, offer a few clues.

    A lot of people want to debate goofy issues with scientists. I recall being at a high school reunion in the mid-1980s and being challenged to debate the question “Did Jesus’s friends steal his body from the cave?” Having had a few drinks, I replied, “How the f#&k should I know? That was 2000 years ago. I don’t even know where Jimmy Hoffa’s body is.”

  31. rrtucci Says:

    If a Martian reads Scott’s post about hands, he’ll probably think that a complexity theorist is an odd sea creature with 11 hands, all thrashing about in the water. The creature’s center of mass hardly moves, because the hands move at random and their velocities average to zero.

  32. Darrell Burgan Says:

    Scott, you said:

    “Yet, while I’ll never live to see the blog-commentariat acknowledge the fundamental reasonableness of my views, I might live to see scalable quantum computers become a reality, and that would surely be some consolation.”

    However, I think you will certainly see the marketplace acknowledge your views. If D-Wave cannot produce a cost-effective quantum machine that actually solves problems in a useful way — and the Martinis group does — then you can be certain the market will take care of D-Wave.

    If, on the other hand, D-Wave does produce a machine that solves some real-world problem in a cost-effective way (quantum or not), then they might hang around in some fashion. At millions of dollars a pop, methinks their machines had better do a whole lot better than my laptop can.

  33. Gil Kalai Says:

    This is great news!

    It also shows that Google’s interest in quantum computing, which began with its cooperation with D-Wave, was (as I expected) positively rather than negatively correlated with interest in other quantum computing endeavors.

  34. fred Says:

    About the Google press release:

    “With an integrated hardware group the Quantum AI team will now be able to implement and test new designs for quantum optimization and inference processors based on recent theoretical insights as well as our learnings from the D-Wave quantum annealing architecture.”

    Does it mean that the work done by Martinis now “belongs” to the private sector?

  35. fred Says:

    To add on #34

    What is the exact nature of the relationship?
    “We are pleased to announce that John Martinis and his team at UC Santa Barbara will join Google in this initiative”
    What do they mean by “join”?
    Will there be any impact on the publication of results and openness, etc?

  36. rrtucci Says:

    Right on Fred. I also would like to know if this means that “learnings” is now a real word.

  37. anonymous Says:

    I believe that the New York Times hasn’t reported this Martinis/Google news yet. I hope it’s because they first want to get some much-needed clarification from Neven. I hope they are reading these blog comments and intend to answer some of the questions asked here about Google’s intentions. Hmm, can one comment on a NYT article before it has been written?

  38. Scott Says:

    joe #27: That paper was discussed at our weekly quantum information group meeting, and I remember there was a consensus that one of the conditions they assumed (I think, that you need a polynomial upper bound on the number of states on which the driver Hamiltonian acts nontrivially?) could severely limit the method’s practical relevance. But I don’t remember the details. If I remember to, I’ll ask again next time and get back to you.

    Update: It’s worse than that. As it turns out, Ed Farhi proved that the above-mentioned condition implies that the spectral gap will be exponentially small, basically because it triggers the BBBV lower bound for the Grover search problem. So then you use this method to calculate the gap, and lo and behold, you find out that the gap is exponentially small. It would be a bit like if you had a fast factoring algorithm, but it only worked for positive integers N satisfying certain conditions, and those conditions turned out to imply that N was prime.

  39. joe Says:

    Scott #38: Thanks for checking that out. That assessment appears consistent with some things that struck me in their paper: (1) Figure 1 seems to show exponentially small gaps at one point in the annealing; (2) Figure 2 shows only the Grover square-root speedup, and nothing more (and presumably, if they had found a nontrivial, unexplored problem class with a better speedup, they would have presented it!).
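    For a generic sense of scale of that square-root speedup (standard Grover query counts, not numbers taken from the paper): unstructured search over N items needs ~N queries classically, versus about (π/4)·√N oracle calls for Grover.

    ```python
    import math

    def classical_worst_case(n_items):
        # Unstructured search: a classical algorithm may need to query every item
        return n_items

    def grover_queries(n_items):
        # Grover's algorithm uses about (pi / 4) * sqrt(N) oracle queries
        return math.ceil((math.pi / 4) * math.sqrt(n_items))

    # Quadratic speedup: for N = 2**20, ~a million queries shrink to ~800
    for k in (10, 20, 30):
        n = 2 ** k
        print(f"N = 2^{k}: classical {classical_worst_case(n):>12,}  grover {grover_queries(n):>8,}")
    ```

    A quadratic saving is nothing to sneeze at, but it’s a far cry from the exponential speedups one would hope an annealer could deliver on structured problems.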

  40. Ann Says:

    Good video, but almost exclusively male characters 🙁

  41. gasarch Says:

    1) I will say you were correct no matter what happens.
    2) Your point of view has always been reasonable: call out bad science when warranted (and, alas, when asked to, which is one of the reasons you end up talking about D-Wave more than you want to, is my guess), and promote good science when it happens (even if that is less fun).
    3) Jon Stewart : Fox News == Scott Aaronson : D-Wave.
    (Wonder how it would be if Jon Stewart called out D-Wave and Scott debunked Fox News?)

  42. vzn Says:

    google ups ante and hedges qm computing bet by buying up martinis lab
    collected links, videos, news, articles, etc, other stuff on TCS + physics

  43. Raoul Ohio Says:

    1. Fox News == D-Wave: That is an inspired comparison.

    2. “call out bad science when warranted (and, alas, when asked to— which is one of the reasons you end up talking about D-wave more than… more than you want to is my guess), and promote good science when it happens (even if that is less fun).”:

    This is an important task for the progress of science. Of the few knowledgeable enough to do it effectively, Scott is one of the few^2 who does not duck the chore. And it is a hard chore, requiring enormous effort and drawing tons of uninformed criticism, which I hope Scott can learn to ignore entirely. RO says: hang in there and keep up the good work.

  44. fred Says:

    Scott, did you see this?


  45. Sniffnoy Says:

    As long as we’re doing “Scott, did you see this?”: here’s an article about an improved way of doing BosonSampling that I saw over on Reddit. I’m guessing Scott already knows about it (because BosonSampling), but I thought it was worth pointing out, and I’m interested to see whether we really do get large-N BosonSampling experiments soon.

  46. fred Says:

    About Google+Martinis

  47. Copenhagen Says:

    The Google-Martinis Chip Will Perform Quantum Annealing


    Ever since the news that John M. Martinis will join Google to develop a chip based on the work that has been performed at UCSB, speculations abound as to what kind of quantum architecture this chip will implement. According to this report, it is clear now that it will be adiabatic quantum computing:
    But examining the D-Wave results led to the Google partnership. D-Wave uses a process called quantum annealing. Annealing translates the problem into a set of peaks and valleys, and uses a property called quantum tunneling to drill through the hills to find the lowest valley. The approach limits the device to solving certain kinds of optimization problems rather than being a generalized computer, but it could also speed up progress toward a commercial machine. Martinis was intrigued by what might be possible if the group combined some of the annealing in the D-Wave machine with his own group’s advances in error correction and coherence time.
    “There are some indications they’re not going to get a quantum speed up, and there are some indications they are. It’s still kind of an open question, but it’s definitely an interesting question,” Martinis said. “Looking at that, we decided it would be really interesting to start another hardware approach, looking at the quantum annealer but basing it on our fabrication technology, where we have qubits with very long memory times.”
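    To visualize the “peaks and valleys” picture in the quote: a greedy downhill walker gets stuck in the nearest valley, while an annealer can still escape to the lowest one. (The sketch below uses classical thermal hopping as a stand-in; it does not capture quantum tunneling. The cost function and parameters are made up for illustration.)

    ```python
    import math
    import random

    def f(x):
        # double well: local minimum near x = +0.96, global minimum near x = -1.04
        return (x * x - 1.0) ** 2 + 0.3 * x

    def greedy(x, step=0.01, iters=2000):
        # pure downhill descent: gets trapped in whichever valley it starts in
        for _ in range(iters):
            for cand in (x - step, x + step):
                if f(cand) < f(x):
                    x = cand
        return x

    def anneal(x, steps=20000, t0=1.0, t1=1e-3, seed=1):
        # Metropolis annealing: thermal noise lets the walker hop over barriers
        rng = random.Random(seed)
        for k in range(steps):
            t = t0 * (t1 / t0) ** (k / steps)  # geometric cooling
            cand = x + rng.gauss(0, 0.2)
            d = f(cand) - f(x)
            if d <= 0 or rng.random() < math.exp(-d / t):
                x = cand
        return x
    ```

    Started at x = 1.0, `greedy` settles into the shallow valley near +0.96, while `anneal` typically ends up near the global minimum at about −1.04.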


  48. JG Says:

    Fred #44: That’s pretty cool. It seems the telecom industry is the only one doing any ‘practical’ applications of quantum-state systems, for encryption and signal transmission. I guess that’s if you don’t count D-Wave and GQAIL, and of course Scott doesn’t. I doubt you’d get many interested parties here for things like quantum beam splitters; that falls into the grubby-pragmatist arena. Actually, the really interesting work in this field falls outside photonics and electronics as we think of them. The work on topological quantum computing using 2D meta-particles that Microsoft’s quantum lab is doing is mind-bending. We may not see any real advances until the plekton micro-megaflux circuit is invented some time from now.

  49. Dr. Ajit R. Jadhav Says:

    Hey Scott,

    Just wanted to catch you before saying bye for a while.

    Oh, BTW, thanks for the reply (somewhere above), though.

    In the meanwhile, I have caught, like, some 3+ very good arguments against P equals NP, precisely 3+ times after having had some very good arguments as to why P must equal NP. (That is, after having been a brain-wreck while continuously thinking about it over the recent few days, like.)

    I am getting closer and closer to the position that P != NP. But the point isn’t that. (Let Mr. Clay worry about *that* point–whether he actually loses [American] $ 1 Million of his own wealth, or not.)

    The point is:

    I have discovered that the only best way in which I can think about it, is via, what else, the Travelling Salesman Problem. [I am just a Metallurgical and Materials and Mechanical and Software Development Engineer, and one of those Physics Theory Hobbyists.]

    So, here is a request. Please consider the following requests/questions as if coming from a layman:

    Does the Traveling Salesman Problem (TSP for short) actually fall into the problem category for the CMI prize? Or is TSP technically Too Hard for the Prize?

    If so, what makes it Just Precisely As Soft/Hard as The Prize requires?

    More, later (… and mostly, unannounced, if that’s OK by you.)



  50. Shtetl-Optimized » Blog Archive » Quantum computing news items (by reader request) Says:

    […] I blogged back in September, Google recently hired Martinis’s group away from UC Santa Barbara, where they’ll work […]

  51. Grey Says:

    There has really been a whole lot on quantum computing! Though of two minds, I truly believe Google is taking the right step.