Quantum computing news items (by reader request)

Within the last couple months, there was a major milestone in the quest to build a scalable quantum computer, and also a major milestone in the quest to figure out what you would do with a quantum computer if you had one.  As I’ve admitted many times, neither of those two quests is really the reason why I got into quantum computing—I’m one of the people who would still want to study this field, even if there were no serious prospect either of building a quantum computer or of doing anything useful with it for a thousand years—but for some reason that I don’t fully understand, both of those goals do seem to excite other people.

So, OK, the experimental breakthrough was the Martinis group’s use of quantum error-correction with superconducting qubits, to preserve a logical bit several times longer than the underlying physical qubits survived.  Shortly before this came out, I heard Krysta Svore give a talk at Yale in which she argued that preserving a logical qubit for longer than the physical qubits was the next experimental milestone (the fourth, out of seven she listed) along the way to a scalable, fault-tolerant quantum computer.  Well, it looks like that milestone may have been crossed.  (update: I’ve since learned from Graeme Smith, in the comments section, that the milestone crossed should really be considered the “3.5th,” since even though quantum error-correction was used, the information that was being protected was classical.  I also learned from commenter Jacob that the seven milestones Krysta listed came from a Science paper by Schoelkopf and Devoret.  She cited the paper; the forgetfulness was entirely mine.)

In more detail, the Martinis group used a linear array of 9 qubits: 5 data qubits interleaved with 4 measurement qubits. The authors describe this setup as a “precursor” to Kitaev’s surface code (which would involve a 2-dimensional array).  They report that, after 8 cycles of error detection and correction, they were able to suppress the effective error rate compared to the physical qubits by a factor of 8.5.  They also used quantum state tomography to verify that their qubits were indeed in entangled states throughout this process.
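
To see in miniature why this kind of redundancy helps at all, here’s a toy Monte Carlo sketch I wrote in Python purely for illustration: a classical bit stored in several physical bits and decoded by majority vote fails far less often than a single bit does, once the physical error rate is small enough.  This is only the bare repetition-code idea; it is not a model of the superconducting experiment, which interleaves measurement qubits with the data qubits and runs repeated parity-check cycles.

    import random

    def logical_error_rate(p, n_data=5, trials=100_000):
        """Estimate how often majority-vote decoding of an n_data-bit
        repetition code fails, when each bit is independently flipped
        with probability p."""
        failures = 0
        for _ in range(trials):
            flips = sum(random.random() < p for _ in range(n_data))
            if flips > n_data // 2:   # a majority of bits flipped: decoding fails
                failures += 1
        return failures / trials

    for p in (0.01, 0.05, 0.1):
        print(f"physical error rate {p:.2f} -> "
              f"logical error rate {logical_error_rate(p):.5f}")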

Of course, this is not yet a demonstration of any nontrivial fault-tolerant computation, let alone of scaling such a computation up to where it’s hard to simulate with a classical computer.  But it pretty clearly lies along the “critical path” to that.

As I blogged back in September, Google recently hired Martinis’s group away from UC Santa Barbara; at Google, they’ll work on superconducting quantum annealing as a step along the way to full universal QC.  As I mentioned then, the Martinis group’s “Xmon” qubits have maybe 10,000 times the coherence times of D-Wave’s qubits, at least when you measure coherence in the usual ways.  The fact that Martinis et al. are carefully doing quantum state tomography and demonstrating beneficial error-correction before scaling up is a further indication of the differences between their approach and D-Wave’s.  Of course, even if you do everything right, there’s still no guarantee that you’ll outperform a classical computer anytime soon: it might simply be that the things you can do in the near future (e.g., quantum annealing for NP-complete problems) are not things where you’re going to outperform the best classical algorithms.  But it’s certainly worth watching closely.

Meanwhile, the quantum algorithms breakthrough came in a paper last month by an extremely well-known trio down the Infinite Corridor from me: Farhi, Goldstone, and Gutmann.  In slightly earlier work, Farhi et al. proposed a new quantum algorithm for NP-hard optimization problems.  Their algorithm badly needs a name; right now they’re just calling it the “QAOA,” or Quantum Approximate Optimization Algorithm.  But here’s what you need to know: their new algorithm is different from their famous adiabatic algorithm, although it does become equivalent to the adiabatic algorithm in a certain infinite limit.  Rather than staying in the ground state of some Hamiltonian, the QAOA simply

  1. starts with a uniform superposition over all n-bit strings,
  2. applies a set of unitary transformations, one for each variable and constraint of the NP-hard instance,
  3. repeats the set some number of times p (the case p=1 is already interesting), and then
  4. measures the state in the computational basis to see what solution was obtained.

The unitary transformations have adjustable real parameters, and a big part of the game is figuring out how to set the parameters to get a good solution.
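
Since the algorithm is so simple to state, here’s a minimal state-vector simulation of the p=1 case, which I wrote in Python/numpy just to make steps 1-4 concrete.  The graph (MAX-CUT on a 4-cycle), the function name, and the brute-force grid over the two angles are all made up for illustration; nothing here reproduces Farhi et al.’s analysis.

    import numpy as np

    def qaoa_p1_maxcut(edges, n, gamma, beta):
        """Depth p=1 QAOA for MAX-CUT, simulated exactly on 2^n amplitudes.
        Returns the expected cut value of the measured bitstring."""
        # Diagonal cost function: cut value of each of the 2^n bitstrings.
        costs = np.array([sum((z >> i & 1) != (z >> j & 1) for i, j in edges)
                          for z in range(2 ** n)], dtype=float)
        # 1. uniform superposition over all n-bit strings
        state = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)
        # 2. constraint unitary exp(-i*gamma*C), diagonal in the computational basis
        state *= np.exp(-1j * gamma * costs)
        # 3. variable ("mixing") unitary exp(-i*beta*X) applied to every qubit
        rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                       [-1j * np.sin(beta), np.cos(beta)]])
        state = state.reshape([2] * n)
        for q in range(n):
            state = np.moveaxis(np.tensordot(rx, state, axes=([1], [q])), 0, q)
        state = state.reshape(2 ** n)
        # 4. "measurement": expected cost under the output distribution
        return float(np.dot(np.abs(state) ** 2, costs))

    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-cycle; the maximum cut is 4
    best = max(qaoa_p1_maxcut(edges, 4, g, b)
               for g in np.linspace(0, np.pi, 25)
               for b in np.linspace(0, np.pi, 25))
    print(f"best expected cut found over the (gamma, beta) grid: {best:.3f}")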

The original, hyper-ambitious goal of the QAOA was to solve the Unique Games problem in quantum polynomial time—thereby disproving the Unique Games Conjecture (which I previously blogged about here), unless NP⊆BQP.  It hasn’t yet succeeded at that goal.  In their earlier work, Farhi et al. managed to show that the QAOA solves the MAX-CUT problem on 3-regular graphs with approximation ratio 0.6924, which is better than random guessing, but not as good as the best-known classical algorithms (Goemans-Williamson, or for the degree-3 case, Halperin-Livnat-Zwick), let alone better than those algorithms (which is what would be needed to refute the UGC).

In their new work, Farhi et al. apply the QAOA to a different problem: the poetically-named MAX E3LIN2.  Here you’re given a collection of linear equations mod 2 in n Boolean variables, where each equation involves exactly 3 variables, and each variable appears in at most D equations.  The goal is to satisfy as many of the equations as possible, assuming that they’re not all satisfiable (if they were then the problem would be trivial).  If you just guess a solution randomly, you’ll satisfy a 1/2 fraction of the equations.  Håstad gave a polynomial-time classical algorithm that satisfies a 1/2+c/D fraction of the maximum number of satisfiable equations, for some constant c.  This remains the best approximation ratio that we know how to achieve classically.  Meanwhile, Trevisan showed that if there’s a polynomial-time classical algorithm that satisfies a 1/2+c/√D fraction of the max number of satisfiable equations, for a sufficiently large constant c, then P=NP.
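
To make the setup concrete, here’s a tiny self-contained Python sketch, with an instance I made up purely for illustration: each equation is encoded as ((i, j, k), b), meaning x_i + x_j + x_k = b (mod 2); a uniformly random assignment satisfies each equation with probability 1/2, and at toy sizes we can also brute-force the true optimum.

    import itertools, random

    # A made-up MAX E3LIN2 instance over variables x_0, ..., x_4.
    equations = [((0, 1, 2), 0), ((1, 2, 3), 1), ((0, 2, 3), 1),
                 ((0, 1, 3), 0), ((1, 3, 4), 1), ((0, 2, 4), 0)]
    n = 5

    def fraction_satisfied(x, eqs):
        return sum((x[i] ^ x[j] ^ x[k]) == b for (i, j, k), b in eqs) / len(eqs)

    # A random assignment satisfies about half the equations on average.
    x_rand = [random.randint(0, 1) for _ in range(n)]
    print("random assignment:", fraction_satisfied(x_rand, equations))

    # Brute force over all 2^n assignments (feasible only for tiny n).
    best = max(fraction_satisfied(list(x), equations)
               for x in itertools.product((0, 1), repeat=n))
    print("optimum fraction:", best)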

OK, so what do Farhi et al. do?  They show that the QAOA, with suitably tuned parameters, is able to satisfy a 1/2+c/D^{3/4} fraction of the total number of equations in polynomial time, for some constant c.  (In particular, this implies that a 1/2+c/D^{3/4} fraction of the equations are satisfiable—assuming, as Farhi et al. do, that two equations directly contradicting each other, like x+y+z=0 and x+y+z=1, never appear in the same instance.)

Now, the above is a bigger fraction than the best-known classical algorithm satisfies!  (And not only that, but here the fraction is of the total number of equations, rather than the number of satisfiable equations.)  Farhi et al. also show that, if the constraint hypergraph doesn’t contain any small cycles, then QAOA can satisfy a 1/2+c/√D fraction of the equations in polynomial time, which is essentially the best possible unless NP⊆BQP.

The importance of this result is not that anyone cares about the MAX E3LIN2 problem for its own sake.  Rather it’s that, as far as I know, this is the first time that a quantum algorithm has been proved to achieve a better approximation ratio for a natural NP-hard optimization problem than the best known classical algorithm achieves.  People have discussed that as a hypothetical possibility for 20 years, but (again, unless I’m missing something) we never had a good example until now.  The big question now is whether the 1/2+c/D^{3/4} performance can be matched classically, or whether there truly is an NP-intermediate region of this optimization problem where quantum outperforms classical.  (The third possibility, that doing as well as the quantum algorithm is already NP-hard, is one that I won’t even speculate about.  For, as Boaz Barak rightly points out in the comments section, the quantum algorithm is still being analyzed only in the regime where solutions are combinatorially guaranteed to exist—and that regime can’t possibly be NP-hard, unless NP=coNP.)

[Above, I corrected some errors that appeared in the original version of this post—thanks to Ed Farhi and to the commenters for bringing them to my attention.]


Update (Feb. 3, 2015): Boaz Barak has left the following comment:

in a work with Ankur Moitra, Oded Regev, David Steurer and Aravindan Vijayaraghavan we were able to match (in fact exceed) the guarantees of the Farhi et al paper via a classical efficient algorithm. (Namely satisfy a 1/2 + C/√D fraction of the equations). p.s. we hope to post this on the arxiv soon

66 Responses to “Quantum computing news items (by reader request)”

  1. Jay Says:

    Always fun to read passion! 🙂 Questions please:

    Is c known? Is it the same c for both QAOA and Håstad’s algorithm? Are there any constraints on D? (for example it seems that if D=1 then the problem can be solved in P time) Could we have QAOA (or some other quantum algorithm) better than Håstad’s algorithm (or some other classical algorithm) for some (c,D)?

  2. Jay Says:

    (sorry typo: better than => worse than)

  3. Gil Kalai Says:

    Weren’t there some examples of approximation for shortest lattice vectors where the best quantum algorithm did better than the best classical one (but obviously less than what would be required for a complexity-class collapse of some sort)? And (based on an even vaguer memory), wasn’t there such an example for certain approximations of Jones polynomials?

  4. Jeremy Stanson Says:

    I feel compelled to point out, again and again, that comparing the coherence time of Martinis qubits to that of D-Wave qubits makes no sense. This compares the coherence time of one qubit in a system of ~5 to that of one qubit in a system of ~1000. Martinis qubits are specifically built to have good coherence times and are not built to be scalable. D-Wave qubits are specifically built to be scalable. When you only have a handful of qubits, it’s easy to isolate them and to program/readout each qubit with unique signal lines. When you have 100s to 1000s of qubits like D-Wave, you need to add a lot of infrastructure. We will inevitably see the coherence times for Martinis qubits fall as the number of qubits in his systems increases. If D-Wave built a simple 5-qubit processor, they could rival Martinis’ current coherence times for sure. But what would be the point of that?

  5. Chris Says:

    Quantum Inexact NP Optimization Algorithm, or QuINOA.

  6. Joshua Zelinsky Says:

    Possibly naive question:

    MAX E3LIN2 has an obvious generalization to MAX E3LINk for any k, and then by guessing randomly one should expect to satisfy about 1/k. Do the results for MAX E3LIN2 extend to this broader setting?

  7. Scott Says:

    Jay #1: Yes, you can read their paper to find the specific values of c. I don’t think there are any restrictions on D.

  8. Scott Says:

    Gil #3: No, I don’t think it’s known how to achieve any approximation ratio for SVP using a quantum computer, that beats what one can obtain classically using LLL-type algorithms. By Regev’s results, such an algorithm would have followed if we knew how to solve the Dihedral Hidden Subgroup Problem efficiently, but we don’t.

    I’m not counting the Jones polynomial as an example of the sort of problem I’m talking about, because that’s a problem of producing an additive approximation to a #P-complete sum of exponentially many positive and negative terms. And for that sort of problem, it’s much less surprising that there would be a quantum speedup (and indeed we have many other examples, besides the Jones polynomial).

  9. Scott Says:

    Jeremy #4:

      But what would be the point of that?

    The point would be to demonstrate the building blocks that will ultimately be needed individually, before you try to demonstrate them at scale. It might be true that D-Wave “could have” produced a 9-qubit system with Martinis-like coherence times had it wanted to, and it might also be true that Martinis “could have” produced a 500-qubit system with D-Wave-like coherence times had he wanted to (and had he had the funding). But regardless of the truth of either statement, I personally find the “first really understand what’s going on and then scale up” approach to be more promising. For like many others, I think of the “real” problem not as adding more qubits, but simply as getting over the hump of quantum fault-tolerance. Once you’ve done the latter, you can then add as many qubits as you want, with the confidence that you’ll know what they’re doing.

  10. Scott Says:

    Chris #5: That’s an awesome name! I’ll see what Farhi, Goldstone, and Gutmann think of it.

  11. Scott Says:

    Joshua #6: It seems plausible that the result would generalize to MAX E3LINk, but I really don’t know. I could ask the authors.

  12. Joshua Zelinsky Says:

    Related to Scott’s #9 and Jeremy’s #4 (and pardon if this has been addressed before): is there any example of any sort of technology where scaling up massively first and then working out the precision issues actually succeeded? In every example I’m aware of, the small scale is done first and then the integration. Transistors and vacuum tubes are the most prominent examples.

  13. Jay Says:

    Scott #7: sleep deprivation?

  14. Boaz Barak Says:

    If I understand correctly, what the quantum algorithm does cannot be NP-hard unless NP=coNP, since what they show is that *every* 3XOR instance of max degree D has an assignment satisfying a 1/2 + Omega(D^{-0.75}) fraction of its equations.
    In addition they show an efficient quantum algorithm to actually find such an assignment.

    Such a problem where there is a guaranteed solution for every instance cannot be NP hard (if there was a reduction from SAT to this problem then one could use it to certify that a SAT formula is unsatisfiable).

  15. Scott Says:

    Jay #13: Not really. Just, y’know, you can look these things up yourself!

  16. Scott Says:

    Boaz #14: Ah, that’s an excellent observation. We should say: assuming NP≠coNP, the only way the quantum algorithm could possibly be doing something NP-hard, would be if it could satisfy a 1/2+1/f(D) fraction of equations when possible, for some f such that a 1/2+1/f(D) fraction of equations are not always simultaneously satisfiable.

    This raises some obvious questions: can one prove, nonconstructively if necessary, that a 1/2+c/√D fraction of equations are always simultaneously satisfiable? Is 1/2+c/√D (for some c) not only the threshold for NP-hardness, but also the threshold where the problem goes from always-satisfiable to not-always-satisfiable? If so, then do these two thresholds occur at the same value of c or at different values?

  17. Graeme Says:

    Hi Scott,

    Note that the Martinis paper only extends the lifetime of classical information, not quantum. Of course, classical error correction has been demonstrated before, but what’s interesting is that this is in a system that could plausibly be used to do quantum error correction in the future. The milestone Krysta would have been talking about is improving the storage of quantum information via error correction, which is still a long way off.

  18. Jacob Says:

    By the way the seven steps Svore discussed (of which lifetime-extending QEC is the fourth) come from a Schoelkopf and Devoret outlook in Science 339, pages 1169-1174. Their next milestone is single qubit operations on logical qubits.

  19. Gil Kalai Says:

    Boaz, A “quantumized” proof (and an efficient quantum algorithm) for the statement that every 3XOR instance of max degree D has an assignment satisfying a 1/2 + Omega(D^{-0.75}) fraction is very interesting. So of course, even before an efficient classical algorithm it would be nice to find an “ordinary” mathematical proof (e.g. via a probabilistic argument).

    Is this problem in one of Papadimitriou’s complexity classes (for algorithms to find objects guaranteed by a mathematical theorem)?

  20. Boaz Barak Says:

    (I am actually now confused – couldn’t you have an instance where for every equation of the form x_i + x_j + x_k = 0 you also have the equation x_i + x_j + x_k = 1, and so you can never satisfy more than 1/2 of them? I am guessing the paper rightly doesn’t allow such instances.)

    Ignoring technicalities such as the above (which may or may not be related to the girth condition they mention) I would guess that a 1/2+O(1/\sqrt{D}) should be the right answer. The natural way to construct a 3XOR instance that is highly unsatisfiable is to simply choose m=Dn random equations.
    Because for every particular assignment, the number of satisfied equations has expectation m/2 and standard deviation sqrt{m}, we would expect that the best assignment out of the 2^n possibilities would have an advantage of roughly sqrt{n} standard deviations. So the fraction of constraints satisfied would be

    1/2 + O(\sqrt{nm}/m) = 1/2 + O(1/\sqrt{D})

    I don’t have a strong intuition for the right value of the constant – that can be somewhat delicate, since the “roughly” sqrt{n} above is indeed rough, as there are dependencies between assignments that are close to one another.

  21. Jeremy Stanson Says:

    Scott # 9

    “The point would be to demonstrate the building blocks that will ultimately be needed individually, before you try to demonstrate them at scale.”

    But my point is that Martinis’ current building blocks are NOT what is ultimately needed individually. To demonstrate that, his small 9-qubit system would need to have all of the on-chip wiring, storage devices, readout components, shielding, etc. that would surround those same 9 qubits in a ~1000-qubit system. I don’t mean to belittle or detract from Martinis’ achievements, but the assumption that his current coherence times will scale doesn’t have any support, and any comparisons between his and D-Wave’s qubits don’t make sense unless the environments (e.g., as a result of scaling measures) are matched.

    People don’t seem to appreciate that (at least, before Google got involved) D-Wave has way more resources than Martinis. The tradeoff between coherence and scalability is, in many ways, a fabrication problem, and D-Wave has invested many, many millions more dollars in superconducting fab than Martinis. They both want the same thing, so if D-Wave shows less coherence it’s because attaining Martinis-level coherence at scale is the challenge.

  22. Douglas Knight Says:

    Jeremy, the reason for D-Wave to produce a five-qubit computer with long coherence times is to prove that they aren’t quax. I really doubt they could if they tried.

  23. Jay Says:

    Scott #15: ok, I just thought you would have known without searching (1/22 btw, still searching for Håstad’s). But no, I don’t think the last questions were that trivial. Let me rephrase:

    Is it correct that, for D=1, MAX E3LIN2 can be solved in P time?

    Is it known or possible that, for some fixed D, Håstad is better than QuINOA?

  24. Itai Bar-Natan Says:

    @4 “… comparing coherence time of Martinis qubits to that of D-Wave qubits makes no sense”

    Comparing the coherence times of the Martinis qubits and the D-Wave qubits is a perfectly sensible apples-to-apples comparison. Your argument is that the Martinis group performing better along that metric does not make them better, but it is absurd to say that you can’t make a comparison unless it definitively resolves the dispute. Everyone agrees that the Martinis group and D-Wave are making different trade-offs with regards to qubit coherence and scalability, even as people disagree on how sensible the different trade-offs are.

  25. Scott Says:

    Graeme #17 and Jacob #18: Thanks very much for the clarifications! So then, they’ve extended the lifetime of classical information in a quantum system, by using error-correction with entangled quantum states. Maybe we should call that step 3.5? I’ll update the post accordingly.

  26. Chris D Says:

    Jeremy #21 I think most researchers accept that the tradeoff between fabrication and scalability will only be overcome with fault-tolerance techniques, which is why experimental demonstration of these techniques at small scales is so important.

    If and when such demonstrations succeed, there is still a considerable architecture problem to be solved before fabrication even comes into it. Most likely a large-scale quantum computer won’t resemble anything we imagine today, which is why I think D-Wave jumped the gun and the millions of dollars they spent were a bit of a waste.

  27. fred Says:

    Joshua #12

    “is there any example of any sort of technology where scaling it up on a massive scale and then working out the precision issues actually succeeded?”

    What’s interesting is that redundancy is about scaling up in order to get around errors at a lower level.
    The Martinis group uses redundancy in a very controlled manner.
    Multicore CPU manufacturing uses redundancy to get around fundamental imperfections in the processes (make N cores expecting that a fraction of them will be flawed and disabled).

  28. fred Says:

    Hi Scott,
    I don’t really understand why you qualified the post with

    “but for some reason that I don’t fully understand, both of those goals do seem to excite other people.”

    But fundamentally the QAOA thing is about the Church/Turing thesis and the complexity hierarchy, regardless of whether a practical QC will ever be realized.

  29. Larry Says:

    Jeremy #4, I know what you mean. In the same way, I couldn’t understand all the hoopla when Andrew Wiles proved Fermat’s last theorem. I mean, Fermat had already proved it, right? We know that because he said he did.

  30. Jeremy Stanson Says:

    Ok, I didn’t intend to turn this into another D-Wave vs. Shtetl situation. I see D-Wave and Martinis as complementary efforts toward the same goal, and it just irks me when people dismiss D-Wave’s accomplishments based on a simplistic comparison to Martinis’ coherence times. In the post, Scott highlights that Martinis’ qubits have 10,000x the coherence time of D-Wave’s qubits, but is silent on the fact that D-Wave’s processors have ~10,000x the number of Josephson junctions as Martinis’ processors. Why the bias?

    Chris D @ 26 points out that once fault-tolerance is achieved there remains a considerable architecture problem, and this is the fundamental difference between the D-Wave and the Martinis approaches. Martinis is (or has been) working on the fault tolerance problem and leaving the architecture/scaling problem for later, adopting the common rationale that the individual building blocks need to be perfected before they can be assembled. Conversely, D-Wave is working on the architecture/scaling problem now and leaving fault tolerance for later, based on the notion that one can’t perfect the individual building blocks without shaping the blocks so that they’ll all fit together. We learn a tremendous amount from both of these streams. D-Wave is nowhere near Martinis’ coherence times, and Martinis is nowhere near the sophistication of D-Wave’s superconducting integrated circuits.

  31. Gil Kalai Says:

    As far as I remember, the seven milestones discussed in the post (from the paper of Schoelkopf and Devoret) are inspired by David DiVincenzo’s 2000 paper “The physical implementation of quantum computation”. See them also here.

  32. Nick Read Says:

    Jacob #18, Scott #22: it’s good to add that Krysta cited the paper herself in her talk, iirc (I was there too).

  33. Mike Says:

    “D-Wave is nowhere near Martinis’ coherence times, and Martinis is nowhere near the sophistication of D-Wave’s superconducting integrated circuits.”

    I guess Martinis has been working on the quantum side and D-Wave on the computer side 😉

  34. Scott Says:

    Nick Read #32: Thanks! Added that.

  35. Greg Kuperberg Says:

    Jeremy Stanson – “In the post, Scott highlights that Martinis’ qubits have 10,000x the coherence time of D-Wave’s qubits, but is silent on the fact that D-Wave’s processors have ~10,000x the number of Josephson junctions as Martinis’ processors. Why the bias?”

    If I want to build an airplane, I’d rather have two wings with a lot of lift than 20,000 wings with very little lift.

  36. Raoul Ohio Says:

    My guess is that if D-Wave ever demonstrates (through the standard scientific channels) that it is producing working, useful QC devices, Scott will be the first to congratulate them.

  37. Jeremy Stanson Says:

    Greg – weird analogy, but let’s extend it to the point where it’s a little more relevant. Let’s say that an airplane must have 20,000 wings in order to fly. Now your two wings are very good wings, but there is no way they can actually get the job done. It’s great to keep fine tuning the lift that those two wings can produce and you’ll probably learn a lot about really small, not-particularly-useful systems in the process. But it’s the behavior of the 20,000 wing system that you really care about. In that case, it’s perfectly viable, maybe even actually better from an engineering point of view, to start building 20,000-wing systems and see what they’re like.

    You might say that you can only really understand the 20,000-wing system if you really understand the 2-wing system. I’d agree. But if the composition of a wing itself fundamentally changes in going from a 2-wing system to a 20,000-wing system, then your understanding of the 2-wing system is of little use to you when you start working on the real beast. You’d be better off building the 20,000 wing system first to figure out everything that you need in such a system, and then sampling 2-wing subsystems within that larger system in order to study them. There is nothing that stops D-Wave from doing this. D-Wave’s investigations of x-qubit subsystems in larger qubit systems are, arguably, more relevant to scalable QC than Martinis’ investigations of x-qubit systems in isolated environments.

  38. Elizabeth Says:

    @Boaz #20: “(I am actually now confused – couldn’t you have an instance where for every equation of the form x_i + x_j + x_k = 0 you also have the equation x_i + x_j + x_k = 1, and so you can never satisfy more than 1/2 of them? I am guessing the paper rightly doesn’t allow such instances.)”

    I believe the resolution to this is that Farhi et al (and the classical references they compare their result with) are computing an approximation ratio which is defined as: (# of clauses satisfied by the solution they produce)/(# of clauses satisfied by the optimal assignment). Therefore even if the optimal assignment only satisfies 1/2 of the clauses, the approximation ratio can be greater than 1/2.

  39. Scott Says:

    Elizabeth #38: OK, I just checked again. In their paper, they explicitly say that they can satisfy such-and-such fraction of equations for every instance—so in particular, that in every instance that fraction is satisfiable. It’s not just an approximation ratio. Then, when they go on to discuss their algorithm, they do make it clear that each equation can occur either positively, negatively, or not at all—so they are indeed excluding the case where an equation appears both positively and negatively.

  40. Ryan O'Donnell Says:

    Just FYI, for Max-Cut on 3-regular graphs, Halperin-Livnat-Zwick’01 give a .9326-approximation algorithm involving semidefinite programming, and also a simple combinatorial .8-approximation algorithm. (Actually, both algorithms work only assuming the maximum degree is 3.)

  41. Rahul Says:

    I see the DWave vs Martinis situation as something like this:

    Scaling is a challenge & so is fault tolerance. D-Wave focused on the former without doing much of the latter. Unfortunately D-Wave tried to sell it as a bigger success than it really was.

    Martinis seems to have focused on fault tolerance. Fortunately, unlike D-Wave, he’s not been unnecessarily hyping it up.

    So also, let us not, the rest of us, make this milestone any bigger than what it really is. I.e., Martinis has made progress on error correction. But unless they can preserve these gains & scale up, it’s no closer to the final goal of a useful QC.

    Whether, as an experimental approach, it makes more sense to scale first or to error-correct first, I’m not very sure. I guess history does show that scaling later is what has mostly worked.

    In any case, that seems a subjective choice. What’s important is to remember that neither scaling nor error correction is of much use from a QC technology POV unless they are both advanced in conjunction.

  42. Scott Says:

    Ryan #40: Thanks! Yes, Farhi did mention in his talk about this that for the degree-3 case, you can beat Goemans-Williamson by a little, still using SDP relaxation (and I see that they reference Halperin-Livnat-Zwick in their paper). I’ve now edited the post to point that out.

    Is it known whether Halperin-Livnat-Zwick is optimal for the degree-3 case, assuming the UGC?

  43. Rahul Says:

    If I were to bet, I’d say that within the next six months, someone is going to come up with a classical algorithm that beats Håstad’s polynomial-time algorithm.

  44. Scott Says:

    Rahul #43: Yes, that is indeed a plausible bet. Even then, though, it would still be kind of interesting that the quantum algorithm had come first.

    Incidentally, I didn’t feel bad at all about blogging the Martinis group’s work, because so many QC experiments that were so much less important have been hyped so much more! Good to even things out once in a while. 🙂

  45. Jeremy Stanson Says:

    Rahul # 41: Agreed!

  46. Rahul Says:

    Scott #44:

    Indeed! Martinis himself seems like the ideal researcher in these respects. I think he doesn’t hype his stuff at all.

    D-Wave could learn a thing or two from him. Speaking of D-Wave, they’ve been kinda lying low for some time now. Maybe the criticism finally got to them. 🙂

  47. Gil Kalai Says:

    The difference between the 3.5th milestone and the 4th milestone plays a central role in the seventh post of my 2012 debate with Aram Harrow: https://rjlipton.wordpress.com/2012/09/16/quantum-repetition/
    In connection with a conjecture I made in the first post (“Conjecture 1”), Aram made the point that classical error-correction can lead to very stable encoded qubits in certain states (which is essentially the 3.5th milestone). I gave a formal description of the conjecture, which essentially asserts that the 4th milestone, namely that the encoded qubits allow arbitrary superpositions, cannot be reached.

  48. Michael Bacon Says:

    Gil,

    Just so I understand, you’re conjecturing that preserving a logical qubit for longer than the physical qubits cannot and will not be accomplished? If so, is it your position, then, that accomplishing this task would basically disprove or at least severely weaken your general conjecture regarding the impossibility of constructing an effective quantum computer? Thanks.

  49. fred Says:

    Scott #44

    What’s the parallel here with Shor’s algorithm, which does not seem to have a classical counterpart?
    The problem at hand isn’t the type of “isolated island” that factorization is, is it?

  50. Gil Kalai Says:

    Dear Michael (#48), yes, sure! As I said many times (see, for example, the discussion in my 2012 Simons Institute videotaped lecture 2), implementation of quantum error-correction with encoded qubits which are substantially more stable than the raw qubits (and allow arbitrary superposition for the encoded qubit) will disprove my conjectures. Such stable encoded qubits are expected from implementations of the distance-5 surface code.

    Let me add, Michael, that I will be impressed to see even a realization of the distance-3 (or distance-5) surface code that gives good-quality encoded qubits, even if the encoded qubits have quality somewhat worse than that of the raw qubits used for the encoding. These experiments, including those that were already carried out, also give various other opportunities to test my conjectures.

  51. Michael Bacon Says:

    Thanks Gil.

  52. Rahul Says:

    As an aside, I was reading the Farhi paper & they acknowledge the US Army & NSF for funding.

    NSF I understand, but why ARL? Does MAX E3LIN2 or the other stuff have any military implications? Cryptography? Just curious.

    Or does the US Army spread its infinite $$ into more fundamental, basic research objectives?

  53. Boaz Barak Says:

    Just an update, in a work with Ankur Moitra, Oded Regev, David Steurer and Aravindan Vijayaraghavan we were able to match (in fact exceed) the guarantees of the Farhi et al paper via a classical efficient algorithm. (Namely satisfy a 1/2 + C/\sqrt{D} fraction of the equations)

  54. Boaz Barak Says:

    p.s. we hope to post this on the arxiv soon

  55. Nick Read Says:

    #53: um, so then P=NP?

  56. Joshua Zelinsky Says:

    Nick Read, #55,

    No, the choice of C will presumably be different, and likely much smaller than the choice of C that would make it an NP-hard problem.

  57. Nick Read Says:

    Joshua #56: Yes, “presumably”. But is it, in fact? There will now be an interesting quantitative question: just how close can the constant in the best known algorithm (i.e. with largest constant) get to the best (smallest) known constant such that P=NP?

  58. Joshua Zelinsky Says:

    Nick, Yes, and the other question will then be whether one can get a better constant in the quantum case than the classical case.

  59. Google just hit a milestone in the development of quantum computers | TRENDING NEWS Says:

    […] physicist Scott Aaronsen pointed out in his blog that this experiment can be considered as completing 3.5 of the 7 steps needed to build a working […]

  60. Google just hit a milestone in the development of quantum computers Says:

    […] physicist Scott Aaronsen pointed out in his blog that this experiment can be considered as completing 3.5 of the 7 steps needed to build a working […]

  61. Google just hit a milestone in the development of quantum computers | Breaking News, Latest News and Current News from UStoday.org Breaking news and video. Latest Current News: U.S., World, Entertainment, Health, … Says:

    […] physicist Scott Aaronsen forked out in his blog that this examination can be deliberate as completing 3.5 of a 7 stairs indispensable to build a […]

  62. Aram Says:

    Gil, does this count?
    http://www.nature.com/nphoton/journal/v4/n10/full/nphoton.2010.168.html
    Sorry for not bringing it up before – I guess there were a lot of different points going on in that discussion.

    What bothers me a little about the conjectures is that there seems to be no mathematical principle behind them. For example, people have to ask you which demonstration would or would not refute your conjectures (until we get to something really obvious, like factoring a 2048-bit number), since they are too vague for others to figure out themselves. It is like conjecturing that computers will never be as intelligent as people, and then once computers beat people at chess, the response is that computers still cannot recognize faces.

  63. Victor Treinsoutrot Says:

    Boaz Barak et al. have published their paper:

    http://eccc.hpi-web.de/report/2015/082/

    Section 3 deals with MAX E3LIN2.

    Their algorithm starts with a random assignment z_i.
    Now if you try to find the best value y_1 for the first variable and return (y_1, z_2, …), you will satisfy on average Ω(sqrt(D_1)) more equations than random. (D_i is the number of equations that variable i appears in.)

    In their algorithm, they compute all such “local improvements” y_i such that (z_1, …, z_{i-1}, y_i, z_{i+1}, … z_n) is the best thing you can output if you are only allowed to modify the value of the i-th variable.

    Then they manage (bottom of page 6), through some nicely working combinatorial trick, to combine all those local improvements into a globally improved assignment, with some loss. (In the paper, they only improve compared to random by a third of the sum of how much the local improvements improve compared to random.) The trick works for all Max EkLIN2 with odd k.
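
    [To make the first step above concrete, here is a rough Python sketch of computing the local improvements y_i relative to a reference assignment z, using a made-up encoding of instances. The combinatorial step that combines the local improvements into a single global assignment, which is the heart of the Barak et al. paper, is deliberately not reproduced here.]

      import random
      from collections import defaultdict

      def local_improvements(equations, n, z):
          # equations: list of ((i, j, k), b) meaning x_i + x_j + x_k = b (mod 2);
          # z: the reference assignment, a list of 0/1 values of length n.
          eqs_of = defaultdict(list)
          for (i, j, k), b in equations:
              for v in (i, j, k):
                  eqs_of[v].append(((i, j, k), b))
          result = {}
          for i in range(n):
              satisfied = {0: 0, 1: 0}   # equations satisfied if x_i is set to 0 or to 1
              for (a, b2, c), rhs in eqs_of[i]:
                  others = sum(z[v] for v in (a, b2, c) if v != i) % 2
                  for val in (0, 1):
                      satisfied[val] += int((others + val) % 2 == rhs)
              y_i = max((0, 1), key=lambda v: satisfied[v])
              result[i] = (y_i, satisfied[y_i] - satisfied[z[i]])   # best value and its gain over z
          return result

      eqs = [((0, 1, 2), 0), ((0, 1, 3), 1), ((0, 2, 3), 1), ((1, 2, 3), 0)]
      z = [random.randint(0, 1) for _ in range(4)]
      print(local_improvements(eqs, 4, z))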

  64. Shtetl-Optimized » Blog Archive » Five announcements Says:

    […] Back in January, I blogged about a new quantum optimization algorithm by Farhi, Goldstone, and Gutmann, which was notable for being, as far as anyone could tell, the […]

  65. How many theoreticians does it take to approximate Max 3LIN? | in theory Says:

    […] which it would have been rejected without consideration anyways), I saw a comment by Boaz Barak on Scott Aronson’s blog announcing the same results, so we got in contact with Boaz, who welcomed us to the club of people […]

  66. Shtetl-Optimized » Blog Archive » Quantum. Crypto. Things happen. I blog. Says:

    […] algorithm turns out not to beat the best classical algorithms on the Max E3LIN2 problem (see here and here)—still, whatever the algorithm does do, at least there’s no polynomial-time […]