## Quantum computing news items (by reader request)

Within the last couple months, there was a major milestone in the quest to build a scalable quantum computer, and also a major milestone in the quest to figure out what you would *do* with a quantum computer if you had one. As I’ve admitted many times, *neither* of those two quests is really the reason why I got into quantum computing—I’m one of the people who would still want to study this field, even if there were no serious prospect either of building a quantum computer or of doing anything useful with it for a thousand years—but for some reason that I don’t fully understand, both of those goals do seem to excite other people.

So, OK, the experimental breakthrough was the Martinis group’s use of quantum error-correction with superconducting qubits, to preserve a logical bit for several times longer than the underlying physical qubits survived. Shortly before this came out, I heard Krysta Svore give a talk at Yale in which she argued that preserving a logical qubit for longer than the physical qubits was the next experimental milestone (the fourth, out of seven she listed) along the way to a scalable, fault-tolerant quantum computer. Well, ~~it looks like that milestone may have been crossed.~~ (**update:** I’ve since learned from Graeme Smith, in the comments section, that the milestone crossed should really be considered the “3.5th,” since even though quantum error-correction was used, the information that was being protected was classical. I also learned from commenter Jacob that the seven milestones Krysta listed came from a *Science* paper by Schoelkopf and Devoret. She cited the paper; the forgetfulness was entirely mine.)

In more detail, the Martinis group used a linear array of 9 qubits: 5 data qubits interleaved with 4 measurement qubits. The authors describe this setup as a “precursor” to Kitaev’s surface code (which would involve a 2-dimensional array). They report that, after 8 cycles of error detection and correction, they were able to suppress the effective error rate compared to the physical qubits by a factor of 8.5. They also used quantum state tomography to verify that their qubits were indeed in entangled states as they did this.

Of course, this is not yet a demonstration of any nontrivial fault-tolerant computation, let alone of scaling such a computation up to where it’s hard to simulate with a classical computer. But it pretty clearly lies along the “critical path” to that.
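
For intuition about why repeated error-correction cycles help, here is a toy classical Monte Carlo of a bit-flip repetition code with majority-vote correction. This is *not* a model of the actual superconducting experiment; the per-cycle flip probability `p = 0.05` is an illustrative number of mine, and only the 5 data bits and 8 cycles echo the paper’s parameters:

```python
import random

def survival(n_cycles: int, p: float, n_data: int = 5, trials: int = 50_000) -> float:
    """Fraction of trials in which a logical 0, stored in n_data redundant bits
    with majority-vote correction after each cycle, is still readable."""
    ok = 0
    for _ in range(trials):
        bits = [0] * n_data
        alive = True
        for _ in range(n_cycles):
            bits = [b ^ (random.random() < p) for b in bits]  # independent bit flips
            maj = int(sum(bits) > n_data // 2)
            if maj != 0:          # majority vote came out wrong: logical error
                alive = False
                break
            bits = [maj] * n_data  # correct all bits back to the majority value
        ok += alive
    return ok / trials

p = 0.05
print(survival(8, p))   # protected logical bit after 8 cycles
print((1 - p) ** 8)     # a single unprotected bit over the same 8 cycles
```

The protected bit survives far more often than the bare one, because a logical error now requires 3 of the 5 bits to flip within a single cycle.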

As I blogged back in September, Google recently hired Martinis’s group away from UC Santa Barbara; at Google, they’ll work on superconducting quantum annealing, as a step along the way to full universal QC. As I mentioned then, the Martinis group’s “Xmon” qubits have maybe 10,000 times the coherence times of D-Wave’s qubits, at least when you measure coherence in the usual ways. The fact that Martinis et al. are carefully doing quantum state tomography and demonstrating beneficial error-correction before scaling up is a further indication of the differences between their approach and D-Wave’s. Of course, even if you do everything right, there’s *still* no guarantee that you’ll outperform a classical computer anytime soon: it might simply be that the things you can do in the near future (e.g., quantum annealing for NP-complete problems) are not things where you’re going to outperform the best classical algorithms. But it’s certainly worth watching closely.

Meanwhile, the quantum algorithms breakthrough came in a paper last month by an extremely well-known trio down the Infinite Corridor from me: Farhi, Goldstone, and Gutmann. In slightly earlier work, Farhi et al. proposed a new quantum algorithm for NP-hard optimization problems. Their algorithm badly needs a name; right now they’re just calling it the “QAOA,” or Quantum Approximate Optimization Algorithm. But here’s what you need to know: their new algorithm is *different* from their famous adiabatic algorithm, although it does become equivalent to the adiabatic algorithm in a certain infinite limit. Rather than staying in the ground state of some Hamiltonian, the QAOA simply

- starts with a uniform superposition over all n-bit strings,
- applies a set of unitary transformations, one for each variable and constraint of the NP-hard instance,
- repeats the set some number of times p (the case p=1 is already interesting), and then
- measures the state in the computational basis to see what solution was obtained.

The unitary transformations have adjustable real parameters, and a big part of the game is figuring out how to set the parameters to get a good solution.
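
The steps above can be sketched concretely. Below is a minimal p=1 statevector simulation for MAX-CUT on a 4-vertex cycle; the graph and the crude grid search over the two parameters (γ, β) are toy choices of mine, standing in for the parameter-setting step, not anything taken from the paper:

```python
from functools import reduce
import numpy as np

# Toy instance: MAX-CUT on the 4-cycle (optimum cuts all 4 edges).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4

def cut_value(z: int) -> int:
    bits = [(z >> q) & 1 for q in range(n)]
    return sum(bits[i] != bits[j] for i, j in edges)

# Diagonal cost operator: C|z> = cut_value(z)|z>
costs = np.array([cut_value(z) for z in range(2 ** n)], dtype=float)

X = np.array([[0, 1], [1, 0]], dtype=complex)
I = np.eye(2, dtype=complex)

def mixer(beta: float) -> np.ndarray:
    """e^{-i beta X} applied to every qubit, as a dense 2^n x 2^n matrix."""
    rx = np.cos(beta) * I - 1j * np.sin(beta) * X
    return reduce(np.kron, [rx] * n)

def expected_cut(gamma: float, beta: float) -> float:
    state = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)  # uniform superposition
    state = np.exp(-1j * gamma * costs) * state            # cost-dependent phases
    state = mixer(beta) @ state                            # mixing unitary
    probs = np.abs(state) ** 2                             # measurement statistics
    return float(probs @ costs)

# Grid-search the two p=1 parameters for the best expected cut.
best = max((expected_cut(g, b), g, b)
           for g in np.linspace(0, np.pi, 40)
           for b in np.linspace(0, np.pi, 40))
print(best[0])  # expected cut at the best grid point; random guessing gives 2
```

Even at p=1, the optimized expectation clearly beats the random-guessing value of 2 edges; larger p (repeating the cost/mixer pair) interpolates toward the adiabatic limit.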

The original, hyper-ambitious goal of the QAOA was to solve the Unique Games problem in quantum polynomial time—thereby disproving the Unique Games Conjecture (which I previously blogged about here), unless NP⊆BQP. It hasn’t yet succeeded at that goal. In their earlier work, Farhi et al. managed to show that the QAOA solves the MAX-CUT problem on 3-regular graphs with approximation ratio 0.6924, which is better than random guessing, but not as good as the best-known classical algorithms (Goemans-Williamson, or for the degree-3 case, Halperin-Livnat-Zwick), let alone *better* than those algorithms (which is what would be needed to refute the UGC).

In their new work, Farhi et al. apply the QAOA to a different problem: the poetically-named MAX E3LIN2. Here you’re given a collection of linear equations mod 2 in n Boolean variables, where each equation involves exactly 3 variables, and each variable appears in at most D equations. The goal is to satisfy as many of the equations as possible, assuming that they’re *not* all satisfiable (if they were then the problem would be trivial). If you just guess a solution randomly, you’ll satisfy a 1/2 fraction of the equations. Håstad gave a polynomial-time classical algorithm that satisfies a 1/2+c/D fraction of the maximum number of satisfiable equations, for some constant c. This remains the best approximation ratio that we know how to achieve classically. Meanwhile, Trevisan showed that if there’s a polynomial-time classical algorithm that satisfies a 1/2+c/√D fraction of the max number of satisfiable equations, for a sufficiently large constant c, then P=NP.
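
To make the 1/2 baseline concrete, here is a small Python sketch (instance sizes and seeds are arbitrary choices of mine): a random assignment satisfies about half of a random MAX E3LIN2 instance, and even a naive single-variable local search provably ends at an assignment satisfying at least half, so all the action is in the lower-order advantage term:

```python
import random

def random_e3lin2(n, m, seed=0):
    """m random equations x_i + x_j + x_k = b (mod 2) over n variables."""
    rng = random.Random(seed)
    return [(rng.sample(range(n), 3), rng.randrange(2)) for _ in range(m)]

def frac_satisfied(eqs, x):
    return sum((x[i] ^ x[j] ^ x[k]) == b for (i, j, k), b in eqs) / len(eqs)

def local_search(eqs, x):
    """Flip single variables while doing so strictly increases the number of
    satisfied equations.  At a local optimum, each variable sees at least as
    many satisfied as unsatisfied equations among those it appears in, so
    (counting each equation three times) at least half are satisfied."""
    by_var = {}
    for t, ((i, j, k), b) in enumerate(eqs):
        for v in (i, j, k):
            by_var.setdefault(v, []).append(t)
    improved = True
    while improved:
        improved = False
        for v, idxs in by_var.items():
            sat = sum((x[i] ^ x[j] ^ x[k]) == b
                      for (i, j, k), b in (eqs[t] for t in idxs))
            if 2 * sat < len(idxs):  # flipping v toggles exactly these equations
                x[v] ^= 1
                improved = True
    return x

n, m = 200, 1000
eqs = random_e3lin2(n, m)
x = [random.Random(1).randrange(2) for _ in range(n)]
print(frac_satisfied(eqs, x))                      # random guess: about 1/2
print(frac_satisfied(eqs, local_search(eqs, x)))   # local optimum: at least 1/2
```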

OK, so what do Farhi et al. do? They show that the QAOA, with suitably tuned parameters, is able to satisfy a 1/2+c/D^{3/4} fraction of the *total* number of equations in polynomial time, for some constant c. (In particular, this implies that a 1/2+c/D^{3/4} fraction of the equations *are* satisfiable—assuming, as Farhi et al. do, that two equations directly contradicting each other, like x+y+z=0 and x+y+z=1, never appear in the same instance.)

Now, the above is a bigger fraction than the best-known classical algorithm satisfies! (And not only that, but here the fraction is of the *total* number of equations, rather than the number of satisfiable equations.) Farhi et al. also show that, if the constraint hypergraph doesn’t contain any small cycles, then QAOA can satisfy a 1/2+c/√D fraction of the equations in polynomial time, which is essentially the best possible unless NP⊆BQP.

The importance of this result is not that anyone cares about the MAX E3LIN2 problem for its own sake. Rather it’s that, as far as I know, *this is the first time that a quantum algorithm has been proved to achieve a better approximation ratio for a natural NP-hard optimization problem than the best known classical algorithm achieves.* People have discussed that as a hypothetical possibility for 20 years, but (again, unless I’m missing something) we never had a good example until now. The big question now is whether the 1/2+c/D^{3/4} performance can be matched classically, or whether there truly is an NP-intermediate region of this optimization problem where quantum outperforms classical. (The third possibility, that doing as well as the quantum algorithm is already NP-hard, is one that I won’t even speculate about. For, as Boaz Barak rightly points out in the comments section, the quantum algorithm is still being analyzed only in the regime where solutions are combinatorially *guaranteed* to exist—and that regime can’t possibly be NP-hard, unless NP=coNP.)

*[Above, I corrected some errors that appeared in the original version of this post—thanks to Ed Farhi and to the commenters for bringing them to my attention.]*

**Update (Feb. 3, 2015):** Boaz Barak has left the following comment:

> In a work with Ankur Moitra, Oded Regev, David Steurer and Aravindan Vijayaraghavan we were able to match (in fact exceed) the guarantees of the Farhi et al paper via a classical efficient algorithm. (Namely satisfy a 1/2 + C/√D fraction of the equations.) p.s. we hope to post this on the arxiv soon

Comment #1 January 12th, 2015 at 1:09 pm

Always fun to read passion! 🙂 Questions please:

Is c known? Is it the same c for both QAOA and Håstad’s algorithm? Are there any constraints on D? (For example, it seems that if D=1 then the problem can be solved in polynomial time.) Could we have QAOA (or some other quantum algorithm) better than Håstad’s algorithm (or some other classical algorithm) for some (c,D)?

Comment #2 January 12th, 2015 at 1:22 pm

(sorry typo: better than => worse than)

Comment #3 January 12th, 2015 at 2:33 pm

Weren’t there some examples of approximation for shortest lattice vectors where the best quantum algorithm did better than the best classical one (but obviously less than what would be required for a complexity-class collapse of some sort)? And (based on an even vaguer memory), wasn’t there such an example for certain approximations of Jones polynomials?

Comment #4 January 12th, 2015 at 2:36 pm

I feel compelled to point out, again and again, that comparing the coherence time of Martinis qubits to that of D-Wave qubits makes no sense. This compares the coherence time of one qubit in a system of ~5 to that of one qubit in a system of ~1000. Martinis qubits are specifically built to have good coherence times and are not built to be scalable. D-Wave qubits are specifically built to be scalable. When you only have a handful of qubits, it’s easy to isolate them and to program/readout each qubit with unique signal lines. When you have 100s to 1000s of qubits like D-Wave, you need to add a lot of infrastructure. We will inevitably see the coherence times for Martinis qubits fall as the number of qubits in his systems increases. If D-Wave built a simple 5-qubit processor, they could rival Martinis’s current coherence times for sure. But what would be the point of that?

Comment #5 January 12th, 2015 at 2:55 pm

Quantum Inexact NP Optimization Algorithm, or QuINOA.

Comment #6 January 12th, 2015 at 3:26 pm

Possibly naive question:

MAX E3LIN2 has an obvious generalization to MAX E3LINk for any k, and then by guessing randomly one should expect to satisfy about a 1/k fraction of the equations. Do the results for MAX E3LIN2 extend to this broader setting?

Comment #7 January 12th, 2015 at 3:48 pm

Jay #1: Yes, you can read their paper to find the specific values of c. I don’t think there are any restrictions on D.

Comment #8 January 12th, 2015 at 3:54 pm

Gil #3: No, I don’t think it’s known how to achieve any approximation ratio for SVP using a quantum computer that beats what one can obtain classically using LLL-type algorithms. By Regev’s results, such an algorithm would have followed if we knew how to solve the Dihedral Hidden Subgroup Problem efficiently, but we don’t.

I’m not counting the Jones polynomial as an example of the sort of problem I’m talking about, because that’s a problem of producing an *additive* approximation to a #P-complete sum of exponentially many positive and negative terms. And for that sort of problem, it’s much less surprising that there would be a quantum speedup (and indeed we have many other examples, besides the Jones polynomial).

Comment #9 January 12th, 2015 at 4:05 pm

Jeremy #4:

> But what would be the point of that?

The point would be to demonstrate the building blocks that will ultimately be needed individually, before you try to demonstrate them at scale. It might be true that D-Wave “could have” produced a 9-qubit system with Martinis-like coherence times had it wanted to, and it might also be true that Martinis “could have” produced a 500-qubit system with D-Wave-like coherence times had *he* wanted to (and had he had the funding). But regardless of the truth of either statement, I personally find the “first really understand what’s going on and *then* scale up” approach to be more promising. For like many others, I think of the “real” problem not as adding more qubits, but simply as getting over the hump of quantum fault-tolerance. Once you’ve done the latter, you can then add as many qubits as you want, with the confidence that you’ll know what they’re doing.

Comment #10 January 12th, 2015 at 4:07 pm

Chris #5: That’s an awesome name! I’ll see what Farhi, Goldstone, and Gutmann think of it.

Comment #11 January 12th, 2015 at 4:09 pm

Joshua #6: It seems plausible that the result would generalize to MAX E3LINk, but I really don’t know. I could ask the authors.

Comment #12 January 12th, 2015 at 4:09 pm

Related to Scott’s #9 and Jeremy’s #4 (and pardon if this has been addressed before): is there any example of any technology where scaling up massively first, and only then working out the precision issues, actually succeeded? In every example I’m aware of, the small scale is done first and then the integration. Transistors and vacuum tubes are the most prominent examples.

Comment #13 January 12th, 2015 at 4:11 pm

Scott #7: sleep deprivation?

Comment #14 January 12th, 2015 at 4:11 pm

If I understand correctly, what the quantum algorithm does cannot be NP-hard unless NP=coNP, since what they show is that *every* 3XOR instance of max degree D has an assignment satisfying a 1/2 + Ω(D^{-3/4}) fraction of its equations.

In addition they show an efficient quantum algorithm to actually find such an assignment.

Such a problem where there is a guaranteed solution for every instance cannot be NP-hard (if there were a reduction from SAT to this problem, then one could use it to certify that a SAT formula is unsatisfiable).

Comment #15 January 12th, 2015 at 4:19 pm

Jay #13: Not really. Just, y’know, you can look these things up yourself!

Comment #16 January 12th, 2015 at 4:27 pm

Boaz #14: Ah, that’s an excellent observation. We should say: assuming NP≠coNP, the only way the quantum algorithm could *possibly* be doing something NP-hard, would be if it could satisfy a 1/2+1/f(D) fraction of equations when possible, for some f such that a 1/2+1/f(D) fraction of equations are *not* always simultaneously satisfiable.

This raises some obvious questions: can one prove, nonconstructively if necessary, that a 1/2+c/√D fraction of equations are always simultaneously satisfiable? Is 1/2+c/√D (for some c) not only the threshold for NP-hardness, but *also* the threshold where the problem goes from always-satisfiable to not-always-satisfiable? If so, then do these two thresholds occur at the same value of c or at different values?

Comment #17 January 12th, 2015 at 4:32 pm

Hi Scott,

Note that the Martinis paper only extends the lifetime of classical information, not quantum. Of course, classical error correction has been demonstrated before, but what’s interesting is that this is in a system that could plausibly be used to do quantum error correction in the future. The milestone Krysta would have been talking about is improving the storage of quantum information via error correction, which is still a long way off.

Comment #18 January 12th, 2015 at 4:35 pm

By the way, the seven steps Svore discussed (of which lifetime-extending QEC is the fourth) come from a Schoelkopf and Devoret outlook in Science 339, pages 1169-1174. Their next milestone is single-qubit operations on logical qubits.

Comment #19 January 12th, 2015 at 4:56 pm

Boaz, a “quantumized” proof (and an efficient quantum algorithm) for the statement that every 3XOR instance of max degree D has an assignment satisfying a 1/2 + Ω(D^{-3/4}) fraction is very interesting. So of course, even before an efficient classical algorithm, it would be nice to find an “ordinary” mathematical proof (e.g., via a probabilistic argument).

Is this problem in one of Papadimitriou’s complexity classes (for algorithms to find objects guaranteed by a mathematical theorem)?

Comment #20 January 12th, 2015 at 5:14 pm

(I am actually now confused – couldn’t you have an instance where for every equation of the form x_i + x_j + x_k = 0 you also have the equation x_i + x_j + x_k = 1, so that you can never satisfy more than 1/2 of them? I’m guessing the paper rightly doesn’t allow such instances.)

Ignoring technicalities such as the above (which may or may not be related to the girth condition they mention) I would guess that a 1/2+O(1/\sqrt{D}) should be the right answer. The natural way to construct a 3XOR instance that is highly unsatisfiable is to simply choose m=Dn random equations.

For every particular assignment, the number of satisfied equations has expectation m/2 and standard deviation sqrt{m}, so we would expect the best assignment out of the 2^n possibilities to have an advantage of roughly sqrt{n} standard deviations. So the fraction of constraints satisfied would be

1/2 + O(\sqrt{nm}/m) = 1/2 + O(1/\sqrt{D})

I don’t have a strong intuition for the right value of the constant – that can be somewhat delicate, since the “roughly” sqrt{n} above is indeed rough, as there are dependencies between assignments that are close to one another.
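
Boaz’s back-of-the-envelope estimate is easy to sanity-check by brute force on instances small enough to enumerate every assignment (the sizes and seeds below are arbitrary choices of mine): the best achievable advantage over 1/2 visibly shrinks as D grows.

```python
import numpy as np

def max_advantage(n: int, D: int, seed: int = 0) -> float:
    """Max over all 2^n assignments of (fraction satisfied - 1/2), for a
    random E3LIN2 instance with m = D*n equations on n variables."""
    rng = np.random.default_rng(seed)
    m = D * n
    z = np.arange(2 ** n, dtype=np.uint32)   # every assignment, as a bitmask
    sat = np.zeros(2 ** n, dtype=np.int32)   # equations satisfied by each one
    for _ in range(m):
        i, j, k = rng.choice(n, size=3, replace=False)
        b = rng.integers(2)
        parity = ((z >> i) ^ (z >> j) ^ (z >> k)) & 1
        sat += (parity == b)
    return sat.max() / m - 0.5

for D in (2, 8):
    print(D, max_advantage(16, D))  # the advantage shrinks as D grows
```

With n fixed and m = D·n, the measured advantage falls as D increases, in line with the heuristic that the best assignment sits roughly sqrt{n} standard deviations above the mean.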

Comment #21 January 12th, 2015 at 5:26 pm

Scott # 9

“The point would be to demonstrate the building blocks that will ultimately be needed individually, before you try to demonstrate them at scale.”

But my point is that Martinis’ current building blocks are NOT what is ultimately needed individually. To demonstrate that, his small 9-qubit system would need to have all of the on-chip wiring, storage devices, readout components, shielding, etc. that would surround those same 9-qubits in a ~1000 qubit system. I don’t mean to belittle or detract from Martinis’ achievements, but the assumption that his current coherence times will scale doesn’t have any support, and any comparisons between his and D-Wave’s qubits don’t make sense unless the environments (e.g., as a result of scaling measures) are matched.

People don’t seem to appreciate that (at least, before Google got involved) D-Wave has way more resources than Martinis. The tradeoff between coherence and scalability is, in many ways, a fabrication problem, and D-Wave has invested many, many millions more dollars in superconducting fab than Martinis. They both want the same thing, so if D-Wave shows less coherence it’s because attaining Martinis-level coherence at scale is the challenge.

Comment #22 January 12th, 2015 at 5:27 pm

Jeremy, the reason for D-Wave to produce a five-qubit computer with long coherence times would be to prove that they aren’t quacks. I really doubt they could if they tried.

Comment #23 January 12th, 2015 at 6:35 pm

Scott #15: OK, I just thought you would have known without searching (1/22 btw, still searching for Håstad’s). But no, I don’t think the last questions were that trivial. Let me rephrase:

Is it correct that, for D=1, MAX E3LIN2 can be solved in P time?

Is it known or possible that, for some fixed D, Håstad is better than QuINOA?

Comment #24 January 12th, 2015 at 6:37 pm

@4 “… comparing coherence time of Martinis qubits to that of D-Wave qubits makes no sense”

Comparing the coherence times of the Martinis qubits and the D-Wave qubits is a perfectly sensible apples-to-apples comparison. Your argument is that the Martinis group performing better along that metric does not make them better, but it is absurd to say that you can’t make a comparison unless it definitively resolves the dispute. Everyone agrees that the Martinis group and D-Wave are making different trade-offs with regards to qubit coherence and scalability, even as people disagree on how sensible the different trade-offs are.

Comment #25 January 13th, 2015 at 2:45 am

Graeme #17 and Jacob #18: Thanks very much for the clarifications! So then, they’ve extended the lifetime of classical information in a quantum system, by using error-correction with entangled quantum states. Maybe we should call that step 3.5? I’ll update the post accordingly.

Comment #26 January 13th, 2015 at 3:20 am

Jeremy #21: I think most researchers accept that the tradeoff between coherence and scalability will only be overcome with fault-tolerance techniques, which is why experimental demonstration of these techniques at small scales is so important.

If and when such demonstrations succeed, there is still a considerable architecture problem to be solved before fabrication even comes into it. Most likely a large-scale quantum computer won’t resemble anything we imagine today, which is why I think D-Wave jumped the gun and the millions of dollars they spent were a bit of a waste.

Comment #27 January 13th, 2015 at 9:14 am

Joshua #12

“is there any example of any sort of technology where scaling it up on a massive scale and then working out the precision issues actually succeeded?”

What’s interesting is that redundancy is about scaling up in order to get around errors at a lower level.

The Martinis group uses redundancy in a very controlled manner.

Multicore CPU manufacturing uses redundancy to get around fundamental imperfections in the processes (make N cores expecting that a fraction of them will be flawed and disabled).

Comment #28 January 13th, 2015 at 9:25 am

Hi Scott,

I don’t really understand why you qualified the post with

“but for some reason that I don’t fully understand, both of those goals do seem to excite other people.”

But fundamentally the QAOA thing is about the Church/Turing thesis and the complexity hierarchy, regardless of whether a practical QC is ever realized.

Comment #29 January 13th, 2015 at 9:41 am

Jeremy #4, I know what you mean. In the same way, I couldn’t understand all the hoopla when Andrew Wiles proved Fermat’s last theorem. I mean, Fermat had already proved it, right? We know that because he said he did.

Comment #30 January 13th, 2015 at 10:31 am

Ok, I didn’t intend to turn this into another D-Wave vs. Shtetl situation. I see D-Wave and Martinis as complementary efforts toward the same goal, and it just irks me when people dismiss D-Wave’s accomplishments based on a simplistic comparison to Martinis’ coherence times. In the post, Scott highlights that Martinis’ qubits have 10,000x the coherence time of D-Wave’s qubits, but is silent on the fact that D-Wave’s processors have ~10,000x the number of Josephson junctions as Martinis’ processors. Why the bias?

Chris D @ 26 points out that once fault-tolerance is achieved there remains a considerable architecture problem, and this is the fundamental difference between the D-Wave and the Martinis approaches. Martinis is (or has been) working on the fault-tolerance problem and leaving the architecture/scaling problem for later, adopting the common rationale that the individual building blocks need to be perfected before they can be assembled. Conversely, D-Wave is working on the architecture/scaling problem now and leaving fault tolerance for later, based on the notion that one can’t perfect the individual building blocks without shaping the blocks so that they’ll all fit together. We learn a tremendous amount from both of these streams. D-Wave is nowhere near Martinis’ coherence times, and Martinis is nowhere near the sophistication of D-Wave’s superconducting integrated circuits.

Comment #31 January 13th, 2015 at 12:20 pm

As far as I remember, the seven milestones discussed in the post (from the paper of Schoelkopf and Devoret) are inspired by David DiVincenzo’s 2000 paper “The physical implementation of quantum computation”. See them also here.

Comment #32 January 13th, 2015 at 12:27 pm

Jacob #18, Scott #22: it’s good to add that Krysta cited the paper herself in her talk, iirc (I was there too).

Comment #33 January 13th, 2015 at 12:33 pm

“D-Wave is nowhere near Martinis’ coherence times, and Martinis is nowhere near the sophistication of D-Wave’s superconducting integrated circuits.”

I guess Martinis has been working on the quantum side and D-Wave on the computer side 😉

Comment #34 January 13th, 2015 at 12:37 pm

Nick Read #32: Thanks! Added that.

Comment #35 January 13th, 2015 at 1:47 pm

Jeremy Stanton – “In the post, Scott highlights that Martinis’ qubits have 10,000x the coherence time of D-Wave’s qubits, but is silent on the fact that D-Wave’s processors have ~10,000x the number of Josephson junctions as Martinis’ processors. Why the bias?”

If I want to build an airplane, I’d rather have two wings with a lot of lift than 20,000 wings with very little lift.

Comment #36 January 13th, 2015 at 3:33 pm

My guess is that if D-Wave ever demonstrates (through the standard scientific channels) that it is producing working, useful QC devices, Scott will be the first to congratulate them.

Comment #37 January 13th, 2015 at 4:25 pm

Greg – weird analogy, but let’s extend it to the point where it’s a little more relevant. Let’s say that an airplane must have 20,000 wings in order to fly. Now your two wings are very good wings, but there is no way they can actually get the job done. It’s great to keep fine-tuning the lift that those two wings can produce, and you’ll probably learn a lot about really small, not-particularly-useful systems in the process. But it’s the behavior of the 20,000-wing system that you really care about. In that case, it’s perfectly viable, maybe even actually better from an engineering point of view, to start building 20,000-wing systems and see what they’re like.

You might say that you can only really understand the 20,000-wing system if you really understand the 2-wing system. I’d agree. But if the composition of a wing itself fundamentally changes in going from a 2-wing system to a 20,000-wing system, then your understanding of the 2-wing system is of little use to you when you start working on the real beast. You’d be better off building the 20,000-wing system first to figure out everything that you need in such a system, and then sampling 2-wing subsystems within that larger system in order to study them. There is nothing that stops D-Wave from doing this. D-Wave’s investigations of x-qubit subsystems in larger qubit systems are, arguably, more relevant to scalable QC than Martinis’ investigations of x-qubit systems in isolated environments.

Comment #38 January 13th, 2015 at 4:42 pm

@Boaz #20: “(I am actually now confused – couldn’t you have an instance where every for equation of the form x_i + x_j + x_k = 0 you also have the equation x_i + x_j + x_k = 1, and so you can never satisfy more than 1/2 of them? am guessing the paper rightly doesn’t allow such instances.)”

I believe the resolution to this is that Farhi et al (and the classical references they compare their result with) are computing an approximation ratio which is defined as: (# of clauses satisfied by the solution they produce)/(# of clauses satisfied by the optimal assignment). Therefore even if the optimal assignment only satisfies 1/2 of the clauses, the approximation ratio can be greater than 1/2.

Comment #39 January 14th, 2015 at 12:44 am

Elizabeth #38: OK, I just checked again. In their paper, they explicitly say that they can satisfy such-and-such fraction of equations for every instance—so in particular, that in every instance that fraction is satisfiable. It’s not just an approximation ratio. Then, when they go on to discuss their algorithm, they do make it clear that each equation can occur either positively, negatively, or not at all—so they are indeed excluding the case where an equation appears both positively and negatively.

Comment #40 January 14th, 2015 at 8:59 pm

Just FYI, for Max-Cut on 3-regular graphs, Halperin-Livnat-Zwick’01 give a .9326-approximation algorithm involving semidefinite programming, and also a simple combinatorial .8-approximation algorithm. (Actually, both algorithms work only assuming the maximum degree is 3.)

Comment #41 January 15th, 2015 at 1:05 am

I see the DWave vs Martinis situation as something like this:

Scaling is a challenge & so is fault tolerance. D-Wave focused on the former without doing much of the latter. Unfortunately, D-Wave tried to sell it as a bigger success than it really was.

Martinis seems to have focused on fault tolerance. Fortunately, unlike D-wave he’s not been unnecessarily hyping it up.

So let the rest of us also not make this milestone any bigger than it really is: Martinis has made progress on error correction, but unless they can preserve these gains & scale up, it’s nowhere close to the final goal of a useful QC.

Whether it makes more sense, as an experimental approach, to scale first or to error-correct first, I’m not very sure. I guess history does show that scaling later is what has mostly worked.

In any case, that seems a subjective choice. What’s important is to remember that neither scaling nor error correction is of much use from a QC technology POV unless they are both advanced in conjunction.

Comment #42 January 15th, 2015 at 1:07 am

Ryan #40: Thanks! Yes, Farhi did mention in his talk about this that for the degree-3 case, you can beat Goemans-Williamson by a little, still using SDP relaxation (and I see that they reference Halperin-Livnat-Zwick in their paper). I’ve now edited the post to point that out.

Is it known whether Halperin-Livnat-Zwick is optimal for the degree-3 case, assuming the UGC?

Comment #43 January 15th, 2015 at 1:17 am

If I were to bet, I’d say that within the next six months, someone will come up with a classical algorithm that beats Håstad’s polynomial-time algorithm.

Comment #44 January 15th, 2015 at 8:54 am

Rahul #43: Yes, that is indeed a plausible bet. Even then, though, it would still be kind of interesting that the quantum algorithm had come first.

Incidentally, I didn’t feel bad at all about blogging the Martinis group’s work, because so many QC experiments that were so much less important have been hyped so much more! Good to even things out once in a while. 🙂

Comment #45 January 15th, 2015 at 9:23 am

Rahul #41: Agreed!

Comment #46 January 15th, 2015 at 10:30 am

Scott #44:

Indeed! Martinis himself seems like the ideal researcher in these aspects. I think he doesn’t hype his stuff at all.

D-Wave could learn a thing or two from him. Speaking of D-Wave they’ve been kinda lying low for some time now. Maybe the criticism finally got to them. 🙂

Comment #47 January 16th, 2015 at 3:21 am

The difference between the 3.5th milestone and the 4th milestone plays a central role in the seventh post of my 2012 debate with Aram Harrow: https://rjlipton.wordpress.com/2012/09/16/quantum-repetition/

In connection with a conjecture I made in the first post (“Conjecture 1”), Aram made the point that classical error-correction can lead to very stable encoded qubits in certain states (which is essentially the 3.5th milestone). I gave a formal description of the conjecture, which essentially asserts that the 4th milestone, namely insisting that encoded qubits allow arbitrary superpositions, cannot be reached.

Comment #48 January 16th, 2015 at 9:46 am

Gil,

Just so I understand: you’re conjecturing that preserving a logical qubit for longer than the physical qubits cannot and will not be accomplished? If so, is it your position, then, that accomplishing this task would basically disprove, or at least severely weaken, your general conjecture regarding the impossibility of constructing an effective quantum computer? Thanks.

Comment #49 January 16th, 2015 at 9:51 am

Scott #44

What’s the parallel here with Shor’s algorithm, which does not seem to have a classical counterpart?

Is it that the problem at hand isn’t the type of “isolated island” that factorization is?

Comment #50 January 17th, 2015 at 11:04 am

Dear Michael (#48), yes, sure! As I said many times (see, for example, the discussion in my 2012 Simons Institute videotaped lecture 2), an implementation of quantum error-correction with encoded qubits that are substantially more stable than the raw qubits (and that allow arbitrary superpositions for the encoded qubit) would disprove my conjectures. Such stable encoded qubits are expected from implementations of the distance-5 surface code.

Let me add, Michael, that I will be impressed to see even a realization of the distance-3 (or distance-5) surface code that gives good-quality encoded qubits, even if those encoded qubits are of somewhat worse quality than the raw qubits used for the encoding. These experiments, including those already carried out, also give various other opportunities to test my conjectures.

Comment #51 January 17th, 2015 at 4:28 pm

Thanks, Gil.

Comment #52 January 20th, 2015 at 1:18 am

As an aside, I was reading the Farhi paper, and they acknowledge the US Army and NSF for funding.

NSF I understand, but why ARL? Does MAX E3LIN2 or the other stuff have any military implications? Cryptography? Just curious.

Or does the US Army spread its infinite $$ into more fundamental, basic research objectives?

Comment #53 February 3rd, 2015 at 10:54 am

Just an update: in work with Ankur Moitra, Oded Regev, David Steurer, and Aravindan Vijayaraghavan, we were able to match (in fact, exceed) the guarantees of the Farhi et al. paper via an efficient classical algorithm (namely, satisfying a 1/2 + C/\sqrt{D} fraction of the equations).

Comment #54 February 3rd, 2015 at 10:54 am

P.S. We hope to post this on the arXiv soon.

Comment #55 February 5th, 2015 at 7:00 pm

#53: um, so then P=NP?

Comment #56 February 15th, 2015 at 10:22 am

Nick Read, #55,

No; the choice of C will presumably be different, and likely much smaller than the choice of C that would make the problem NP-hard.

Comment #57 February 15th, 2015 at 3:10 pm

Joshua #56: Yes, “presumably”. But is it, in fact? There will now be an interesting quantitative question: just how close can the constant achieved by the best known algorithm (i.e., the largest known achievable constant) get to the smallest known constant whose achievement would imply P=NP?

Comment #58 February 15th, 2015 at 10:38 pm

Nick: Yes, and the other question will then be whether one can get a better constant in the quantum case than in the classical case.

Comment #59 March 9th, 2015 at 3:12 pm

[…] physicist Scott Aaronson pointed out in his blog that this experiment can be considered as completing 3.5 of the 7 steps needed to build a working […]

Comment #62 March 30th, 2015 at 10:21 am

Gil, does this count?

http://www.nature.com/nphoton/journal/v4/n10/full/nphoton.2010.168.html

Sorry for not bringing it up before – I guess there were a lot of different points going on in that discussion.

What bothers me a little about the conjectures is that there seems to be no mathematical principle behind them. For example, people have to ask you which demonstrations would or would not refute your conjectures (until we get to something really obvious, like factoring a 2048-bit number), since the conjectures are too vague for others to figure this out for themselves. It is like conjecturing that computers will never be as intelligent as people, and then, once computers beat people at chess, responding that computers still cannot recognize faces.

Comment #63 May 15th, 2015 at 3:15 pm

Boaz Barak et al. have published their paper:

http://eccc.hpi-web.de/report/2015/082/

Section 3 deals with MAX E3LIN2.

Their algorithm starts with a random assignment z_i.

Now if you find the best value y_1 for the first variable and return (y_1, z_2, …), you will satisfy, on average, Ω(sqrt(D_1)) more equations than random. (D_i is the number of equations that variable i appears in.)

In their algorithm, they compute all such “local improvements” y_i, where (z_1, …, z_{i-1}, y_i, z_{i+1}, …, z_n) is the best assignment you can output if you are only allowed to modify the value of the i-th variable.

Then they manage (bottom of page 6), through a nicely working combinatorial trick, to combine all those local improvements into a globally improved assignment, with some loss. (In the paper, they only improve over random by a third of the sum of how much the local improvements individually improve over random.) The trick works for all Max EkLIN2 with odd k.
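For readers who want something concrete, here is a naive Python sketch of the single-variable “local improvement” step described above (my own illustration with made-up function names, not code from the paper). An instance of MAX E3LIN2 is represented as a list of GF(2) equations, each given as a pair (variable indices, right-hand side), and assignments are 0/1 lists:

```python
def satisfied(equations, x):
    # Count how many equations x_a XOR x_b XOR x_c = s the 0/1 assignment x satisfies.
    return sum(sum(x[v] for v in vs) % 2 == rhs for vs, rhs in equations)

def local_improvements(equations, z):
    # For each variable i, hold z fixed elsewhere and record the best value y_i
    # that coordinate i alone can take, relative to the starting assignment z.
    y = list(z)
    for i in range(len(z)):
        flipped = list(z)
        flipped[i] ^= 1
        if satisfied(equations, flipped) > satisfied(equations, z):
            y[i] = flipped[i]
    return y
```

Combining the y_i into a single globally improved assignment (the combinatorial trick at the bottom of page 6) is the nontrivial part, which this sketch does not attempt.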

Comment #64 May 16th, 2015 at 11:08 pm

[…] Back in January, I blogged about a new quantum optimization algorithm by Farhi, Goldstone, and Gutmann, which was notable for being, as far as anyone could tell, the […]

Comment #65 May 18th, 2015 at 5:14 pm

[…] which it would have been rejected without consideration anyways), I saw a comment by Boaz Barak on Scott Aaronson’s blog announcing the same results, so we got in contact with Boaz, who welcomed us to the club of people […]

Comment #66 March 6th, 2016 at 2:30 pm

[…] algorithm turns out not to beat the best classical algorithms on the Max E3LIN2 problem (see here and here)—still, whatever the algorithm does do, at least there’s no polynomial-time […]