(says ‘almost’ there to NP-complete)

As I wrote here last week, I was genuinely thrilled to see D-Wave finally publish a paper that starts to address the relevant questions, and I immediately gave credit where it was due. I hope D-Wave succeeds at scaling things up and demonstrating entanglement and other characteristic quantum effects. But it’s a long way from a quantum-mechanical temperature dependence in 8 qubits to anything commercially useful, and it’s an insult to people’s intelligence to expect them to take the former as evidence for the latter.

http://www.scottaaronson.com/blog/?p=306

In 2011, D-Wave announced the sale of its D-Wave One quantum computer to Lockheed Martin.

http://www.dwavesys.com/en/pressreleases.html#lm_2011

Lockheed Martin Corporation (NYSE: LMT) has entered into an agreement to purchase a quantum computing system from D-Wave Systems Inc.

*Decent talk, Scott, but I’d much rather see video of the lecture you gave at CMU.*

I just checked in with the CMU folks; they say that they’re working on it and the video will be available soon.

The question of which computational complexity classes hypothetically describe a “realistic physical model,” even in a very unrealistic sense, is indeed very interesting. A major difficulty is that relating physics to computational complexity is hard, but physics seems much closer to quantum error correction.

Another “computational class” (in a loose sense of the term) of interest to quantum skeptics is the following. Consider the power of noisy quantum computers with an arbitrarily small but positive level of noise, as understood just after Shor’s algorithm was found (and after the Bernstein–Vazirani paper) but before quantum error correction and fault tolerance were discovered. Such “computers” do not represent an asymptotic exponential speedup, but for the problem of factoring large numbers, which is infeasible on classical computers, “all you need” is to lower the noise level below some fixed positive constant. This suggests that even before the threshold theorem (and certainly after it), the “first-order” reason for skepticism is that the noise level for quantum computers (which actually carry out complicated quantum evolutions) will scale up with the number of qubits.
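To make the “first-order” worry quantitative, here is a back-of-the-envelope sketch. The per-gate error rate and the ~n³ gate count for factoring an n-bit number are illustrative assumptions, not measured figures: without error correction, a circuit of G gates each failing independently with probability p succeeds with probability roughly (1 − p)^G, which collapses once G is much larger than 1/p.

```python
# Back-of-the-envelope: success probability of a noisy circuit with
# NO error correction. Each of G gates fails independently with
# probability p, so the whole run succeeds with probability ~(1-p)**G.
p = 1e-4  # assumed per-gate error rate (illustrative only)

for n in (10, 100, 1000):           # number of bits to factor
    gates = n ** 3                  # rough textbook-scale gate count for Shor
    success = (1 - p) ** gates
    print(f"n={n:5d}  gates={gates:10d}  P(success) ~ {success:.3e}")
# For n=10 the run mostly succeeds (~0.90); for n=100 the success
# probability is already astronomically small (~1e-43).
```

The point of the sketch is only that a *fixed* positive noise level is fatal at scale; lowering p helps linearly in the exponent, while the gate count grows polynomially in n.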

Suppose that we allow QC circuits of mixed bits and qubits, together with qubit creation and measurement gates. If at certain time intervals, a quantum executioner comes along and destroys all of the qubits but leaves the bits, then you get (I think) BPP^BQNC, assuming that the executioner allows only a polylog amount of time between death sweeps.

But you could argue that that is too contrived or too unfair. Suppose instead that the executioner works asynchronously, so that there does not exist a quantum path through the circuit of more than polylog depth. Or heck, of no more than constant depth. Then, remark: Using quantum teleportation, you can get all of BQP! Teleportation is the quantum Twitter that keeps the resistance alive even though every rebel is soon executed. 🙂
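The teleportation remark can be made concrete. Below is a minimal numpy sketch of the standard teleportation circuit: the message qubit and half of a Bell pair are measured (and, in the executioner picture, could then be destroyed), yet after two classically controlled corrections the message state survives on a fresh qubit. The gate layout is the textbook one; nothing here is specific to the executioner construction itself.

```python
import numpy as np

# Single-qubit gates
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def apply(gate, qubit, state, n=3):
    """Apply a single-qubit gate to `qubit` of an n-qubit state vector."""
    ops = [I2] * n
    ops[qubit] = gate
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full @ state

def cnot(control, target, state, n=3):
    """Apply CNOT by permuting basis amplitudes (qubit 0 = leftmost bit)."""
    new = state.copy()
    for i in range(2 ** n):
        if (i >> (n - 1 - control)) & 1:
            j = i ^ (1 << (n - 1 - target))
            new[i] = state[j]
    return new

def measure(qubit, state, rng, n=3):
    """Projectively measure `qubit`; return (outcome, collapsed state)."""
    mask = 1 << (n - 1 - qubit)
    p1 = sum(abs(state[i]) ** 2 for i in range(2 ** n) if i & mask)
    outcome = int(rng.random() < p1)
    proj = np.array([state[i] if bool(i & mask) == bool(outcome) else 0.0j
                     for i in range(2 ** n)])
    return outcome, proj / np.linalg.norm(proj)

rng = np.random.default_rng(0)

# Qubit 0 carries an arbitrary message |psi> = a|0> + b|1>
a, b = 0.6, 0.8j
psi = np.array([a, b], dtype=complex)

# Qubits 1 and 2 share a Bell pair (|00> + |11>)/sqrt(2)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)

# Teleportation circuit: CNOT(0->1), H on 0, then measure qubits 0 and 1.
state = cnot(0, 1, state)
state = apply(H, 0, state)
m0, state = measure(0, state, rng)
m1, state = measure(1, state, rng)

# The measured qubits are now classical (the executioner may destroy them);
# two classically controlled corrections recover |psi> on qubit 2.
if m1: state = apply(X, 2, state)
if m0: state = apply(Z, 2, state)

# Extract qubit 2's state and compare with the original message.
idx = lambda q2: (m0 << 2) | (m1 << 1) | q2
out = np.array([state[idx(0)], state[idx(1)]])
out /= np.linalg.norm(out)
fidelity = abs(np.vdot(psi, out)) ** 2
print(round(fidelity, 6))  # prints 1.0 -- the state survived its "execution"
```

The only quantum resource that persists past each measurement is the pre-shared Bell pair; everything else crossing the cut is classical bits, which is exactly why constant-depth quantum circuits plus teleportation can simulate arbitrary-depth ones.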

simple mind – Well, that’s a good question too. I would suppose that an equation like C^C = C is not enough. For one reason, because you would want reasonable restrictions on what it should mean to exponentiate complexity classes, or in other words what it should mean to call an oracle. I don’t have any great ideas for such axioms, nor for what else should be required.

Anyway, Scott, one particular reasonable response to the devil’s advocate criticism of BQP is that unitary operations and measurements is the wrong definition of quantum probability/computing/mechanics to begin with! It’s the simplest definition, but it’s simple to the point of being simplistic. A better definition is that the gates or operators should be general quantum operations. If you do it this way, then measurement IS part of the theory’s dynamical content, and observers can also exist on the dynamical stage. Besides, the definition using unitary gates does not yield the correct definition of QSPACE(n^α) for any fixed α (unless you allow measurement-adapted quantum gates, I guess), because Stinespring’s theorem is not a space-efficient construction.

As I pointed out in the talk, quantum computing with separable mixed states isn’t known to be invariant under *any* of these transformations (even between real and complex amplitudes).

On the other hand, at least *one* of the models I talked about—the one-clean qubit model—is surprisingly robust (among other things, it equals the 2-clean-qubit model, the 1-clean-qutrit model, and so on).

To play devil’s-advocate, suppose BQP had been invented before quantum mechanics was discovered. Then I imagine someone would’ve said: “sure, that’s an interesting complexity class, but it obviously isn’t part of any consistent physical universe or computational universe, since you had to introduce this bizarre ‘measurement’ rule that isn’t part of the theory’s dynamical content…” 🙂

And yes, there would’ve been reasonable responses to such a person. All I’m saying is that, once you’ve decided to play this game of imagining how the “laws of efficient computation” could be other than we currently believe they are, you have to be willing to break *something*! And, if you always find yourself breaking too much, then maybe the end result really will be increased confidence that the laws couldn’t have been otherwise. But it’s not as if the effort that’s gone into thinking up BQP alternatives has been that enormous…

>A serious model of computation could be expressed as an entire range of complexity classes which must eventually be self-consistent in various ways.

If I understand correctly, C^C = C is one of these ways. What are the other ways you’re thinking of?
