
“So we need some realistic error model which is between the extremes of nearly local, independent, errors and completely arbitrary errors. If somebody can come up with a good model of error, we can start trying to decide if it is possible to protect against these errors.”

I agree with this as well. I think that we differ in what we regard as a “realistic error model,” not so much regarding the term “realistic” but rather regarding the term “model.” We may return to this sometime.

It can be useful to relate this to Michel Dyakonov’s paper discussed in the post. Michel suggests (implicitly) that the line between realistic and non-realistic quantum systems is very complicated, that scalable quantum computers lie well within the non-realistic area, that this may be related to the uncharted territory of non-linear behavior of quantum systems (see point no. 4 in Scott’s remark 133), and that it is quite possible that there is no exciting physics to be learned from the whole episode. Michel’s scenario is certainly a possibility, but I tend to disagree with his assessment regarding the relevance to physics. I expect a clear dividing line, described by quantum fault-tolerance, for quantum systems. It remains to be seen whether this dividing line represents the gap between today’s and tomorrow’s technology, or the gap between today’s and tomorrow’s understanding.

The standard model of error for quantum computation says that the rate of errors in a time interval is proportional to the length of the interval times the number of qubits, which is indeed more than the amount of computation. So the standard quantum fault-tolerance models cover this case. The problem is when the errors are not independent. If you just take John’s model of error bounded by epsilon independent of locality, this allows for malicious errors, and even for classical fault-tolerant computation, we can’t handle these. (This is unlike classical error-correcting codes, which can handle this model, but which are usually evaluated using a much more realistic random error model.)

So we need some realistic error model which is between the extremes of nearly local, independent, errors and completely arbitrary errors. If somebody can come up with a good model of error, we can start trying to decide if it is possible to protect against these errors.
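To make the scaling in the standard model concrete, here is a toy sketch (my own illustration; the model and numbers are assumptions, not from the comment): each qubit independently suffers an error with probability p in each time step, so the expected number of errors over an interval is p × (number of qubits) × (number of steps), i.e., proportional to the length of the interval times the number of qubits.

```python
import random

# Toy independent-error model (illustrative assumption): each qubit errs
# with probability p in each time step, independently of everything else.
def expected_errors(p, num_qubits, num_steps):
    # Expected error count: p * qubits * steps.
    return p * num_qubits * num_steps

def simulated_errors(p, num_qubits, num_steps, trials=2000, seed=0):
    # Monte Carlo check of the analytic formula.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        for _ in range(num_qubits * num_steps):
            if rng.random() < p:
                total += 1
    return total / trials

p, n, t = 0.01, 10, 50
print(expected_errors(p, n, t))  # 5.0
print(simulated_errors(p, n, t))  # close to 5.0
```

Doubling the interval length or the qubit count doubles the expected error count, which is the proportionality Peter describes.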

(re: 223) Of course it is reasonable to assume that the errors are not malicious, but there is a difference between “adversarial” and “malicious.” If you take your bicycles for a ride around the block and suddenly change your mind and decide to use them for a spontaneous manned mission to Mars, you may encounter some difficulties. This is not because the gods are reading your mind and maliciously trying to make you fail, but because you need different equipment to go to Mars.

It is also useful to note that even in the most standard models of errors, there are dependencies of the errors on the evolution which may have adversarial effects. For example, the rate of errors in a time interval is not taken to be proportional to the length of the interval, but rather to the amount of computation (number of computer cycles) performed in that interval.
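The distinction between the two scalings can be made concrete with a small sketch (my own illustration; the names and numbers are assumptions): a “time model” budgets errors per qubit per unit time, while a “work model” budgets errors per gate executed. The two agree when every qubit is busy every step, and diverge when the circuit mostly idles.

```python
# Two ways to budget errors over an interval (illustrative toy models):
#   time model: expected errors = p * qubits * duration
#   work model: expected errors = p * gates_executed
def errors_time_model(p, num_qubits, duration):
    return p * num_qubits * duration

def errors_work_model(p, gates_executed):
    return p * gates_executed

p = 0.001
qubits, duration = 100, 1000
busy_gates = qubits * duration        # every qubit active every step
idle_gates = qubits * duration // 10  # circuit active only 10% of the time

print(errors_time_model(p, qubits, duration))  # 100.0
print(errors_work_model(p, busy_gates))        # 100.0
print(errors_work_model(p, idle_gates))        # 10.0
```

In the work model, a circuit that computes more in the same wall-clock time accumulates more errors, which is the evolution-dependence Gil points to.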

* Adversary gets to specify any noise operator whatsoever (unbounded, nonlocal, nonunitary, etc.), and you know in advance what it is: scalable QC is impossible. (Why? The adversary could just replace everything by the maximally mixed state.)

* Adversary gets to specify an unbounded, nonlocal noise operator, but it needs to be unitary, and you know in advance what it is: scalable QC is still impossible. (Why? The adversary could scramble the state in such a complicated way that you had no hope of unscrambling it.)

* Adversary gets to specify a bounded, nonlocal noise operator, which need not be unitary, and you know in advance what it is: I don’t think any known fault-tolerance theorems cover this case, but I also don’t see an obvious reason why scalable QC **must** be impossible here. Same thing if the operator is unitary.

* Adversary gets to specify an unbounded, local noise operator, which need not be unitary, and you know in advance what it is: scalable QC is impossible. (Why? Because the adversary can still replace your state by the maximally mixed state using local operations only.)

* Adversary gets to specify an unbounded, local noise operator which is *unitary*, and you know in advance what it is: scalable QC is **possible**. (Why? Because knowing the local unitary errors, you can simply reverse them yourself!)

* Adversary gets to specify a bounded, local noise operator which is nonunitary, and you know in advance what it is: scalable QC is **possible**. (This is precisely the case covered by the standard fault-tolerance theorems.)
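The “unbounded, local, unitary, known in advance” case above can be checked directly in a toy simulation (my own sketch; the noise unitaries are arbitrary choices, not from the comment): if the adversary applies a known single-qubit unitary to each qubit, we recover the state exactly by applying the inverse unitaries.

```python
import numpy as np

def random_unitary(rng):
    # QR decomposition of a random complex matrix yields a 2x2 unitary;
    # the phase fix makes the construction well-defined.
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(m)
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(42)
state = np.zeros(4, dtype=complex)
state[0] = 1.0  # two-qubit state |00>

# Known local (per-qubit) unitary noise: U1 on qubit 1, U2 on qubit 2.
u1, u2 = random_unitary(rng), random_unitary(rng)
noise = np.kron(u1, u2)

corrupted = noise @ state
recovered = noise.conj().T @ corrupted  # apply (U1 (x) U2)^dagger

print(np.allclose(recovered, state))  # True
```

Reversal works no matter how large the local rotations are, which is why boundedness is irrelevant in this case: knowledge of the unitary, plus locality, is enough.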

Modulo the “bounded, nonlocal” case, I believe that covers all the cases where the noise operator is known to us in advance. I would make a similar list for the case where the noise operator is chosen adversarially only *after* you specify the quantum computation, but (a) you’ve already said in comment #223, quite sensibly, that you’re willing to assume non-adversarial noise, and (b) in 5 minutes I need to get into a cab with Peter Shor to go back to LAX.

Which reminds me: I’m sorry I forgot to ask John Preskill about adversarial noise! But now I’ll have a whole cab ride to ask Peter about it.

**Executive Summary:** The “combinatorial” list above might make the situation seem confusing. But an excellent way to summarize the matter is that, at present, we don’t know any noise models that definitely kill scalable QC, except for those that are so strong that they also kill scalable *classical* computation. Furthermore, of the noise models that *don’t* kill classical computation, many (most?) of them are now known, because of the threshold theorems, not to kill scalable QC either. There might be a little bit of gray area, though (e.g., the bounded, nonlocal case), which would be great to clarify.