To paraphrase one of the commenters: when do we get the quantum internet? I can’t wait for lolcats that are alive and dead at the same time.

Hi Scott, as you correctly said, there are non-linear phenomena in quantum physics just as there are in classical physics. You explained that studying these phenomena is harder, and that the linear aspects already exhibit highly non-trivial, exciting phenomena. As you explained, there is nothing mysterious about such non-linearity and nothing in conflict with the linearity of the Schrödinger equation. (Indeed, there are likewise interesting non-linear aspects of probability theory.) So the fact that there are non-linear phenomena in quantum physics, just as in classical physics, is not mysterious; but non-linear phenomena in quantum physics are a large uncharted scientific territory which may hold many mysteries. Now, all the models of noise/errors regarding quantum computation have ingredients which are non-linear. The dependence of the rate of noise on the rate of computation is a non-linear phenomenon, even for very mundane models that allow fault-tolerant quantum computation, as well as for noise models like mine which probably do not allow fault-tolerance.

Does your duck example, Scott, also apply to the three-sexed, five-headed creatures that “almost certainly” have discovered “Zork’s bloogorithm,” described on this 2D blog? 🙂 Do you believe that *everything* written on paper can be lifted up to reality?

Regarding Robert Alicki’s point of view, a recent YouTube lecture informally describes Alicki’s recent arXiv:1305.4910 paper, which puts forward the idea of a fundamental conflict between stability and reversibility (also relevant to the discussion on quantum computing).

However, the whole point of my explanation was that there’s *nothing mysterious whatsoever* about this linear/nonlinear issue! It results entirely from a confusion between abstraction layers: the Schrödinger equation is about an amplitude vector rotating through Hilbert space, while classical physics is typically formulated in terms of particles moving around in real space. It would be as if we were soberly discussing how to reconcile the word “duck” printed on a piece of paper, with the fact that real ducks are *not* made of paper—and then speculating that this paradox might forever prevent ducks from quacking. Well, maybe ducks *can’t* quack (I’m just a layperson, not an ornithologist 🙂 )—but asking about “the large, uncharted territory of 3-dimensional effects in real ducks, as compared to the 2-dimensionality of the *word* ‘duck’ written on paper” seems like an unpromising start to figuring out why. 😀

“Using this approach, coherence times up to 82 ms have been reported [148]. This time was further increased to 30 s by adding dynamic decoherence control [149].”

#73

The possibility that you cite, “So attempts to build scalable quantum computers will fail and yet they will not leave us with anything to show for our efforts,” does look likely. But, true, we should try to learn lessons from our failure (provided there are lessons to learn).

I think that the coming failure is somehow related to the question: why can any physical quantity be measured only with limited precision?

When I tell this to mathematicians, they automatically respond that precision is just a question of time and resources (they have in mind the number of digits of pi that one can calculate). In physics things are different: one will NEVER be able to measure, say, temperature with a precision of 10^(-12).

As to the very interesting linear vs nonlinear issue in quantum mechanics, I am not ready to discuss it professionally, but I do believe that it might be important.

Quantum memories do not last, and cannot last forever, even with continuous error correction. The error-correcting code has operators associated with it which cause transitions between the encoded states, and if we have independent noise on each qubit, there is always some finite probability of making such a transition within an error-correction cycle. The trick is to choose an encoding with a large distance (meaning that the operators that transition between logical states act non-trivially on a large number of qubits), and then rely on the binomial distribution to keep such high-weight errors improbable. So we do need to decide on a target lifetime for our qubits.
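The binomial argument above can be made concrete with a short sketch (my own illustration, not taken from any paper cited in this thread): with independent per-qubit error probability p, a code of distance d fails within a cycle only when more than t = (d−1)/2 of its n qubits err, so the failure probability is a binomial tail.

```python
from math import comb

def logical_failure_prob(n, d, p):
    """Probability of an uncorrectable pattern in one correction cycle:
    more than t = (d - 1) // 2 of the n qubits suffer an error,
    with independent per-qubit error probability p."""
    t = (d - 1) // 2
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(t + 1, n + 1))

# Larger distance suppresses the tail at fixed p (code parameters here
# are just familiar examples: [[7,1,3]] and [[23,1,7]]):
for n, d in [(7, 3), (23, 7)]:
    print(n, d, logical_failure_prob(n, d, 0.01))
```

The tail is never exactly zero, which is the point of the paragraph above: a finite logical error probability per cycle remains, so one must pick a distance matched to a target lifetime.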

In the thesis I linked to, the author finds that two levels of the Steane [[23,1,7]] code suffice to protect the qubits long enough to factor a 1024-bit number. I would suggest that this is a proper answer to the question, not as asked, but as it might reasonably be reformulated (i.e., how do you protect a qubit long enough to perform x logical gates, where x is sufficient to accomplish some task of interest).
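To see roughly why two levels of concatenation can be enough, here is a hedged sketch of the standard concatenation scaling (the threshold p_th and physical error rate p below are illustrative assumptions, not numbers from the thesis): for a code correcting t errors per block, L levels of concatenation give roughly p_L ≈ p_th · (p/p_th)^((t+1)^L) when p is below the threshold p_th.

```python
def concatenated_error(p, p_th, t, levels):
    """Rough standard scaling of the logical error rate for `levels`
    levels of concatenation of a code correcting t errors, assuming
    the physical error rate p is below the threshold p_th."""
    return p_th * (p / p_th) ** ((t + 1) ** levels)

# With assumed numbers p = 1e-4 and p_th = 1e-3, a t = 3 code
# (e.g. the distance-7 [[23,1,7]] code) concatenated once vs. twice:
for L in [1, 2]:
    print(L, concatenated_error(1e-4, 1e-3, 3, L))
```

The double-exponential suppression in L is what makes a mere two levels plausible for a task as long as factoring, under the stated assumptions.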

A thousand decoherence times is a bit of a moving target, since the exact encoding you need depends not just on this factor of 1000 but also on the ratio of gate time to decoherence time (and potentially on T1 vs. T2), and it is only possible if your gate times are much shorter than the decoherence time (since this ratio effectively gives you the error rate).
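As a back-of-the-envelope illustration of that last point (the gate and coherence times below are made-up numbers, not data from any experiment mentioned here): the effective per-gate error rate is set roughly by the ratio of gate time to coherence time, p ≈ 1 − exp(−t_gate/T2) ≈ t_gate/T2 when t_gate ≪ T2.

```python
from math import exp

def per_gate_error(t_gate, t2):
    """Rough per-gate error from decoherence during the gate,
    modeling coherence decay as exp(-t / T2)."""
    return 1.0 - exp(-t_gate / t2)

# Fast gates relative to T2 are what make error correction viable:
print(per_gate_error(1e-7, 1e-4))  # t_gate/T2 = 1e-3: small error per gate
print(per_gate_error(1e-5, 1e-4))  # t_gate/T2 = 0.1: likely above threshold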

But if it is only a factor of 1000 you need, then it might be achievable by dynamic decoupling, decoherence-free subspaces, or other means, depending on the system. I believe quite a few qubit systems have seen decoherence times increased by factors of this magnitude through clever strategies to combat noise. The point of using quantum error correction is that we can choose any lifetime we desire and pick an encoding which protects our qubits for that length of time.

I am afraid there is some (mutual) misunderstanding.

Joe Fitzsimons #67 says: “I think your ‘simple challenge’ is in fact anything but simple. As I understand your challenge, you are asking to be pointed to a paper showing how to construct a logical qubit which never succumbs to noise.”

A logical qubit which never succumbs to noise, BECAUSE of continuous error correction. I believe that this is what quantum memory means. Am I wrong?

It is believed that with error correction, indefinitely long quantum computation can be performed, and even “arbitrarily accurately.” Am I correct that a trivial case is to indefinitely store just one qubit by using appropriate error correction?

If not indefinitely, I would be content to save it for 1000 decoherence times.

THIS is the specific system I have in mind and the specific task that I wish to accomplish. Is it asking too much?

The challenge consists in providing a detailed description of the required sequence of elementary operations or, if this has already been done, in providing a reference.
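Just to fix ideas about what a “sequence of elementary operations” looks like in the simplest possible case, here is a toy sketch (my own, and much weaker than what the challenge asks for: it handles only classical bit flips, not phase errors, for which one would need a genuinely quantum code such as the [[9,1,3]] Shor code): the 3-qubit repetition code storing one basis-state logical bit, with repeated majority-vote correction of independent bit flips.

```python
import random

def store_with_correction(cycles, p_flip, rng):
    """Encode logical 0 as (0, 0, 0); each cycle, every qubit flips
    independently with probability p_flip, then a correction step
    projects back onto the codespace by majority vote. Returns the
    stored logical value after all cycles (0 = success)."""
    qubits = [0, 0, 0]
    majority = 0
    for _ in range(cycles):
        qubits = [q ^ (rng.random() < p_flip) for q in qubits]  # noise step
        majority = int(sum(qubits) >= 2)                        # decode
        qubits = [majority] * 3                                 # correct
    return majority

# Empirical logical error rate after 100 cycles at p_flip = 1%
# (per-cycle failure needs >= 2 simultaneous flips, roughly 3 * p^2):
rng = random.Random(0)
trials = 2000
failures = sum(store_with_correction(100, 0.01, rng) for _ in range(trials))
print(failures / trials)
```

Even this toy shows the structure of an answer to the challenge: an encoding, a noise model, and an explicit repeated correction operation, with a quantifiable storage lifetime. A real quantum memory must do the same with syndrome measurements on ancilla qubits rather than direct majority votes.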

]]>“You [Michel] ask about how to reconcile the nonlinearity of classical physics with the linearity of the Schrödinger equation. While this isn’t directly relevant to the “main” discussion, I think I can answer that. Quantum mechanics is linear only at the level of the *amplitude vectors*. It can be just as nonlinear as classical physics at the level of measurable objects, whether they’re qubits, electrons in a field, or whatever. Conversely, if you formulate (say) classical statistical physics abstractly enough, as a theory of transformations of probability vectors into other probability vectors by means of stochastic matrices, then it too will be linear; it’s only nonlinear if you look at the actual elements of the ensemble rather than at the distributions. Therefore, this isn’t a classical vs. quantum issue at all, but simply an issue of which level of description you’re interested in. (It so happens that in the quantum case, people more often prefer the abstract level of description, both because that level already contains the highly-nontrivial phenomenon of interference, and because a “concrete” description tends to be difficult or impossible. But again, classical probability is just as linear as quantum probability when you formulate them in a parallel way.)”

I remember reading at the time this paragraph several times and finding it completely reasonable! Scott talked about a large uncharted territory of non-linear phenomena in quantum physics (just like in classical physics), and explained that people mainly studied the abstract “linear” level as it already exhibited highly-nontrivial phenomena and because moving forward is “difficult or impossible”.

Of course, when we try to implement quantum computation we may not be able to shut off the large uncharted territory of nonlinear effects in quantum physics whose study is “difficult or impossible.” My interpretation of Michel Dyakonov’s opinion (but correct me if I am wrong, Michel) is that such non-linear phenomena will exclude quantum computation (or anything remotely close to it) for many ad-hoc reasons that cannot be clearly put into principles. So attempts to build scalable quantum computers will fail, and yet they will not leave us with anything to show for our efforts. This is certainly a possibility. Robert Alicki’s opinion is that thermodynamics is the crucial area where obstacles to quantum computation come from, and his research is centered around quantum thermodynamics. John Sidles raised (over here, and previously) the possibility that further understanding of the foundations of particle physics will eventually reflect negatively on the possibility of quantum computers. My point of view is that understanding the major phase transition reflected by quantum fault-tolerance, which I call *the fault-tolerance barrier,* is crucial, and that the impossibility of crossing this barrier is a basic physics principle that needs to be formalized and explored. (Of course, Robert’s attitude and mine are complementary, and neither is in contradiction with John’s view.)

So my expectation is that the novel concepts of quantum error-correction and quantum fault-tolerance will not lead to the construction of superior quantum computers, but rather will supply the theoretical basis needed to move forward in these “difficult or impossible” theoretical areas of non-linear aspects of quantum physics, and will eventually explain why quantum computers are not possible.

Regarding my debate with Aram Harrow mentioned by Jay, I would recommend reading the debate posts themselves. My summaries are personal, subjective, and very non-technical. (They also attempt to pick entertaining moments.) Let me mention that for me personally, while exploring the secrets of our computational quantum world is a terrific challenge, understanding how different people (sometimes coming from different disciplines) *think* about the issue is very exciting as well. I really try to understand how different people view this matter, especially since progress both in theory and in experiments is slow and it will take years for things to unfold. It is also surprising how many different issues are related to the debate regarding quantum computers (see this post and its follow-ups), and even the free-will question is somehow related 🙂 .

I do not ask for anything different from the standard error-correcting methods; I just want them applied to just one logical qubit (not thousands).

#65: You must have n > 4 systems with two basic values encoded in particular states of those systems. You must also have a permanent supply of auxiliary qubits/systems in fixed/zero states to “transfer” noise from your “memory” to those auxiliary qubits.

I will accept whatever you want; just tell me what you want to do to keep my logical qubit intact. This is called quantum memory (in its simplest form).
