## Happy New Year! My response to M. I. Dyakonov

A couple weeks ago M. I. Dyakonov, a longtime quantum computing skeptic, published a new paper setting out his arguments (maybe “grievances” is a more accurate word) against quantum computing research. Looking for a way to procrastinate from other work I have to do, I decided to offer some thoughts in response.

To me, perhaps the most striking aspect of Dyakonov’s paper is what it *doesn’t* claim. Unlike Leonid Levin, Oded Goldreich, and several other quantum computing skeptics I’ve engaged, Dyakonov never seriously entertains the possibility of a general principle that would explain why scalable quantum computing is *not* possible. (Thus, my $100K prize presumably isn’t relevant to him.) He even ridicules discussion of such a principle (see the end of this post). The unwillingness to say that scalable QC *can’t* work, or to articulate a reason why, saves Dyakonov from the need to explore what else would need to be true about the physical world if scalable QC were impossible. For example, would there then be an efficient algorithm to simulate arbitrary quantum systems on a classical computer—or at least, all quantum systems that can plausibly arise in Nature? Dyakonov need not, and does not, evince any curiosity about such questions. In his game, it’s only the quantum computing proponents who are on trial; there’s no need for examination of the other side.

That being so, Dyakonov focuses on what he sees as unrealistic assumptions in known versions of the Quantum Fault-Tolerance Theorem, covering well-trodden ground but with some strange twists. He accuses quantum computing researchers of a “widespread belief that the |0〉 and |1〉 states ‘in the computational basis’ are something absolute, akin to the on/off states of an electrical switch, or of a transistor in a digital computer.” He then follows with a somewhat-patronizing discussion of how no continuous quantity can be manipulated perfectly, and how |0〉 and |1〉 are just arbitrary labels whose meanings could change over time due to drift in the preparation and measurement devices. Well, yes, it’s obvious that |0〉 and |1〉 don’t have absolute meanings, but is it not equally obvious that we can *give* them meanings, through suitable choices of initial states, gates, and measurement settings? And if the meanings of |0〉 and |1〉 drift over time, due to the imprecision of our devices … well, if the amount of drift is upper-bounded by some sufficiently small constant, then we can regard it as simply yet another source of noise, and apply standard fault-tolerance methods to correct it. If the drift is unbounded, then we do need better devices.
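The claim that bounded drift is just another correctable noise source can be illustrated with a toy calculation (my own sketch, not anything from Dyakonov or the fault-tolerance literature): model the drift as a small systematic over-rotation that flips a qubit with probability p, and check that even the simplest code, the 3-qubit repetition code with majority voting, suppresses the failure rate from p to roughly 3p²:

```python
import math
import random

def logical_error_rate(theta, trials=100_000, seed=1):
    """Toy model: a systematic over-rotation by angle theta (radians) flips a
    qubit with probability p = sin^2(theta/2).  A 3-qubit repetition code with
    majority-vote decoding fails only when >= 2 of the 3 qubits flip, so the
    logical error rate is roughly 3p^2 - 2p^3, far below p when p is small."""
    p = math.sin(theta / 2) ** 2
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(3))
        failures += flips >= 2
    return p, failures / trials

p, p_logical = logical_error_rate(0.2)   # ~11.5 degrees of drift per gate
assert p_logical < p                     # encoding helps once p is small enough
```

With the hypothetical drift angle of 0.2 radians, p comes out near 1%, while the encoded failure rate lands near 3p² ≈ 0.03%; concatenating the code drives it down further, which is the whole content of the threshold theorem.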

(Fault-tolerance mavens: please use the comments for more detailed discussion! To my inexpert eyes, Dyakonov doesn’t seem to engage the *generality* of the already-known fault-tolerance theorems—a generality traceable to the fact that what powers those results is ultimately just the linearity of quantum mechanics, not some fragile coincidence that one expects to disappear with the slightest change in assumptions. But I’m sure others can say more.)

Anyway, from his discussion of fault-tolerance, Dyakonov concludes only that the possibility of scalable quantum computing in the real world should be considered an open question.

Surprisingly—since many QC skeptics wouldn’t be caught dead making such an argument—Dyakonov next turns around and says that, well, OK, fine, even if scalable QCs *can* be built, they still won’t be good for much. Shor’s factoring algorithm is irrelevant, since people would simply switch to other public-key cryptosystems that appear secure even against quantum attack. Simulating quantum physics “would be an interesting and useful achievement, but hardly revolutionary, unless we understand this term in some very narrow sense.” And what about Grover’s algorithm? In an endnote, Dyakonov writes:

> Quantum algorithms that provide (with an ideal quantum computer!) only polynomial speed-up compared to digital computing, like the Grover algorithm, became obsolete due to the polynomial slow-down imposed by error correction.

The above is flat-out mistaken. The slowdown imposed by quantum error-correction is polylogarithmic, not polynomial, so it doesn’t come close to wiping out the Grover speedup (or the subexponential speedups that might be achievable, e.g., with the adiabatic algorithm, which Dyakonov doesn’t mention).
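For concreteness, here is a back-of-envelope check (with invented constants; the point is only the asymptotics) that a polylogarithmic overhead cannot wipe out the Grover speedup: even if each of the ~√N Grover iterations pays a hypothetical fault-tolerance overhead of 100·(log₂N)³, the total still falls below the classical cost of N once N is large enough.

```python
import math

# Searching N = 2^n items: classical brute force costs ~N, while Grover costs
# ~sqrt(N) iterations, each multiplied here by a made-up polylog(N) = 100 * n^3
# fault-tolerance overhead.
def classical_cost(n):
    return 2.0 ** n

def grover_cost(n):
    return math.sqrt(2.0 ** n) * 100 * n ** 3

# At small sizes the overhead dominates ...
assert grover_cost(30) > classical_cost(30)
# ... but sqrt(N) * polylog(N) = o(N), so Grover wins for all large enough n.
for n in (60, 80, 100):
    assert grover_cost(n) < classical_cost(n)
```

A *polynomial* slowdown, by contrast, e.g. a factor of √N rather than polylog(N), really would cancel the Grover speedup, which is presumably the confusion behind the quoted endnote.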

But disregarding the polylog/polynomial confusion (which recurs elsewhere in the article), and other technical issues about fault-tolerance, up to this point *many quantum computing researchers could happily agree with Dyakonov—and have said similar things many times themselves*. Dyakonov even quotes Dorit Aharonov, one of the discoverers of quantum fault-tolerance, writing, “In a sense, the question of noisy quantum computation is theoretically closed. But a question still ponders our minds: Are the assumptions on the noise correct?”

(And as for QC researchers coming clean about *limitations* of quantum computers? This is just hearsay, but I’m told there’s a QC researcher who actually chose “Quantum computers are not known to be able to solve NP-complete problems in polynomial time” as the tagline for his blog!)

Dyakonov fumes about how popular articles, funding agency reports, and so forth have overhyped progress in quantum computing, leaving the conditions out of theorems and presenting incremental advances as breakthroughs. Here I sadly agree. As readers of *Shtetl-Optimized* can hopefully attest, I’ve seen it as my professional duty to spend part of my life battling cringeworthy quantum computing claims. Every week, it feels like I talk to another journalist who tries to get me to say that this or that QC result will lead to huge practical applications in the near future, since that’s what the editor is looking for. And every week I refuse to say it, and try to steer the conversation toward “deeper” scientific questions. Sometimes I succeed and sometimes not, but at least I never hang up the phone feeling dirty.

On the other hand, it would be interesting to know whether, in the history of science, there’s *ever* been a rapidly-developing field, of interest to large numbers of scientists and laypeople alike, that *wasn’t* surrounded by noxious clouds of exaggeration, incomprehension, and BS. I can imagine that, when Isaac Newton published his *Principia*, a Cambridge University publicist was there to explain to reporters that the new work proved that the Moon was basically an apple.

But none of that is where Dyakonov loses me. Here’s where he does: from the statements

**A) The feasibility of scalable quantum computing in the physical world remains open,** and

**B) The applications of quantum computing would probably be real but specialized**,

he somehow, unaided by argument, arrives at the conclusion

**C) Quantum computing is a failed, pathological research program, which will soon die out and be of interest only to sociologists.**

Let me quote from his conclusion at length:

> I believe that, in spite of appearances, the quantum computing story is nearing its end, not because somebody proves that it is impossible, but rather because 20 years is a typical lifetime of any big bubble in science, because too many unfounded promises have been made, because people get tired and annoyed by almost daily announcements of new “breakthroughs”, because all the tenure positions in quantum computing are already occupied, and because the proponents are growing older and less zealous, while the younger generation seeks for something new …
>
> In fact, quantum computing is not so much a scientific, as a sociological problem which has expanded out of all proportion due to the US system of funding scientific research (which is now being copied all over the world). While having some positive sides, this system is unstable against spontaneous formation of bubbles and mafia-like structures. It pushes the average researcher to wild exaggerations on the border of fraud and sometimes beyond. Also, it is much easier to understand the workings of the funding system, than the workings of Nature, and these two skills only rarely come together.
>
> The QC story says a lot about human nature, the scientific community, and the society as a whole, so it deserves profound psycho-sociological studies, which should begin right now, while the main actors are still alive and can be questioned.

In case the message isn’t yet clear enough, Dyakonov ends by comparing quantum computing to the legend of Nasreddin, who promised the Sultan that he could teach a donkey how to read.

> Had he [Nasreddin] the modern degree of sophistication, he could say, first, that there is no theorem forbidding donkeys to read. And, since this does not contradict any known fundamental principles, the failure to achieve this goal would reveal new laws of Nature. So, it is a win-win strategy: either the donkey learns to read, or new laws will be discovered.
>
> Second, he could say that his research may, with some modifications, be generalized to other animals, like goats and sheep, as well as to insects, like ants, gnats, and flies, and this will have a tremendous potential for improving national security: these beasts could easily cross the enemy lines, read the secret plans, and report them back to us.

Dyakonov chose his example carefully. Turnabout: consider the first person who had the idea of domesticating a wild donkey, teaching the beast to *haul people’s stuff on its back*. If you’d never seen a domestic animal before, *that* idea would sound every bit as insane as donkey literacy. And indeed, it probably took hundreds of years of selective breeding before it worked well.

In general, if there’s no general principle saying that X can’t work, the truth *might* be that X can probably never work, but the reasons are too messy to articulate. Or the truth might be that X *can* work. How can you ever find out, except by, y’know, *science*? Try doing X. If you fail, try to figure out why. If you figure it out, share the lessons with others. Look for an easier related problem Y that you can solve. Think about whether X is impossible; if you could *show* its impossibility, that might advance human knowledge even more than X itself would have. If the methods you invented for X don’t work, see if they work for some other, unrelated problem Z. Congratulations! You’ve just reinvented quantum computing research. Or really, any kind of research.

But there’s something else that bothers me about Dyakonov’s donkey story: its *specificity*. Why fixate on teaching a *donkey*, only a donkey, how to read? Earlier in his article, Dyakonov ridicules the diversity of physical systems that have been considered as qubits—electron spin qubits, nuclear spin qubits, Josephson superconducting qubits, cavity photon qubits, etc.—seeing the long list as symptomatic of some deep pathology in the field. Yet he never notices the tension with his donkey story. Isn’t it obvious that, if Nasreddin had been a quantum computing experimentalist, then after failing to get good results with donkeys, he’d simply turn his attention to teaching cows, parrots, snakes, elephants, dolphins, or gorillas how to read? Furthermore, while going through the zoo, Nasreddin might discover that he *could* teach gorillas how to recognize dozens of pictorial symbols: surely a nice partial result. But maybe he’d have an even better idea: why not *build his own reading machine*? The machine could use a camera to photograph the pages of a book, and a computer chip to decode the letters. If one wanted, the machine could even be the size and shape of a donkey, and could emit braying sounds. Now, maybe Nasreddin would fail to build this reading machine, but even then, we know today that it would have been a noble failure, like those of Charles Babbage or Ted Nelson. Nasreddin would’ve failed only by being too far ahead of his time.

**Update (Jan. 7):** See Dyakonov’s response to this post, and my response to his response.

Comment #1 January 2nd, 2013 at 2:58 am

Do you know any good, and readable, studies on research/funding bubbles? That sounds more interesting.

Comment #2 January 2nd, 2013 at 6:03 am

But Scott, for that you need money to survive, and no one would give a grant to Nasreddin to build a reading machine. No one would even understand what he is talking about; or, as with Hero’s steam engine, it would not find any practical application, again for sociological reasons. You banned my comment as nonsense, where I said that I could build a fusion device in 10–15 years given just enough resources to support my family during that time and to concentrate on one thing, plus a minimal amount for experimental devices, e.g. ~$20 mln for the rest of my life. I’m not saying I’m Nasreddin; I’m saying you are not consistent in your statements, and that the granting agencies will behave similarly to you. I think this is the point of the long passage cited from the paper: granting is based not on the scientific outcome, but on the ability of some people to (continuously) convince granting agencies.

Comment #3 January 2nd, 2013 at 9:15 am

mkatkov: How could granting possibly be based on the “scientific outcome,” when no one knows the outcome at the time of the grant? It has to be based instead on various proxies, like the track record of those applying at doing related things in the past. In any case, I don’t have $20 million, but have no objection to you seeking it from someone else (Bill Gates? Yuri Milner? the NSF?) in order to build your fusion device. But further discussion of this is off-topic. Happy New Year!

Comment #4 January 2nd, 2013 at 10:23 am

Oh my. Dyakonov appears to have proved that useful digital computers are impossible. Transistors are manifestly analog components, with non-zero error rates and instabilities. Yet, instead of asking how many continuous variables it takes to describe the state of a digital computer, Dyakonov insists that the state of a transistor in a digital computer is absolute, thus completely missing a truly revolutionary result!
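Comment #4’s point can be made quantitative with a sketch (invented numbers, not anything from the comment): a “digital” bit is really an analog voltage plus noise, and thresholding makes the bit-error probability a Gaussian tail that becomes negligible once the signal-to-noise margin is even modestly large.

```python
import math

def bit_error_prob(v_signal=1.0, sigma=0.1):
    """A hypothetical 'analog' transistor outputs v_signal volts for a 1 and
    0 volts for a 0, plus Gaussian noise of standard deviation sigma.
    Thresholding at v_signal/2 mislabels the bit only when the noise exceeds
    half the swing, a Gaussian tail probability Q(z) = erfc(z/sqrt(2))/2."""
    z = (v_signal / 2) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

# With a 10-sigma margin the error probability is astronomically small,
# which is why we may pretend transistors are 'absolute' 0/1 switches.
p = bit_error_prob(1.0, 0.05)
assert p < 1e-20
```

The fault-tolerance theorems play the same restoring role for qubits that the threshold plays for transistors, which is exactly the analogy the comment is driving at.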

Comment #5 January 2nd, 2013 at 11:06 am

Scott #3: How could granting possibly be based on the “scientific outcome,” when no one knows the outcome at the time of the grant?

This is the whole point. And it seems that the granting agencies are following a variant of gradient descent in their policy, and this approach supports the search for a local minimum, which you know better than me. When one builds his/her reputation, one tries to reduce the risks. When one is young and does not yet have a reputation, he/she has the greatest chance of making significant advances (at least empirically). So the system is built on you, as adviser, believing in the potential of the student to advance your vision, and on your ability to convince the granting agency that you have a solid reputation. If you are outside the system you are lost, especially if you do not want to exaggerate your abilities.

I think you are a great writer, and you cover in your posts more than you may have intended to cover. So maybe you could consider loosening your subjective definition of off-topic.

Happy new year!

Comment #6 January 2nd, 2013 at 12:23 pm

Quantum mechanics has been around long enough that history holds plenty of lessons, and in particular many elements of what Clifford Truesdell called “the tragicomical history of thermodynamics” are being (regrettably) replayed within the quantum computing community.

Therefore, here is a modest proposal for restoring rationality, civility, and rigor to quantum computing discourse:

In this regard, the attention of *Shtetl Optimized* readers is drawn in particular to four arXiv articles:

(1) arXiv:quant-ph/0609159: “Does the Third Law of Thermodynamics hold in the Quantum Regime?” (Summary: yes, Nernst unattainability holds).

(2) arXiv:1208.1015: “Quantum bath refrigeration towards absolute zero: unattainability principle challenged” (Summary: no, Nernst unattainability does not hold).

(3) arXiv:0911.1584: “Dissipative Quantum Systems and the Heat Capacity Enigma” (Summary: we’re unsure whether Nernst unattainability holds).

(4) arXiv:1212.4501: “Performance bound for quantum absorption refrigerators” (Summary (?): in consequence of quantum nonlocality effects, the Nernst unattainability principle is ill-defined).

The semi-serious point is that, after a full century of experimental and theoretical work, upon a problem that is *much* easier than deciding the feasibility of quantum computing, the physics community has reached no consensus on a fundamental principle of thermodynamics. Which is a sobering historical lesson, eh?

That is why, before taking seriously anyone’s opinion regarding the feasibility of quantum computing, it is perhaps reasonable to apply the litmus test: “Is there any evidence that this person is reasonably qualified to offer an informed opinion upon any thermodynamical topic whatsoever?”

Carlton Caves, Charles Bennett, and John Preskill all surely belong to this elite thermodynamically-qualified group … it’s not clear that there are a whole lot of other qualified members.

Comment #7 January 2nd, 2013 at 2:32 pm

Happy New Year!

Seems there are a couple of math misconceptions, indeed, but there is also natural criticism of known problems around quantum computing activity. BTW, by Popper’s criterion, even the very fact of your bet is not an argument against such a kind of attack; i.e., even if the prize is never awarded, that is not necessarily a good sign, because “a pseudoscience may not be falsified”.

Comment #8 January 2nd, 2013 at 2:50 pm

Alexander #7: The point I keep trying to make is that the question, “what is the computational complexity of simulating the laws of physics?”, is not something quantum computing forced on the world. It was there before anyone ever seriously thought about quantum computing. And even if quantum computing research dies out—as Dyakonov and other skeptics hope and expect—the question will still be there, demanding an answer from anyone sufficiently curious. Is the complexity of simulating physics BPP? BQP? something in between? What’s the skeptics’ proposal for how to investigate such questions? If they have no alternative proposal, no hope for an alternative, and no interest in an alternative, then their ground for accusing others of “pseudoscience” seems weak.

Comment #9 January 2nd, 2013 at 3:15 pm

By including the word “thermodynamical” we allow for the simple answer “P”, and we thus embrace a middle ground between Gil Kalai and Aram Harrow, in which resources in P suffice to simulate ordinary computation processes (in which individual transistors are spatially localized and have temperatures) and ordinary linear optics experiments (in which individual diode sources/sinks are spatially localized and have temperatures), but not quantum computers or Aaronson-Arkhipov experiments (in which gate/detector/source dynamical states are sufficiently quantum-entangled as to possess neither spatial locality nor local entropy).

Is it feasible to create macroscopic systems having spatially delocalized entropy? That question is open, longstanding, and hard … and very fortunately, circumstances seldom require us to answer it!

Comment #10 January 2nd, 2013 at 3:18 pm

>Dyakonov concludes only that the possibility of scalable quantum computing in the real world should be considered an open question.

Many would agree that QM validity at the high entanglement frontier should be considered an open question. Topsy-dopsy to see this as a point against investigation.

>Simulating quantum physics “would be an interesting and useful achievement, but hardly revolutionary, unless we understand this term in some very narrow sense.”

Ok, I do understand this term in some very narrow sense. Feel free to show us anything more revolutionary.

>US system of funding scientific research (which is now being copied all over the world)

Nothing deep but as a non-USer this one was the funniest part, as in “The United Kingdom has long copied the language of the United States.”

Comment #11 January 2nd, 2013 at 3:53 pm

I am somewhat sympathetic with Dyakonov’s view that the theory community should focus more attention on nailing down the assumptions underlying the quantum accuracy threshold, and on relating these assumptions to realistic noise models.

However, his sweeping statements regarding how this issue has been ignored may reflect incomplete knowledge of the literature. For example, two of my papers that partially address some of his concerns are these:

Sufficient condition on noise correlations for scalable quantum computing

Fault-tolerant quantum computation versus Gaussian noise

There’s a lot more to be done, of course …

Comment #12 January 2nd, 2013 at 3:55 pm

John Sidles #9: Saying “very fortunately, circumstances seldom require us to answer it!” is a lot like saying “very fortunately, circumstances seldom require us to know the mass of the Higgs boson!”

In both cases, it depends on the circumstances. If you’re building technologies based on well-understood science, then both statements are obviously true. On the other hand, if your goal is to push forward the frontiers of knowledge, then you might spend enormous effort *creating* circumstances that *do* depend on the answers to these questions.

Comment #13 January 2nd, 2013 at 4:01 pm

Scott #11: “Yes, yes … you’re absolutely right!”

Comment #14 January 2nd, 2013 at 4:08 pm

John Preskill #11: Thanks for the refs!!

Comment #15 January 2nd, 2013 at 4:09 pm

John, surely the supposedly open question you pose has already been answered with slow light and similar experiments (i.e. quant-ph/0106057).

Comment #16 January 2nd, 2013 at 4:16 pm

I should clarify that my above comment was aimed at John Sidles rather than John Preskill, since both end their comments with references to open problems.

Comment #17 January 2nd, 2013 at 4:29 pm

Joe Fitzsimons #15(±1), the glib answer is that the dynamical trajectories of quant-ph/0106057 (and similar experiments) have been artfully pulled-back onto an embedded submanifold (of Hilbert space) having sufficiently low dimensionality that the thermodynamical Laws do not apply … and *that’s* why the quant-ph/0106057 class of experiments can be simulated with resources in P.

A little less glibly, it’s difficult even to conceive experiments that provably require exponential-dimension state-spaces to simulate … much less perform them … and that’s why I so greatly respect the Aaronson-Arkhipov BosonSampling class of experiments … even if (upon analysis and experience) they prove to be unscalable.

Comment #18 January 2nd, 2013 at 6:40 pm

I am also inclined to agree with Dyakonov that simulation of quantum systems is the most important known application for quantum computers, and that theorists should do a better job of characterizing the advantage of quantum computers in simulation problems, and of conceiving applications that might have high impact on science and engineering. Probably most people working in our field would agree.

Having just read through the whole paper more systematically (I had not noticed it until today), I am having difficulty distilling any concrete objection from Dyakonov’s critique of quantum error correction, other than the general feeling that it is misguided to prove theorems based on clearly articulated assumptions.

Dyakonov’s earlier papers here and here may have been a bit more useful in that respect; there he emphasized the importance of analyzing fault tolerance applied to spins subject to a realistic decoherence model with a suitable power spectrum of bath fluctuations. I attempted to move the theory in that direction in the papers I cited in my earlier Comment #11, though I would like to go further.

Dyakonov is a highly knowledgeable expert on spin physics. I would welcome suggestions from him or others regarding how theorists can make a more convincing case that scalable fault-tolerant quantum computing will work in the real world.

Comment #19 January 2nd, 2013 at 6:41 pm

Dyakonov’s analysis would benefit from citing the (many) successful research programs that (initially) over-promised and (in the short term) under-delivered:

Hmmmm … that’s a 500-to-1 lowball by Zworykin!

One *benefit* of Dyakonov’s analysis is that his article does pose an apt challenge: “It would be quite interesting to see an updated [QIST-style] roadmap for the next decade.”

History teaches us that the most valuable results from the past decade’s QIT/QC/QSE programs are very likely *already* in our hands … if we have the foresight to recognize their value.

Comment #20 January 2nd, 2013 at 10:09 pm

The book Dyakonov refers to in the very first sentence of his article happens to be edited by my supervisor. I would like to hear his comments on it too.

Comment #21 January 3rd, 2013 at 12:08 am

John: I’m not suggesting that it isn’t simulable. I was suggesting that it is an answer to your question “Is it feasible to create macroscopic systems having spatially delocalized entropy?”

Comment #22 January 3rd, 2013 at 6:01 am

Happy New Year Scott!! Don’t know whether it’s worth posting but this blog has been mentioned in the list of mathematics blogs here-> http://www.talkora.com/science/List-of-mathematics-blogs_112 (look for entry #3 in the list)

Comment #23 January 3rd, 2013 at 6:16 am

Joe #21, just to summarize some already-referenced papers:

[1] arXiv:quant-ph/0106057v2, Julsgaard, Kozhekin, Polzik, “Experimental long-lived entanglement of two macroscopic objects” (2001) doesn’t even *mention* the word “entropy.”

[2] arXiv:0810.4953, Ng and Preskill, “Fault-tolerant quantum computation versus Gaussian noise” (2008) briefly mentions entropy, but only in a broad informatic context, not in a thermodynamically specific 2nd Law and 3rd Law context.

[3] arXiv:quant-ph/0609159, R. F. O’Connell, “Does the Third Law of Thermodynamics hold in the Quantum Regime?” (2006) gives a thorough discussion of thermodynamic entropy, however in the context of a class of exactly-solvable noise models that may be insufficiently realistic for analyzing FTQC.

This everyone agrees with entirely! One broad objective is of course to unite the virtues of the above-mentioned articles: the experimental verification of [1], the broad relevance to FTQC of [2], and the non-perturbative rigor of [3].

History indicates that unifications are seldom easy or quick; e.g., the exact models of O’Connell and colleagues [3] give us concrete grounds to be respectful of nonperturbative renormalization effects that are associated to dynamically coupling/decoupling ancilla qubits from thermal reservoirs (regarding the cavity QED vacuum as itself a thermal reservoir). Such effects are notoriously difficult to calculate exactly, approximate reliably, and observe experimentally.

Comment #24 January 3rd, 2013 at 8:59 am

Hi Scott, you wrote (in #8): “The point I keep trying to make is that the question, ‘what is the computational complexity of simulating the laws of physics?’, is not something quantum computing forced on the world. It was there before anyone ever seriously thought about quantum computing.”

I certainly agree that this question is of great interest, and although the debate between Aram Harrow and me was mainly about quantum error-correction, this issue was also quite central. (I think that quantum error-correction is more closely related to the actual physics than computational complexity is, but let’s put this aside for now.)

It can be useful to point out one appealing (yet extreme) possibility regarding the answer. It is not just that the answer is BPP, but it is actually much stronger than that as follows:

- All the non-trivial computations around us require “classical control.” Non-trivial computations (at any scale) require a mechanism (like a repetition code) of classical information that emerges from the quantum mess around. Without such a mechanism, you cannot even come close to BPP.
- Nearly-pure evolutions can only be bounded depth (in fact, small depth), and pure states that cannot be approximated by bounded-depth quantum circuits cannot be realistically approximated at all. (So this can serve as a Sure/Shor separator.)

This possibility would be, of course, very very interesting and quite important, but it does not involve collapse or non-linearity of QM. (This is also a complicated issue…) The two extreme possibilities for our “computational universe” motivated the title of the last part of our debate: “Quantum supremacy or classical control.”

There are various other fascinating questions regarding simulating realistic physical evolutions. (Those were raised in the debate but were not central to it, and we did not get to some of them.) For example, does the impossibility of “quantum computational supremacy” imply that all physical processes can be simulated classically, and if yes, what is the algorithm? And what can be inferred about computations from physics (mainly from quantum field theory) that require exponential time on our digital computers? There are some related questions as well; we may come back to them.

Comment #25 January 3rd, 2013 at 9:54 am

Scott #8, I may only encourage the point, but it is not widespread and I could imagine situation, then such computational complexity simply may not be defined, because the World is too complex to use ideas from our “teenager” CS.

Comment #26 January 3rd, 2013 at 10:20 am

Alexander #25:

I could imagine situation, then such computational complexity simply may not be defined, because the World is too complex to use ideas from our “teenager” CS.

I don’t understand what that means. You could walk up to any scientist trying to make actual progress on something, and say, “ah, but the World is so incomparably deeper than your ‘teenage’ mind can fathom” … well, alright then, let’s hear some better ideas!

Also, how is the “teenager” CS ever going to grow up or become more sophisticated, if we don’t try to develop it? So, as Jay #10 put it, it seems “topsy-dopsy” to see any of this as an argument *against* quantum computing research.

Comment #27 January 3rd, 2013 at 11:14 am

The very existence of CS is a demonstration of a whole set of new problems that exist only because of computers; even brilliant people could not realize those problems before, despite the fact that, formally, they could have been formulated even before the computer.

Comment #28 January 3rd, 2013 at 11:28 am

John, I must admit I am somewhat flummoxed by your phrase “spatially delocalized entropy”. Since we can define an entropy for any system, I took it to be short-hand for a system composed of two subsystems such that the entropy of the system as a whole is less than the sum of the entropy of the subsystems. This was the only sensible meaning I could extract from the phrase, but perhaps that is showing some deep naivety. If this is indeed what you meant, then showing entanglement is necessarily a demonstration that you have created such a state, independent of whether the authors chose to use the word “entropy” in their paper, since for entangled states the entropy of the joint system is lower than the sum of its parts.
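The statement that an entangled joint state carries less entropy than the sum of its parts can be verified numerically in a few lines (my illustration, assuming the standard von Neumann entropy): for a Bell pair the joint entropy is 0, while each marginal is maximally mixed and carries one full bit.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # convention: 0 log 0 = 0
    return float(-(evals * np.log2(evals)).sum())

# Bell state (|00> + |11>)/sqrt(2): pure, so the joint entropy is 0 ...
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_ab = np.outer(psi, psi.conj())

# ... but tracing out subsystem B leaves a maximally mixed qubit.
rho_a = np.trace(rho_ab.reshape(2, 2, 2, 2), axis1=1, axis2=3)

S_ab = von_neumann_entropy(rho_ab)   # 0 bits
S_a = von_neumann_entropy(rho_a)     # 1 bit
assert S_ab < S_a + S_a              # S(AB) < S(A) + S(B)
```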

Comment #29 January 3rd, 2013 at 11:28 am

Alexander: But that leaves out the other half of the chicken-and-egg story! Computers didn’t just fall from the sky: it was only because of a prior recognition of *problems* requiring massive computation, that people invented computers in the first place. Of course the invention then inspired new questions (e.g., the P vs. NP question or the possibility of public-key cryptography), and those questions led to new discoveries, which in turn led to more questions, etc., in the never-ending process that we call science! It’s not a circle, it’s a spiral, since we keep being able to answer more questions than before, even if the number of questions that we *can’t* answer at a given time is always infinite.

Comment #30 January 3rd, 2013 at 11:41 am

The invention of the computer was a spin-off from yet another failing attempt to understand how we think…

Comment #31 January 3rd, 2013 at 11:49 am

If that was a “failing attempt,” then may I be blessed with a thousand such failures! Or even just one.

(Though note that the attempt to understand human thought was far from the only historical origin of the computer — there was also electrical engineering, and mathematical logic, and, y’know, *practical* applications, from breaking enemy codes to processing financial data.)

Comment #32 January 3rd, 2013 at 12:00 pm

At least Turing wanted to model thinking. I do not know about Post or the other origins.

Comment #33 January 3rd, 2013 at 12:31 pm

Joe #28, there is a macroscopic sense in which thermodynamics is all about: (1) the transport of conserved quantities (energy, mass, charge), and (2) this transport is driven by spatial gradients of thermodynamic potentials (inverse temperature, chemical potential, voltages), and (3) entropy is a measure that is globally nondecreasing (2nd Law) and lower-bounded (3rd Law).

Quantitatively we have the following statement of the 2nd Law (in a geometrically natural notation that I hope works!):

$latex \partial_t\,\mathcal{S}(t) = \int_{V} \ast \langle\sharp d\boldsymbol{\mu},\sharp d\boldsymbol{\mu}\rangle_{\mathcal{G}} \ge 0$

where $latex \mathcal{S}$ is the global entropy, $latex d$ is the exterior derivative, $latex \ast$ is the Hodge dual, $latex \boldsymbol{\mu}$ is a vector of thermodynamic potentials, $latex V$ is the spatial volume of the system, and $latex \mathcal{G}$ is a positive-definite structure associated to Onsager’s kinetic coefficients.

The above-referenced survey article by O’Connell (arXiv:quant-ph/0609159, and the many references therein) can be read (by me anyway) as being all about the microscopic quantum foundations of the above macroscopic thermodynamical relation. It’s evident (from O’Connell’s survey) that these quantum foundations are: (1) subtle and incompletely understood at present, and (2) directly relevant to noise models in FTQC, and (3) physically associated to student-friendly questions like “What can we sensibly say about the body temperature of Schrödinger cats in physically realistic rooms? Are these cats warm (alive) or cool (dead)? And how long does it take us to decide?”

We are a long way (well I am anyway) from having a thorough grasp of these issues!

Comment #34 January 3rd, 2013 at 12:47 pm

Once you strip away the sarcasm and belligerence, and ignore a few misunderstandings, Dyakonov makes some reasonable points:

(1) Quantum computing is overhyped.

(2) Experimental progress, though impressive in some ways, has fallen short of some optimistic projections.

(3) Large-scale quantum computing is still a long way off and might never work.

(4) Conclusions about the scalability of fault-tolerant quantum computing rest on assumptions about noise that might prove to be unwarranted in some physical systems.

(5) Applications of quantum computing are limited.

I agree with these points. Probably most readers of Shtetl Optimized also agree.

The sarcastic tone is a stylistic choice by the author. It is not the choice I would have made, but it gives the paper an edge, which may be the reason we are discussing it on this blog.

Regarding sociology, in many areas of science we are driven by optimistic expectations; these motivate us and keep us going, which is important. Productive work may result even in cases where our optimism is not fully justified. Now and then, though, we should pause to reflect about where we have been and where we are going. Critics can help to keep us on our toes.

Whether quantum computing is a bubble, history will ultimately judge. The answer may depend on whether we adopt a short-term or a long-term viewpoint.

Comment #35 January 3rd, 2013 at 12:53 pm

John, I think I am still failing to see your point, because it seems clear that any experimental paper showing entanglement between ensembles necessarily implies a non-local correlation between their entropies, in exactly the kind of warm cat/cool cat way you describe.

Comment #36 January 3rd, 2013 at 1:43 pm

John Preskill #34: For me, the issue wasn’t only a belligerent tone; it was also explicit *assertions* about QC being a failed research program that didn’t (as far as I could see) follow from anything that preceded them. But yes, if you strip away those assertions, *and* the belligerent/sarcastic tone, *and* the misunderstandings about fault-tolerance, then I think that you, me, and whatever is left of Dyakonov’s paper would all be in complete agreement with each other!

I think you’re completely right, in particular, that Dyakonov makes some valid points. As I said in the post:

disregarding the polylog/polynomial confusion (which recurs elsewhere in the article), and other technical issues about fault-tolerance, up to this point many quantum computing researchers could happily agree with Dyakonov—and have said similar things many times themselves …

Dyakonov fumes about how popular articles, funding agency reports, and so forth have overhyped progress in quantum computing, leaving the conditions out of theorems and presenting incremental advances as breakthroughs. Here I sadly agree.

Comment #37 January 3rd, 2013 at 1:45 pm

As a book-related confection supplemental to #33 (above), it is striking that the standard text on transport theory (Bird, Stewart and Lightfoot’s *Transport Phenomena*, Revised 2nd Edition, known as “BSL”) presently holds the astonishingly high Amazon sales rank of #7,163 (among *all* (!) books) … while Nielsen and Chuang’s *Quantum Computation and Quantum Information* languishes at #443,557.

Given that BSL and similar popular engineering texts presently contain (shockingly!) essentially *zero* discussion of quantum dynamics, the answer to Dyakonov’s question is that there exists plenty of room for *growth* in quantum information-theoretic research … provided that QIT researchers do not unduly restrict their own research domain.

Comment #38 January 3rd, 2013 at 2:08 pm

But can non-locality of entropy be sustained for dynamical trajectories on arbitrarily large-dimension state-spaces for arbitrarily extended durations, as (1) macroscopic thermodynamics assumes and (2) scalable FTQC requires?

To borrow John Preskill’s nice phrase “probably most readers of Shtetl Optimized agree” that this class of questions is (per comment #9) “open, longstanding, and hard.”

Comment #39 January 3rd, 2013 at 2:24 pm

Scott #36: Right, sounds like you and I agree, then.

Dyakonov is entitled to express his opinions, and that’s what he is doing when he says, for example:

“I believe that, in spite of appearances, the quantum computing story is nearing its end, not because somebody proves that it is impossible, but rather because 20 years is a typical lifetime of any big bubble in science, because too many unfounded promises have been made, because people get tired and annoyed by almost daily announcements of new “breakthroughs”, because all the tenure positions in quantum computing are already occupied, and because the proponents are growing older and less zealous, while the younger generation seeks for something new.”

While I admit to bias, I feel Dyakonov is being unfair when he speaks of average researchers who commit “wild exaggerations on the border of fraud and sometimes beyond.” Having worked in other areas of physics, I find the quantum computing community to be quite admirable generally speaking. Most of the active researchers I know, including many young people, are very smart and have high intellectual integrity.

Comment #40 January 3rd, 2013 at 2:47 pm

John Sidles: But we can never hope to show anything in ‘arbitrarily large-dimension state-spaces’, since every experiment deals with a fixed size system (let’s ignore continuous variables for the moment). The Hilbert space of their system is incredibly large when you take the effect of noise into account (i.e. the density matrix should have rank on the order of 2^(10^12)). The fact that you can make some approximations to get a handle on what is happening without vast computing power doesn’t alter the fact that these are enormous systems which have been put in a non-local state which exhibits the very kind of non-locality in entropy that you were talking about.
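Joe’s point about size is easy to make concrete: a direct classical simulation of an n-qubit pure state must store 2^n complex amplitudes, which is why a noisy ensemble of ~10^12 particles is hopeless to describe by brute force. Here is a minimal pure-Python sketch of that bookkeeping (an illustration only, not anything from the paper under discussion; the function names are mine):

```python
from math import sqrt

# An n-qubit pure state is a list of 2**n complex amplitudes;
# the storage cost doubles with every qubit added.
def qubits_to_amplitudes(n):
    return 2 ** n

def apply_hadamard(state, qubit, n):
    """Apply a Hadamard gate to one qubit of an n-qubit state vector
    (a list of 2**n amplitudes, qubit 0 = leftmost bit)."""
    out = [0.0] * len(state)
    step = 2 ** (n - 1 - qubit)          # stride between the qubit's 0/1 partners
    for i, amp in enumerate(state):
        if (i // step) % 2 == 0:         # this qubit is 0 in basis state i
            out[i]        += amp / sqrt(2)
            out[i + step] += amp / sqrt(2)
        else:                            # this qubit is 1
            out[i - step] += amp / sqrt(2)
            out[i]        -= amp / sqrt(2)
    return out

state = [0.0] * qubits_to_amplitudes(3)
state[0] = 1.0                           # |000> in the computational basis
state = apply_hadamard(state, qubit=0, n=3)
# state is now (|000> + |100>)/sqrt(2)
print([round(a, 3) for a in state])      # -> [0.707, 0.0, 0.0, 0.0, 0.707, 0.0, 0.0, 0.0]
```

At 16 bytes per amplitude, even 50 qubits would already demand petabytes of memory, which is why the approximations Joe mentions are unavoidable for any classical analysis of such systems.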

Comment #41 January 3rd, 2013 at 3:29 pm

Scott #29: in #27 I was really talking about a different topic: if it was not possible to realise some ideas of complexity before access to real computers, why do you think that estimating the complexity of physical laws can be done without access to the appropriate “hardware”?

Comment #42 January 3rd, 2013 at 3:40 pm

Joe, if you (or anyone) knows (even in principle) how to scale-up quantum memories from a few qubits to arbitrarily many qubits, and concomitantly scale-up coherence life-times to arbitrarily long duration (via error-correction) … with anything like the facility that chemical engineers scale-up fractional distillation columns from *benchtop-size* to *humongous-size* … then I’m impressed!

Comment #43 January 3rd, 2013 at 3:47 pm

Alexander #41: Well, we in quantum computing are *trying* to build new “hardware”! What more do you want? On the other hand, it’s simply not possible to predict where future breakthroughs will come from, since if you could predict them, you would’ve made them already. Recall that Einstein discovered general relativity with not much more “hardware” than paper and pencil; in our own time, almost all breakthroughs in computational complexity (including, say, Shor’s algorithm) have come about the same way. Why shouldn’t one expect some more breakthroughs?

More generally, it gets tiresome for people building a skyscraper to have to hear, “what on earth would make you think there’s not some vastly better way to build it, which no one has thought of yet?”, from people sitting on a nearby bench sipping lemonade.

Comment #44 January 3rd, 2013 at 3:51 pm

John Preskill #39: I’d never suggest that Dyakonov isn’t entitled to express his opinions; indeed, I’ve even given those opinions a somewhat larger audience than they might’ve gotten otherwise, in the course of exercising my own right to respond to them! (I especially enjoyed thinking about how to reply to his donkey parable…)

Comment #45 January 3rd, 2013 at 4:26 pm

Scott #42: Einstein also used data from real experiments, but quantum computational complexity uses mathematical models of nonrelativistic (in fact even spaceless) QM, together with other idealizations, for systems we have never tested. Could you say, for example, how many qubits may be used before some particular approximation stops working? Gil et al.’s discussion about noise is only one particular example of such a problem.

Comment #46 January 3rd, 2013 at 4:31 pm

John Sidles: I’m not suggesting that at all. What I am suggesting is that experiments such as those in the paper mentioned above demonstrate entanglement between objects which are at least close to macroscopic, and that suggesting that this remains an open problem which is likely impossible, even when couched in somewhat obfuscating thermodynamics language, is incorrect.

Comment #47 January 3rd, 2013 at 5:07 pm

Joe Fitzsimons, please understand that my colleagues and I are ultra-orthodox in our faith that macroscopic objects are quantum-mechanical:

AFAIK, nine picograms is *still* the mass-record for experimental macroscopic quantum-entanglement, isn’t it?

What’s not evident (to me … or anyone else?) is how to scale these experiments to large numbers of qubits, whose trajectories unravel on large-dimension submanifolds of Hilbert space, with quantum coherence sustained over long periods of time.

Comment #48 January 3rd, 2013 at 5:35 pm

Alexander #45: Based only on thought experiments, mathematical simplicity, the demand for general covariance, the need for consistency with previous theories, and essentially *one* previously-unexplained data point (namely, the perihelion precession of Mercury), Einstein made the far-reaching extrapolation that spacetime should be a pseudo-Riemannian manifold that bends and sags in the presence of energy (and that among other things, it should bend light). He turned out to be right, of course. But in retrospect, it would’ve been even more interesting if he’d been wrong, for that would’ve meant that the mathematically simplest possibility was not the one Nature chose.

Today, quantum computing makes the far-reaching extrapolation that quantum mechanics, which has passed every experimental test for a century, is *literally correct* when it talks about the state of n qubits being a vector in a 2^n-dimensional Hilbert space, undergoing unitary transformations—that, while there might be simpler descriptions in special cases, there isn’t a simpler description in general. Maybe that extrapolation is correct and maybe it’s not: just like in Einstein’s case, the latter possibility would be vastly more interesting and exciting than the former! Either way, though, I don’t see any way to find out besides *testing* QM in the relevant regime to see what happens—i.e., precisely what quantum computing research is trying to do.

You mentioned Gil Kalai; it’s to his great credit that despite his skeptical stance on QC, he understands the above logic and has often repeated it himself. Somehow, most other QC skeptics twist the fact that we don’t know what’s going to happen into an argument *against* doing research to find out, instead of an argument *for* such research! And that’s where they lose me.

Comment #49 January 3rd, 2013 at 5:55 pm

I haven’t done more than glance through Dyakonov’s recent anti-fault-tolerance papers, but that was enough to see that he’s working from a position of substantial misconceptions.

In particular, he claims a number of the assumptions behind the threshold theorem haven’t been examined for robustness, but all of the ones he names have, in fact, been considered, either explicitly or implicitly as a consequence of more general results. For instance, the notion that ancilla |0〉 states might not be perfect is just preparation errors, a standard part of fault-tolerant error models. Systematic errors and cross-talk errors (whether between gates or inactive qubits) are both special cases of error correlation, and are covered by the same general theorems. Leakage errors can be handled by using Knill’s approach to fault-tolerant error correction, which combines teleportation with syndrome measurements; it reduces a leakage error to — at worst — the same status as any other kind of error. Massive parallelism is actually needed, but not 100% parallelism. Lesser parallelism hurts the threshold for storage errors, since qubits have to wait around longer before being corrected. None of this is really exotic stuff.

Like John Preskill, I agree there is more to be done considering these assumptions. For instance, relying on general results about correlated errors gives lousy quantitative values for the threshold. It’s worth doing some optimization against, say, cross-talk errors to see what can actually be handled, and as far as I know, this hasn’t been looked at very seriously. Part of the problem, however, is that many of the interesting variations are simply very hard to handle numerically because they involve non-Pauli errors.

We *do* know, contrary to Dyakonov’s belief, that there are fault-tolerant thresholds when we relax these assumptions. And actually the thresholds are a complicated high-dimensional surface because there are many potential types of errors and you can relax multiple assumptions at once.
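For non-specialists, the vocabulary above is easiest to see in the simplest possible case. The sketch below is only the classical 3-bit repetition code with majority-vote decoding, the skeleton underlying the quantum bit-flip code; it is not the threshold theorem, and all names in it are mine:

```python
import random

# Toy error correction: store each logical bit as three copies,
# flip each copy independently with probability p, decode by majority vote.
# The logical error rate is 3*p**2*(1-p) + p**3, which beats the bare
# rate p whenever p < 1/2 -- a baby version of a fault-tolerance threshold.

def encode(bit):
    return [bit] * 3

def noisy(copies, p, rng):
    # XOR each copy with a Bernoulli(p) flip (bool acts as 0/1 here)
    return [b ^ (rng.random() < p) for b in copies]

def decode(copies):
    return 1 if sum(copies) >= 2 else 0

rng = random.Random(0)
p, trials = 0.05, 100_000
errors = sum(decode(noisy(encode(0), p, rng)) for _ in range(trials))
print(errors / trials)   # close to 3*p**2*(1-p) + p**3, about 0.0073
```

Below p = 1/2 the encoded error rate is smaller than the bare rate, and concatenating the code suppresses it further; the quantum codes Daniel describes replace the majority vote with syndrome measurements so that the data is never read out directly.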

Comment #50 January 3rd, 2013 at 6:19 pm

Seeking evidence, I browsed through the 99 arxiv abstracts that a full-text search for “quantum computing” AND skeptic* finds.

The most striking find? Dyakonov’s 2001 preprint “Quantum computing: a view from the enemy camp” (arXiv:cond-mat/0110326) bore a rude title, yet made a strikingly accurate prediction.

Save possibly for Dyakonov’s, I found no arxiv preprint expressing skeptical sentiments that could be described even as impolite, much less unreasonable.

On the evidence, Preskill’s affirmation reasonably can be strengthened and extended as follows: “Both enthusiasts and skeptics of quantum computing are quite admirable generally speaking.” And most importantly, best wishes for a Happy New Year are extended to everyone who reads *Shtetl Optimized*!

Comment #51 January 4th, 2013 at 1:20 am

There is a standard QC skepticism position that amounts to a conviction that, in some way or another, QC only works because it’s finely tuned. This is similar, or at this fuzzy level you could say equivalent, to the well-known analog cheat in complexity theory. It is very easy to define a complexity class which clearly isn’t realistic, but which looks realistic because no bound has been placed on numerical precision. So, in the case of BQP, the claim is either that the noise models are finely tuned, or maybe that quantum mechanics itself is finely tuned. Or maybe that those two possibilities don’t really differ. Either way, according to this stance, the QC model is only a formal victory.

Part of my standard answer is to emphasize that quantum mechanics is actually a misnomer, and that the real foundation of quantum computing is quantum probability. Like classical probability, quantum probability always follows the same laws regardless of specific mechanical realizations. The more you understand quantum probability as a probability theory, the more you realize that actually, there is no evidence that QC is finely tuned. Yes, the field briefly missed an important point, which is that any realistic model of probability or computation also has noise, which leads to error and fault tolerance problems that have to be addressed. In hindsight, that was the gist of Unruh’s important paper on the matter in 1994. But by now, everyone in QC knows to worry about fault tolerance. You can’t keep coming back to Unruh’s paper as if it hasn’t already been written.

Of course, there could be some serious, unforeseen problem in the noise models for fault-tolerant QC. Or even conceivably in quantum probability itself. You can’t ever rule out the unforeseeable. However, if you wander away from the lamppost and into the dark not because you have to, but just because you can, then good luck to you. At the very least, you should have the humility to realize that success is difficult. Dyakonov doesn’t seem to have that humility.

On the other hand, some of the skeptics who seemed to say that QC is the analog cheat, in particular Goldreich and Levin, haven’t dragged out any quarrel over the matter as far as I know. They said what they thought some 5 or 10 years ago, and then stopped. In research, if you don’t want to continue in a debate, then that’s not completely unreasonable, and similar to admitting that you could be wrong.

Comment #52 January 4th, 2013 at 5:04 am

Daniel #49: Thanks for the extremely informative comment!

Greg #51: As a small point of information, I run into Levin—who I like personally a great deal—once or twice a year, and every single time he continues the argument, often adding new colorful analogies (“you might as well believe that if you pray to a piece of roadkill on the street every alternate Tuesday, it will factor a 10,000-digit number for you…”). To get him started, he needs no more provocation than overhearing the word “quantum” in some other context, and sometimes no provocation at all. The only reason he hasn’t written more about it is that he thinks his short essay on quantum computing already conclusively settled the matter.

Comment #53 January 4th, 2013 at 6:29 am

Searching the arxiv server for fulltext “quantum computation” “skepticism” yields plenty of terrific material. E.g., in Peter Shor’s seminal “Fault-tolerant quantum computation” (arXiv:quant-ph/9605011, 1996) we read Peter’s gracious acknowledgements, a concrete presentation of what biologist Ed Wilson describes in his autobiography *Naturalist*.

An instructive reading of the passionately pro-Hilbert posts on *Shtetl Optimized* (like Scott’s #48 and Greg’s #51, for example) is as pleas for Wilson-style “brilliant enemies” to take the field. So please let me name Carlton Caves as that longed-for brilliant enemy, in virtue of a prescient remark that can be found in “Dreams versus Reality: Plenary Debate Session on Quantum Computing” (arXiv:quant-ph/0310130, 2003).

Roger Penrose opened and closed his book *The Emperor’s New Mind* (1999) with a joke whose punch-line is, “Whatever they should have done, they should not have laughed.” And in light of this week’s well-deserved Wolf Prize award to Ignacio Cirac and Peter Zoller, it’s evident that Carlton Caves’ 2003 audience should not have laughed.

For what Carlton Caves described in 2003 is precisely the procedure that Ignacio Cirac (and colleagues) have pioneered with state-spaces that are secant varieties of Segre varieties (to give these state-spaces their proper geometric name, per Landsberg’s “Geometry and the complexity of matrix multiplication” (arXiv:cs/0703059, 2007), or at a more elementary level, Harris’s *Algebraic Geometry: A First Course*, 1992), by which quantum simulationists adaptively extend the geometry of their non-linear state-spaces to preserve the *appearance* of a linear Hilbert-space. And of course the dynamical trajectories on these state-spaces unravel not as Hamiltonian flows, but as the stochastically decoherent flows pioneered by Peter Zoller (and colleagues).

Is it heresy to conceive — as Carlton Caves suggests — that the state-space geometry of Nature (and/or our simulations of Nature) may be similarly adaptive? Yes, definitely it is heresy … and surely it is the very *best* variety of heresy, namely, heresy that challenges us to conceive and apply new mathematics to open problems, both practical and fundamental.

*Conclusion:* In researchers like Caves, Cirac, and Zoller, today’s enthusiasts for quantum computing continue to be blessed with brilliant enemies, and long may their marvelous hastilude prosper!

Comment #54 January 4th, 2013 at 8:01 am

This is the best post+discussion I have seen this year!

Congratulations to everybody, and a nod to John Sidles for mentioning Clifford Truesdell. Maybe in the future a super-google will have a feature (why not call it “the idiot’s fugitive search”, in honor of Truesdell) for checking the beneficial or damaging influence that skeptics and fashion had on a, by then, established subject.

On a more serious note, as a mathematician outside CS and physics, I think that the elephant in the room is the fact that there is no mathematics for big, but not infinite, discrete physical systems. It is in the making, and probably CS and combinatorics (like Terry Tao et al.’s approximate-groups results) will provide some big part of it.

Comment #55 January 4th, 2013 at 8:46 am

On the use of QC against crypto:

* it’s not just factoring (RSA), but also discrete logarithms (thus ElGamal, elliptic-curve cryptography, etc.)

* there’s a field of research called “post-quantum cryptography”; see http://pqcrypto.org
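To flesh out the bullets above: Shor’s algorithm threatens RSA because factoring reduces classically to order-finding, and order-finding is the only step that needs the quantum computer. A toy sketch of that classical reduction, with the quantum step replaced by brute force (the function names are mine):

```python
from math import gcd

def order(a, N):
    """Smallest r > 0 with a**r = 1 (mod N), found by brute force.
    This is the step Shor's algorithm performs with quantum period-finding."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_reduction(N, a):
    """Given a coprime to N, try to split N using the order of a mod N."""
    assert gcd(a, N) == 1
    r = order(a, N)
    if r % 2 == 1:
        return None                  # unlucky choice of a: odd order
    y = pow(a, r // 2, N)            # a**(r/2) mod N
    if y == N - 1:
        return None                  # another unlucky case
    return gcd(y - 1, N), gcd(y + 1, N)

print(shor_reduction(15, 7))         # -> (3, 5): order of 7 mod 15 is 4, and 7**2 = 4 (mod 15)
```

An analogous quantum period-finding step handles discrete logarithms, which is why ElGamal and elliptic-curve schemes fall as well, and why post-quantum cryptography looks to problems believed to lie outside this family.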

Comment #56 January 4th, 2013 at 10:49 am

John, perhaps I have pushed too hard, but I understand that you are relatively skeptical about the applicability of the approach favoured in the QC community to larger systems, and I feel compelled to try to convince you that there is reason for optimism.

In the question you referred to as open, and which kicked off my stream of comments, you used the phrase “macroscopic”, though your most recent comment focuses on mass. These are not really the same thing, since macroscopic refers to length scales rather than mass, and so the two are only related via assumptions about density. Such assumptions fail for the gas cells used in the experiment in question, which are approximately 3 cm in size (see Fig 1 of the above-mentioned paper), and so are unquestionably macroscopic.

The question of mass is a subtle one. Cantilevers move mass about, so locally the system is not in, or even close to, a mass eigenstate. This is a pretty cool aspect of the cantilever research, but is not fundamentally connected to either entanglement or the thermodynamic properties such as local entropy vs system wide entropy which you seem to favour, since ultimately these properties depend only on the eigenspectrum of the density matrix and the spatial distinctness of subsystems, and not on what physical properties the vectors correspond to. As such, the low mass involved shouldn’t be much of an issue. Indeed, computation too depends only on the state-space and the dynamics of the system, and not on which physical degrees of freedom are used to implement it, so quantum computation can be accomplished within a degenerate eigenspace of mass. Thus we could achieve large scale quantum computation without ever surpassing the current cantilever record.

Comment #57 January 4th, 2013 at 11:07 am

Scott #48: I am talking about the boldfaced sentence in your #8 post, but you often answer some other ideas partially assigned to me by yourself. Maybe Greg’s #51 just gave some clarification with his “analog” example.

Comment #58 January 4th, 2013 at 12:28 pm

John,

thanks for the update on Bird, Stewart and Lightfoot. I was unaware that I had any best sellers in my attic. Now I want to read it again.

Are there any other top-selling, non-elementary STEM books out there? Knuth?

Comment #59 January 4th, 2013 at 12:32 pm

Joe Fitzsimons, please let me say that I agree entirely with your comment #56, to the effect that macroscopic objects *are* verifiably quantum. And similarly, in regard to (what I took to be) Daniel Gottesman’s key point regarding the lamentable theoretical intractability of “non-Pauli errors” — and also in a metaphorical tribute to Lewis Carroll — perhaps skeptics and enthusiasts alike might reasonably be less concerned with the cheerfully evanescent (quantum) smile of Schrödinger’s (macroscopic) Cheshire Cat, and more concerned with the fearful Boojum of the (coherence-destroying & state-renormalizing & entropy-increasing) quantum vacuum … that unceasingly seeks to dissipate all forms of information (both classical and quantum) by subtle means that have remained largely mysterious for more than a century.

Albeit the year is *very* young, eh? Still, let’s keep it up! And many thanks are due to Scott especially, for initiating and hosting this lively discourse.

Comment #60 January 4th, 2013 at 12:32 pm

PS: Scott #48: Certainly, nonrelativistic QM (especially with finite-dimensional Hermitian spaces) could not have passed every experimental test during the century, and QFT is needed not only as some correction: QFT may explain perfectly natural phenomena that could not even be introduced within QM.

Comment #61 January 4th, 2013 at 1:20 pm

Scott – Re Levin, that is just bizarre. Still, if he doesn’t publish papers or give talks in support of his views, then basically he’s off in his own world. Yes, his essay appeared as a section of a paper, but it’s not actually a research publication. It’s part of a longer paper that was accepted for publication for other reasons, and even that was a historical review rather than new research.

Remark 1: I would be surprised if Levin ever allowed that classical probability could be non-linear. I think that he understands why that is an extreme avant-garde idea. Yes, maybe “physicists” should look, but there is no conception of what they should look for in support of such a possibility. So I imagine that he would dismiss the phrase “quantum probability” as an essentially formal concept.

Remark 2: I see this surprising sentence in Levin’s essay: “It is worth noting that the reasons why QC must fail are by no means clear; they merit thorough investigation.” This is surprising because I don’t see how he endorses this assertion, if at the same time he thinks that his essay is conclusive.

It has been said that even though QC skepticism is expressed as a claim that QC is finely tuned or the analog cheat, the real resistance comes from a rooted belief in the polynomial Church-Turing thesis. If Levin says that he’s argued conclusively that QC is impossible, and if he says that it would be interesting to find out why it’s impossible, then that does suggest that the polynomial Church-Turing thesis is the real belief. This is like weapons of mass destruction: We *know* that Iraq has them, and it would be interesting to find out what kind.

Comment #62 January 4th, 2013 at 1:36 pm

Alexander #60: If it’s QFT that you’re worried about, there’s no known reason whatsoever to think anything in it would create a problem for quantum computing. (In fact, current research, e.g. by Jordan-Lee-Preskill, has been in the opposite direction, working to show that QFT doesn’t lead to even *more* power than BQP! Which is probably true, but which requires a lot of work to prove for various technical reasons.)

I’ve actually been reading QFT textbooks as my “hobby” for the past few months, not only because I find it fascinating, but because I think it could become increasingly valuable for QC. Yet everything that I’ve learned has only confirmed what the physicists I asked assured me of since the beginning: that *nothing in QFT changes the formal structure of QM at all*. If you like, QFTs are various application programs that run on QM as their operating system. They’re not new operating systems.

Comment #63 January 4th, 2013 at 2:01 pm

*QFT/QIT Student A:* “Except for divergent renormalizations, that is!”

*QFT/QIT Student B:* “And Fadeev-Popov pullbacks, we’d be lost without them!”

*QFT/QIT Student C:* “Don’t forget unitarity violations at event horizons!”

*Scott:* “Yes, yes, but apart from divergent renormalizations, Fadeev-Popov pullbacks, and unitarity violations, *nothing in QFT changes the formal structure of QM at all*.”

Comment #64 January 4th, 2013 at 2:09 pm

Scott: “Nothing in QFT changes the formal structure of QM at all. If you like, QFTs are various application programs that run on QM as their operating system. They’re not new operating systems.”

This is an excellent point! It is also a rare place where I agree completely with Scott’s conclusion *as well as* with the argument based on the application-program/operating-system metaphor!

Let me try to ask you this: Do you regard QFT as something that can be derived from the formal structure of QM? Or, as your operating-system/application-program metaphor suggests, as an additional layer of physical laws?

Comment #65 January 4th, 2013 at 2:27 pm

Scott – Right, that is one way to see that the word “mechanics” in quantum mechanics is misleading. Quantum field theory leads to a vast array of changes to the mechanics of particle interactions. It has no effect on quantum probability whatsoever.

No known, serious model of quantum gravity makes any changes to quantum probability either. String theory, which is arguably the only known, serious model despite its unfinished condition, certainly doesn’t. Quantum gravity leads to certain subtle changes to the locality concept, e.g., the holographic principle says that available entropy is proportional to surface area rather than volume. Otherwise the basics of quantum probability in string theory are identical to the Heisenberg-Born-Jordan rules from 1925.

Comment #66 January 4th, 2013 at 2:46 pm

John #63, now find yet another 97 changes.

Comment #67 January 4th, 2013 at 3:25 pm

John Sidles #63:

1. Divergent renormalizations are exactly the sort of thing that could make a beginning student *worry* that QFT changes the structure of QM—especially since the subject is usually explained so unbelievably confusingly. That’s why it was a profound insight, worthy of the Nobel Prizes it received, that renormalization does nothing of the kind. In the modern, “Wilsonian” view, renormalization is nothing more than a tool for extracting low-energy (i.e., “long”-distance) predictions, even though we don’t yet know what Hamiltonian applies at extremely high energies (i.e., short distances). It’s a way of encapsulating our ignorance of everything that goes on at extremely high energies into a finite number of low-energy parameters, which need to be measured experimentally—but which, once they *are* measured, let you calculate infinitely many *other* low-energy numbers (scattering amplitudes, etc.). The S-matrix relating initial to final states is still a unitary matrix, and quantum mechanics is upheld.

2. As for Fadeev-Popov ghosts: while I’m not an expert, I’m pretty sure they’re purely an artifact of the path-integral formalism, which disappear when you do (e.g.) lattice QFT. As an aside, I put the blame squarely on Richard Feynman for confusing generations of physics students! The path-integral formalism might be a wonderful calculational tool, but it’s full of nonsensical-sounding artifacts (ghosts, virtual particles traveling backward in time, etc. etc.) that can lead people who don’t know any better to overinterpret them, and think that QM or even causality itself are being modified, when *nothing of the kind is happening*. Again, the way to reassure yourself is to adopt the “modern, Wilsonian perspective,” and formulate everything on a lattice (which is indeed what people do, when they simulate e.g. QCD on computers). As soon as you do that, all of what I now think of as the “crazy Feynman artifacts” go away.

3. Because of AdS/CFT, today there’s a very strong consensus that black hole evaporation is perfectly unitary—and that a faraway observer will indeed see the infalling information come out (after a mere 10^70 years or whatever). But even if that hadn’t been the case, now you’re talking about *quantum gravity*, not about QFT. Most physicists today (not only all the string theorists, but most of their opponents!) believe that QM will survive unscathed even when it’s combined with general relativity, but a few (like Penrose) take the opposite view, and I personally think it has to be regarded as a profound open problem. But it has nothing to do with what we’re talking about.

Comment #68 January 4th, 2013 at 3:34 pm

LOL … this is a rare place where I get to disagree with both Gil Kalai (#64) and Scott Aaronson (#62). Let us recall Carlton Caves’ skeptical prophecy (of #53): …

A lucid introductory-level discussion of this point can be found in Tony Zee’s marvelous textbook Quantum Field Theory in a Nutshell, specifically in “Chapter III.4: a photon can find no rest”. As for the geometric motivation (which arises in classical Hamiltonian dynamics), there’s no better starting-point than the original literature: Faddeev, “Symplectic structure and quantization of the Einstein gravitation theory” (1971), and Faddeev and Popov, “Covariant quantization of the gravitational field”.

Caveat: none of the above works presents the key ideas in the modern language of geometric dynamics … that translation is up to Shtetl Optimized readers. And as for the application of Faddeev and Popov’s seminal ideas in applying and extending the work of Cirac and Zoller (per #53) … where these ideas fit very naturally … that too is up to Shtetl Optimized readers.

Conclusion: It was Dirac who defined a Golden Era as “an era in which ordinary people can make extraordinary contributions.” Thanks largely to a vigorous cadre of “quantum skeptics” that includes Landauer, Unruh, Caves, Cirac, Zoller, Faddeev, Popov, and Kalai (and many more), and thanks also to the innumerable applications of quantum theory to the great challenges of our era (both practical and fundamental), we appreciate that the 21st century’s Golden Era of quantum science already is well-begun!

Comment #69 January 4th, 2013 at 3:35 pm

Gil Kalai #64:

Let me try to ask you this: Do you regard QFT as something that can be derived from the formal structure of QM? Or, as your operating system/application program metaphor suggests, as an additional layer of physical laws?

I think there’s no question that QFT represents an additional layer of physical laws. And the proof is simply that we can write down zillions of theories that are quantum-mechanical, but that have nothing to do with QFT (for example, theories involving finite numbers of qubits)!
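To make that concrete, here is a minimal numpy sketch (illustrative only; the Hamiltonian is an arbitrary random Hermitian matrix, not anything physical) of a quantum theory on three qubits that obeys every rule of QM while having no field-theoretic structure whatsoever:

```python
# A quantum theory with no field-theoretic content: three qubits evolving
# under an arbitrary Hermitian "Hamiltonian". (Minimal numpy sketch for
# illustration only -- the Hamiltonian here is random, not physical.)
import numpy as np

rng = np.random.default_rng(0)
dim = 2 ** 3                                   # Hilbert space of three qubits

A = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
H = A + A.conj().T                             # random Hermitian Hamiltonian

# Unitary time evolution U = exp(-iHt), via the spectral decomposition of H
t = 0.7
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0                                  # initial state |000>
psi_t = U @ psi0

# All the rules of QM hold: evolution is unitary, probabilities sum to 1 --
# yet nothing here mentions fields, particles, or spacetime.
assert np.allclose(U @ U.conj().T, np.eye(dim))
assert np.isclose(np.sum(np.abs(psi_t) ** 2), 1.0)
```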

On the other hand, if you’re willing to assume quantum mechanics, and a Hilbert space consisting of point particles flying around in Euclidean space, and spatial locality, and invariance under the Lorentz transformations, then there’s a sense in which you’re uniquely led to QFT. (That’s one of the things that the books keep telling me, and that I believe, but that I need to understand better!)

Comment #70 January 4th, 2013 at 3:39 pm

Maybe the difference between the (QC version of) QM and QFT may also be compared with the difference between an abacus and a smartphone? I.e., nothing changes in the subset of functions common to both devices; the smartphone simply runs a specific set of application programs.

Comment #71 January 4th, 2013 at 3:55 pm

Scott #67 (since our comments have temporally overlapped) in specific regard to the renormalization issues you raise, the references supplied with Comment #6, especially “Does the Third Law of Thermodynamics hold in the Quantum Regime?” (arXiv:quant-ph/0609159) establish that divergences in wave-function renormalization encompass divergent entanglement between qubits and thermal reservoirs … a topic not explicitly addressed in most QFT texts … and yet surely of great consequence regarding the feasibility (or not) of FTQC.

History has shown us plainly that these renormalization-related phenomena are very tricky, eh? That is why (as it seemed to me) Daniel Gottesman was right to express concern (in comment #49) regarding “non-Pauli errors” … because it is consistent with our present quantum-thermodynamical understanding (well, mine anyway) that time-dependent thermal-reservoir renormalizations (as qubits are coupled and decoupled to each other and to various reservoirs, including detectors) generically induce non-Pauli errors.

Comment #72 January 4th, 2013 at 4:14 pm

Scott – You’re right that Faddeev-Popov ghosts aren’t true particles. They are artifacts of choosing a gauge. If you take path integrals seriously as integrals, then a change of gauge is a change of variables, and a change of variables is only valid with a Jacobian factor. Even if you only choose one gauge, you can expect an intrinsic Jacobian factor. The function of ghost particles is to provide the Jacobian factor generated by a gauge choice. Now, that does not mean that they couldn’t have any physical interpretation, but I think that they mostly don’t.

Tachyonic particles, and particles traveling backwards in time, are a different matter. Those are considered to be sort-of real in quantum field theory. It is true that information can only travel forward in time and as a corollary only below the speed of light; that’s an important sanity check for any candidate relativistic QFT. However, in physics in general, anything other than information can in principle travel faster than light or backwards in time! Everyone knows that a Moiré pattern can travel faster than light. If you think of a particle as an oscillation rather than an information carrier, then it can be kind-of like a Moiré pattern. Another related point is that a virtual particle is a description of quantum tunneling.

For various reasons such as the tunneling interpretation, it becomes a prohibitive nuisance to separate true particles from virtual particles in quantum field theory. If the particle only exists for a short period of time, there is no clear way to say whether it’s virtual. Sort-of by definition, a non-virtual particle is one that escapes to infinity. (Compared to the length scale defined by its energy. You can have very long “wavelength” virtual particles to express a static electric field that’s as big as a football field.) More generally, physicists speak of on-shell and off-shell particles. An on-shell particle is one whose energy-momentum four-vector has the right Lorentzian length, i.e., it lies on the correct hyperboloid shell. An off-shell particle is any particle that travels “illegally”, or more accurately, propagates in a way that isn’t sustainable out to infinity.
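In symbols, the “right Lorentzian length” is the standard mass-shell condition (a textbook formula, added here for reference):

```latex
p^\mu p_\mu \;=\; \frac{E^2}{c^2} - \lVert \mathbf{p} \rVert^2 \;=\; m^2 c^2 .
```

An on-shell particle satisfies this hyperboloid constraint exactly; an off-shell (virtual) particle does not.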

To add to the point, by the same rules, an unstable particle cannot stay on shell because its rest mass is a complex number. All unstable particles are virtual, although if they are not very unstable, they might not be very virtual.

Again, though, for reasons that I don’t understand, ghost particles are generally not blessed as physical, even though virtual particles are.

Comment #73 January 4th, 2013 at 6:06 pm

Greg #72: Thanks as always!

One thing that helped me make sense of virtual particles was this page by Matt Strassler. Strassler “authorizes” me to do the thing I wanted to do anyway: just drop particles completely from my ontology, and treat fields as the basic ingredients of the world. A “real” particle is a ripple in a field that’s relatively stable and travels by itself a considerable distance, while a “virtual” particle is a ripple in a field that’s unstable (being directly caused, for example, by other fields), and that dies out as soon as its immediate cause is gone. Much clearer!

Comment #74 January 4th, 2013 at 7:06 pm

Matt Strassler’s page is excellent … thank you Scott! … not least because FTQC enthusiasts and skeptics alike can find comfort in its concluding paragraph:

In seeking to achieve scalable FTQC there is very little doubt that “complications are central.” Moreover, Matt’s summary statement nicely complements two challenges that Tony Zee identifies in Quantum Field Theory in a Nutshell.

A great virtue of FTQC research is that it motivates us to unite Matt Strassler’s hard-nosed quantenfeldtheoretisch Dreckeffecten (“quantum-field-theoretic dirt effects,” as Pauli disdainfully called field-theoretic transport-and-renormalization phenomena) with Tony Zee-style radically post-Hilbert quantum dynamics. Heck, it does sound like fun.

Comment #75 January 5th, 2013 at 11:24 am

Regarding QFT, I am not sure that John’s view (#63, #68, …) is really different from what Scott, Greg, and I are saying (#62, #64, #65, …). Our thinking is that nothing in QFT (or in other quantum physical theories) deviates from the basic laws of quantum probability, and those theories can be described as additional layers of physical laws on top of the basic framework of QM. What John is saying (with nice examples) is that, in practice, the mathematical-computational gadgets used in QFT apply a different mathematical language. So with Scott’s analogy of operating systems and application programs, we can think about QFT and other advanced physical theories as application programs written in some advanced computer language, where the “compiler” to translate it into the “machine language” of QM is not always easy or even clear to us.

Comment #76 January 5th, 2013 at 1:17 pm

Gil Kalai #75, there is some risk of getting an analogue of “stone soup” with such an approach.

Comment #77 January 5th, 2013 at 1:22 pm

Gil, please let me say that your Comment #75 is terrific. To extend it just a little bit, it is striking — and aesthetically satisfying too — that the great practical challenges of FTQC (e.g., non-Pauli/non-unitary qubit interactions with thermal reservoirs and/or state observers and/or photon sources) are turning out to be naturally dual to the great fundamental challenges of QM field theory (e.g., vacuum interactions, event horizons, causality), and vice versa. This duality promises well for the 21st-century vitality of both disciplines, not least because everyone’s mathematical armamentarium is steadily increasing in depth and breadth.

Comment #78 January 5th, 2013 at 2:17 pm

Incidentally, Greg Kuperberg #61:

I’ve long thought that the sentence of Levin you quoted—

“It is worth noting that the reasons why QC must fail are by no means clear; they merit thorough investigation.”

is possibly the most striking sentence in the history of QC skepticism. We can just forget entirely about QC, and consider the general logic:

“It is worth noting that the reasons why X must fail are by no means clear…”

But of course it must fail, even if we can’t articulate any reason!

Now, a sentence like the above might be fine if it were talking about (say) a perpetual-motion machine. But in that case, it would only be fine because it could be expanded into something like the following:

“This machine must fail because otherwise it would violate the laws of thermodynamics. But it’s worth noting that the detailed reasons, specific to this machine, why it fails to be a perpetual-motion machine despite appearances, are by no means clear…”

So I agree with you that the only way to make sense of Levin’s remark is that, without making the slightest attempt to defend the move, he’s silently elevated the (classical) Extended Church-Turing Thesis to the same sort of status as the laws of thermodynamics.

Comment #79 January 5th, 2013 at 2:59 pm

Sorry for coming late to the party, but I think the situation with regard to physical (as opposed to mathematical) QC may be analogous to classical computing at the center of the sun—not fundamentally impossible, but FAPP so. Is collapse/decoherence simply a matter of thermodynamics, or is there more to it? Perhaps the key to physical QC is better refrigerators! (and more shielding)

Comment #80 January 5th, 2013 at 3:15 pm

I think both sides are in a sense missing the point. Consider another controversial recent theory, string theory. From a physics point of view (not that I am an expert), it seems that it has not delivered on its promises, or is getting close to that. However, it has been a fertile ground for a lot of spectacular ideas in mathematics, which by themselves justify the theory: mirror symmetry and problems in enumerative algebraic geometry, the intersection theory of the moduli space of curves (Witten’s conjectures, proved by Kontsevich), the insights into the geometric Langlands conjecture, stability conditions on derived categories, Seiberg-Witten invariants, Gromov-Witten invariants, progress on Calabi-Yau manifolds and Lagrangian submanifolds, the Verlinde formulas … and who knows what in the future (the holographic principle…).

I think quantum computation is a great model. It’s based on the abstract framework of quantum mechanics, so the model is inspired by physics, and it’s stronger than Turing machines, which are modeled on classical-mechanical thinking. Thinking about this model has produced great insights, beautiful mathematical results, improved understanding of problems from classical computational complexity, improved understanding of tensor products in linear algebra…

As long as it continues to do that, it’s a viable research program, regardless of the very intriguing question of whether the nature of noise in actual physical systems, which tend to depart from clean models, allows us to construct scalable (in any case, only finite) devices.

Comment #81 January 5th, 2013 at 3:23 pm

A central claim that Scott has been making for many years is that the possibility of quantum computers is a logical consequence of quantum mechanics, and he (and a few others) hold the view that skeptics of QC are actually skeptics of QM. Scott’s claim is flawed.

Recently, Oded Goldreich (in the 2011 poll by Bill Gasarch) explained his point of view: “Quantum computers: Their conjectured power is based on a philosophically flawed inference by which what our current theory of QM does not rule out is actually doable in real life.” To a large extent, Oded’s one-sentence analysis is a very short explanation of why Scott’s argument is wrong. A similar flawed inference is Scott’s assertion a few posts ago that QM guarantees that every natural quantum evolution can, in principle, be time-reversed.

More can be said, and this is related to our discussion above. I consider QM (the framework of quantum probability, to be precise) as a mathematical framework that allows one to express all laws of nature. This framework indeed accommodates the possibility of universal quantum computers. My point of view, and I was glad to see that Scott and Greg have a similar point of view, is that there are additional layers of physical laws which do not follow from the general framework of quantum probability but can be expressed (in principle) in the language of quantum probability. Scott’s metaphor of application programs on top of an operating system is very good. These additional physical laws may or may not allow for superior quantum computation. We talked above about QFT (quantum field theory) and about thermodynamics, and to this list we can also add the laws of classical Newtonian mechanics which, in principle, can also be expressed (via certain classical limits) in the QM mathematical framework.

The issues of quantum noise and approximations in quantum physics are especially important. Perhaps the most shining example of Scott’s mistake is this statement about the “threshold theorem” from the 2006 post “Reasons to believe II”:

“Many quantum computing researchers talk about this theorem as the knight in shining armor who rode in unexpectedly to vindicate all their hopes. They’re entitled to do so, but to me, the theorem has always felt more like a beautiful, detailed working-out of something that couldn’t possibly have been false.”

While noise and approximations are not an essential part of the general framework of QM, the nature of quantum noise and the nature of quantum approximations are crucial in the additional layers of physical laws describing our natural reality. They are crucial to approximation and computation methods in QFT, they are at the heart of thermodynamics, and, as I learned more recently (and discussed on my blog), they are crucial to the mathematical interface between Newtonian mechanics and quantum mechanics described by symplectic geometry. It is indeed an open question whether the additional layers of physical laws allow for quantum fault tolerance or not. But it is a mistake to take the answer for granted. To a large extent, my long debate with Aram Harrow was aimed at examining specific suggestions (from my papers) towards a negative answer.

In the poll, Oded moved on to describe his personal belief and said: “I don’t believe that QC is significantly more powerful than TM.” Indeed the skeptical belief regarding QC is that, when the additional layers of physical laws are examined, the computational complexity relevant to our physical world is BPP. (The rationale for holding such a belief is that it is easier to believe an additional layer in theory which excludes superior QC, than the amazing reality with superior QC.)

One nice feature of Scott’s logic is the implication from “X might be possible in principle” to “X must be possible in principle.”

(Alexander #76, I agree.)

Comment #82 January 5th, 2013 at 3:46 pm

Scott, please let me say that your Comment #78 is terrific. To extend it just a little bit (echoing #75⇔#77), it is striking — and aesthetically satisfying too — that the weakest skeptical FTQC critiques (e.g., Levin’s and Dyakonov’s) are turning out to be naturally dual to the weakest FTQC proposals (e.g., the work cited by Levin’s and Dyakonov’s articles!). Namely, the weakest FTQC skepticism, critiquing the weakest FTQC science, doesn’t hold much significance.

In this regard, the ongoing (unproductive bitter futile polemic sardonic) debate regarding climate change holds important lessons (as it seems to me) for FTQC enthusiasts and skeptics. We all appreciate that much heat and little light results when the weakest [climate-change/FTQC] science is critiqued by the weakest [climate-change/FTQC] skepticism. And conversely, history shows us plainly that an enormously beneficial illumination results when the strongest science respectfully acknowledges and responsibly addresses the strongest criticism … the collection of climate-change articles that NASA scientist James Hansen has posted on the arxiv server is (as it seems to me) exemplary of science that is mindful and respectful of this mutual dependence.

Aside: To express a mildly heterodox opinion, it does seem (to me) that the STEM toolset arising from the past three decades of research in QIT is proving to be immensely useful in addressing the planetary-scale resource challenges that Hansen’s research illuminates.

Summary: To the extent that the quantum computing community remains mindful and respectful that the vitality of the strongest science has always depended utterly upon the vitality of the strongest skepticism (and vice versa), then without regard for the feasibility (or otherwise) of FTQC, the scientists who study quantum computing need have no apprehension regarding its future.

Comment #83 January 5th, 2013 at 4:57 pm

Gil Kalai #81:

A central claim that Scott has been making for many years is that the possibility of quantum computers is a logical consequence of quantum mechanics, and he (and a few others) hold the view that skeptics of QC are actually skeptics of QM. Scott’s claim is flawed.

Your characterization of my position is itself flawed. As I’ve explained many times (including in discussions with you), what I believe is that for scalable QC to be impossible, there would need to be something profoundly inadequate in how QM has been understood from the 1920s till the present—and figuring out what it was would entail a revolution in physics.

Now, the most “obvious” way that our understanding of QM could be wrong would be if QM itself was wrong. But I freely admit that that’s not the only conceivable way. As you say, there could also be some novel principle “on top of” QM—e.g., some rule restricting which quantum evolutions can actually be implemented in our world—that had the effect of prohibiting QC.

On the other hand, at present we don’t even have any reasonable candidate for such a principle, something that would kill QC while not grossly contradicting the quantum-mechanical experiments that have already been done. (And I suspect you know that as well as anyone, since to your credit, you’re one of the few people who’s been seriously looking for such a principle.)

From my perspective, it’s not surprising that no such principle has been found, because universality is much harder to avoid than to achieve. Incidentally, this is just as true in the classical world as in the quantum one. Imagine that before the computer era, someone wanted to argue that scalable classical computing could never work. To justify that belief, the person might try to construct a toy model (say, a cellular automaton) that led to rich, unexpected, complicated behaviors like the real world does, but which, because of noise or whatever, did not allow universal Turing machines.

Now, maybe the person would succeed in finding such a model, and maybe not. But it would definitely be a challenging task, far more challenging than finding a model that was Turing-complete! We learned from Church, Turing, and their successors that once you construct a sufficiently-rich programming language, chances are it will be universal for whatever is the appropriate model of computation; the surprise is if it fails to be.

For that reason, if the “application-level laws” of quantum field theory (or whatever) failed to be universal for BQP, I hold that that would be almost as great a surprise as a failure of the quantum-mechanical “operating system” itself. For in essence, it would tell us that the QM operating system was overkill, that there’s some smaller or weaker physical framework (a non-BQP-universal one) that actually suffices to run all our application programs. And as far as I can see, any such discovery would necessarily be a revolution in physics, fully deserving of multiple Nobel Prizes.

Comment #84 January 5th, 2013 at 6:00 pm

Scott #67: These are good answers. For a computer scientist, you somehow know a lot about physics!

Scott #83: I’m with you here, too. Limitations on what we can do come from “the Hamiltonian of the world” and “the state of the world” (determined by cosmology, and itself presumably a consequence of fundamental physical laws). If these limitations prevent us as a matter of principle from operating large scale quantum computers I would be quite astonished. Of course, to do in practice what can be done in principle may be a formidable engineering challenge.
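Scott’s remark in #83 that universality is much harder to avoid than to achieve can be sketched with elementary cellular automata: Rule 110 is known to be Turing-universal, while only deliberately trivial rules (like Rule 0) fail to compute. A minimal illustration (not a model anyone in the thread proposed):

```python
# Universality is hard to avoid: even one-dimensional, two-state,
# nearest-neighbor cellular automata contain a Turing-universal rule
# (Rule 110), while Rule 0 trivially kills everything.
# (Illustrative sketch; not a model proposed anywhere in this thread.)

def step(cells, rule):
    """One update of an elementary CA with periodic boundary conditions."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right  # neighborhood as a 3-bit number
        out.append((rule >> idx) & 1)              # look up that bit of the rule
    return out

def evolve(cells, rule, steps):
    for _ in range(steps):
        cells = step(cells, rule)
    return cells

row = [0] * 30 + [1] + [0] * 30   # a single live cell
rich = evolve(row, 110, 20)       # Rule 110: complex, long-lived structure
dead = evolve(row, 0, 1)          # Rule 0: everything dies at once

assert sum(dead) == 0
assert sum(rich) > 1
```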

Comment #85 January 5th, 2013 at 6:00 pm

Although your opinion is reasonable, Scott, surely the opposite opinion also is entirely reasonable. For with reference to the citations of comment #6, we can envision the following plausible sequence:

• Experiments suggest that zero-entropy states are unattainable.
• This result is formalized as the Nernst unattainability principle.
• Experiments suggest that scalable FTQC is unattainable.
• This result is formalized as the FTQC unattainability principle.

It’s true that advances in thermodynamical science have historically been judged Nobel-worthy, as attested by (in chronological order) the awards to: van’t Hoff, Ostwald, van der Waals, Planck, Haber, Nernst, Einstein, Kamerlingh-Onnes, Giauque, Onsager, Anfinsen, and Prigogine.

Given the precedent of Walther Nernst’s Nobel in particular, it seems entirely reasonable (to me) to assess “Nobel for proving FTQC unattainability” as having comparable likelihood to “Nobel for proving FTQC attainability”. And of course, the quantum information community is the real winner, either way!

Comment #86 January 5th, 2013 at 7:13 pm

John Preskill #84:

These are good answers. For a computer scientist, you somehow know a lot about physics!

And for a physicist, you somehow know a lot about CS—even including how to compliment computer scientists!

But the truth is that I’ve been struggling to understand this stuff for months (discussions with Greg Kuperberg, Ed Farhi, and Seth Lloyd helped a lot, as did Matt Strassler’s blog). The QFT textbook I hunger for—one that would

(1) assume lots of familiarity with quantum information, but as little classical physics as possible,

(2) use a nonperturbative lattice approach (from the beginning and almost exclusively), and say from the outset exactly what the Hilbert space, initial states, and observables are,

(3) phrase the dynamics, whenever possible, in terms of Hamiltonians rather than Lagrangians, and in terms of sums rather than integrals,

(4) explain as many concepts as is feasible (symmetry-breaking, connection fields, mass gaps…) by analogies with simpler model systems like lattices of qubits,

(5) provide pathways by which everything used in a derivation can be traced back either to a general principle (e.g., Lorentz-invariance, unitarity, conservation of energy) or to a clearly-labeled “brute experimental fact” (e.g., that the W and Z are not massless),

(6) minimize “now we’re going to do some mathemagical performance art, which can’t be justified in terms of any fact or principle we stated anywhere before, but which must be right because it leads to the number they measured at SLAC, plus or minus a few percent”

—that textbook seems not to exist. I have a dream of someday writing it myself, but first I’ll need to learn a helluva lot more!
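The kind of qubit-lattice model that desiderata (3) and (4) have in mind can be as simple as the transverse-field Ising chain (a standard example, added here for illustration):

```latex
H \;=\; -J \sum_i Z_i Z_{i+1} \;-\; h \sum_i X_i ,
```

where X_i and Z_i are Pauli operators on qubit i. Symmetry-breaking appears for h < J and the gap closes at the critical point h = J, all phrased in Hamiltonians and sums rather than Lagrangians and integrals.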

Comment #87 January 5th, 2013 at 7:33 pm

Dear John,

I have a question regarding your interpretation of my conjectures in your comment above and in the paper.

You wrote: “For scalability to fail as a matter of principle then, either quantum mechanics must fail for complex highly entangled systems (as ’t Hooft [8] has suggested), or else either the Hamiltonian or the quantum state of the world must impose noise correlations that overwhelm fault-tolerant quantum protocols (as Alicki et al. [9, 10, 11] and Kalai [12, 13, 14, 15, 16] have suggested).”

Of course, I never refer in my conjectures or models to the Hamiltonian of the world (or to any cosmology), and I never saw the standard models of noise described in such terms. I am very curious where your interpretation comes from.

Comment #88 January 5th, 2013 at 7:46 pm

Kalai #81

“My point of view, and I was glad to see that Scott and Greg have a similar point of view, is that there are additional layers of physical laws which do not follow from the general framework of quantum probability but can be expressed (in principle) in the language of quantum probability. ”

You may be interested in the following citation:

“Quantum electrodynamics is built up within the framework of quantum mechanics, but it contains specific rules not determined by quantum dynamics. The relationship of quantum mechanics to specific physical theories like quantum electrodynamics is rather like the relationship of a computer’s operating system to specific application software – the operating system sets certain basic parameters and mode of operation”

That’s from the Nielsen&Chuang classic book.

http://www.squint.org/qci/qinfo-book-nielsen-and-chuang-toc-and-chapter1-nov00.pdf

Comment #89 January 5th, 2013 at 7:47 pm

Kalai #81

“the nature of quantum noise and the nature of quantum approximations, are crucial in the additional layers of physical laws describing our natural reality.”

Two things please:

1) Quantum approximation? You lost me here. You just said that all you’re talking about is laws that can be expressed (in principle) in the language of quantum probability. What are you calling “quantum approximation” in this context?

2) About quantum noise, Daniel Gottesman said above that

“We *do* know, contrary to Dyakonov’s belief, that there are fault-tolerant thresholds when we relax these assumptions.”

So it seems you disagree on this topic. Do you indeed have a model of noise that can break threshold-theorem makers (or at least a model of noise that has not already been proven to respect threshold theorems)?

Comment #90 January 5th, 2013 at 7:48 pm

Scott #78

Is there any good reason to name the (Classical) Extended CT Thesis and the (Quantum) Extended CT Thesis these ways? It’s ugly and confusing. Why not call them the Godel Lost Thesis and the Feynman Bottom Thesis?

http://rjlipton.wordpress.com/the-gdel-letter/

http://link.springer.com/content/pdf/10.1007%2FBF02650179

Comment #91 January 5th, 2013 at 8:00 pm

Scott’s lament is universal … and so is his dream!

In the meantime, please allow me to commend Howard Carmichael’s field-theoretic works in general (he was the first to describe open-system quantum trajectories as “unravelling”), and commend in particular Carmichael’s recent text Statistical Methods in Quantum Optics 2: Non-Classical Fields (2007), and to commend especially that text’s chapter 17, “Quantum Trajectories I: Background and Interpretation,” as essential reading for anyone who wonders whether Aaronson-Arkhipov experiments are scalable.

Are these experiments scalable (in principle and/or in practice)? Don’t ask me! Because the first thing one learns from Carmichael’s (uniquely experiment-oriented!) field-theory texts is humility in the face of these tough questions.

In regard to field theory in general, Richard Feynman’s Lectures on Physics deserve both praise and opprobrium, for tricking impressionable students into believing that field-theoretic issues are simple. Feynman cherry-picks problems for simplicity … but FTQC does not permit us that luxury (as far as we know). As for Howard Carmichael, his texts (as it seems to me) deserve immense praise for tackling real-world field-theoretic issues head-on … and infinitesimal opprobrium for spelling “unravelling” (1993) and “unraveling” (2007) inconsistently!

Comment #92 January 5th, 2013 at 10:37 pm

Gil #87: You have said that you are not proposing a failure of quantum mechanics; rather you are proposing that noise correlations obstruct scalability of quantum computing. If we know the state of the system and its environment, and we know the Hamiltonian that governs the evolution of the system and its environment, then we know everything about the noise.

I think we had this discussion already over at Goedel’s Lost Letter …

Comment #93 January 6th, 2013 at 4:57 am

Gil, I hope you don’t mind me adding a comment regarding your comment (87) to John.

You mention standard noise models not being related to the “Hamiltonian of the world”. This is actually quite an important point, because it is exactly where certain simplified noise models break. One very common assumption in noise models is that the noise is Markovian. However, if you think about a Hamiltonian description, you simply can’t get purely Markovian noise, because the second system has to be infinitely strongly coupled internally to immediately move away any information leaked from the primary system, which has the effect of negating the coupling between the two systems (this can be seen as being an embodiment of either dynamic decoupling or Anderson localization or the Zeno effect depending on your particular perspective). As a result, we know that Markovian noise is fundamentally impossible (assuming that a Hamiltonian description of dynamics is indeed correct), and while noise can appear Markovian at a sufficiently large time scale, on shorter timescales this property breaks down. Indeed, this is exactly why various refocusing and decoupling techniques work.

While many sources of noise observed in nature can be well approximated as Markovian for the timescales we may care about, it is not clear that arbitrarily postulated noise models can well approximate any Hamiltonian dynamics except on timescales which scale exponentially in the size of the system.
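Joe’s point that noise can only look Markovian above some timescale can be illustrated with a minimal numerical sketch. This is not any specific physical model — a random Hermitian matrix simply stands in for a closed system-plus-environment Hamiltonian. Under any Hamiltonian evolution the survival probability of the initial state decays quadratically at short times, 1 − P(t) ≈ (ΔH)²t², rather than linearly as a Markovian (exponential) decay law would require:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 8                                    # toy "system + environment" dimension
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2                 # random Hermitian Hamiltonian
psi = np.zeros(d, dtype=complex)
psi[0] = 1.0                             # initial state

def survival(t):
    """Probability of still finding the system in its initial state at time t."""
    return abs(psi.conj() @ expm(-1j * H * t) @ psi) ** 2

# Energy variance (ΔH)^2 in the initial state controls the short-time decay.
var = (psi.conj() @ H @ H @ psi - (psi.conj() @ H @ psi) ** 2).real
for t in [1e-3, 1e-2, 1e-1]:
    print(t, 1 - survival(t), var * t**2)   # 1 - P(t) tracks (ΔH)^2 t^2, not t
```

Doubling t quadruples the decay at short times — the hallmark of the Zeno-regime behavior that refocusing and decoupling techniques exploit, and that a strictly Markovian model cannot reproduce.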

Comment #94 January 6th, 2013 at 5:31 am

Hi Jay, Thanks for the reference (#88). Excellent questions (#89).

1) “Quantum approximations” refer to a description of a quantum system in which some (or most) relevant degrees of freedom are neglected. When we talk about additional layers of physical laws based on QM, the issue of such approximations is crucial and central. When we talk about noisy quantum computers, models for such approximations are what we refer to as quantum noise.

2a) I agree with Daniel Gottesman that several concrete concerns raised by Michel Dyakonov are settled by the current theory of quantum fault-tolerance, and I think I even made a similar remark here five or six years ago. Michel’s paper reflects, to a large extent, his gut feeling that eventually QC will not work. I don’t understand Dyakonov’s hostility towards the QC endeavor.

2b) “Do you indeed have a model of noise that can break threshold-theorem makers (or at least a model of noise that has not already been proven to respect threshold theorems)?”

Yes, I have several suggestions (at various levels of generality) for such models. Those are described and discussed quite extensively in the recent debate on the blog “Gödel’s Lost Letter and P=NP,” and also in my papers.

Comment #95 January 6th, 2013 at 5:34 am

Hi Scott, thanks for your response!

Let me try to write down what I think we (and most of us) more or less agree about (with different weights and levels of confidence):

1) The basic framework to describe our physical reality is the framework of QM. The “operating system” of our physical reality is written in this language.

2) There are additional layers of physical laws (or specific physical models) that we refer to as “program applications” written in the language of QM. Practically speaking, in some cases those layers require specific (sometimes very complicated) mathematical formulations that we do not always know how to express in terms of QM.

3) Failure of universal QC requires a profoundly different way to understand QM, again, on the level of the additional layers of physical laws.

4) Building universal QC (or “just” computationally superior QC) would represent a completely new reality in terms of controlled and observed quantum evolutions, and also a new computational complexity reality. (But most likely not the ability to solve NP-complete problems.)

5) The idea/model of quantum computers itself is a profound, revolutionary way to look at quantum physics (I completely agree with Ethan about that). The notions of quantum error-correction and quantum fault-tolerance are further pieces of profound new theoretical physics.

Comment #96 January 6th, 2013 at 5:42 am

Hi John (Preskill #92). Thanks for your response!

Yes, I guess I did not understand this point of view back in the GLL discussion either. My main concern about it is that it may impose wrong intuitions on you.

Specifically, your most recent paper on this matter (the one partially motivated by my visit) describes noise models where the threshold theorem is proved based on a certain norm being small. I have mathematical reasons to believe that for general noisy quantum evolutions this norm must be unbounded as the system gets large. The reason is quite simple: your assumptions imply that the distribution of the number of qubit errors has Gaussian concentration, which I regard as an unrealistic feature of noise. (Robert Alicki has some different insights, coming from his experience with open quantum systems, that also lead him to believe that this norm will be unbounded in realistic situations.)

Now, is the statement that in realistic situations this norm scales up with the size of the system something about cosmology? (If yes, let’s explore it!), and is it really useful to think about my claim about the norm, or about your own assumptions about it, as statements about the quantum Hamiltonian of the entire universe? (It feels like a red herring.)
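The contrast Gil draws — error counts with Gaussian concentration versus error counts without it — can be made concrete with a toy sketch (this is not one of Gil’s actual models; the shared fluctuating error rate below is a made-up stand-in for correlated noise). With independent per-qubit errors the count is binomial and concentrates on a √n scale around its mean; a common fluctuating error rate makes errors positively correlated and spreads the count on a scale proportional to n:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, trials = 1000, 0.05, 20000

# Independent noise: each qubit errs with probability p -> binomial counts,
# which concentrate (std ~ sqrt(n p (1-p)) ≈ 6.9) around the mean n*p = 50.
indep = rng.binomial(n, p, size=trials)

# Toy correlated noise: a shared fluctuating error rate (think of a common
# bath) with mean p makes the qubit errors positively correlated, so the
# error count fluctuates on a scale ~ n, destroying Gaussian concentration.
rates = rng.beta(2, 2 / p - 2, size=trials)   # Beta(2, 38): mean p, fluctuating
corr = rng.binomial(n, rates)

print(indep.mean(), indep.std())   # ~50, ~6.9
print(corr.mean(), corr.std())     # ~50, several times larger spread
```

Both noise processes have the same mean error rate; only the correlations differ, and they alone decide whether the count concentrates.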

Comment #97 January 6th, 2013 at 6:25 am

Scott #48: There is a difference between Einstein’s situation and the QC situation. Einstein was working with unexplained experimental data, whereas QC seems to push theory beyond measured data, much like saying in the late XIX century that everything is explained. On the other hand, we have biological systems that may behave non-classically, e.g., we might already have built non-classical computers. In this respect, can anyone give a qualified opinion on the following paper http://precedings.nature.com/documents/6954/version/1 and its arXiv version http://arxiv.org/abs/1212.0109 ? I’m thinking of running similar experiments on human beings, and would like to get warnings from experts before increasing the amount of junk and misconceptions.

Comment #98 January 6th, 2013 at 7:34 am

Scott #86, have you read about the history of QFT, e.g., in S. Weinberg’s QFT textbook? Did you see D. Deutsch’s qubit field theory? And, finally, what do you think about J. Weizenbaum’s famous book? I think some ideas there may also be relevant.

Comment #99 January 6th, 2013 at 8:01 am

If the boundary between what “we [people] know” and what “we [oracles] know” were more scrupulously delineated, then (as it seems to me) many misunderstandings relating to both quantum computing and complexity theory might be averted.

This is (what I take to be) the motivation for Tony Zee’s remark in

Quantum Field Theory in a Nutshell. Consider for example the following particularization of Preskill’s general assertion:

The struggle for understanding, by many researchers, that led to Onsager’s solution (in 1944) of the 2D Ising model strikingly illustrates the crucial distinction between what “we know” and what “oracles know.”

As an antecedent to Tony Zee’s prescription, the complexity-theoretic work of Juris Hartmanis in the 1970s and 80s reads naturally (it seems to me) as an argument for a careful distinction between “we [people] know” and “we [oracles] know”; and on Dick Lipton and Ken Regan’s fine weblog Gödel’s Lost Letter and P=NP I have posted a seriously whimsical description of (what I take to be) the complexity-theoretic world that Hartmanis envisioned.

The Quantum Information Science and Technology (QIST) roadmaps of 2002 and 2004 (LA-UR-02-6900 and LA-UR-04-1778) were of course oracular documents, and Dyakonov’s preprint (arXiv:1212.3562) is well-justified in calling for discourse regarding these documents, as the QIST authors themselves agree:

The first QIST prophecy proved true. The second QIST prophecy was not fulfilled, in that no QIST version 3.0 ever appeared … and it is lamentable too that few (or none?) of the QIST panel members comment regularly here on Shtetl Optimized or on any public discourse forum.

And finally, in regard to John Preskill’s (outstandingly excellent, as it seems to me) preprint “Sufficient condition on noise correlations for scalable quantum computing” (arXiv:1207.6131v2), and Gil Kalai’s comment (#87) upon it, the one-paragraph summary-and-citation of quantum computing skepticism given by Preskill is (as it seems to me) about as fair as any one-paragraph summary could be … the entire introductory section of this preprint is wholly admirable … I wish only (selfishly) that this outstanding preprint were even longer and even broader!

—————

Conclusion: The disciplines of quantum computing and complexity theory would benefit from the Hartmanis/Zee/Dyakonov prescription to [1] carefully study the sobering lessons of history (especially thermodynamical history) and, in light of those lessons, [2] distinguish more scrupulously between what “we [people] know” and what “we [oracles] know”.

Comment #100 January 6th, 2013 at 10:09 am

Gil Kalai #95: yes, yes, yes, qualified yes, and yes!

For your point 4, I’m hesitant to say that QC would “represent a completely new reality,” for the simple reason that chemists and physicists have been dealing with the generally-enormous computational complexity of solving the many-particle Schrödinger equation for almost a century. For me, QC would represent a radical new way of harnessing and controlling that reality.

But it’s clear that we agree on a great deal, and in particular, that |Me-You|<<|Me-Dyakonov| !

(the above exclamation mark is not a factorial)

Comment #101 January 6th, 2013 at 10:33 am

Incidentally, there’s a wonderful concrete research problem for somebody, which is directly related to what Gil, Joe, John Sidles and others have been discussing above. Namely:

Prove that simulating the φ^4 quantum field theory, in some number of dimensions, is a BQP-hard problem.

This would be an obvious counterpart to the result of Jordan-Lee-Preskill, who showed that the problem is in BQP.

Comment #102 January 6th, 2013 at 12:48 pm

Joe (#93), your comment is an eye-opener, thank you very much. (I didn’t see it when I made my comments.) Scott (#100): Factorial??

Comment #103 January 6th, 2013 at 1:14 pm

Jay #90

I think Scott meant that the ECT is not a classical/fundamental law in the way the thermodynamic laws are.

(Actually lots of serious people think it’s simply false, so Levin does something like declaring “there are finitely many twin primes” an obvious law of nature without explicitly mentioning it.)

So not as in quantum/classical I guess.

John Preskill:

Could you please summarize your thoughts on ’t Hooft’s dissipative-determinism QG paper and QC for me?

In my opinion this is the type and order of advance someone has to make in improving current understanding, to a degree where it both recovers the success of the particle physics/QM experiments of past decades AND shows that the available complexity is actually weaker than _anything_ that may follow from recognizing QM, QFT (and even string theory, as noted above) as descriptions of physical reality rather than merely sorts of models.

1) Does BQP computation follow from the QM framework? If not, how about boson statistics?

The possibility of creating somewhat bigger coherent states?

2) Does anyone think that these theories are ultimate?

As for the latter: certainly not, but you’d expect to recover them if you built a bigger mathematical machinery which can explain why it reduces in the case of 3 qubits/photons and why not with 30. My argument is then that you can’t avoid making some supreme unification and progress while climbing the tree of mathematical physics to superclass everything, to a point where all the possibilities in 1)* look like theoretical artifacts. If someone calls this tricky-noise-model-correlations then so be it, but it would sound a bit of an understatement to me.

* So this leaves some room for concerns about BQP, but the BQP-hardness proof that Scott asks for would clearly be a big step toward resolving the OS/application worries, which I don’t really share. The “layers” in Kalai’s #95 have to stop somewhere and give rise to something that can review them.

Comment #104 January 6th, 2013 at 1:18 pm

Scott #48, cf. “No experiment ever done here on Earth has contradicted this model”; yet the model has no very clear relation to a quantum computational network with n qubits.

Comment #105 January 6th, 2013 at 1:23 pm

A complexity-theoretic particularization of John (Preskill’s) comment #92, in reference to the experimental particularization of comment #99, arises when we reflect that proof-checking is a physical process:

The great lesson of complexity theory is that the formal-system enthusiast Hilbert was wrong (despite the plausibility and well-received popularity of Hilbert’s general arguments) and the formal-system skeptic Gödel was proved right (via Gödel’s painstaking consideration of particular cases).

Similarly, a BosonSampling version is:

Conclusions: [1] John Preskill has provided us with an illuminating example of a principle that appears manifestly true when we consider it generally, yet generically proves false when we analyze it particularly. [2] The QIST Roadmap (in releases 1.0 and 2.0) severely underestimated both the programmatic perils and the strategic possibilities that are inherent in particularization. [3] Both complexity theory and quantum computing are endowed with a relative abundance of enthusiastic Hilbert-style generalists and a relative paucity of skeptical Gödel-style particularists.

Predictions arising from a synthesis of enthusiasm and skepticism: [1] A Nobel-worthy (in Scott’s phrase) thermodynamical proof, in the particularized skeptical style of Nernst, Onsager, and Gödel, microscopically grounded in orthodox quantum field theory (per the Carmichael/Zee/Weinberg texts, etc.), of the BosonSampling unattainability principle, as a 21st-century Fourth Law of quantum thermodynamics that naturally extends the 20th century’s Third Law in its Nernst unattainability form (per comment #85), will be found before any of the FTQC milestones of QIST 1.0 and 2.0 are achieved. Similarly, [2] the methods of the BosonSampling unattainability proof will substantially illuminate the (far more difficult) question of FTQC scalability, and most importantly [3] the methods of the BosonSampling unattainability proof, when applied to the practical problems of the 21st century, and also by their illumination of the foundations of quantum theory, will prove to be of immensely greater value than any quantum computing application presently envisioned.

Remark: Needless to say, we can all reasonably hope for this synthesis of enthusiasm and skepticism to be achieved, and this hope is (at present) more significant than whether it is in fact achievable. Because hope we can share immediately, whereas the future must always remain in doubt!

Comment #106 January 6th, 2013 at 1:31 pm

John Sidles #105:

The great lesson of complexity theory is that the formal-system enthusiast Hilbert was wrong (despite the plausibility and well-received popularity of Hilbert’s general arguments) and the formal-system skeptic Gödel was proved right (via Gödel’s painstaking consideration of particular cases).

Huh?? Not only is that not “the great lesson of complexity theory,” it has essentially nothing to do with complexity theory, and the understanding of those issues predates the founding of complexity theory by decades.

Comment #107 January 6th, 2013 at 1:40 pm

Alexander Vlasov #104:

No experiment ever done here on Earth has contradicted this model [the Standard Model Lagrangian, as summarized by Sean Carroll], yet the model has not very clear relation with a quantum computational network with n qubits.

Well, the Standard Model Lagrangian also lacks a “very clear relation” with the boiling of water, or rainbows, or volcanic eruptions, or frogs, or classical digital computers, or billions of other phenomena that we can experimentally observe here on Earth! The fact that there isn’t a “very clear relation” doesn’t mean there isn’t a relation, proceeding through several different abstraction layers. Elucidating those abstraction layers is what science and engineering are all about.

If you’re still in doubt, ask Sean Carroll whether he thinks the details of the Standard Model present any obstacle whatsoever to scalable QC.

Comment #108 January 6th, 2013 at 1:54 pm

Scott #107, please elucidate … or give any hint … my own paper about that was rejected from PRA as “not very clear.”

Comment #109 January 6th, 2013 at 2:02 pm

Scott #107, you are changing the topic. I wrote about a comparison between the quantum computational network model and the model that is standard in the QFT approach.

Comment #110 January 6th, 2013 at 2:05 pm

Scott, isn’t the thesis of Juris Hartmanis’ much-cited and well-respected (by many, including me) monograph Feasible Computations and Provable Complexity Properties precisely this? It is of course true that Hartmanis’ skeptical views regarding the oracular dependence of modern-day complexity classes are not presently well-covered in mainstream complexity theory texts … but don’t skeptical ideas by definition reside outside the mainstream? If complexity theory and/or quantum computing are stagnating a bit (as job-seeking students justly perceive), is that from too-many too-forceful too-irritating unfairly skeptical critiques (like Levin’s and/or Dyakonov’s)?

Or does the present era require (what von Neumann called) “a rejuvenating return to the source: the reinjection of more or less directly empirical ideas” … the empirical ideas that the very best skeptical gadflies (like Hartmanis and Kalai, as it seems to me) have historically provided?

Comment #111 January 6th, 2013 at 2:34 pm

Alexander Vlasov #108: Well, one natural way to realize the qubits in a QC is by using spin-1/2 particles with pairwise-interaction Hamiltonians. As it happens, the Standard Model provides such particles. One well-known example is the electron. Another is the proton. The interaction, in these cases, would be the electromagnetic one. Or you could also use larger composite objects like nuclei. Also, the particles might or might not be identical fermions or bosons—for present purposes it doesn’t matter since we’re only using the spin degrees of freedom. The Standard Model describes all sorts of higher-order effects on each particle—e.g., an electron is continually interacting with the photon field, the Higgs field, the neutrino field, and so on—but as long as no other particle actually gets emitted, all those interactions are in some sense “the electron’s own private business,” and won’t cause decoherence. For more details, see the entire friggin’ literature on QC implementations.

But I fear I’m wasting my breath (or rather, finger-strength). So far, unfortunately, none of your comments have shown the slightest tendency to want to learn anything from what I or others have tried to tell you…

Comment #112 January 6th, 2013 at 3:00 pm

have you read about history of the QFT e.g. in S. Weinberg QFT textbook?

Yes, I’ve read about the history of QFT, but I haven’t yet tackled Weinberg’s QFT textbook. I want to get through some easier books first, like Zee’s.

Did you see D.Deutsch’s qubit field theory?

I’ve actually been meaning to revisit that paper, but I haven’t done so yet.

And, finally, what do you think about J. Weizenbaum’s famous book?

I haven’t read that book, but only because the excerpts I’ve read of it were a poorly-argued, cringe-inducing mess. I felt embarrassed for the author. The book was published in 1976 and really didn’t age well; its “analysis” of the impact of computers on society seems like it missed the personal computer revolution and the Internet completely, and the ability of the computer (like almost any technology) to be either a liberating force or a force for repression, simply depending on how it’s used.

Weizenbaum earned a place in history for the ELIZA chatbot, but seems to have gone crazy thereafter. There’s a hilarious story in Steven Pinker’s The Blank Slate about a lecture Weizenbaum gave in the 1970s, in which Weizenbaum explained that AI research should be boycotted because its only possible application was to help the CIA and Pentagon kill the Vietcong, and declared himself “absolutely certain” that everyone in the audience would be dead by the year 2000. On point after point, Weizenbaum seems like he was about as dramatically wrong as it’s possible to be.

Comment #113 January 6th, 2013 at 3:39 pm

I regard the statement I made in #92 as a banal truism. Likewise, I’m not being disingenuous when I say that I don’t understand why anyone would disagree with my statement quoted by Gil in #87.

Naturally, when I said “we know everything about the noise,” I did not mean that humans are able to derive all the properties of the noise; I just meant that all properties of the noise are logically determined.

Regarding Gil #96, I did not mean to assert that it is necessarily “really useful” to think about the noise from so fundamental a viewpoint. Rather, physics typically progresses through the analysis of effective Hamiltonians, which provide a good approximate description of dynamics at sufficiently long distance and time scales.

In this paper I tried to clarify that point by saying on page 2:

“We should emphasize that this Hamiltonian containing many-qubit terms is not meant to be the fundamental Hamiltonian describing the interactions among elementary particles and fields; rather it is the effective coarse-grained Hamiltonian describing the residual interactions among qubits that have been “dressed” by integrating out high-frequency short-distance degrees of freedom. These residual interactions might be weak, and usefully characterized by an upper bound on the operator norm, even though the underlying interactions that dress the qubits are relatively strong. The qubits themselves might be quite complex composite objects.”
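The quantity Preskill’s quoted assumption bounds is the operator norm of each residual interaction term. As a minimal numerical illustration (the three-qubit register, the ZZ form of the coupling, and the value of g are all made up here, not taken from the paper), the norm of a weak two-qubit term is simply its coupling strength:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def kron(*ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# A toy residual interaction acting on qubits 0 and 1 of a 3-qubit register:
# H_int = g * Z⊗Z⊗I, with g a hypothetical (weak) coupling strength.
g = 0.01
H_int = g * kron(Z, Z, I2)

# Operator norm = largest singular value; for this term it equals |g|,
# so "weak residual interactions" literally means "small norm".
norm = np.linalg.norm(H_int, ord=2)
print(norm)
```

The point of the sketch is only that the norm condition is a statement about each coarse-grained term’s strength, regardless of how complicated the underlying dressed qubits are.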

The point of the paper is to show that scalable quantum computing works for a class of noise models. The conclusion is that if scalability fails then the noise model is wrong — i.e. cannot be realized under any conceivable physically realistic conditions.

This result does not by any means settle the debate regarding whether large-scale quantum computing is possible. To quote from the conclusions:

“The modest results derived here can hardly be expected to assuage the quantum computing skeptics, but may nevertheless help to clarify the debate.”

See the introduction and conclusions of the paper for further discussion.

Comment #114 January 6th, 2013 at 4:45 pm

John Preskill makes two points that (as it seems to me) are key to the present discussion:

The subtle and intricate field-theoretic considerations of works like Carmichael’s “Quantum Trajectories I: Background and Interpretation” and “Quantum Trajectories II: The Degenerate Parametric Oscillator”, in confluence with the practical requirement of high-coherence photon sources that couple strongly and adaptively to high-efficiency detector sinks, as associated to scalable Aaronson-Arkhipov experiments (and scalable FTQC devices too), definitely *do* help clarify the debate.

Namely, they show us plainly why the effective Hamiltonians and state-spaces that describe the Hamiltonian dynamics and the entropic thermodynamics of strongly-coupled systems (both classical and quantum) have historically been extraordinarily difficult to guess, observe, prove, and understand. Moreover, Carmichael’s analysis falls far short of the level of realism and detail required to assess the feasibility (or not) of scalable Aaronson-Arkhipov experiments, which in turn falls far short of the engineering sophistication required to assess the feasibility (or not) of scalable FTQC.

That is why even works of outstanding merit (like Preskill’s arXiv:1207.6131v2 in particular) only barely begin (in Preskill’s well-chosen phrase) to “assuage the concerns of quantum computing skeptics” … for the common-sense reason that skeptical concerns are entirely legitimate, being well-founded in the intricacies of quantum field theory, and well-founded too in the history of science, and well-founded too in post-QIST progress toward FTQC.

And so the great merits of quantum computing skepticism are that it pushes us to advance our theoretical understanding, and to appreciate the historical significance of our work, and to chart roadmaps that are feasible and flexible, toward 21st century objectives that are great and crucial. So, what’s not to like?

Comment #115 January 6th, 2013 at 5:01 pm

Gil Kalai,

Look, I must confess I was not thinking of anti-QC positions as potentially serious. Your single post #94 made me change my view, thank you.

Not that I’m convinced you have provided a model of noise that could do the job (maybe I just missed it in the GLL debate (which was a bit messy (or it’s me who was not trying hard enough))), but I think I now understand how it could happen without violating QM.

Would you agree that at least QED should then be proven wrong (or, more properly, not valid in the high-entanglement regime)? More specifically, do you expect the Aaronson-Arkhipov BosonSampling protocol to provide evidence for QED failure when (if) its experimental realisations include more qubits? Would you change your mind about quantum computer feasibility if no anomaly is found for this protocol?

@ Attila Szasz

Yeah, that’s how I read it too. I was just making the (side) point that naming the ECT thesis (either the classical version, first expressed by Gödel, or the quantum version, first well expressed by Feynman) after Church and Turing would inevitably lead to confusing complexity with computability, as in #105. No need to be surprised as in #106 😉

Comment #116 January 6th, 2013 at 9:15 pm

John Sidles #110: Look, I know you’re the #1 fan (probably well above Hartmanis himself!) of Hartmanis’s 1978 monograph. But it seems strange to call “Gödelian skepticism of formal systems” the “great lesson of complexity theory”—among other things, because the idea of proving complexity statements independent of various formal systems never panned out all that well! Yes, you can prove statements like NP⊄P/poly independent of some very weak fragments of PA, but those fragments tend to be so weak that they can’t even prove statements like the Pigeonhole Principle. And yes, you can interpret relativizing and even algebrizing proofs as “proofs that only use restricted sets of axioms”—but that interpretation came only after the fact, and hasn’t yet led to progress in complexity theory itself.

Sure, it’s possible that these things will lead to progress in the future—I’m guessing that you predict as much. My point is simply that you don’t get to call your personal prediction “the great lesson of complexity theory” before it actually comes true!

Comment #117 January 6th, 2013 at 9:34 pm

LOL … the largest part of my respect (and many people’s respect) for Juris Hartmanis’ work originates in Mark Twain’s epigram:

Thus we appreciate that without its skeptical Pudd’nheads — as we perceive them! — like Juris Hartmanis, the culture of complexity theory would be deplorably poorer!

Comment #118 January 7th, 2013 at 12:59 am

My question is: let’s say a researcher approached you now asking for a grant to see if he could make his donkey read; would you, as an NSF project manager, be willing to fund it?

If not, why not?

Comment #119 January 7th, 2013 at 3:55 am

Rahul #118: No, I almost certainly wouldn’t fund it, for reasons already alluded to in the post. What’s the science case for trying to teach a donkey to read? We already have entities that can read (humans, OCR software), so presumably the goal is to understand animal intelligence better? But in that case, the proposal can be criticized on numerous grounds:

(1) Most importantly, way too ambitious: have donkeys even been trained to respond differently to different colors and shapes? If not, why not focus on that, or on whatever is the simplest task donkeys haven’t already been trained at? Also, does the PI him- or herself have previous experience teaching animals to perform this sort of recognition task?

(2) Why donkeys in particular? Rats, mice, and monkeys seem like much better-characterized lab animals for psychology and cognitive science experiments.

(3) But in that case, there’s been something like a century of previous work studying learning in those animals. What is the PI proposing that’s new?

(4) An experimental proposal isn’t just a statement of what you want to do: much more important is what you’re trying to learn, and why. Will the donkey be placed in fMRI as it tries to perform “reading” or shape-recognition tasks? If so, what scientific questions are actually at stake?

For the reasons above, I’m almost certain this proposal would lose out in competition; at the very least, it would have to be significantly revised before I would consider it.

Comment #120 January 7th, 2013 at 5:49 am

For us skeptical Pudd’nheads, the recent formal proof-check (in Coq) of the Feit-Thompson Theorem, aka the Odd-Order Theorem, was among the most encouraging mathematical events of 2012/2013.

This formalization milestone is immensely reassuring regarding the integrity of modern mathematics, in the sense that very lengthy/difficult/intricate proofs, which previously were validated and verified by experts who relied crucially upon their personal intuition-and-experience, now are being objectively verified by machine, one by one … with no big problems being uncovered (so far, anyway).

And this same milestone suggests to us that a Nernst-style FTQC/BosonSampling unattainability theorem (in the sense of Comment #6) might be comparably lengthy/difficult/intricate to theorems like the Odd-Order Theorem. The point is that the Odd-Order Theorem is effectively an unattainability theorem, in the sense that an inhumanly exhaustive verification and validation (V&V) proof-synthesis has foreclosed the very human desire to find (for example) more finite groups than presently are known.

Question: Might a comparably exhaustive proof-synthesis similarly foreclose upon our very human desire for scalable FTQC and/or BosonSampling protocols? Conclusion: The 21st century already is a Golden Era for Hartmanis-style, V&V-oriented, skeptical Pudd’nheads!

Comment #121 January 7th, 2013 at 6:16 am

Thanks Scott; well reasoned indeed.

“Too ambitious” is in many ways subjective; but still I’ll ask:

Do you personally consider a Quantum Computer as too ambitious? Or not?

When a finite amount of funding needs to be disbursed, how strong is Quantum Computing’s claim?

In the “Probability × Payoff” metric, how big would you say those terms are?

Comment #122 January 7th, 2013 at 7:42 am

Rahul #121: These are complicated questions. But one answer is that, to whatever extent people care about fundamental issues in physics and computer science at all, I think quantum computing and information research has a great deal to be said for it. The world recently spent $11 billion to build the Large Hadron Collider, which pushed quantum mechanics a little bit further on the energy frontier. Quantum computing is about trying to push quantum mechanics vastly further on the “entanglement frontier.” It’s about seeing whether the arguably central claim of quantum mechanics since 1926 — that to describe n particles, you need a vector in a ~2^n-dimensional Hilbert space — is literally true or not. If it is true, we want to understand it much better, and maybe someday, even exploit it to solve hard computational problems. If it’s not true, we want to know what to replace it with.

In terms of fundamental advances over the last 20 years, I’d put quantum computing and information head-to-head with almost any other area. That’s true whether we’re talking about advances in experimental methods (some of which were recognized this year by Wineland’s Nobel Prize in physics), the impact on classical theoretical computer science (which now often uses quantum-inspired techniques), or the deep work on e.g. quantum error-correcting codes, nonabelian anyons, area laws, etc. and their connections with condensed-matter physics and quantum field theory (some of which was recognized this year by Kitaev’s Fundamental Physics Prize).

On the other hand, I would not fund quantum computing research solely out of a desire for practical devices in the foreseeable future, which we hope for but which are far from a sure thing. I can’t honestly make the case for QC research to someone who has zero interest in fundamental science. And I certainly wouldn’t get so carried away by QC that I starved key adjacent areas, like (to pick a random example) classical theoretical computer science.

If you want to discuss the actual allocations … well, I’m one of the worst people to ask since I don’t know the details! However, one thing I do know is that to some extent, the NSF decides on allocations based on “proposal pressure” — i.e., by trusting that researchers will “vote with their feet” about which areas are most promising or exciting at a given time. By that metric, I can tell you that a huge number of extremely smart prospective graduate students—more than we can handle, actually—have been “voting with their feet” that they want to do something related to quantum computing.
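Scott’s “~2^n-dimensional Hilbert space” claim is easy to make concrete: a classical simulation that stores the full state vector needs memory exponential in the particle number. A back-of-the-envelope sketch (assuming one complex128 amplitude, i.e. 16 bytes, per basis state):

```python
import numpy as np

def statevector_bytes(n_qubits: int) -> int:
    """Memory needed to store an n-qubit pure state as a dense vector:
    2**n complex amplitudes, 16 bytes each."""
    return (2 ** n_qubits) * np.dtype(np.complex128).itemsize

for n in [10, 30, 50]:
    print(n, statevector_bytes(n))
# 10 qubits fit in ~16 KiB; 30 already need ~17 GB; 50 need ~18 petabytes.
```

This is the naive dense-vector cost, not a lower bound on all classical simulation methods — which is exactly why the question of whether the exponential description is “literally true” has computational teeth.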

Comment #123 January 7th, 2013 at 8:13 am

Jay (#115) Yes, I think so too. If superior quantum computational power is not possible, then QED-computations in the range that requires superior computational power are irrelevant to our physical reality. The deterioration of relevance (if true) can perhaps be witnessed already in regimes where we can compute and experiment.

Comment #124 January 7th, 2013 at 8:22 am

Rahul, as an instructive mathematical case-history regarding appropriate levels of investment, please consider that the formal V&V of the Feit-Thompson theorem was a six-year joint project of Microsoft/INRIA — and great credit is due to these institutions! — that consumed (apparently) about 20-60 person-years of effort described as “almost full-time.”

Relative to the purely mathematical challenge of verifying the Feit-Thompson theorem (assuming the strict validity of QED field theory), it is natural to ask: is the purely mathematical challenge of deciding the attainability (or not) of scalable FTQC and/or BosonSampling easier, harder, or comparable?

To use again John Preskill’s well-chosen phrase, “probably most readers of *Shtetl Optimized* agree” that answering quantum attainability questions is proving to be *harder* than verifying even the longest and most intricate mathematical proofs.

The QIST roadmaps of 2002/4 envisioned that the mathematical challenges associated to deciding attainability would be finessed by experiment, the roadmap being in effect: “First we’ll demonstrate attainability empirically and *then* we’ll demonstrate it mathematically.” That did not happen, and nowadays “probably most readers of *Shtetl Optimized* agree” that it is not foreseeably likely to happen … which is why new roadmaps are needed, eh?

Comment #125 January 7th, 2013 at 8:28 am

Thanks again Scott for the very interesting exposition.

If NSF does indeed allocate the way you describe, that’s dangerous and may lend credence to Dyakonov’s spectre of a bubble: Researcher Interest -> More Funding -> More Interest.

Is Dyakonov’s criticism about the ratio of articles on Experiments : Proposals : Theorems valid in your opinion? If so, as a member of the field would you advocate steering more funding into the Experimental side?

PS. If I may request a post. Can you do one on the “Impact of QC on classical theoretical computer science techniques.” Apologies if you already have; I’m a late arrival to the party.

Comment #126 January 7th, 2013 at 9:34 am

Rahul #125: What alternative allocation system would you recommend? Having Congress decide? (Hello, NSF Institute for Creationism and Wish-Based Climatology!) Having the public vote directly on their favorite research areas, like *American Idol*? Anything I can think of has serious drawbacks, but the system we have has worked reasonably well since Vannevar Bush created it in the aftermath of WWII.

“Is Dyakonov’s criticism about the ratio of articles on Experiments : Proposals : Theorems valid in your opinion?”

No, it’s wildly misinformed. The amount of funding that goes to theory in this field is a *small fraction* of the funding that goes to experiments (not surprisingly or inappropriately, since experiments cost a lot more). A typical theory grant is in the hundreds of thousands (almost all for students, postdocs, summer salary, travel, and university overhead), while a typical experimental grant is in the millions. But without theory, of course, the experiments wouldn’t exist.

Admittedly, I don’t know about the ratio of *published articles*—it’s possible that the experimenters are too busy in the lab to write as many papers! Just as importantly, they tend to publish 10- or 20-author papers, whereas the same number of theorists might each go off and write their own papers.

Regarding your PS: See the wonderful survey Quantum Proofs for Classical Theorems by Drucker and de Wolf. There have been many more examples in the few years since that survey was written, such as the quantum-communication-complexity-inspired lower bound for linear programming formulations of the Traveling Salesman problem (which won the STOC Best Paper Award), or my linear optics proof that the permanent is #P-complete.

Comment #127 January 7th, 2013 at 10:04 am

@Scott:

Maybe the barrier for acceptance of a theory publication ought to go up then? The fact that theory is “cheap” to produce does not imply we should get so much of it.

Oh, I wasn’t intending any of those extreme methods of funding allocation that you brought up (jokingly, I hope!). Just an expert panel of diverse scientists, perhaps. They probably already have one; but maybe they are yielding too much to proposal pressure? Or perhaps the people on the panel are too awed by what’s currently “cool” (and no doubt QC is, right now! Can’t think of anything outside of the LHC that’s so “in”)?

I don’t know. To me the more general worry is: if a research sector is indeed over-hyped, would we recognize it, and how?

Thanks for the links.

Comment #128 January 7th, 2013 at 12:16 pm

Rahul #127: The barrier for acceptance *is* quite high at the premier theory conferences, like STOC, FOCS, and QIP. Just like in any other field, there are also lower-tier conferences (conferences are often more selective than journals in this field), but I don’t see the problem with that. Server space on the arXiv for scientific preprints is not a scarce resource that’s going to run out, and in general, it’s incredibly hard to predict which papers are going to become important in the future, and something that’s uninteresting to most people can be extremely interesting to a few. There’s some cost to other people’s time to *referee* lots of mediocre papers, but that’s it.

As for “over-hyped research sectors”: if it makes you happier, I can tell you that there are “hot areas” in CS right now compared to which quantum computing isn’t even a dust speck. Try looking up: “Big Data,” wireless sensor networks, multicore programming, cloud computing. I can also tell you that, while federal funding for QC research is not as bad as in many other parts of science, the outlook for *tenure-track faculty positions* in QC has never been great — partly just because the field is interdisciplinary (falling between CS, physics, and several other things), but partly because departments mistakenly view QC as a question of technology rather than fundamental science, and think that if QC doesn’t become practical in 5 years then they’ll have “wasted” a faculty slot.

Anyway, the way our funding system — the one Dyakonov denounces — works, yes, it’s basically up to the researchers who are excited about something and have a positive proposal to make their case to the rest of the scientific community and the government. It’s not the responsibility of those researchers to refuse funding on the theory that someone somewhere else, who hasn’t bothered to tell anyone about it, might have a better use for the money. That person should speak up!

Comment #129 January 7th, 2013 at 1:00 pm

Hi let me try again to make my point. I think it is trivially obvious that there are very loud noisy noise models under which QC is impossible. Equally true but much less obvious before the breakthrough papers, there are (at least theoretically) reasonable noise models under which QC is possible. So there will be theorems going both ways. As John Preskill said, the crux of the issue is which noise model(s) is correct or realizable. (So far I have added nothing new to the discussion)

Also not new, but underemphasized in my opinion, is the proposition that the fundamental noise in QM is not thermodynamics but collapse/decoherence. Where we need further progress, both theoretical and experimental, is in discovering and stating the “laws of collapse” or the “laws of decoherence.” To my understanding/speculation, these are the “facts” that will determine the possibility and difficulty hence practicality of QC.

Scott says in #123 that fundamental physics is the best justification of QC and points to the verification of an exponential Hilbert space.

I agree that fundamental physics is an excellent justification for QC and point in addition to the necessary advance of “solving the measurement problem” by going beyond von Neumann to specify when, where, and how projection occurs, not as a matter of philosophy, but as a matter of theoretically stated and experimentally verified physics.

Comment #130 January 7th, 2013 at 1:30 pm

Jay #115: For a theory like QED to be wrong “at the high entanglement regime” is like saying that it would fail when you prepare the inputs to the experiment according to a correlated probability distribution instead of an uncorrelated distribution. To continue the operating system metaphor, this would require changes to the operating system of the universe (quantum mechanics) and could not be achieved by anything at the software level (the QED Hamiltonian).

It is possible for QED to do something different for *specific* entangled states, but this too would be very difficult, since it’s hard to detect entanglement locally and every physical theory ever invented has been local. Overturning this would contradict what we know about both QM _and_ relativity.

There is one concrete noise model of Gil’s (smoothed noise) that he thinks is (a) physically realistic, and (b) is inconsistent with FTQC. My view is that (a) it is physically unrealistic, since it violates locality, and (b) although it violates the _hypotheses_ of the threshold theorem, it nevertheless is compatible with FTQC. However, Gil and I have not yet talked through our differences here, which is my fault for falling behind on emails.

Kuperberg #51: Comments like yours make me wish this blog had a way to “+1″ comments.

Comment #131 January 7th, 2013 at 1:42 pm

First, I acknowledge my mistake in Ref. 50 (polylogarithmic, not polynomial, slowdown).

Scott Aaronson and John Preskill (#34) have formulated my main points rather accurately, and I am happy that they (and hopefully others) agree with some of them. Below are some comments.

Preskill (#34):

(1) Quantum computing is overhyped.

This is putting it very mildly. There is an unprecedented level of hype and deception of the scientific community and the general public. It goes mainly along the following lines:

a) The feasibility of scalable QC has been rigorously proved via the threshold theorem

b) Substantial experimental breakthroughs on the way to QC are occurring all the time

c) QC is going to be the new technological revolution of the 21st century (quantum internet, etc)

Obviously, there are many people in the QC community who understand the distinction between a scientist and a salesman and (though not actively protesting) do not contribute to the hype; however, there are many others who do. Otherwise, where would all the hype come from?

(2) Experimental progress, though impressive in some ways, has fallen short of some optimistic projections.

Indeed, it is impressive that using advanced modern techniques one can experimentally manipulate several qubits and obtain the expected results: “…we run a three-qubit compiled version of Shor’s algorithm to factor the number 15, and successfully find the prime factors 48% of the time” (Nature, 2012). However, the existing experimental progress is comparable to the breadcrumb breakthrough in my Nasreddin story.

Apparently the theorists are not curious enough to ask the experimentalists what prevents them from upgrading their experimental setup from 5 to 50, and eventually to 5,000,000 qubits. Thus some enlightening information is missed.

(3) Large-scale quantum computing is still a long way off and might never work.

I prefer the more precise formulation by Aaronson:

A) The feasibility of scalable quantum computing in the physical world remains open.

In my opinion, we still do not know whether scalable QC is possible or not. (I mean in the physical world; I have no opinion on its feasibility elsewhere.) To my knowledge, this statement has not appeared so far in any publication, while the opposite statement a) above has been repeated innumerable times during the last 15 years.

(4) Conclusions about the scalability of fault-tolerant quantum computing rest on assumptions about noise that might prove to be unwarranted in some physical systems.

In the original proofs of the threshold theorem these conclusions rest on a large number of assumptions (e.g. zero interaction between qubits, etc.) which, strictly speaking, cannot be satisfied in any physical system. However, with sufficient effort they might be approached with some limited (but not arbitrary!) precision. My point is that to judge scalability, the required precision for each assumption should be established. Has this been done?

Daniel Gottesman (#49) says: “In particular, he claims a number of the assumptions behind the threshold theorem haven’t been examined for robustness, but all of the ones he names have, in fact, either been considered explicitly or implicitly, as a consequence of more general results”. … “We *do* know, contrary to Dyakonov’s belief, that there are fault-tolerant thresholds when we relax these assumptions. And actually the thresholds are a complicated high-dimensional surface because there are many potential types of errors and you can relax multiple assumptions at once”.

For an eventual QC designer this doesn’t sound encouraging. He would like to have numbers: the |0> state should be defined with a precision epsilon1, the interaction between neighboring qubits in dimensionless units should be less than epsilon2, and so forth. Only after having the required precisions (and not just the rather meaningless “error per qubit per gate”) will we be able to decide whether scalability is possible or not. And if at least one of these epsilons turns out to be, say, 10^(-12), we can forget about QC.

I do not claim that “the assumptions behind the threshold theorem haven’t been examined for robustness”, rather I claim that the required precisions have not been established so far.

When I imagine a million CNOT gates (electromagnetic pulses) applied in parallel to pairs of qubits within the quantum computer at any given moment, I can hardly believe that the beast can survive. I firmly believe that a machine of sufficient complexity operating with continuous variables will never work, and if the theory says otherwise, it means that something has been overlooked.

(5) Applications of quantum computing are limited. Or (Aaronson):

B) The applications of quantum computing would probably be real but specialized,

That’s right, very specialized, no technological revolution in sight. I explain in my article why factoring by Shor does not make much practical sense. As to simulating quantum systems, the subject is rather vague. What exactly are we going to simulate, and what would be the benefits of such simulations? Is it worthwhile to build a $10^{10} quantum computer operating with electron spins to simulate some other spin system? Why not simply assemble the desired spin system for only $10^{6} and measure its physical properties? Or maybe we will simulate QCD on a 4D space-time lattice? How should we proceed, and what are we supposed to learn?

Preskill says (#18)

I am also inclined to agree with Dyakonov that simulation of quantum systems is the most important known application for quantum computers, and that theorists should do a better job of characterizing the advantage of quantum computers in simulation problems, and of conceiving applications that might have high impact on science and engineering. Probably most people working in our field would agree.

I am happy to find a point of agreement here. I think, though, that first understanding why we need a QC, and only then trying to build it, would be a reasonable sequence of actions.

Aaronson says: “he somehow, unaided by argument, arrives at the conclusion

C) Quantum computing is a failed, pathological research program, which will soon die out and be of interest only to sociologists.”

The terms “failed, pathological” are not mine, but the general sense is correct. This view is supported by:

a) Unprecedented hype

b) Absence of a clear understanding of why we need QC

c) Hundreds of absolutely irresponsible proposals for “quantum computing with …” (just google this and see the number of hits)

d) Very low percentage of experimental work, all with only a few qubits. Apparently, going from 5 qubits to 50 presents nearly insurmountable difficulties

e) Crowds of researchers “doing” QC. In the past, all big achievements in science and technology were produced either by individuals, or by small research groups, without fanfare

f) Overproduction of theoretical CS-style work without any contact with experiment

In my opinion, all this shows that “something is rotten” in the QC kingdom, and it seems to me that this situation is not healthy and not tenable in the long term. QC research will not die out completely (nothing does), but it will return to a much lower level, with a few enthusiasts quietly working without much outside attention. It is possible that they will find something interesting.

Again, I agree with Preskill (#34): “Whether quantum computing is a bubble history will ultimately judge.” Amen.

Additional comments.

The formal theory of QC, and in particular the theory of fault-tolerance, is purely mathematical. To successfully work on it, one does not need to understand and “feel” physics, nor even quantum physics; it is sufficient to remember the quantum postulates on the first pages of a textbook. In a way, this is normal, just as classical CS does not need an understanding of what a semiconductor, a transistor, or an RC circuit is.

However, it becomes normal AFTER the hardware is built, even in some rudimentary form, by people who understand such things, otherwise the theory would remain suspended in air.

In the case of QC, for the time being the main questions are: on what principles should the hardware be constructed, and can functioning, scalable hardware be built at all? I believe that to answer these questions one needs to go beyond the first pages and develop a more extended vocabulary than just: qubits, gates, errors, measurements, and “approaching with arbitrary precision any unitary transformation on n qubits by an appropriate number of gates from the universal set”.

Let me give some examples.

Qubit. Do you want the two states to be degenerate, or to have different energies, like a spin in a magnetic field? This makes a great difference, and there are serious problems both ways. By the way, in the Earth’s magnetic field an electron spin precesses with a frequency of ~1 MHz. Is this OK with you, or should we shield the Earth’s field? If yes, with what precision? There are a lot of questions about just one qubit, and many more when there are many.

Noise. There is a huge literature, accumulated over maybe the last 80 years, on relaxation and decoherence of various 2-level systems, with which the QC theorists are mostly unfamiliar. Instead of learning what really happens, they prefer to consider various noise models, which are not always relevant. AND there is this mysterious thing that I mention in Ref. 34: the 1/f, or “flicker,” noise (f is the frequency, and 1/f is the noise power spectrum; yes, the integral diverges). This type of noise is present, to some extent, everywhere. Does this fact mean anything for fault-tolerance?
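[Editorial note: Dyakonov’s 1/f point is easy to play with numerically. Below is a minimal, illustrative sketch, not tied to any particular qubit implementation: it synthesizes a noise trace with a 1/f power spectrum by frequency-domain filtering of random phases, then recovers the spectral exponent from the periodogram. The trace length and unit sampling rate are arbitrary choices.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 14

# Frequency-domain synthesis: amplitude ~ 1/sqrt(f) gives a power
# spectral density ~ 1/f ("flicker" noise).  DC is set to zero.
freqs = np.fft.rfftfreq(n, d=1.0)
amp = np.zeros_like(freqs)
amp[1:] = 1.0 / np.sqrt(freqs[1:])
spectrum = amp * np.exp(2j * np.pi * rng.random(len(freqs)))
signal = np.fft.irfft(spectrum, n)          # real time-domain trace

# Recover the spectral exponent from the periodogram (drop DC and
# Nyquist, whose phases are constrained for a real signal).
psd = np.abs(np.fft.rfft(signal)) ** 2
mask = (freqs > 0) & (freqs < 0.5)
slope, _ = np.polyfit(np.log(freqs[mask]), np.log(psd[mask]), 1)
print(round(slope, 3))                      # close to -1: a 1/f spectrum
```

The divergence Dyakonov mentions is visible here too: halving the lowest retained frequency adds a constant amount of power per octave, so the integrated power grows without bound as the observation time grows.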

Gates. Most probably, the duration of a gate and its accuracy are inversely correlated. Doesn’t this impose some constraints?

We can leave the whole multitude of such questions to the Future Quantum Engineer, skilful but modest, and believe that our very general considerations are quite sufficient. I personally think that all these issues are quite important and relevant to the problem of scalability.

And finally. Quantum mechanics is a linear theory, and this is very important for fault tolerance. Classical mechanics is not, and because of this classical evolution is generally chaotic and irreversible (see my example in Sect. 9), which, among other things, is important for statistical physics. On the other hand, there would certainly be no harm in describing the motion of billiard balls or planets by the linear Schrödinger equation. How can we reconcile these two approaches, which lead to different conclusions? (See Ref. 41.) Also, it should be noted that, besides a million or so qubits, our quantum computer will contain quite a lot of classical devices. We assume them to be under our perfect control, but it might well happen that this is not exactly true.

There are more things in heaven and earth, Horatio,

Than are dreamt of in your philosophy.

Comment #132 January 7th, 2013 at 2:12 pm

A recent, in-depth, quantitative analysis of this crucial and richly structured topic (that includes abundant further references to the literature) is Dereziński, De Roeck, and Maes, “Fluctuations of quantum currents and unravelings of master equations” (*Journal of Statistical Physics*, 2008).

Comment #133 January 7th, 2013 at 2:54 pm

Michel Dyakonov #131: Thanks very much for taking the time to reply. There are many things to say, but let me confine myself for now to four points:

(1) You write about QC researchers who don’t contribute to the hype but don’t actively protest it either. One of the main things I’ve done on this blog for the past 7 years is to actively fight what I saw as irresponsible QC hype — I’ve seen it as a professional obligation, and I’m happy to say that several of my colleagues (e.g. Greg Kuperberg and Umesh Vazirani) have as well. On the other hand, I confess that fighting hype is not a very fulfilling full-time career, especially for a more junior person. What’s the point of fighting if you don’t have something better to offer? So, mostly I try to spend my time on research and teaching that could lead somewhere new or interesting (which for me, is *much* less about a new kind of computer than about a deeper understanding of physics and computation). That’s the only reason why I haven’t spent even more of my energy fighting the hypesters than I have.

(2) Regarding the motivations for building a QC: I think that, ironically, you’ve done a wonderful job of helping to illustrate one of the more important ones yourself! Namely: if serious physicists such as you firmly believe that it can never work, then clearly, a demonstration that it *did* work would be of considerable *physical* interest (at least comparable, I would say, to discovering a new elementary particle), even setting aside possible applications. In other words, the more you argue that QC can never work, the more you also undermine the case that it wouldn’t be all that interesting if it did.

(3) You write about theorists who haven’t bothered to ask the experimentalists why it’s so hard to scale up from 5 qubits to 50. Based on my experience, that characterization is flatly false. I’m as far on the theoretical side as you can get (mostly I’ve worked not even on quantum algorithms, but on *limitations* of quantum algorithms), yet even I get opportunities to meet experimentalists all the time, tour their labs, and talk to them about scalability issues. To some extent, the things they say wouldn’t surprise you very much: decoherence rates too high, good enough single-photon sources not available, strange crosstalk between qubits that increases when a new qubit is added. By and large, though, I’ve also found the experimentalists are *incredibly optimistic* about the prospects for solving these problems. In fact, they tend to be *much more* optimistic than the theorists are, an observation that I don’t know how to square with the story you’ve told!

(4) You ask about how to reconcile the nonlinearity of classical physics with the linearity of the Schrödinger equation. While this isn’t directly relevant to the “main” discussion, I think I can answer that. Quantum mechanics is linear only at the level of the *amplitude vectors*. It can be just as nonlinear as classical physics at the level of measurable objects, whether they’re qubits, electrons in a field, or whatever. Conversely, if you formulate (say) classical statistical physics abstractly enough, as a theory of transformations of probability vectors into other probability vectors by means of stochastic matrices, then it too will be linear; it’s only nonlinear if you look at the actual elements of the ensemble rather than at the distributions. Therefore, this isn’t a classical vs. quantum issue at all, but simply an issue of which level of description you’re interested in. (It so happens that in the quantum case, people more often prefer the abstract level of description, both because that level already contains the highly-nontrivial phenomenon of interference, and because a “concrete” description tends to be difficult or impossible. But again, classical probability is just as linear as quantum probability when you formulate them in a parallel way.)

Comment #134 January 7th, 2013 at 3:18 pm

Scott, this assertion makes it natural to wonder:

• Are experimentalists confident enough to express their optimism in writing?

• Does present optimism differ materially from the QIST roadmap’s optimism of 2002-4?

If either answer is “no”, then “incredible optimism” is indeed “incredible.”

And if either answer is “yes”, then please provide details!
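[Editorial note: Scott’s point (4) in comment #133 — that classical statistical physics, formulated on probability vectors, is just as linear as quantum mechanics on amplitude vectors — can be checked directly. A minimal sketch; the stochastic matrix below is an arbitrary illustrative example, not drawn from any physical model.]

```python
import numpy as np

rng = np.random.default_rng(1)

# A column-stochastic matrix maps probability vectors to probability
# vectors, just as a unitary maps amplitude vectors to amplitude
# vectors.  Linearity holds at the level of distributions.
T = rng.random((4, 4))
T /= T.sum(axis=0)               # normalize columns: stochastic matrix

p = np.array([0.5, 0.2, 0.2, 0.1])
q = np.array([0.1, 0.1, 0.4, 0.4])
a, b = 0.3, 0.7                  # convex mixture weights

lhs = T @ (a * p + b * q)        # evolve the mixed distribution
rhs = a * (T @ p) + b * (T @ q)  # mix the evolved distributions
print(np.allclose(lhs, rhs))     # True: classical evolution is linear too
```

The nonlinearity only appears when you track individual ensemble members (actual trajectories) rather than the distribution over them, which is exactly the level-of-description point.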

Comment #135 January 7th, 2013 at 4:19 pm

John Preskill wrote (#113): “The point of the paper is to show that scalable quantum computing works for a class of noise models. The conclusion is that if scalability fails then the noise model is wrong — i.e. cannot be realized under any conceivable physically realistic conditions.”

Dear John, I gave the model you described in your paper a great amount of thought (but my argument below is heuristic, so I may be wrong). I have a technical mathematical reason for why I think the model neglects a crucial aspect of realistic noise. Let us say that we have 10^7 qubits, that we can handle a noise level of 10^-2, and that our quantum computer is below such a noise level.

Now we look at a single computer cycle and ask how many qubits are corrupted. The expected number is, say, 30,000, well below the threshold. It looks to me that a consequence of your noise model is that the actual number of errors will be *very concentrated* around 30,000 — perhaps (with high probability) 30,000 plus or minus 500. What I expect realistically for such systems is that the fluctuations in the number of errors will be much higher — of the same order of magnitude as the error rate itself. This point is *not* directly relevant to quantum fault-tolerance. But it suggests that your noise model leaves out something crucial and cannot be realized under conceivable realistic conditions.

Comment #136 January 7th, 2013 at 4:44 pm

Jim Graber #129:

I think it is trivially obvious that there are very loud noisy noise models under which QC is impossible. Equally true but much less obvious before the breakthrough papers, there are (at least theoretically) reasonable noise models under which QC is possible. So there will be theorems going both ways. As John Preskill said, the crux of the issue is which noise model(s) is correct or realizable.

Yes, all of that is indeed “trivially obvious,” but there’s a further point I would add. Namely, as of today, extremely few interesting noise models have been shown to make QC *not* possible. A priori, it didn’t have to be that way: why shouldn’t (say) 5% depolarizing noise have made all quantum circuits efficiently simulable by a classical computer? But it’s not that surprising, once you accept my basic point that universality is much harder to avoid than to achieve (see comment #83).

Yes, I believe it’s known that you can make scalable QC impossible by increasing the noise to 90% or something, so that the output is close to uniform regardless of what you do. (Of course, such an assumption would also make *classical* computing impossible!) And it’s also known that, with noise rates as “low” as 45-50%, BQP collapses with BQNC: in other words, all quantum computations fall apart after at most logarithmic depth. Alas, that result has never been a very convenient one for the skeptics, since it’s *also* known (see Cleve-Watrous 2000) that logarithmic-depth quantum circuits are already enough to implement Shor’s factoring algorithm!

(If I got anything wrong above, or if any of my information is outdated, hopefully the experts will correct me.)

Comment #137 January 7th, 2013 at 5:17 pm

Scott, the above claim is narrowly true, yet broadly false. Consider for example the (implicitly non-Pauli?) class of noise models that is specified in Carsten Henkel’s “Magnetostatic field noise near metallic surfaces” and references therein. This work makes zero mention of FTQC and/or BosonSampling experiments (even the words “computer” and “computation” are not mentioned), and so this class of noise models is consonant with your assertion … provided we construe the assertion *very* narrowly. Yet every BosonSampling experiment ever proposed and/or ever performed strongly couples photon-source/photon-sink conduction bands to high-finesse cavities … and thus injects precisely this kind of spatially-correlated noise into FTQC/BosonSampling systems … strongly, ubiquitously, and generically.

Comment #138 January 7th, 2013 at 5:29 pm

John #137, I looked at that paper, and I don’t see how it’s relevant to what we’re talking about. Does it show, even implicitly, that if such-and-such noise is present then there’s a polynomial-time classical simulation? Or any result of that kind? You can’t just give maybe-or-maybe-not-marginally-relevant links as a substitute for arguments!

Comment #139 January 7th, 2013 at 5:54 pm

Scott, the article cited as reference [6] by Carsten (per #137) briefly surveys *noise implications for quantum computing* (in particular they derive a set of fluctuation/dissipation/entanglement theorems) … regrettably it appears these authors subsequently have focused their attention mainly upon *quantum spin biomicroscopy* … do you think there’s any substantial prospect that the methods of these two fields — QIT and quantum spin biomicroscopy — may now be converging, as *both* disciplines press against fundamental limits to observation, computation, and simulation?

’Cuz that would be *fun*, eh?

Comment #140 January 7th, 2013 at 9:23 pm

Scott,

Do you ever feel like you’re being forced to play a game of whack-a-mole? Sort of looks that way sometimes, at least to me.

Comment #141 January 7th, 2013 at 9:28 pm

Mike #140: Yes.

Comment #142 January 8th, 2013 at 1:02 am

Gil #135: You say: “What I expect realistically for such systems is that the fluctuations in the number of errors will be much higher – of the same order of magnitude as the error-rate itself.”

I’m not surprised that you think that — it is consistent with other statements you have made.

But I don’t see why what you “expect realistically” should apply in general. Would you say the same for classical bits?
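[Editorial note: the disagreement between #135 and #142 is quantitative. Under the independent-noise assumption that Gil is questioning, the per-cycle error count is binomial, so its standard deviation is sqrt(np(1-p)), a vanishing fraction of the mean for large n. A minimal sketch; the error probability below is chosen, hypothetically, so that the mean matches the 30,000 figure in comment #135.]

```python
import math

# Independent-noise sketch: each of n qubits is corrupted independently
# with probability p per cycle, so the error count is Binomial(n, p).
# p is hypothetical, picked so the mean equals 30,000 as in comment #135.
n = 10**7
p = 3e-3
mean = n * p
sigma = math.sqrt(n * p * (1 - p))

print(int(mean))     # 30000
print(round(sigma))  # 173, so "plus or minus 500" is roughly a 3-sigma window
```

The relative fluctuation here is sigma/mean ≈ 0.6%, which is the concentration Preskill’s independent model predicts; Gil’s claim is precisely that realistic correlated noise would break this binomial scaling.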

Comment #143 January 8th, 2013 at 3:35 am

Here are some remarks on Michel Dyakonov’s comment (#131).

1) I agree with Michel that noise, quantum fault-tolerance, and the threshold theorem are at the heart of the matter.

2) Michel’s intuition and specific ideas that the conditions of the threshold theorem cannot be met in practice for large systems are serious and interesting. The principal reason for failure of QC, in principle or in specific implementations, is simply that the rate of noise will scale up. It is for theoreticians like us to explore how such intuitions can be expressed in formal mathematical terms. Michel’s remark regarding the great dependence of the situation on crucial aspects of the specific implementations, which are not part of the abstract model, is an important point.

3) Looking at 1/f noise is something that was raised also in my debate with Aram Harrow. On the face of it, 1/f noise is not in conflict with the fault-tolerance assumptions, but deeper understanding of 1/f noise and the principles leading to it may be relevant.

4) I agree with Michel that the tension between the nonlinearity of classical physics (and chaotic phenomena) and the linearity of the Schrödinger equation is a (very) relevant issue. We talked here about the additional layers of “laws of physics” on top of the quantum probability “operating system”. Classical mechanics is an important layer! This point is closely related to my post on symplectic geometry and quantum noise.

Let me move on to issues where I disagree with Michel, repeating a few things I already wrote to him privately. A couple of months ago I had a pleasant email exchange with Michel, and I responded to him in person on some of the issues he raised here and in the paper. I was a bit worried by what I perceived as Michel’s negative views on mathematicians and theoreticians, which is, perhaps, why I did not write to him all these years, but in the end our email exchange was very pleasant.

5) Regarding Michel’s point number (5), which claims that building a quantum computer will not be a revolution, here is what I wrote: “I beg to disagree. In my opinion, factoring large numbers would be an amazing revolution, and simulating some spin chains (in the strong sense of how QC simulate quantum evolutions) will also be an amazing revolution, and, if quantum computers can be built, this will be a completely new reality of human ability to create and control quantum evolutions and, even more, possibly a completely new reality in the type of quantum states and quantum evolutions witnessed in reality.”

6) “The feasibility of scalable quantum computing in the physical world remains open.”

The main difference between Michel’s point of view and mine is that I regard the feasibility of quantum fault-tolerance and computationally superior quantum computing as a great scientific problem. Michel asked me to explain why I think so, and here is what I wrote:

“Hmm, this is a very good question. Of course, it is quite possible that my feeling that this is an important and fruitful question is wrong, and it is even more possible that, regardless of the importance of the problem of QC feasibility, my own efforts to study it are misguided. This is a standard professional risk in our business, and overall, since we academics have quite sweet lives, I don’t see much reason to complain about any aspect of it, including the aspect of self-illusion and self-deception.

Nevertheless, I do believe that the question is important (or at the very least, potentially important), and I would like to explain why. The main issue in trying to understand the assumptions of the threshold theorem is how to model a quantum evolution when many degrees of freedom are neglected, or, in other words, what kinds of approximations for pure quantum evolution are realistic. I think that this is a very important issue. Quantum computing is a setting to think under the same umbrella about very different areas, models, and devices of quantum physics. In this setting much of the specific physics is abstracted away.

The notions of quantum codes and quantum fault-tolerance also seem very general, and they are manifested in many different areas. My specific idea is to take “no quantum fault-tolerance” as the guiding principle in modeling quantum decoherence, and more generally for studying approximations of pure quantum evolutions.”

7) While the question is wide open, I believe there are good reasons to be optimistic that the fundamental problem regarding the feasibility of quantum fault-tolerance and quantum computers will be resolved in the foreseeable future. The major experimental efforts in many directions are crucial.

8) Six years ago, in 2006, Michel wrote a paper about quantum computing, and a colleague at Yale wrote to me: “the attached paper by Dyakonov claims that the threshold theorem for fault tolerant quantum computation assumes the existence of some perfect gate operations and therefore has no physical relevance. This is apparently attracting attention in the funding agencies.”

Here are a few things I wrote then in reply (they are related to some of the points discussed here).

“The very short answer is: The threshold theorem for fault tolerant quantum computation DOES NOT assume the existence of some perfect gate operations. (My own paper challenges the possibility of uncorrelated errors on entangled qubits but this is not simply based on imperfect gates but requires entirely different models of decoherence than current models.)…

My impression is that the paper reflects mainly Dyakonov’s disbelief in quantum computers or dislike of the endeavor. Of course, there are already other prominent people like Laughlin, Levin and ’t Hooft who are very skeptical about the feasibility of quantum computers, and this is completely legitimate.

I do not think any of the papers written so far in the skeptical direction (including my own) give anybody reason to change her or his a priori position (or subjective probability) about the feasibility of quantum computers. At best they are part of the infrastructure which will eventually enable us to build quantum computers or to understand the obstructions to building them. A few skeptical papers suggest some “benchmarks” which may come much earlier, before a full-fledged quantum computer is built. And if such benchmarks cannot be met, the challenge would be to offer some interesting explanations, or at least some mathematical framework which may support such an explanation. Essentially this is my direction. (Let me also mention that Scott Aaronson offered a scientific framework to study skeptical opinions on quantum computers, mainly from the point of view of computational complexity.)

Funding: I do not know much about funding, am always surprised and happy that society offers funding for basic research, and regard it as a reflection of society’s fine values. I do not think any of the skeptical scientific papers give reason to change any current funding policies.”

9) Overall, I personally do not like “all-out” hostile attacks against a scientific discipline, and I prefer the tedious discussion of specific scientific issues on the technical and conceptual levels. This applies to the recent disappointing attack on “quantum discord” papers within quantum information that took place on some quantum blogs, and it is also how I feel regarding Michel’s paper.

Comment #144 January 8th, 2013 at 3:41 am

Mike, the history of technology development efforts is very largely the history of mole-whacking, effective or otherwise. It commonly happens that the number and agility of the moles are both severely underestimated in the early stages of a project … audio recordings exist of John von Neumann describing the innumerable scalability/attainability “moles” that afflicted the development of classical computing; moles that were “whacked” only by an enormous and well-coordinated effort. And of course there is the sobering example of efforts like fusion power, in which plasma instability moles have for decades appeared dismayingly faster than they have been whacked.

So one lesson of history is that a crucial stage of any technology effort is not the moment when hordes of agile moles appear — which always happens — but rather what happens next. It’s not so easy to find historical studies of mole-whacking that are cogent, inspiring, detailed, and technologically accurate (indeed such analyses have only begun to appear in the history of science rather recently). Three that were well-reviewed and that convey important lessons-learned (as it seems to me) are John Murray’s First to Fly: the Unlikely Triumph of Wilbur and Orville Wright (2003), Harry Collins’ Gravity’s Shadow: the Search for Gravitational Waves (2004), and Stephen Johnson’s The Secret of Apollo: Systems Management in American and European Space Programs (2002). All three works emphasize the crucial role that team-building plays in mole-whacking, and it is the driest of the three books (Johnson’s) that expresses this best:

Is it any wonder that technology development efforts (like FTQC) provide so many opportunities for apropos quotations from Twain, Dickens, and other students of the human condition … not to mention their modern heirs The Far Side, XKCD, Dinosaur Comics, and Perry Bible Fellowship? It is because, for both better and worse, technology development efforts illuminate for us not only the marvels of how quantum systems work, but also the tragicomedy of how human systems work. And who is to say that the worth of what we learn about the latter is not comparable to the worth of what we learn about the former?

Comment #145 January 8th, 2013 at 9:35 am

I am pretty sure I am one of those moles you have to keep whacking, but I am going to pop up again. I am wondering why the resource of exponential space has yielded so little. I would expect exponential speedups all over the place, and this has not yet happened. All that exponential space is being lazy and underproductive. QM gives us two new resources: randomness and exponential space. I would naively expect the exponential space resource to be much more important than the randomness resource, but so far it appears to me that exactly the opposite has occurred. (Of course the two work together in some sense, I think.) So why have we been unable to exploit the exponential space resource more effectively, even on a theoretical basis?

Comment #146 January 8th, 2013 at 10:04 am

Jim Graber #145: The simplest response is that quantum mechanics is not “exponential space resources,” except in an extremely misleading sense.

For one thing, BQP is contained in PSPACE; thus we know for a fact that exponential space resources are not needed to simulate a quantum computer. (Exponential time resources do seem to be needed.) Furthermore, if you want to describe a QC as “exponential space” because that’s the size of the wavefunction, then to be consistent, you should maybe also describe a classical probabilistic computer as “exponential space,” since that’s the size of the probability distribution over possible configurations!
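Scott’s probability-distribution analogy can be made concrete in a few lines of Python (a minimal sketch for illustration, not any particular simulator’s API): writing down the full state of n coin-flipped classical bits already takes 2^n numbers, just as writing down n qubits in uniform superposition does.

```python
# Illustrating the analogy: tracking the full probability distribution of a
# classical probabilistic computer over n bits takes 2**n real numbers, just
# as a quantum state over n qubits takes 2**n amplitudes. The "exponential
# object" is the description, not a resource the machine gets to use freely.
n = 3

# Classical: uniform distribution over all 2**n bit strings (n fair coin flips).
classical_dist = [1 / 2**n] * 2**n

# Quantum: uniform superposition over the same 2**n basis states.
quantum_state = [(1 / 2**n) ** 0.5] * 2**n

print(len(classical_dist), len(quantum_state))  # prints "8 8"
assert abs(sum(classical_dist) - 1) < 1e-12                 # probabilities sum to 1
assert abs(sum(a * a for a in quantum_state) - 1) < 1e-12   # amplitudes square-sum to 1
```

Both descriptions have the same exponential size; the interesting difference, as Scott says, lies in interference, not in raw “space.”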

OK, but we could reinterpret your question as follows: why has the “resource of quantumness,” or interference in an exponential-sized Hilbert space, or whatever words you want to use to describe it, yielded so little? And that I think is a profound and wonderful question; part of the reason why I’m in this field is that I want to know the answer!

The simplest, most obvious answer would be “decoherence.” In nature, quantum systems are constantly interacting with each other; any given collection of qubits is rarely left to its own devices long enough to implement (say) Shor’s algorithm. And even if it were, why on earth would it implement Shor’s algorithm, rather than something random and less interesting?

OK, but is that a fundamental problem? One possibility is that your arguments are no different from the following hypothetical:

Person in 1935: Nature is being lazy and unproductive. The equations allow the possibility of a U-235 fission chain reaction, releasing unbelievable energy, so how come we never see that?

In other words, even if the equations allow something, history has shown that it’s entirely possible for the universe to run for billions of years with it rarely or never happening, until some conscious beings care enough to make it happen. (Another example of that is the creation of Higgs bosons, quark-gluon plasmas, and other exotic phenomena in modern particle accelerators.)

Of course, another possibility is that the “quantum computation resource” is an illusion — that it’s not something the universe provides or has ever provided; it’s merely something that current physical theory falsely leads us to think the universe should provide. As I’ve said many times, if that turns out to be true, then I’ll be a thousand times happier than if quantum computing turns out to be possible, since then we’ll have the opportunity to revise physics in probably the most far-reaching way since the 1920s.

Comment #147 January 8th, 2013 at 10:10 am

Gil, your posts have been consistently valuable (as it seemed to me), not in the sense that everyone is compelled by logic to agree with you, but rather that it is essential that some people agree with the views you advocate. As your colleagues appreciate — and students are encouraged to verify — you are consistently among the highest-rated commentators on the physics, mathematics, and TCS stackexchanges, and within this elite class you are entirely unique in that you consistently earn more points for your good questions than even for your good answers! For which vital and exceptional service to our community, the present post is an appreciation and thanks.

It was Lars Onsager who lucidly explained why Kalai-style “tedious discussions” (like the references in #6, #114, #132, and #137) are valuable:

Gil Kalai, you’ve shown us real class! Thank you for encouraging everyone to “munch the humus” of Onsager-style noise models and thermodynamical field theories.

Comment #148 January 8th, 2013 at 10:14 am

John Preskill (#142): “But I don’t see why what you ‘expect realistically’ should apply in general. Would you say the same for classical bits?”

Thanks for your response, John. This is an excellent question. Of course, my expectation that the standard deviation behaves like the expectation, and not like its square root, reflects the fact that the qubits are highly interacting, which is not the case for classical bits. (Not for the bits in the entire digital memory, and not for the microscopic creatures which account for a single bit.)

Nevertheless, if I have to guess, I would guess that, in reality, the standard deviation will be much larger than Gaussian even for classical bits:

1) If your hard-disc memory (or computer memory) has a capacity of n bits and the expected number of faulty bits is t, then the number of faulty bits at any moment will not be t ± √t but much higher, perhaps even t ± εn.

2) If a single memory bit is represented by (say) n electrons with spins up and down, where for ‘1’ the expected number of spin-ups is (say) 97%, then the standard deviation of the number of spin-ups for a ‘1’ bit is much more than √(0.03n) and is much closer to εn.

I am truly curious to know if this is indeed the case…
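The Gaussian baseline that Gil’s guess departs from is easy to check in simulation under the standard independence assumption (a sketch; the parameters n, p, and the trial count are invented for illustration):

```python
import random
import statistics

# Monte Carlo sketch of the Gaussian baseline: n independent bits, each faulty
# with probability p, so the expected number of faults is t = n*p. Under
# independence the standard deviation is sqrt(n*p*(1-p)) ~ sqrt(t), far below
# the epsilon*n scaling Gil conjectures for highly interacting systems.
rng = random.Random(0)
n, p, trials = 2_000, 0.03, 500
t = n * p  # expected number of faulty bits (= 60 here)

counts = [sum(rng.random() < p for _ in range(n)) for _ in range(trials)]
print(f"expected t = {t:.0f}, observed mean = {statistics.fmean(counts):.1f}")
print(f"sqrt(n*p*(1-p)) = {(n * p * (1 - p)) ** 0.5:.1f}, "
      f"observed std = {statistics.stdev(counts):.1f}")
```

Of course this only reproduces the independent case; Gil’s conjecture is precisely that real devices violate the independence assumption baked into the simulation.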

Comment #149 January 8th, 2013 at 10:44 am

John@144:

Your erudition as usual is impressive.

My only caution would be that we remain vigilant so as not to mistake just any one of the common mole horde for that much rarer variety which embodies more than mere amusement-park dazzle. Ideally such moles would clearly display not only the key characteristics of their progenitors, but more importantly, strong and meaningful hints of the traits that selective breeding might allow to flourish. I suppose, of course, that all moles are in the broadest sense related, but perhaps we should be less concerned about the attractive, but still distant, cousins.

Comment #150 January 8th, 2013 at 12:24 pm

Scott #146 :

Let’s assume for an instant that you are indeed right, and it turns out that the “quantum computation resource” is an illusion.

Can you outline what revisions this would cause in physics? I’m trying to imagine what far-reaching consequences (outside of QC) that event would have.

Comment #151 January 8th, 2013 at 1:36 pm

Gil Kalai #123 & aram #130,

Unfortunately I’m running out of free time, but thanks for the interesting discussion.

Comment #152 January 8th, 2013 at 2:16 pm

Seeking a reasonable candidate for a principle killing QC

Hi Scott and everybody, I am very pleased with the amount of agreement we have on the five points I raised. (#95 and #100 – 4 yes and 1 qualified yes!) Regarding my point no. 4, I devoted a post in the GLL debate to describing what I regard as fundamental differences between reality without and with quantum fault-tolerance and universal QC. (That post and my email are open for comments.)

Quantum fault-tolerance

One thing to outline is quantum fault-tolerance, which represents an amazing (expected) phase transition in the behavior of certain large controlled quantum systems. (Two interesting questions are whether quantum fault-tolerance is already manifested in natural quantum processes, and whether superior quantum computational power requires quantum fault-tolerance.)

Let me go on to things Scott and I probably disagree about, and describe how I look at things from my corner. (It is a little long, so I divide it into two parts.)

Scott (#83): “On the other hand, at present we don’t even have any reasonable candidate for such a principle, something that would kill QC while not grossly contradicting the quantum-mechanical experiments that have already been done. (And I suspect you know that as well as anyone, since to your credit, you’re one of the few people who’s been seriously looking for such a principle.)”

This is a fair description. Let me explain my take on the matter.

The nature of the principle we are looking for

Just to make it clear again: what we are talking about here are the additional layers (“application programs”) of laws of physics which are written in the language of quantum mechanics (“the operating system”). See point no. 3 (in #95), which Scott and I agree about.

Unreasonable first!

A remark regarding “reasonable candidates”: it is my firm policy to examine quite a few unreasonable candidates before reaching a reasonable one (if at all).

My best candidates for a principle killing QC

The best candidate, in my opinion, for a principle which kills QC is

Quantum fault-tolerance is impossible

There are several obvious issues with this (reasonable, in my opinion) suggestion, and I will say a few words about some of them:

1) How to describe this principle formally (in the language of QM): My best shot for a general formal description of the principle of no quantum fault-tolerance is via “smoothed Lindblad evolutions,” certain smoothed-in-time quantum evolutions that formally manifest the idea that noise must accumulate. (Those can be found in my recent papers and the last part of the GLL debate; Aram referred to them in #130.) Smoothed-in-time evolutions form a restricted subset of all noisy quantum evolutions and thus “are written” in the QM “operating system.” The principle, formally, is that noise described by such smoothed quantum evolutions must be present, and that this form of noise cannot be corrected.

2) What are the simplest possible consequences and ways to test this principle: There are very simple manifestations of this principle for pairs of noisy entangled qubits, and for noisy, very entangled quantum states. As John Preskill mentioned, for those the issue is related to correlations between errors, which cause current FT schemes to fail.

3) What are the most far-reaching consequences of this principle: When it comes to computational complexity, the hope is that noisy quantum processes without quantum fault-tolerance are in BPP, and, in fact, that stronger definite assertions that do not rely on conjectures from computational complexity can be made. Of course, I am very interested in non-computational-complexity consequences (including crazy or absurd ones that would kill my proposal or some versions of it). Another possible consequence is that certain calculations from quantum field theory (even QED) lose their relevance for large systems (Jay #115).

4) How is this proposed principle compatible with existing theories and experiments: The three most relevant additional layers of physical laws I am aware of are thermodynamics, quantum field theory, and (yes!) Newtonian mechanics. In the debate with Aram it was quite interesting for me to try to confront (e.g., see this comment by “Matt”) my specific conjectures with what we theoretically believe and experimentally witness for quantum systems like Bose-Einstein condensates and anyons.

5) How to derive/understand/represent such a principle from “fundamental physics laws,” in the sense John Preskill is talking about: I am not sure about John’s precise requirement, but I will make a related comment at the end.

Comment #153 January 8th, 2013 at 2:18 pm

Let me continue:

The best current candidate for a Sure/Shor separator

A “Sure/Shor separator” is a criterion that separates the already-verified quantum states from those that appear in Shor’s factoring algorithm. This is a notion Scott introduced (in 2004) as a way to examine scientifically skeptical claims regarding QC. (It makes sense also to consider BQP-complete problems or quantum fault-tolerance protocols instead of Shor’s algorithm.)

The most reasonable candidates at present are based on bounded-depth quantum evolutions. Namely, the assertion is that pure quantum states that cannot be approximated by bounded-depth quantum circuits cannot be viable for realistic quantum algorithms.

My pretty alternative picture of our computational universe

It would be a little boring to say, after all this effort, that the computational complexity class describing our universe is BPP. I try to draw something stronger to take home with us, namely, that the classical world is essential for computation! Non-trivial computation (at any scale) requires a mechanism (like a repetition code) for classical information that emerges from the quantum mess around it. Without such a mechanism, you are limited to bounded (small) depth processes, and you cannot even come close to BPP. We have one beautiful picture of superior computation coming from quantum mechanics, but, without asking you to buy my alternative picture, isn’t it nice too?
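The “mechanism (like a repetition code)” Gil invokes can be illustrated with the simplest classical example, majority-vote decoding of n copies of a bit (a toy sketch, not Gil’s model; parameters are invented for illustration): with independent flip probability p < 1/2, the logical error rate drops rapidly as n grows.

```python
import random

# Toy classical repetition code: store one logical bit as n physical copies,
# flip each copy independently with probability p, decode by majority vote.
def encode(bit, n):
    return [bit] * n

def apply_noise(copies, p, rng):
    return [b ^ (rng.random() < p) for b in copies]

def decode(copies):
    return int(sum(copies) > len(copies) / 2)  # majority vote (n odd)

rng = random.Random(1)
p, trials = 0.1, 10_000
rates = []
for n in (1, 5, 15):
    errors = sum(decode(apply_noise(encode(0, n), p, rng)) != 0
                 for _ in range(trials))
    rates.append(errors / trials)
    print(f"n = {n:2d}: logical error rate ~ {rates[-1]:.4f}")
```

This is exactly the kind of robust classical record that, on Gil’s picture, must emerge for non-trivial computation to be possible at all.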

Modeling quantum evolutions that do not enact quantum fault-tolerance

My proposed principle is, boldly, that quantum fault-tolerance is not possible in controlled quantum evolutions and is not witnessed in natural quantum evolutions. The formal suggestions for modeling “quantum evolutions without quantum fault-tolerance” are of interest also for less ambitious goals: in particular, to study specific proposals for implementing quantum evolutions and to suggest what can go wrong. This is especially the case for proposals with an important experimental part that do not involve fault-tolerance protocols at all; examples are processes to create “weapons-grade” anyons, measurement-based computation, adiabatic computation, and BosonSampling photon machines. The proposed model is interesting also for formally examining whether quantum fault-tolerance is crucial for quantum supremacy.

What is the environment?

John Preskill wrote: “If we know the state of the system and its environment, and we know the Hamiltonian that governs the evolution of the system and its environment, then we know everything about the noise.”

Of course this is correct and important. (By the way, I agree with almost every single word in John’s two recent papers on quantum supremacy and on quantum fault-tolerance.) Perhaps my only concern about this description, in the context of John’s paper, is the use of the phrase “the system and its environment”. We already know that in the context of noise, the word “environment” refers to many things that do not intuitively look like an “environment”. (Being aware of this actually answers several concerns by Michel Dyakonov.) For example, the inner structure of particles is also part of their “environment”.

We need to remember that “the environment” should include the device which realizes the system. And the device may depend on the quantum evolution we intend to perform. Any systematic relation between the device and the noise description falls under “the environment.” Such systematic relations are possible, they are precisely what I try to model, and they can be described via quantum probability, but they do not fit into the intuition of the environment for a single device which, I think, is a hidden assumption in John’s point of view.

Comment #154 January 8th, 2013 at 2:35 pm

Rahul #150:

Let’s assume for an instant that you are indeed right, and it turns out that the “quantum computation resource” is an illusion. Can you outline what revisions this would cause in physics?

Firstly, this isn’t what I conjecture will actually happen! But

(1) I agree with Gil and others that it’s at least a logical possibility,

(2) I’d be thrilled and excited if it did happen, and

(3) figuring out whether it happens or not is one of the reasons I’m in this field.

It’s extremely hard to say what revisions to physics would be needed — since my personal opinion is that at present, we don’t know even a candidate revision that would kill QC without doing violence to other facts or principles that we already know.

(And regarding Gil’s comment #152 above: as I’m sure Gil himself would agree, it’s no answer to say we should simply add a principle stating that “Quantum fault-tolerance is impossible”! For what we really want to know is what could underlie such a principle, how one could possibly derive it from lower-level laws.)

Broadly speaking, there are two possibilities.

The first is that there would be some change to the formal structure of QM itself. Presumably, QM would then be “merely” a fantastically-accurate approximation that was valid for small numbers of particles, as well as for “sufficiently simple” aggregates of large numbers of particles. (What could “sufficiently simple” mean here? No one knows! I tried to explore possible answers in my Multilinear formulas and skepticism of quantum computing paper.) Once you got to “sufficiently complicated” states, like those arising in Shor’s factoring algorithm, some new probability framework, of which we have no inkling at present, would take over from quantum mechanics. And crucially, the new framework would just so happen to yield no more computational power than good old classical physics! (As opposed to, say, even more power than quantum computing, which, as long as we’re speculating so wildly, is just as much of a logical possibility!)

The second possibility is that there’s some new principle “on top of” QM (for example, arising from quantum field theory), which restricts the allowed Hamiltonians or unitary transformations — again, in a way that just so happens to take us back to classical computing. This is the possibility that Gil has been investigating. Personally, I find it deeply implausible — not only because two decades of work in quantum complexity theory have failed to turn up any good candidate for such a principle, but also because of the general phenomenon (which I discussed in previous comments) that “universality is much harder to avoid than to achieve.” But I do agree that it’s a logical possibility and that it’s interesting to keep investigating it.

Comment #155 January 8th, 2013 at 3:50 pm

This is a terrific summary! A historically plausible extrapolation (as it seems to me) of Scott’s summary (as it seems to him) of Gil Kalai’s reasoning might then be this (hugely optimistic) fable:

---- chapter 1 ----

[A.1] In the late 19th and early 20th century, researchers discovered that zero-temperature (minimum entropy) states were unexpectedly difficult to attain.

[A.2] Walther Nernst formalized this empirical finding as the Third Law of Thermodynamics: the (Entropic) Unattainability Principle.

[A.3] There followed a Nobel for Nernst and subsequent thermodynamical Nobels for Einstein, Kamerlingh-Onnes, Giauque, Onsager, Anfinsen, and Prigogine … along with decades of invigorating debate (that was occasionally tragicomical, as usual) regarding the (quantum and classical) meaning of the Third Law … and decades of strategically vital 20th-century practical applications too!

---- chapter 2 ----

[B.1] In the late 20th and early 21st century, researchers discovered that highly quantum-coherent, spatially non-local, large-dimension dynamical trajectories were unexpectedly difficult to produce.

[B.2] &lt;Young Researcher&gt; formalized this empirical finding as the Fourth Law of Thermodynamics: the (Computational) Unattainability Principle.

[B.3] There followed a Nobel for &lt;Young Researcher&gt; and subsequent thermodynamical Nobels for &lt;Many More Young Researchers&gt; … along with decades of invigorating debate (occasionally tragicomical, as usual) regarding the (quantum and classical) meaning of the Fourth Law … and decades of strategically vital 21st-century practical applications too (that were even more incredible than those of the 20th century)!

---- Appendix ----

A concrete example of a Fourth Law corollary — and thus a concrete avenue for &lt;Young Researchers&gt; to initiate their research! — might be this: the entropy dissipated by an n-photon BosonSampling measurement is invariably O(exp(n)), in consequence of the entropic overhead that (Carmichael-style) quantum field theory demands in initializing n-photon input states that are strongly coupled to n high-efficiency photon detector devices.

Acknowledgment: The above fable borrows key narrative elements from Mark Twain’s much-recycled short story “Is He Living, or is He Dead?”.

Comment #156 January 8th, 2013 at 4:16 pm

Scott (#154): “And regarding Gil’s comment #152 above: as I’m sure Gil himself would agree, it’s no answer to say we should simply add a principle stating that “Quantum fault-tolerance is impossible”! For what we really want to know is what could underlie such a principle, how one could possibly derive it from lower-level laws.”

Of course, I disagree. First of all, “simply add” is not simple at all. To arrive at a formal definition (in the language of QM) of “quantum evolutions without (the magical noise cancellation of) quantum fault-tolerance” is not easy. (In fact, some people regard it as impossible.) As I said, I have a proposed way to express it (within QM) via “smoothed Lindblad evolutions.”

If indeed this mathematical formalism is viable, the no-quantum-fault-tolerance principle is quite a reasonable general candidate to kill QC. If we want to understand the consequences of the possibility that quantum fault-tolerance cannot be achieved for whatever reason, whether “lower-level physical laws,” the claims by Michel, or just NSF budget cuts, why not try to formalize this property directly?

(As an example, you can ask what the principle behind 3-SAT hardness is. And the principle is… “3-SAT is hard.” As a mathematical principle, we indeed expect a derivation from first principles, but this is not expected in the near future. As a physical law, it is unlikely that “lower-level laws” will interact much with 3-SAT hardness. The big deal was to put this principle on formal grounds, and to investigate the many consequences, implications and connections.)

But of course, we do want to relate it to familiar laws, principles, and theories, and even more, to understand the consequences of the failure of quantum fault-tolerance. (BTW, we already agreed on what you refer to as the second possibility, namely that we are discussing a new principle (possibly compatible with existing ones in the appropriate ranges that interested physicists before the QC era) in one of the layers of physical laws written in the language of QM.)

I did not understand the principle that “universality is harder to avoid” (you mentioned it, Scott, but did not state or justify it). When it comes to the classical world, there is a compelling case that brains (men and donkeys alike) already manifest the computational power of universal Turing machines, and you can also simulate a computer by manual human computation without any electronics. In the quantum world (I mean our quantum world) I don’t see what the statement is, or what the justification is.

Much wrong insight comes, in my opinion, from taking the picture of universality for granted. When it comes to a universal quantum device, the principle that kills it is quite simple: the noise rate will simply scale up and cause the failure; you do not need any more sophisticated model of noise or principle than that. Things become more interesting if we try to understand the behavior of special quantum devices, to explain why universal quantum devices are impossible.

Comment #157 January 8th, 2013 at 4:21 pm

(Let me add that, while disagreeing, I enjoyed Scott's #154 and found it thoughtful and interesting; John, #147-wise I am speechless…)

Comment #158 January 8th, 2013 at 4:43 pm

Michel Dyakonov #131:

Figure 1 in your paper is very misleading. To make the point that QC is a bubble, it is not enough to note that the number of submitted QC papers has exploded over the years (and now approaches some kind of asymptote). You need to compare that to how the rest of the arxiv has behaved.

To be completely clear: given that the total number of papers N_t submitted to the arxiv presumably also increases with t, one needs to factor this out, for instance by speaking at any time not of the raw number of submitted QC papers, but of its proportion with respect to N_t.

(Also informative would be to see how this trajectory compares to that of an actual bubble that has already been recognized as such, if there is one.)
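The proposed normalization is simple to state in code. A sketch with made-up yearly counts (both dictionaries are illustrative placeholders, not real arxiv data):

```python
# Sketch of the proposed normalization: instead of plotting the raw
# number of QC submissions per year, plot its share of all arxiv
# submissions N_t.  The counts below are invented for illustration.
qc_papers    = {2000: 500,   2004: 1200,  2008: 1800,  2012: 2000}
total_papers = {2000: 30000, 2004: 42000, 2008: 55000, 2012: 75000}

qc_share = {t: qc_papers[t] / total_papers[t] for t in qc_papers}
for t, share in sorted(qc_share.items()):
    print(t, f"{share:.2%}")
```

With numbers like these, a raw QC-paper count that keeps climbing can still correspond to a flat or falling share of the arxiv as a whole, which is the comparison the comment asks for.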

Comment #159 January 8th, 2013 at 4:47 pm

Hype OR scientific bubble OR inflation of promises!

When will this bubble burst?

In my opinion NOT SO SOON (or never)!

Why?

Because there will always be ill-informed or semi-corrupted politicians and decision-makers who will spend taxpayers' money on something "bombastic," such as "quantum-secure reports about local election results?!" (in Switzerland), or on "securely transmitting data" from one server to another (for the Football World Cup). Besides, there are billions of dollars piling up in the accounts of insurance companies that have to be spent (or burned in vain) from time to time.

On the other hand, I can say that for a period the QC community was joined by many brilliant young scientists, but as time goes on, many of them have started to work in other scientific areas. That trend will continue, …

Comment #160 January 8th, 2013 at 5:22 pm

Scott: Thanks again for the detailed explanation.

Though I have to admit, I find Gil’s rather simple, “the noise rate will simply scale up and cause the failure” far more appealing.

Agreed that we don’t have any fundamental physical law that proves QC ought to fail; but has humanity succeeded in engineering every contraption that is not explicitly forbidden?

Can’t we have a world where scalable QCs never become reality and yet no substantial rewriting of physics is really required?

Comment #161 January 8th, 2013 at 5:25 pm

Gil #156: I apologize for misstating your position. Yes, I had wrongly assumed you were looking for an actual explanation for why QC couldn’t work — so that for example, if the problem was noise scaling up with system size, then you wouldn’t rest until you could explain why the noise worked that way. But I finally understand what, perhaps, you’ve been trying to tell me all along: that not only have you not achieved that, it isn’t even your goal! Instead, you’ll be satisfied if you can just mathematically formulate a “Principle of QC Impossibility” — a variant of what I called a Sure/Shor separator. Provided you could do that, you’d then, essentially, be so confident of the principle’s truth that you’d happily leave its actual derivation from the laws of physics as a problem for future generations, much as today we believe P≠NP even though we’re nowhere close to a proof.

This position is so alien to my way of thinking that, in retrospect, it’s no wonder I subconsciously distorted it. It’s true that we believe P≠NP even though proving it will require immense mathematical breakthroughs. But the point is that, over ~60 years, we’ve built up a picture of the computational world with which P≠NP is entirely consistent, and according to which P=NP would be startling and baffling. So in some sense, a proof of P≠NP wouldn’t require any “change in worldview.” But when we come to QC, the situation is reversed. Here it’s the possibility of QC that requires no change to physics’ current picture of the world, and its in-principle impossibility that would require a radical change or addition to explain it.

Some people might object that QC being possible requires a change to theoretical computer scientists’ worldview, while its impossibility doesn’t — so who’s to say who’s right? Well, if there were a conflict, then frankly I think the physicists would get priority (OUCH! it felt like pulling teeth to say that), both because we’re talking about what can be done in the physical universe, and because QM has been tested much more extensively than the Extended Church-Turing Thesis. But just as importantly, theoretical computer science has evolved! We’ve discovered, over the last 20 years, that QC can be accommodated into our worldview in a way that leaves lots of what we care about more-or-less as it was, and changes the rest in rich, subtle, and elegant ways. The border of the feasibly solvable shifts, but only by a “tolerable amount”: to accommodate factoring (for example), but almost certainly not the NP-complete problems.

So, this is where you and I part ways. For me, a “Principle of QC Impossibility” is like a “Principle of the Impossibility of Humans Landing On Mars” — or closer to home, a “Principle of the Impossibility of Derandomizing BPP.” Sure, it’s consistent with the fact that no one has done it yet, but I’d have no idea how we’d even start to make it consistent with everything else we’ve learned.

But maybe this is why you offered the following striking admission, in comment #143?

I do not think any of the papers written so far in the skeptical direction (including my own) give reasons for anybody to change her or his a priori position (or subjective probability) about the feasibility of quantum computers.

Comment #162 January 8th, 2013 at 5:31 pm

Rahul #160:

Can’t we have a world where scalable QCs never become reality and yet no substantial rewriting of physics is really required?

Of course. That would simply be a world where no one ever figured out what the truth was.

If you like, the goal of QC research is to prevent that world from being our world.

Comment #163 January 8th, 2013 at 5:44 pm

Incidentally, Gil #156:

I did not understand the principle that “universality is harder to avoid” (you mentioned it, Scott, but did not state or justify it).

I meant observations like the following:

Chances are, it will generate AND and NOT, hence every other Boolean function.
Chances are, if it’s not in P then it will be NP-complete rather than NP-intermediate.
Chances are, it will be Turing-universal.
Chances are, it will also be Turing-universal.
Chances are, it will be universal for QC. (Indeed this occurs with probability 1.)

Of course, Stephen Wolfram wrote a thousand-page book claiming to have discovered this phenomenon. As I wrote at the time, it is a profound phenomenon; Wolfram was only wrong about the claim to have discovered it!

Comment #164 January 8th, 2013 at 5:46 pm

Scott:

Thanks! Too bad I don’t have the money to post a $200k wager; but if I did, I’d put my money on that “no one ever figured out what the truth was” world being exactly where we’re going to be stuck.

Any other sympathizers of my position?

Comment #165 January 8th, 2013 at 5:59 pm

Scott – You have to understand what Gil has in common with my mother: they are both famous for finding important mathematical counterexamples. I do not really mean to speak for Gil. If I were in his position, I would be very happy to find any mathematically important noise model in quantum probability that both censors QC and still allows classical computation. Even though it might well not be physically true.

Gil has also argued to me that it shouldn’t be relevant what any of us believe, only what we accomplish. This is a very natural professional courtesy in pure mathematics. I agree at one level. However, I think that it’s a different story when people write papers that advocate beliefs, either intentionally or unintentionally. Then what has been advocated is open to rebuttal.

I think that you can achieve a shallow success of this kind by assuming uncontrolled depolarization noise. However, in order to be convincing, you’d have to do it without breaking either single-qubit U(2) symmetry or qubit permutation symmetry. No one thinks that spin up and down behave differently from spin left and right, nor that electrons lack permutation invariance.

Comment #166 January 8th, 2013 at 6:28 pm

Scott, thanks for being so consistently right that I only have to read your comments to both know what this discussion is about and which position I should take, without having to do any thinking on my own. Thanks.

Comment #167 January 8th, 2013 at 7:10 pm

Gil, objective evidence of your exceptional gifts as a question-asker is that your meritorious-question-to-reputation (MQRR) index on MathOverflow is (AFAICT) larger by a full order of magnitude than that of any other high-ranking MathOverflow poster (note that this measure is logarithmic, like pH):

Gil Kalai’s MathOverflow MQRR = log_10 (96 question-related badges / 10226 reputation) = -2.03

So it’s not just *my* opinion that you are a gifted & prolific question-asker; it’s a consensus opinion. So thank you, Gil … surely the quantum computing community is better off with your skeptical questions than with most folks’ enthusiastic answers!

Comment #168 January 8th, 2013 at 7:15 pm

Wouter #166: LOL, I aim to please!

Comment #169 January 9th, 2013 at 3:24 am

Scott (#161) “Yes, I had wrongly assumed you were looking for an actual explanation for why QC couldn’t work — so that for example, if the problem was noise scaling up with system size, then you wouldn’t rest until you could explain why the noise worked that way.”

I guess what I wrote was not clear enough. Yes, I am interested in an actual explanation; and, yes, the problem is that for universal quantum devices the noise will scale up and cause them to fail; and here is my explanation:

Quantum systems based on special-purpose quantum devices are subject to noise which systematically depends on the quantum evolution of the system; this dependence reflects the dependence of the noise on the quantum device, and the dependence of the quantum device on the quantum evolution it is doomed to perform.

More formally, quantum systems are subject to noise which is expressed by smoothed Lindblad evolutions. The amount of noise in a time interval is bounded below by a non-commutativity measure of (the projections in) the algebra spanned by the unitaries describing the evolution in that interval. Smoothed Lindblad evolutions reflect the idea that quantum fault-tolerance is impossible (or not active) and that quantum noise must accumulate.

This property of quantum systems based on special-purpose quantum devices explains why universal quantum devices are forced to fail. (The reference to “devices” applies both to human-made devices and to nature-made devices.)
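For reference, the standard (unsmoothed) Lindblad master equation that this proposal builds on reads as follows. (This equation is my added background, not part of the comment; the precise “smoothing” is defined in Kalai’s papers.)

```latex
\frac{d\rho}{dt} \;=\; -\frac{i}{\hbar}\,[H,\rho]
\;+\; \sum_k \gamma_k \left( L_k\,\rho\,L_k^\dagger
\;-\; \tfrac{1}{2}\,\bigl\{ L_k^\dagger L_k,\; \rho \bigr\} \right)
```

Here ρ is the system’s density matrix, H its Hamiltonian, and the L_k are noise (“jump”) operators with rates γ_k. In these terms, the conjecture in the comment amounts to a lower bound on the strength of the noise sum whenever the unitaries acting in the interval span a highly non-commutative algebra.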

Comment #170 January 9th, 2013 at 4:23 am

(Greg #165) Both Greg’s parents, Krystyna Kuperberg and Wlodek Kuperberg, are great mathematicians whom I admire, and I am happy and honored by Greg’s comparison. In 1993, Krystyna constructed a smooth counterexample to the Seifert conjecture that every nonsingular, continuous vector field on the 3-sphere has a closed orbit. Wlodek and I both work in the study of convex sets and discrete geometry. Greg and I have had a long email correspondence about quantum computers, he can represent my way of thinking very well, and we even have a short paper (on the positive side) coming. (Go for it, Greg!)

Also, I agree with Scott’s nice characterization of QC research (#162), and regarding #161 (the pulling-teeth part): Sure! It is the physicists who occupy the driver’s seat on this journey.

Comment #171 January 9th, 2013 at 5:32 am

On benchmarks: there is a very simple one; see my Ref. 39. A quantum computer can do anything that a classical computer does: it is sufficient to use the |1> and |0> states as a classical bit and abstain from using superpositions of these or any other states.

So, let us try to build an elementary-school calculator operating on numbers below 100 in the classical mode, BUT with spins in quantum dots, or photons, or trapped ions, or endohedral fullerenes. Error correction will be quite trivial.

Of course, if we manage to do it, this would say nothing at all about the feasibility of a quantum computer using the same physical objects, the latter task being many orders of magnitude more difficult.

On the other hand, if for some reason we CANNOT do it, this would seriously undermine the hopes for simulating QCD with a quantum computer. Thus such a simple test would make some sense: before claiming to be able to swim the 21-mile English Channel, first prove that you don’t drown in your garden pool.

It would be interesting to know:

Can this be done (in practice, not in theory)?

If yes, what would be the price?

If yes, can we upgrade the calculator to operate with numbers up to 10^9, and how much would this cost?
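The remark that “error correction will be quite trivial” in the classical mode can be illustrated with the classical repetition code: store each bit in several physical carriers and decode by majority vote. A minimal simulation (my sketch; the flip probability and copy count are illustrative assumptions, not parameters from the comment):

```python
import random

# Illustration: in "classical mode" each logical bit is stored
# redundantly and corrected by majority vote.  This is trivial compared
# to quantum error correction, which must also protect phase
# information that classical copying would destroy.
def noisy_copy(bit, p_flip):
    """A physical carrier (spin, photon, ion...) holding a classical bit,
    flipped with probability p_flip."""
    return bit ^ (random.random() < p_flip)

def store(bit, n_copies=5, p_flip=0.01):
    """Write the bit into n_copies independent noisy carriers."""
    return [noisy_copy(bit, p_flip) for _ in range(n_copies)]

def majority(copies):
    """Decode by majority vote."""
    return int(sum(copies) > len(copies) / 2)

random.seed(0)
trials = 10_000
errors = sum(majority(store(1)) != 1 for _ in range(trials))
print(errors / trials)  # logical error rate, far below the physical p_flip
```

With 5 copies and p_flip = 0.01, a logical error needs at least 3 simultaneous flips, so the logical rate is of order 10·p³ ≈ 10⁻⁵: hugely suppressed, with no need for the machinery of quantum fault-tolerance.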

Comment #172 January 9th, 2013 at 6:56 am

Largely as a (deliberately good-humored) confection … but also to broadly illustrate the role of great questions in research … the MathOverflow merit badge “Stellar Question” has been awarded only 13 times in total, and the users with reputations >10,000 who have received this badge are Tim Gowers [2x], Bill Thurston, Gil Kalai, and Mariano Suárez-Alvarez … which is pretty good company, eh?

Their five stellar questions were:

[1] Gowers: Examples of common false beliefs in mathematics
[2] Gowers: Proofs that require fundamentally new ways of thinking
[3] Thurston: Thinking and explaining [what is the gap?]
[4] Kalai: Fundamental examples [that shape entire disciplines]
[5] Suárez-Alvarez: Proofs without words [for nontrivial results]

Are the questions under discussion here on Shtetl Optimized stellar sensu lato?

[1] The common belief “Quantum computation resources are real” may be an illusion (in Gowers’ sense, for either of Scott’s two reasons in #152).
[2] Verifying/invalidating this belief may require fundamentally new ways of thinking (in Gowers’ sense).
[3] Thinking about this problem is tough, and explaining our thoughts to people whose cognitive toolset differs is very tough (in Thurston’s sense).
[4] It’s not yet clear even what questions and/or examples would most productively shape this discipline (in Kalai’s sense; e.g., is question #155, “what is the minimal field-theoretic entropy increase associated to each n-photon BosonSampling measurement?”, discipline-shaping or a tedious dead-end?).
[5] If we attain a mathematical/logical/symbolic understanding, how do we convert it to a physical/pictorial/intuitive understanding (in Suárez-Alvarez’s sense), and vice versa?

Conclusion: The questions being discussed here on Shtetl Optimized manifestly are “stellar” (in the MathOverflow sense) … even though at present we have reliable answers to precisely none of them. Therefore, let us all be grateful to QIT’s stellar question-askers!

————————-

PS: Michel Dyakonov #171, the successful IBM effort in the 1980s to build prototype SQUID computers (SQUID = superconducting quantum interference device) arguably has demonstrated what you ask; a starting reference is Konstantin K. Likharev’s “Progress and prospects of superconductor electronics” (Superconductor Science and Technology, 1990).

Comment #173 January 9th, 2013 at 8:35 am

Scott #146. What is left from the 1920s is the mystery of measurement. Models of noise/decoherence are words in the attempt to construct (or hide under the table) an (experimentally verifiable) model of measurement. QC is trying to come at the problem from the end of large numbers; I see no discussion attempting to solve it from the end of small numbers. The presence of classical devices is unavoidable in the current state of the world (that is, in our current understanding of the place of computation for practical needs). Another option is to develop pure quantum devices, where there is no need to measure the quantum computation, and the action depends on the class of final states (via, say, mesoscopic systems). Could it be that measurement and preparation are processes of entangling a (quasi-)isolated quantum system with the environment (a kind of partial measurement by the particles of external classical devices)? Then this entanglement should propagate through the environment from the preparation to the measurement devices (leading to correlated noise?). (Scaling here would either increase preparation/measurement or would require scaling of the classical device.) This should be measurable by manipulating the classical parts in a small, isolated computational system. Without experimental characterization of such manipulation the theory is not verifiable, and is a theological discussion founded on the quality of rhetoric (comment #166).

Comment #174 January 9th, 2013 at 9:27 am

Many people seem to think that, while scalable QC is not impossible in principle, it’s so difficult as to be a practical impossibility. In support of this view, we’ve now heard two analogies: that building a scalable QC is like teaching a donkey to read (Dyakonov), or that it’s like classical computing in the center of the sun (Jim Graber #79).

Reflecting on these examples, for me the central thing that separates them from QC is their arbitrariness. Imagine we had overwhelming reasons to believe that, if only we could get an ordinary laptop intact to the center of the sun, it would become “turbocharged” and start revealing the secrets of eternity, and the cures for death and the common cold. In that case, one could imagine civilization marshaling all its resources to develop incredible new heat-shields, and the “possible in theory but not in practice” might be done in practice. As it is, though, if we did expend that massive effort, it would be for naught. We already know from nuclear theory exactly what’s happening in the center of the sun (in fact, better than we know what’s happening on the sun’s surface!), so even if we managed to get there the achievement would be totally arbitrary; we’d learn nothing new.

In the same way, if the future of the world depended on getting donkeys to read, then one could imagine a massive program of genetic engineering, selective breeding, and brain augmentation that would result, after hundreds or thousands of years, in literate donkeys (or donkey/human or donkey/computer hybrids). As it is, though, once one contemplates what would be involved, one immediately starts asking: “why not just skip the donkey part?”

Building a scalable quantum computer won’t save the world or tell us the secrets of eternity. On the other hand, my guess is that it’s not as hard as sending a functioning computer to the center of the sun or teaching a donkey to read. And ironically, one reason why I don’t think it’s as hard is something Dyakonov himself dwelt on: the diversity of different ways a QC might be built. If one approach doesn’t work, we can always try another, the same way people tried many approaches to classical computing (Babbage-style mechanical devices, analog computing, vacuum tubes…) before the transistor was invented.

Even more important, there’s nothing the slightest bit arbitrary about wanting to take advantage of the full computational-complexity resources that the laws of physics provide, or wanting to understand exactly what those resources are. That’s why, while people in the year 3000 might look back on the attempt to build scalable QCs as doomed (for deep reasons that we don’t yet understand), I really don’t see any way they could look back on it as quixotic. If there’s still a technological civilization then at all, that civilization will still confront the question of the ultimate physical limits of computation.

Comment #175 January 9th, 2013 at 9:47 am

Scott #174. I do not understand why, if you are interested in the physical limits of computation, you need to blow up the QC bubble before you understand a simple, unavoidable part of it: measurement, in detail, with a microscopic model of measurement, of what makes a classical device classical (I mean, when a device becomes classical in terms of its measurement properties), and of classical-quantum interactions. Where in this world are mesoscopic devices, and how do they interact with classical/quantum devices? Is it measurement that prevents the superpower of the quantum device? Can we avoid measurements by going directly to action? All these are small-scale enterprises. The problem, maybe, is that those problems would not sell as well.

Comment #176 January 9th, 2013 at 10:12 am

Not a rhetorical question, just a simple one: What minimum power must a quantum computer have before it could be said to be a well-scaled one? How many qubits? Where do the quantum computing researchers draw the line, if they do?

At least as of today it cannot be a very precise line, I suppose, but it doesn’t have to be so, anyway. A rough indication would do. But unless one has some indication of such a datum, one couldn’t begin to think very meaningfully about this issue.

–Ajit

[E&OE]

Comment #177 January 9th, 2013 at 10:17 am

mkatkov #175: For godsakes, what do you think the QC experimentalists are doing, if not trying to better understand decoherence and the quantum/classical boundary? However, a highly relevant fact, which your comment omits, is that objects that would once have been called meso- or even macroscopic (for example: SQUIDs, large floppy biomolecules several nanometers across) have now been placed into superposition, and there’s been not the slightest sign of any “fundamental decoherence mechanism,” or any other deviation from the predictions of QM plus ordinary decoherence theory. Of course, experimenters will continue looking for deviations, and one of the reasons they will (besides fundamental curiosity) is the impetus provided by quantum information.

Comment #178 January 9th, 2013 at 10:32 am

Following up on my comment #174: Another very relevant analogy — indeed, the one Dyakonov opens his essay with — is the medieval alchemists’ dream of transmuting lead into gold.

So, to anticipate the skeptics: why couldn’t an alchemist in the year 1013 have said, “maybe our goal is impossible, but it certainly isn’t arbitrary! If there’s still an exchange-based economy in the year 2013, then people will still confront the problem of how to transmute lead into gold.”

Well, today we’ve learned that the alchemists’ dream is physically possible after all! But we’ve also learned that it makes zero economic sense. If you want gold, it’s quadrillions of times easier to mine it than to create it in nuclear reactors. Just as important, while gold has some uses, we now realize that for the most part its value is only the result of an arbitrary agreement among humans.

So, why is QC different? First, because it aims to solve problems that, if widely-believed conjectures of complexity theory are correct, really cannot be feasibly solved in any other way. And second, because the goal of breaking out of the “epistemic prison” of classical polynomial time—even just for some specialized problems—is not an arbitrary one. Here’s my falsifiable prediction: if we ever encounter an advanced extraterrestrial civilization, they might laugh at how we humans covet gold, but they too will covet computational power.

Comment #179 January 9th, 2013 at 10:37 am

Incidentally, one of my favorite Sidney Harris cartoons features a medieval alchemist lamenting to his disciple: “At this point, I’d be satisfied if I could transmute gold into lead.” I know of few better summaries for the layperson of what doing science is actually like.

Comment #180 January 9th, 2013 at 10:41 am

Mkatkov (#172) and Michel Dyakonov (#171), to speak again in praise of Konstantin Likharev’s work on practical computing devices that operate in the quantum regime: a great many of your quantum-measurement-related concerns are surveyed, in physically realistic contexts, in two recent Likharev summary talks (available on Likharev’s home page), “Nanoelectronics: Prospects and challenges” (2011) and “Superconductor digital electronics” (2011).

In particular, in the latter survey, the discussion (pages 9-11) of parametron computation in the quantum limit dates back to Charles Bennett’s seminal work on reversible computation, which in turn dates back to Goto’s commercial parametron computers, which derive largely from John von Neumann’s seminal (and posthumously awarded) patent US 2,815,488, “Non-linear capacitance or inductance switching, amplifying and memory organs” (1957).

This material is well worth study, in particular for its concrete treatment of what scalability means in practice. And yet, per Bill Thurston’s stellar MathOverflow question ([3] in comment #172), it is a lamentably difficult challenge (per comments #86 and #91) to translate among the (very different) mathematical idioms that differing research communities employ in discussing these fundamental questions.

One of the (many) great virtues of the Aaronson/Arkhipov BosonSampling proposals is that they are sufficiently concrete that the question of their scalability/attainability (or not) already is reasonably amenable to Likharev-level engineering analysis. The technological history and concrete examples that Likharev so thoroughly reviews help us to a realistic appreciation of how challenging this work can be.

Comment #181 January 9th, 2013 at 11:29 am

Continuing to answer questions I wasn’t asked: reading my comment #178, one might wonder, how do I know that the desire for computational power isn’t just an arbitrary human quirk?

Well, the reason I know is that math isn’t arbitrary, and computation is nothing more or less than the mechanizable part of solving math problems.

Let me put it this way: if we ever make contact with an advanced extraterrestrial civilization, they might have three sexes and five heads. But they, too, will have encountered the problem of factoring integers into primes. Indeed, because they’ll inhabit the same physical universe as we do, they’ll even have encountered the problem of simulating quantum physics. And therefore, putting the two together, they’ll almost certainly have discovered something like Shor’s algorithm — though they’ll call it “Zork’s bloogorithm” or whatever.

Comment #182 January 9th, 2013 at 11:47 am

More on feasible scaling: So I was reading about boson sampling in Physics World this morning. http://physicsworld.com/cws/article/news/2013/jan/08/boson-sampling-offers-shortcut-to-quantum-computing .

Some are skeptical that “we” will make it to 30 or 100 bits, although we have already done 3 and 4.

So I imagine building a 4-bit boson sampler about the same size as a laptop, with 4 inputs and 4 outputs, which I imagine to be screw-on fiber-optic cables. If I take two or four of these 4-bit devices, can I wire ’em up to make an 8-bit device, at least in principle? (My guess is yes. If not, how many would be required?)

Here is the killer though: How accurately would the inputs, outputs and cable lengths have to be prepared? One wavelength? One tenth of a wavelength?

You get the picture. Is this concept at all feasible? What is the toughest bottleneck/requirement for this type of approach?

By the way thanks for all your previous answers. I am learning a lot.

Comment #183 January 9th, 2013 at 11:50 am

Scott #181:

Do other beings inhabiting our same physical universe have to describe it using the same abstractions we use? Is it possible that the five headed aliens successfully describe the same universe using a formalism that doesn’t include prime numbers at all? Or not?

Does QM or anything else say that there has to be a unique model?

Probably off topic to the core QC discussion though, apologies.

PS. This doesn’t weaken in any way the case for us to explore deeper into QC; but just a tangential question I had.

Comment #184 January 9th, 2013 at 12:07 pm

Rahul #183:

Of course the aliens would use different formalisms. But if you told me there was no way, even in principle, to translate between their formalism and ours—and in particular, no way to extract from their formalism anything corresponding to a concept like “prime number”—then I wouldn’t even understand what you meant.

Jim Graber #182: As Alex Arkhipov and I wrote in our paper, I think that the scalability of BosonSampling has to be regarded as an open problem. One big question on the experimental side is how reliable you can make single-photon sources: can you get up to n=20 or n=30 photons, without the probability of an n-photon coincidence becoming too small to measure? But there’s also a big open question on the theoretical side: namely, suppose you do lose half the photons; then do the remaining photons still sample from a classically-hard probability distribution? That’s one of the questions Alex and I would love to think about when we have time—but others are welcome to do it first!

(Note: Ultimately, if QM works as we expect, then in principle you ought to be able to scale BosonSampling to as many photons as you want, by layering standard quantum fault-tolerance methods on top of it. At that point, though, you would probably just forget about BosonSampling and build a universal QC instead!)
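For context on why BosonSampling outputs are believed classically hard (my added background, consistent with the Aaronson-Arkhipov paper): each outcome probability is proportional to |Per(A)|² for an n×n matrix A built from the interferometer’s unitary, and the best known exact algorithms for the permanent, such as Ryser’s formula, take time growing like 2^n. A sketch of Ryser’s formula (the matrices here are arbitrary illustrations):

```python
from itertools import combinations

def permanent(A):
    """Ryser's formula:
    Per(A) = (-1)^n * sum over nonempty column subsets S of
             (-1)^|S| * prod_i (sum_{j in S} A[i][j]).
    This simple version runs in O(2^n * n^2) time."""
    n = len(A)
    total = 0.0
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            prod = 1.0
            for i in range(n):
                prod *= sum(A[i][j] for j in S)
            total += (-1) ** r * prod
    return (-1) ** n * total

# Sanity check: the permanent of the all-ones n x n matrix is n!.
ones = [[1.0] * 3 for _ in range(3)]
print(permanent(ones))  # 6.0 = 3!
```

Unlike the determinant, no polynomial-time algorithm for the permanent is known (computing it exactly is #P-hard), which is the source of the hardness conjecture behind BosonSampling.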

Comment #186 January 9th, 2013 at 12:29 pm

Scott#181: Yeah. And then Shor and Zork would be sent to a neutral planet for a series of intellectual smackdowns on intergalactic Pay-per-view. I’d subscribe.

Comment #187 January 9th, 2013 at 12:40 pm

A corollary of the thermodynamical Fourth Law, Computational Unattainability (postulated in #155), is that O(exp(n)) devices are required for an n-photon BosonSampling measurement. And up to the present time, every BosonSampling experiment ever performed, and every BosonSampling proposal that has been seriously analyzed (AFAIK), respects this Fourth Law prediction.

On the other hand, even after 105 years (and multiple Nobel awards) there’s still no consensus regarding the validity even of the thermodynamical Third Law, much less any postulated Fourth Law. And this uncertainty very largely arises in the transition from small-dimensional QM (of the Feynman Lectures, for example) to unbounded-dimension field theory (of Tony Zee’s Quantum Field Theory in a Nutshell, for example, as was humorously discussed in #63 & #68).

A lot of terrific mathematics and physics has resulted from the century-old enterprise of (partially) unravelling these questions. Which is good!


Comment #189 January 9th, 2013 at 2:27 pm

Scott, John, with all respect to your genius, I do not see the desire to push QM to the edge, to test the limits; all the discussions are about how good QM is at explaining experiments. Scott’s position here is safe. Einstein and Perelman could not be funded by today’s granting agencies.

http://www.nobelprize.org/nobel_prizes/physics/laureates/1954/born-lecture.pdf

“The lesson to be learned from what I have told of the origin of quantum mechanics is that probable refinements of mathematical methods will not suffice to produce a satisfactory theory, but that somewhere in our doctrine is hidden a concept, unjustified by experience, which we must eliminate to open up the road.”

The one of unjustified assumption is the measurements, this is at least. I’m not aware of experimental work asserting this question. Experimentalist could not understand noise – this is a function of a theory. Experimentalist are able to characterize the system. With current approach, I’m not even sure, that the measurements similar to Mercury, used by Einstein for his theory is not overseen in today research in quantum mechanics.

The integer factorization may appear only in the world of sets, e.g. in the word where separate objects exist, just to start to count. In the liquid world one does not need the notion of separate object, and therefore would probably develop different formalism, so it is still question of whether the current model of the world is a limitation of our brain, or it is objective reality. It is of cause hard to imagine, having no other examples, that it is possible.

I've been asking about the paper claiming quantum computation in lizard communities (published in Nature), and there has been no reaction from Scott or anyone else. This paper asks whether biological systems can be using quantum probabilities in real-life actions. That seems quite relevant to the physical limits of computation: no reaction. We have human beings, the best available experimental computational devices for finding regularities even in abstract mathematical noise. Are those devices classical? No reaction. With all my respect to the actors, the time of scientific revolutions is in the past. The spirit here is different.

Comment #190 January 9th, 2013 at 2:47 pm

mkatkov #189:

1. Because people here didn’t respond to your blog comment as quickly as you would like, therefore “the time of scientific revolutions is in the past”? Do you realize how nutty that sounds?

2. I just looked at the quantum lizard paper. I haven't formed any detailed opinion of it yet, but it's clear (they say so themselves) that they're using QM only at the level of a mathematical tool or analogy. Which, in general, is great, and something I support, as long as it's clearly labeled. Today we have actually lots of examples of that sort of thing; for examples within theoretical computer science, see the survey by Drucker and de Wolf.

3. I respectfully submit that the experimentalists have been trying as hard as they can to test the limits of QM, and will continue trying. You should ask yourself whether the real problem is that you don't like the answer they've been getting (i.e., that QM is fine so far).

4. Even a fish, swimming through a continuous liquid, could only encircle a given region an integer number of times. For that reason alone, if the fish were a mathematician or physicist, I expect it would eventually want the concept of integers…

Comment #191 January 9th, 2013 at 4:20 pm

Scott #190:

2. Thanks for the reference.

3. In science we have periods of consolidation and periods of revolution. I do not like that the consolidation has taken almost 100 years, aside from QED and the Standard Model, which work with high energies. I hardly believe that approximation methods like perturbation theory are the last word in the description of physical reality. (That goes with "what we can compute is linear, and we know almost nothing about the non-linear.") There is something wrong in the starting point. Maybe lost intuition (in the framework of "shut up and calculate").

4. A fish is an object (so we already have an integer in this world), and for the region it encircles to be countable, the region should have stable, distinct properties. If you want, replace the liquid with plasma.

Comment #192 January 9th, 2013 at 5:33 pm

mkatkov #189, Scott #190

I think if I took a shitload of LSD, I could sort of imagine some liquid kinda fairies who live happily in a liquid world without writing down a single line of propositional logic or finitary stuff, but who are able to somehow grasp the Navier-Stokes mechanisms, all the things we would call the solutions of the equation… "But wait!" says the rainbow-colored Russell, approaching me with vol. 86 of the Principia in his hand, having just formalized an existence and smoothness proof.

– So you're already talking about a set of them, a set of mechanisms of their world, thinking about their set of thoughts lacking finitary means, while trying to get to the conclusion that they are the most unscientific things ever, lacking not just the notion but the potential of ever having axiomatic logic, sets, or integers; and you claim that this whole train of thought actually has some sense or use. Then you are the horror trip to me, and not vice versa.

Ok, sorry for this 😀

Comment #193 January 9th, 2013 at 7:01 pm

Time Warp Warning!

Readers, beware of following John’s links in 172 unless you have a lot of time available!

Comment #194 January 9th, 2013 at 7:17 pm

Rahul #183:

I have often considered what would be in common in math for most/all alien technologies.

At the “bottom end”, my guess is that integers, primes, and rationals would be exactly the same as we understand them. OTOH, I regard the reals as mainly a convenience for proving theorems. Alien math MIGHT have something similar. Dedekind cuts and Cauchy sequences MIGHT be the “natural ways” to get at this idea.

In higher math, it is harder to guess. Elliptic functions? Probably something equivalent. Stone–Čech compactification? Maybe not. QFT? QC??

Comment #195 January 9th, 2013 at 10:39 pm

Consider these statements which were recently made in this thread:

>> “Do other beings inhabiting our same physical universe have to describe it using the same abstractions we use?” (Rahul #183)

Watchword: “abstraction”

>> “I have often considered what would be in common in math for most/all alien technologies.” (Raoul Ohio #194)

Watchwords: “math[s]” and “technologies”

>> "lacking not just the notion but the potential of ever having axiomatic logic, sets, or integers" (Attila Szasz #192)

Watchword: "notion." (That one's enough. Once you have that as a watchword, you don't have to have "logic," "sets," or "integers.")

>> “Of course the aliens would use different formalisms”

and

“[if there was] no way to extract from their formalism anything corresponding to a concept like ‘prime number'” (Scott #184)

Watchwords: “formalisms” and, more importantly, “concept”

* * *

Alright. Enough of examples.

What all these examples show is one important but only implicit assumption: namely, that every (metaphysically powerful) lifeform in the universe must use the same cognitive means as we do, specifically, a conceptual faculty; that the means they have, to hold (and to use) their grasp of reality, must use concepts.

Not necessarily.

From among all the known species of living beings, man is the only one to possess a conceptual faculty.

If aliens also make use of a specifically conceptual faculty to deal with reality, then, sure, they may have different languages, even different formalisms; and yet, when it comes to the first-level or basic concepts, whether of the physical world or of mathematics, they, too, will make use of the same mathematical concepts such as integers, primes, etc.

But if they aren't "endowed with" a conceptual faculty, there is no way that even the idea of a formalism applies in their cognitive context (the word "cognition" being taken in a broad sense: a grasp of reality in some unspecified form). Therefore, the issue of a translation between their formalisms and ours simply doesn't arise. It's possible that no translation is possible at all, and that respective end-actions in concrete reality are the only means of "communicating."

Yet, of course, the fact is, we haven't seen aliens (or so I think). More importantly, for our present purposes: we don't know the method they use to grasp and deal with reality. (This much, I _know_!) Hence, we should not be making the uncalled-for (and for that reason erroneous) assumption that they possess a conceptual faculty.

I am sure that by "abstraction" Rahul (#183) meant "conceptual abstraction." Ditto for "notion" (Attila #192). If so, then the same objection applies.

It is true that the aliens would deal with the same physical universe; and therefore would deal with the same phenomena, including the quantities with which these phenomena exist.

Inasmuch as the word "technology" denotes something—some concrete machines, engines, devices, etc.—realized in external reality (i.e., the reality apart from the consciousness of man), then if they are metaphysically powerful (unlike insects on the earth), they will have to have a technology.

But the point is: they wouldn’t have to have used mathematics (or concepts) in order to create or use that technology. It would be sufficient that their technology was able to correctly deal with the various quantities with which the various existents concretely existed.

Bottom line: we could always communicate with them. To do so, we would have to be in conformance with reality, including its quantitative aspects—and so would they. But they wouldn't have to use concepts—whether physical or mathematical—to be able to deal with reality, even if we do.

A good epistemological diversion, this one was.

I would come back to the issue of the feasibility or otherwise of scalable quantum computing, here on this thread, or on some other thread, or at my blog.

Ajit

[E&OE]

Comment #196 January 10th, 2013 at 5:47 am

Michel's donkey example, as well as Scott's response, are amusing. It is a nice addition to Aram Harrow's amusing comparison of QC skepticism within QM to passing a needle through the eye of a camel (or was it the other way around?). One obvious point missing so far from Scott's treatments of this example is a mention of the underlying major question of understanding the amazing cognitive phase transition represented by humans. Teaching donkeys to read is silly, but the fundamental differences between humans and other animals, in particular in the context of language, are a very important intellectual challenge that has occupied scientists, philosophers, and theologians for centuries.

I also enjoyed the other analogies discussed here. For example, an explanation for the infeasibility of QC of a similar quality to the explanation given by the principles of chemistry for why turning lead into gold is impossible would be very satisfying! This despite the fact that centuries later it turned out (and this was already heavily based on the knowledge of chemistry) that, in principle, lead can be turned into gold. So, in this respect, the dream of breaking the prison of classical complexity will never be completely shattered, and will forever remain something to share with our extraterrestrial buddies.

Comment #197 January 11th, 2013 at 6:46 am

This new paper has some relevance to the foregoing discussion:

http://arxiv.org/abs/1301.1995

Comment #198 January 11th, 2013 at 7:09 am

Thanks, Mike!! I noticed that paper too, and read it with interest (though I think Daniel Gottesman told me the main ideas when he visited MIT). I especially loved the idea of turning noise to your advantage—once you know that a particular non-unital channel is attacking your qubits, you can then use that channel as a "refrigerator" to partially purify the qubits and ready them for reuse.

Comment #199 January 11th, 2013 at 7:10 am

[…] Aaronson's post "Zork's bloogorithm", which comes after his excellent "Happy New Year! My response to M. I. Dyakonov" (with an outstanding and funny list of comments, a very good […]

Comment #200 January 12th, 2013 at 12:20 pm

To the best of my memory, the paper "Quantum Refrigerator," by Michael Ben-Or, Daniel Gottesman, and Avinatan Hassidim, that Mike mentioned, originated (partially) in a discussion with Robert Alicki during his 2005 visit to Jerusalem. (But maybe I am mixing it up with another project.)

Comment #201 January 12th, 2013 at 11:09 pm

I ran across Dyakonov's paper "State of the art and prospects.." by accident. Normally, I do not read blogs either. However, this time I decided to make an exception. I would like to mention that, as much as I would like to object to Dyakonov's treatment of most of the mathematical examples in his paper, I will leave them aside. What I would like to do is remind everybody of Schrodinger's small book "What is Life": http://whatislife.stanford.edu/LoCo_files/What-is-Life.pdf

Clearly, quantum computers and life processes have many things in common. Very much as we are thus far unable to build flying birds (full scale and with the same components), we may not be able to mimic (for quite some time) life processes, especially those imitating brain activity, or even machines imitating insects in full detail. Yes, we cannot do this… however, we can see it, and we know that mother Nature (or something above it :-)) has worked out, in the slightest detail and in front of us (us included!), amazing "living" computers.

One of the greatest mathematicians of our time, Michael Gromov, has lately spent a lot of his time studying biology. What for? Well, please read the Notices of the AMS to find out. Feynman did the same, incidentally, but gave up too soon, very much as Dyakonov is now advocating… Only George Gamow was able to leave his fundamental imprint on both physics and molecular genetics.

The moral of all this is a simple one. The Manhattan project required people of different skills whereas quantum computer community is thriving on ideas generated just from within. This is surely the major problem. Dyakonov correctly notices the inadequacy of the existing NSF funding practices. Do we need another major global catastrophe in order to fix the existing sociological problems in science?! I am afraid only such a catastrophe is going to be the trigger for action. But who is going to write a letter, like Einstein did, to the president… Which president? And maybe this time it will be too little, too late… As it is stated in the Bible: arrogance is punishable…

Comment #202 January 13th, 2013 at 10:06 am

The Manhattan project required people of different skills whereas quantum computer community is thriving on ideas generated just from within. This is surely the major problem.

This comment reveals breathtaking ignorance. Say whatever else you want about the QC community, but it's got condensed-matter physicists, AMO physicists, (former) high-energy physicists, electrical engineers, theoretical computer scientists, mathematicians, chemists, foundations/philosophy people—all talking to each other (at least sometimes)! I defy you to find any part of the physical sciences that's more interdisciplinary.

Comment #203 January 13th, 2013 at 1:00 pm

All this is as familiar to me as it is to you! But the existing status quo is only indicative of much more homogeneity in thinking than one would expect from the diversity of specializations. And for good reason. String theoreticians are a perfect example: they have not produced a thing that can be experimentally tested (that is, solid numbers), but their pride in themselves is larger than the number of protons in the Universe. All these chemists, electrical engineers, and so on are trained using physics textbooks lacking up-to-date foundational questions. What we do not know, normally, we do not say. This is the format of the existing textbooks.

Incidentally, the above list does not include the geneticists, developmental biologists, etc. For a good reason: all these people went into those disciplines precisely because they would have to study much less math and physics.

When I was writing my comment, I had in mind professionals of the caliber of Michael Gromov. They have to get together, and they have to decide where to go next. Just as Ivan G. Petrovskii assembled the best mathematicians and physicists in Moscow State University in the early 1950s. Remember that Izrail M. Gelfand, one of the greatest mathematicians of all time, was accepted to MGU without a PhD and, just like F. Dyson, remained without a PhD his whole life. Incidentally, he went into bioinformatics after retiring from mathematics. Do we now have people with such vision as Petrovskii had? It is such people who could organize all these crowds of electrical engineers, computer scientists, and so on.

I had in mind just this type of outcome, since it is pretty obvious that the existing NSF policies will only consume money without any tangible output. And, at this point, I am very pessimistic about the capability of the US government or the scientific community to change the existing practices.

Comment #204 January 13th, 2013 at 1:43 pm

arkady #203: If you think the leaders in a given part of science are not very bright, the best way to show that is to enter the field and start revolutionizing it with your own insights. It’s been done before.

In the case of QC, your ideas will have to contend with those of Shor, Kitaev, Vazirani, Bennett, Freedman, Gottesman, Watrous, Wineland, Preskill, Farhi, Laflamme, Knill, Hastings, Ambainis, Aharonov, Regev, Terhal, DiVincenzo, Lloyd, and Chuang, among many others. I wish you success!

Comment #205 January 13th, 2013 at 2:39 pm

LOL … and as concrete presentations of Scott's good advice in #204, works like Bill Thurston's "On proof and progress in mathematics" (1994), Richard Hamming's "You and your research" (1986), Simon Ramo's "The business of science" (1988), Craig Venter's "A life decoded" (2007), and even Wilbur Wright's "How we invented the airplane" (1988) give explicit recipes for (in Scott's phrase) "entering a field and revolutionizing it." Inspiration is useful — even necessary — yet typically arrives not as a prerequisite, but as a by-product of a sustained practical effort. So the process Scott describes is not easy, yet neither is it mysterious!

Comment #206 January 13th, 2013 at 4:43 pm

#178 Scott: Even in 1013 an intelligent alchemist should be able to figure out that the value of gold is based on an arbitrary agreement among humans.

Comment #207 January 14th, 2013 at 3:44 pm

Scott # 204: “In the case of QC, your ideas will have to contend with those of …”

… and Aaronson …

Comment #208 January 14th, 2013 at 4:34 pm

Over on Gödel's Lost Letter and P=NP, as a remark under the topic "Flying Machines of the 21st Century", Gil Kalai has just added topics #41-#45 to his list of concrete topics in math/physics that have been raised in the now-celebrated Harrow/Kalai enthusiast/skeptic quantum computing debate.

The healthy vigor of the Harrow/Kalai debate is perhaps the best single broad answer (as it seems to me) to the wholly legitimate points that Dyakonov raises in Comment #131.

And speaking of QC devices as "the flying machines of the 21st century", please let me commend — especially to QC skeptics — the wonderful account that historian James Tobin has written of the crucial role that open scientific discourse (yes, precisely per Aaron Swartz!) played in the invention of the flying machine; Tobin's book can be found variously under the titles To Conquer the Air (American edition) and First to Fly (British edition).

Long may QC discourse remain similarly open and vigorous!

Comment #209 January 14th, 2013 at 5:16 pm

John Sidles #208: A wonderful thing about quantum computing is that, due to its recent founding as well as the culture of physics, math, and CS, essentially the entire literature of the field can be accessed for free at arXiv.org.

(The same thing can and should be done with the world's entire research literature, going back to antiquity, and I'm cautiously optimistic that I'll live to see that happen.)

Comment #210 January 14th, 2013 at 6:33 pm

Scott, please let me appreciate and commend too the extensive on-line quantum-related materials that researchers like you, and John Preskill, and Carlton Caves all provide (Caves' Internal Reports web page is an open resource from which every researcher can benefit, students especially).

May the Swartz be with you, always!

Comment #211 January 17th, 2013 at 1:52 pm

A thought on point (4) in Scott’s #133: Could we say that classical deterministic physics/computation is linear over F_1?

Comment #212 January 19th, 2013 at 2:51 pm

This current attack on QC looks completely similar to the attack launched on string theory by Woit et al. The same kinds of arguments are offered: QC is overhyped, hasn't delivered on its promises, theorists aren't talking to experimentalists, it's unfairly cornering all the funding from the NSF, and of course the classic "it's all sociology". Given the status of QC and the number of detractors it has, I wondered why someone didn't think of this strategy earlier.

Scott, as a “mercenary in the string wars” you must be well aware of how successful the skeptics can be. Welcome to the wonderful world of backseat driving!

Comment #213 January 20th, 2013 at 1:41 am

Ray #212:

Is Scott a String Theory skeptic? I didn’t get the “mercenary” insinuation.

What do you mean by backseat driving? That non-string-theory physicists should not be commenting on string theory?

Comment #214 January 20th, 2013 at 1:50 am

[…] The year 2013 started turbulent for the quantum computing field with a valiant effort by long time skeptic and distinguished experimentalist Michel I. Dyakonov to relegate it to the status of a pathological science akin to cold fusion (he does not use the term in his paper but later stated: "The terms 'failed, pathological' are not mine, but the general sense is correct."). […]

Comment #215 January 20th, 2013 at 7:03 am

The weakest criticism of FTQC indeed resembles the weakest criticisms of (e.g.) string theory.

And the weakest advocacy of FTQC indeed resembles the weakest advocacy of (e.g.) fusion power.

These common-sense reflections provide solid grounds to appreciate Gil Kalai's 45-point summary of the Harrow/Kalai debate.

Ideally, Gil's well-reasoned (and politely expressed) skeptical considerations will evoke a comparably well-reasoned (and politely expressed) response from FTQC enthusiasts … specifically, (1) FTQC enthusiasts should draft a promised (and long-overdue) successor to the QIST roadmaps of 2002 and 2004, and (2) FTQC skeptics should participate in this process.

Comment #216 February 4th, 2013 at 9:14 am

[…] thought: it’s worth noting that, if (for example) you found Michel Dyakonov’s arguments against QC (discussed on this blog a month ago) persuasive, then you shouldn’t find Anderson’s and […]

Comment #217 March 8th, 2013 at 4:14 pm

Scott already said he feels like he's playing whack-a-mole, so I apologize for popping out of my burrow with yet another question.

My own doubts about the feasibility of large-scale quantum computation have nothing to do with doubts about whether quantum mechanics, quantum field theory, or other widely accepted theories are really right. If they're not, we'll find that out in due course, but when designing machines I'm happy to assume they are.

What I'm worried about is noise models. I haven't at all kept up with the state of the art, so let me just talk about John Preskill's paper "Sufficient condition on noise correlations for scalable quantum computing". I think I understand the Hamiltonian in equation (1), which consists of the computer's 'ideal' Hamiltonian, plus a Hamiltonian for the environment, plus a Hamiltonian for an interaction between the computer and its environment. The last term is required to obey some restrictions, and I could cavil at those, but I won't.

What I'd like to see is some more general results where we get to add another term: an arbitrary self-adjoint operator on the Hilbert space of the computer, whose norm is bounded by a small constant epsilon. This doesn't really represent 'noise': it represents the fact that 'nothing is perfect'. We can't make perfect machines, we don't know the fundamental constants perfectly, etcetera.
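Schematically, the setup being described might be written like this (a sketch in my own notation, with symbol names chosen for illustration rather than taken from Preskill's paper):

```latex
% H_comp: the computer's ideal Hamiltonian
% H_env:  the environment's Hamiltonian
% H_int:  the computer-environment interaction (restricted in eq. (1))
% Delta H: the proposed extra term -- an arbitrary self-adjoint operator
%          on the computer's Hilbert space, bounded in operator norm
\[
  H \;=\; H_{\mathrm{comp}} + H_{\mathrm{env}} + H_{\mathrm{int}} + \Delta H,
  \qquad
  \Delta H = \Delta H^{\dagger},
  \quad
  \lVert \Delta H \rVert \le \epsilon .
\]
```

The question is then whether fault-tolerance can be made to work with this ε held fixed as the computation grows.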

Can we prove it’s possible to scale up quantum computations more and more without epsilon needing to get smaller and smaller?

I’d be happy to see any results along these lines. (I’ve been asking about this for decades, but paying little attention to the state of the art, so I thought it would be good to ask yet again.)

Comment #218 March 13th, 2013 at 7:02 am

John Baez #217: Sorry for the delay! I don't think you can do scalable QC if the noise is completely adversarial (and has a constant magnitude bounded away from 0)—i.e., if the entity adding the noise is told in advance what computation you're trying to do, and gets to insert ε noise per gate strategically in order to steer you away from the right answer. (Though even there, I don't know off the top of my head how to prove a no-go theorem. I'm flying today to John Preskill's 60th birthday celebration at Caltech, and will ask him and the other experts there!)

On the other hand, if the adversary has to specify the noise operator first, then the standard fault-tolerance theorem does indeed basically tell you that you can correct against ε noise per gate, where ε is some small constant that doesn't decrease with the size of the machine.
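To illustrate the kind of scaling behind "a small constant ε that doesn't decrease with the size of the machine," here is a toy caricature of the threshold idea (my own sketch, not the actual fault-tolerance analysis): one level of code concatenation maps a physical error rate p to roughly c·p², so below the toy threshold 1/c the logical error rate falls doubly exponentially with the number of levels, while above it the errors saturate.

```python
def logical_error_rate(p, c=100.0, levels=5):
    """Toy concatenation recursion: each level maps p -> min(c * p**2, 1).

    c is an illustrative combinatorial constant, so the toy threshold is
    p_th = 1/c = 0.01. This sketches the scaling only; it is not a real
    fault-tolerance analysis.
    """
    for _ in range(levels):
        p = min(c * p * p, 1.0)
    return p

# Below the toy threshold, errors are crushed doubly exponentially:
print(logical_error_rate(0.001))  # astronomically small (~1e-34)
# Above it, errors saturate at O(1):
print(logical_error_rate(0.02))   # 1.0
```

The point of the caricature is only that, once p is below a fixed constant, adding more levels of encoding drives the logical error rate down as fast as the computation grows, without ever requiring better physical gates.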

Comment #219 March 14th, 2013 at 2:37 pm

John Baez: “What I’d like to see is some more general results where we get to add another term: an arbitrary self-adjoint operator on the Hilbert space of the computer, whose norm is bounded by a small constant epsilon. This doesn’t really represent ‘noise’: it represents the fact that ‘nothing is perfect’. We can’t make perfect machines, we don’t know the fundamental constants perfectly, etcetera.”

Dear John, if you allow a "general noise" that acts as an arbitrary self-adjoint operator on the Hilbert space, then fault tolerance will not work even if this noise is not adversarial and even if you know what it is in advance. For quantum fault-tolerance (and to a large extent for classical FT as well) we need some sort of independence of the noise over qubits and gates. Moreover, when you say "without epsilon needing to get smaller and smaller," the meaning of epsilon changes when the errors are not independent. Keeping the same magnitude of noise in terms of trace distance (which is the reasonable measure) leads to a scaling up of the noise in terms of the qubit errors that are relevant to fault tolerance: the number of qubit errors scales linearly with the number of qubits.

On the other hand, there are fairly convincing reasons not to allow such a general noise operator even if you want to represent the fact that "nothing is perfect". If the noise is generated by some local mechanism (and this is a reasonable assumption even for modeling "nothing is perfect" and related issues), then completely arbitrary self-adjoint operators are out of reach for it. I think the "debate" is now between noise models which allow quantum fault-tolerance and noise models which cause quantum fault-tolerance to fail; the latter, while a little crazy (and sometimes having the statistical features of arbitrary operators), still do not represent the noise by a completely arbitrary operator, and try to get a more delicate picture of "what is going on" in modelling noisy quantum systems.

“My own doubts about the feasibility of large-scale computation have nothing to do with doubts about whether quantum mechanics, quantum field theory, or other widely accepted theories are really right.”

I think this is a very reasonable approach. Many people are charmed by the possibility that QM is not really right, which is entirely their right. But it looks more reasonable to try to understand obstacles to quantum computers within quantum mechanics. There is a delicate issue regarding the computations from quantum field theory that originally motivated the QC idea. It is reasonable to think (while not proven; see the related comment #101 by Scott) that such computations for complicated quantum systems require computational power beyond the reach of classical computers (technically, they lie in the class BQP). If "nothing is perfect" noise, or some more concrete form of detrimental noise (that applies also to natural systems and not just to man-made systems), causes quantum fault tolerance to fail, this is an indication that certain physics computations are not relevant to physical reality in regimes where they are computationally infeasible. These computations (while extremely successful in regimes where computation is possible) represent approximation methods whose scope may well be limited, and the issue of appropriate approximation methods is related to the issue of appropriate models of noise studied in quantum information theory. If some such physics computations lose their relevance or need correcting terms, this will be important and possibly even spectacular, but nothing as earthshaking as cracks in QM.

It is not unreasonable to think that the difficulties with human-made systems, which perhaps can be abstractly described by some detrimental noise operator of an arbitrary nature, will simply cause our efforts to build QC to fail without adding any new insights about the nature of general quantum systems. But the issues regarding quantum error correction and quantum fault tolerance seem fundamental and widely-scoped enough to support the hope that if open quantum systems do not allow quantum fault tolerance, this will have important consequences; and if quantum fault-tolerance is possible, then we will be able to engineer it.

Comment #220 March 15th, 2013 at 1:18 am

John Baez: “If they’re not, we’ll find that out in due course, but when designing machines I’m happy to assume they are.”

Hi John, on this point I beg to disagree. Quantum computing and quantum fault-tolerance are such revolutionary concepts and they represent such a revolutionary reality that it is not merely a matter of “designing machines.” Quantum computing provides the opportunity for examining fundamental issues in quantum physics, thermodynamics, quantum field theory, and the interface between quantum and classical physics.

This is the "due course."

Comment #221 March 15th, 2013 at 6:28 pm

Gil Kalai writes: “If the noise is generated by some local mechanism (and this is reasonable assumption even for modeling “nothing is perfect” and related issues,) then completely arbitrary self adjoint operators are out-of-reach for them.”

I’m willing to believe in the locality imposed by special relativity, which forbids adding terms to our machine’s Hamiltonian of the form XY where X and Y are observables living in spacelike separated regions of spacetime.

However, you’ll notice this restriction, taken over-literally, also rules out all the most popular condensed matter Hamiltonians, since these contain terms like XY where X is the spin of a particle at one lattice site and Y is the spin of a particle at another lattice site at the same time.
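For concreteness, the kind of term being referred to could look like a standard Heisenberg exchange coupling (the notation here is my own, purely for illustration):

```latex
% Exchange coupling between the spins at lattice sites i and j,
% taken at equal times -- read literally, an instantaneous interaction
% between spatially separated observables.
\[
  H_{ij} \;=\; J \,\vec{\sigma}_i \cdot \vec{\sigma}_j
         \;=\; J \left( \sigma^x_i \sigma^x_j
                       + \sigma^y_i \sigma^y_j
                       + \sigma^z_i \sigma^z_j \right).
\]
```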

Of course the point is that these Hamiltonians are only good approximations if the spins are changing slowly enough that we can treat the speed of light as effectively infinite.

But anyway, this leaves me wondering precisely how you’d want to implement the locality required by special relativity when deciding which error terms are physically possible.

(By the way, I prefer the phrase “error term” to “noise term” for the idea I’m talking about, which is that we don’t precisely know the Hamiltonian of our quantum computer.)

Comment #222 March 15th, 2013 at 6:46 pm

Scott wrote: “On the other hand, if the adversary has to specify the noise operator first, then the standard fault-tolerance theorem does indeed basically tell you that you can correct against ε noise per gate, where ε is some small constant that doesn’t decrease with the size of the machine.”

Could you explain why your comment doesn’t contradict Gil Kalai’s… or why if it does, he’s wrong and you’re right? He wrote:

“Dear John, If you allow a “general noise” that acts as an arbitrary self-adjoint operator on the Hilbert space, then fault tolerance will not work even if this noise is not adversarial and even if you know what it is in advance. ”

I’m wondering if the difference is this: your phrase “noise per gate” suggests a perturbation that’s a somewhat “local” self-adjoint operator, not a completely general one.

(Of course I meant an arbitrary self-adjoint operator that's bounded in norm by some small constant, and I'm hoping that Gil means this too, though he's not saying it.)

Comment #223 March 15th, 2013 at 6:48 pm

By the way, I’m perfectly willing to assume that the noise terms in the Hamiltonian (or as I prefer to say, “error terms” or “perturbations”) aren’t adversarial. As Einstein said, “the good Lord may be subtle, but he is not malicious.”

Comment #224 March 16th, 2013 at 12:18 pm

John Baez: For the sake of clarity, let me put my current understanding in the form of a list. Gil can let us know if he disagrees with anything.

* Adversary gets to specify any noise operator whatsoever (unbounded, nonlocal, nonunitary, etc.), and you know in advance what it is: scalable QC is impossible. (Why? The adversary could just replace everything by the maximally mixed state.)

* Adversary gets to specify an unbounded, nonlocal noise operator, but it needs to be unitary, and you know in advance what it is: scalable QC is still impossible. (Why? The adversary could scramble the state in such a complicated way that you had no hope of unscrambling it.)

* Adversary gets to specify a bounded, nonlocal noise operator, which need not be unitary, and you know in advance what it is: I don’t think any known fault-tolerance theorems cover this case, but I also don’t see an obvious reason why scalable QC *must* be impossible here. Same thing if the operator is unitary.

* Adversary gets to specify an unbounded, local noise operator, which need not be unitary, and you know in advance what it is: scalable QC is impossible. (Why? Because the adversary can still replace your state by the maximally mixed state using local operations only.)

* Adversary gets to specify an unbounded, local noise operator which is unitary, and you know in advance what it is: scalable QC is *possible*. (Why? Because knowing the local unitary errors, you can simply reverse them yourself!)

* Adversary gets to specify a bounded, local noise operator which is nonunitary, and you know in advance what it is: scalable QC is *possible*. (This is precisely the case covered by the standard fault-tolerance theorems.)

Modulo the “bounded, nonlocal” case, I believe that covers all the cases where the noise operator is known to us in advance. I would make a similar list for the case where the noise operator is chosen adversarially only *after* you specify the quantum computation, but (a) you’ve already said in comment #223, quite sensibly, that you’re willing to assume non-adversarial noise, and (b) in 5 minutes I need to get into a cab with Peter Shor to go back to LAX.

Which reminds me: I’m sorry I forgot to ask John Preskill about adversarial noise! But now I’ll have a whole cab ride to ask Peter about it.
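As a toy numerical illustration of two cases in the list above (my own sketch, not from the thread): a known local *unitary* error is trivially reversible, while replacement by the maximally mixed state cannot be undone by any unitary.

```python
import numpy as np

# Qubit state |+> = (|0> + |1>)/sqrt(2) as a density matrix.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)

# A known local *unitary* error: an arbitrary single-qubit rotation U
# (real orthogonal, so U^dagger = U.T).
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rho_err = U @ rho @ U.T  # state after the error

# Knowing U, we undo the error exactly by applying U^dagger.
rho_fixed = U.T @ rho_err @ U
assert np.allclose(rho_fixed, rho)

# By contrast, replacement by the maximally mixed state I/2 is fatal:
# V (I/2) V^dagger = I/2 for *every* unitary V, so no correction exists.
rho_mixed = np.eye(2) / 2
assert np.allclose(U @ rho_mixed @ U.T, rho_mixed)
```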

Executive Summary: The “combinatorial” list above might make the situation seem confusing. But an excellent way to summarize the matter is that, at present, we don’t know any noise models that definitely kill scalable QC, except for those that are so strong that they also kill scalable *classical* computation. Furthermore, of the noise models that *don’t* kill classical computation, many (most?) of them are now known, because of the threshold theorems, not to kill scalable QC either. There might be a little bit of gray area, though (e.g., the bounded, nonlocal case), which would be great to clarify.

Comment #225 March 16th, 2013 at 1:10 pm

Ok, Peter has pointed out to me that there are trivial examples of noise models that kill quantum computing but not classical computing: for example, purely dephasing noise! Also, it’s entirely possible that the numerical value of the fault-tolerance threshold is somewhat lower for QC than for classical computing. But neither of those examples seems all that useful for QC skeptics.
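Peter’s purely-dephasing example can be made concrete in a few lines of NumPy (my sketch, not from the thread): complete dephasing erases the coherences a quantum computation relies on, while leaving computational-basis (i.e., classical) information untouched.

```python
import numpy as np

# A qubit in superposition |+>: its quantum information lives in the
# off-diagonal "coherence" terms of the density matrix.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)

# Complete dephasing channel: rho -> (rho + Z rho Z)/2.
Z = np.diag([1.0, -1.0])
rho_deph = 0.5 * (rho + Z @ rho @ Z)

# Coherences are destroyed (the quantum information is gone)...
assert np.allclose(rho_deph, np.diag(np.diag(rho_deph)))
# ...but the diagonal entries -- a classical bit's probabilities -- survive.
assert np.allclose(np.diag(rho_deph), np.diag(rho))
```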

Comment #226 March 17th, 2013 at 1:46 am

Hi John,

(re: 223) Of course it is reasonable to assume that the errors are not malicious, but there is a difference between “adversarial” and “malicious.” If you take your bicycle for a ride around the block and suddenly change your mind and decide to use it for a spontaneous manned mission to Mars, you may encounter some difficulties. This is not because the gods are reading your mind and maliciously trying to make you fail, but because you need different equipment to go to Mars.

It is also useful to note that even in the most standard models of errors, there are dependencies of the errors on the evolution which may have adversarial effects. For example, the rate of errors in a time interval is taken to be proportional not to the length of the interval but rather to the amount of computation (number of computer cycles) performed in that interval.

Comment #227 March 24th, 2013 at 11:23 am

Hi Gil:

The standard model of error for quantum computation says that the rate of errors in a time interval is proportional to the length of the interval times the number of qubits, which is indeed more than the amount of computation. So the standard quantum fault-tolerance models cover this case. The problem is when the errors are not independent. If you just take John’s model of error bounded by epsilon, independent of locality, this allows for malicious errors, and even classical fault-tolerant computation can’t handle these. (This is unlike classical error-correcting codes, which can handle this model, but which are usually evaluated using a much more realistic random error model.)
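The independent-error model just described can be sketched numerically (my own illustration, with made-up parameter values): with a per-qubit, per-step error probability p, the number of errors in an interval concentrates around p × (number of qubits) × (number of time steps).

```python
import numpy as np

# Independent-error model: each qubit fails independently with
# probability p in each of n_steps time steps.
rng = np.random.default_rng(0)
p, n_qubits, n_steps = 1e-3, 1000, 500

errors = rng.random((n_steps, n_qubits)) < p
expected = p * n_qubits * n_steps  # = 500 expected errors

# By independence, the observed count concentrates sharply around the
# expectation (standard deviation ~ sqrt(expected)).
assert abs(errors.sum() - expected) < 5 * np.sqrt(expected)
```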

So we need some realistic error model which is between the extremes of nearly local, independent, errors and completely arbitrary errors. If somebody can come up with a good model of error, we can start trying to decide if it is possible to protect against these errors.

Comment #228 March 26th, 2013 at 2:25 pm

[…] (Scott) (agreeing to five points I have raised) Gil Kalai #95: yes, yes, yes, qualified yes, and yes! … […]

Comment #229 March 27th, 2013 at 1:08 am

Hi Peter, I agree with your comment. My point was that adversarial behavior of noise depending on the computational task is demonstrated already in the standard noise models, and does not necessarily manifest some malicious conspiracy. In these standard noise models fault-tolerance is possible, and the question is whether these noise models with independent errors (or nearby behavior) are realistic, and whether they fully and correctly capture the adversarial effect that complicated tasks have on the quantum devices performing them.

“So we need some realistic error model which is between the extremes of nearly local, independent, errors and completely arbitrary errors. If somebody can come up with a good model of error, we can start trying to decide if it is possible to protect against these errors.”

I agree with this as well. I think that we differ in what we regard as a “realistic error model,” not so much regarding the term “realistic” but rather regarding the term “model.” We may return to this sometime.

It can be useful to relate this to Michel Dyakonov’s paper discussed in the post. Michel suggests (implicitly) that the line between realistic and non-realistic quantum systems is very complicated, that scalable quantum computers are well within the non-realistic area, that this may be related to the uncharted territory of non-linear behavior of quantum systems (see point no. 4 in Scott’s remark 133), and that it is quite possible that there is no exciting physics to be learned from the whole episode. Michel’s scenario is certainly a possibility, but I tend to disagree with his assessment of the relevance to physics. I expect a clear dividing line for quantum systems, described by quantum fault-tolerance. It remains to be seen whether this dividing line represents the gap between today’s and tomorrow’s technology, or the gap between today’s and tomorrow’s understanding.

Comment #230 April 8th, 2013 at 6:01 am

[…] there are scientists who think that research on quantum computation is itself a bubble—see this post by Scott Aaronson. Is this further proof that bubbles can have a fractal structure? Of course the […]
