## Quantum Computing Since Democritus Lecture 14: Skepticism of Quantum Computing

I just came from Brookhaven National Lab, where I gave my standard colloquium talk on the limits of quantum computers, and got driven around the RHIC accelerator ring by physicist Rob Pisarski. I knew the lab was big; what I hadn’t quite appreciated before getting there is that it’s an *entire town*, with its own police department, restaurants, etc. In many ways it looks like lots of small towns across America, except that this one’s primary business is smashing gold ions into each other at relativistic speeds and measuring properties of the resulting quark-gluon plasmas.

When I talk to physicists like the ones at BNL, they often find it faintly ridiculous that *anyone* would doubt quantum mechanics. But people certainly do — even when they don’t admit that that’s what they’re doing — when the alternative is accepting that integers are efficiently factorizable in the physical world. Which brings us to QCSD Lecture 14, on eleven skeptical objections to quantum computing and what can be said in response to them. And yeah, if you’ve been reading this blog for years, a lot of the material won’t be new to you. It’s just one more hunk of meat to throw into the tigers’ den.

Comment #1 June 12th, 2008 at 5:20 pm

In the beginning of the part about the fourth argument, when talking about the 300 qubits, you mean each component has an amplitude of 2^-150, rather than 2^-300, right?
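The corrected arithmetic is easy to check. As a minimal Python sketch (my own illustration, not from the lecture): an equal superposition over n qubits has 2^n components whose squared amplitudes must sum to 1, so each amplitude is 2^(-n/2).

```python
from math import log2

def uniform_amplitude(n_qubits):
    """Amplitude of each component of an equal superposition of n qubits.

    There are 2**n components and the squared amplitudes must sum to 1,
    so each amplitude is 2**(-n/2), not 2**(-n).
    """
    return 2.0 ** (-n_qubits / 2)

# For 300 qubits: 2^300 components, each with amplitude 2^-150.
assert log2(uniform_amplitude(300)) == -150
```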

Comment #2 June 12th, 2008 at 5:28 pm

Thanks, fixed!

Comment #3 June 12th, 2008 at 6:47 pm

The image in the middle of the page (intuitive-hardness.svg) 404s.

Comment #4 June 12th, 2008 at 7:36 pm

Re #3: Thanks for pointing that out. The image was embedded as SVG, which then “falls back” to PNG if the browser has issues, so I’m guessing that somehow, only the PNG version was posted. Sorry for the technical difficulties.

Comment #5 June 12th, 2008 at 9:03 pm

I’m not a skeptic about possibility, I’m a skeptic about economic practicality, I think mostly because of decoherence. The question of interest is the timeline for 10 qubit, 100 qubit, 1000 qubit, 10^4 qubit computations, etc. Even more, the improvements of achieved quantum computational power have to be discounted by the improvements of achieved classical computational power over the same period. Will we be able to factor a 512 bit number faster with a quantum computer than with a classical computer by 2020? Development costs for the two technologies will be difficult to compare, but let’s say the two computers have to cost the same.

Your argument against Laughlin (#6) is compromised if we are allowed to introduce the additional computational resource of the effective discreteness of thermodynamic transitions into classical analogue computation.

Comment #6 June 12th, 2008 at 9:30 pm

David and Chris: Sorry about that! Should be fixed now.

Comment #7 June 13th, 2008 at 12:02 am

Pertaining to objection #3 (“Not enough real physics”), it seems as though many skeptics of quantum computing regard it as a roundabout exercise in mathematics.

I (coming from a math/engineering background rather than physics background) have never understood the aversion that some physicists or engineers have to pure mathematics. Yes, it seems redundant and perversely abstract, but nearly everything in science can be reduced to some pure mathematical/statistical model (whether that model is practical to deal with is a different story).

Which brings me to an open question for Scott (and everyone else in quantum computing): Do you come from a physics or mathematics background, and how do you think that has shaped your viewpoint of the field? And if you are not from a physics background, when did you first get exposed to quantum mechanics?

-AF

Comment #8 June 13th, 2008 at 1:00 am

Nitpick: nature did appear to have something like a fission reactor, in Gabon.

(http://en.wikipedia.org/wiki/Natural_nuclear_fission_reactor)

The point is well made, though.

Excellent lecture.

Comment #9 June 13th, 2008 at 2:10 am

It is very nice to see this topic discussed again! And there are larger issues worth mentioning. One topic is the role, practices, and methodology of skepticism in (and beyond) science. This post brings us back to the question of how far we can go with “conceptual” ideas. (This was recently discussed regarding FOCS/STOC acceptance policies.) It looks to me that we see a good example where Scott expects some conceptual “one liner,” but a convincing argument against QC will take much more. (Just like building QC.)

Comment #10 June 13th, 2008 at 3:03 am

“Do you come from a physics or mathematics background, and how do you think that has shaped your viewpoint of the field?”

I come from a CS background (which, between the two, is surely closer to math). That’s obviously shaped my interests and skill set. How it’s shaped my beliefs about the topics in this lecture, I’ll leave for others to judge — though I don’t actually think those beliefs are terribly different from those of most physicists working in the area!

“And if you are not from a physics background, when did you first get exposed to quantum mechanics?”

I’d heard as a child about Schrödinger’s cat, light being both a wave and a particle, etc. etc., but they were all just phrases to me — I didn’t understand how any of it differed from classical indeterminism. My first real exposure to QM came via quantum computing — mostly from studying the papers of Bernstein-Vazirani, Simon, Shor, and Grover as a freshman at Cornell, and then from doing an internship with Grover himself at Bell Labs. I was amazed that it was all just probability theory with minus signs, and that no popular book I’d ever read had let me in on the secret. Only after I’d understood the computational model was I able to absorb anything about the physics. With the help of physicist friends, I’ve been slowly filling in the gaping holes in my physics education for the last ten years.

Comment #11 June 13th, 2008 at 10:29 am

My own main academic interest on this topic is regarding point 11 that Scott mentions at the end, and I think it reflects much of the (small) academic effort in the “skeptical direction”. Somehow I find Scott’s rhetoric and argument on this issue a bit strange. (Also, is the Q: at the end a typo?)

“Argument 11 comes from people who understand the Fault-Tolerance Theorem, but who take issue with the assumption that the errors are independent.”

(Of course, this concern was raised not only by skeptics, e.g. in Preskill’s 1998 paper.)

“This argument posits that it’s ridiculous to suppose that errors are uncorrelated, or even that they’re only weakly correlated, from one qubit to the next.”

It is very reasonable to assume that errors are uncorrelated and then to go on and study what happens if they are correlated.

“Instead, the claim is that such errors are correlated, albeit in some very complicated way.”

As far as I can see it need not be complicated.

“In order to understand this argument, you have to work from the skeptics’ mindset: to them, this isn’t an engineering issue, it’s given a priori that quantum computing is not going to work.”

I do not understand this sentence, and the distinction between an engineering issue and a non-engineering issue seems artificial.

” The question is how to correlate the errors such that quantum computing won’t work.”

Yes this is an important question whether you are skeptic or not.

“My favorite response to this argument comes from Daniel Gottesman.”

The comment that certain systematic forms of errors can increase the computational power is very nice. But is it really a response, or just some concern people should be aware of?

” Q: [A???] Not only would your errors have to be correlated in some diabolical way, they’d have to be correlated in some unpredictable diabolical way.”

I don’t know what “diabolical” means, why the errors should be diabolical, or why they should be unpredictable.

“Otherwise, you could deal with the problem in general.”

How? Do you have a specific result in mind?

Comment #12 June 13th, 2008 at 10:48 am

As far as I understand it, there is a real distinction between engineering and fundamental issues: if there’s an engineering reason that quantum computers can’t be made, then it has nothing to do with a problem in the theory, but rather with our ability to build machines that exploit that theory. There are fundamental reasons why we cannot ever build superluminal communication systems, perpetual motion machines or 2-second/infinite-step hypercomputers. On the other hand, there is no fundamental reason why we cannot go faster than the speed of sound (which is why we were able to do it), build a fusion reactor (the sun is a fusion reactor and so we know it can be done, but we have a hard time making it work just right) or any of a whole host of things which were decried as impossible until we did them.

Therein lies what is, to my mind, the crux of the issue: engineering problems have to do with what we can do right now, whereas fundamental problems have to do with what we can ever do. From that standpoint, it may well be that quantum computers are “impossible” from an engineering standpoint, until someone comes up with a better mechanism for solving those engineering problems. That doesn’t mean, however, that our understanding of the physics involved in quantum computation is somehow insufficient or wrong.

Comment #13 June 13th, 2008 at 12:58 pm

Chris Granade, I thought that yours was a fine post, with which IMHO most would agree.

This leads us to ask, how much further might these ideas be extended, keeping within a (loosely-defined) cognitive class that henceforth we will call “most-would-agree …” (MWA) ?

Thus, the following brief manifesto attempts to be MWA-consistent both with Scott’s fine lecture and with Chris’ fine post, and furthermore (to elicit controversy) it will attempt to push MWA-consistency about as far as it can be pushed, in the present state of math and physics knowledge.

Under the assumption that orthodox quantum mechanics is correct—with “orthodox” meaning (MWA) the linear quantum mechanics of Schroedinger, Heisenberg, Dirac, von Neumann—then quantum computation is feasible in principle (MWA).

Very importantly, “feasible in principle” holds even in the presence of a (restricted class) of errors (MWA).

How hard is building quantum computers likely to be in practice? Very hard indeed (MWA) ! One important reason—that was not explicitly mentioned in Scott’s lecture—is the historical heuristic that, when it comes to foreseeing noise mechanisms, Mother Nature is considerably more ingenious than mathematicians and theorists (MWA).

Here no disrespect to today’s mathematicians and theorists is intended … that’s (MWA) the way things have worked in previous centuries.

That is why (MWA) present experimental attempts to build quantum computers serve the following two purposes, both of which are (MWA) fundamental, and both of which are of considerable practical importance (MWA): (1) these devices explore new realms of “noise space” in which surprising new physics and mathematics is very likely to be encountered (MWA) and (2) these devices test whether orthodox quantum mechanics is exactly true, versus being a (linearized?) approximation to some deeper theory (MWA).

No matter what the answers are to these two questions, (MWA) they are well worth exploring! Because (MWA) they are among the very deepest mathematical and physical questions—and hence the most exciting questions—that this century will explore. And also the answers to these questions will yield plenty of important practical applications, whether quantum computers are feasible or not (MWA).

Would anyone care to further extend the class MWA? Or does the above push it too far?

Comment #14 June 14th, 2008 at 3:20 am

I found the tone of Scott’s lecture very odd: if anything, the more serious objections were mocked more. I don’t know how people actually phrase objection 11, but it certainly seems reasonable to explore exactly what has and has not been proved. To insist that noise will be diabolically correlated is absurd, but so is the claim that the noise-correction theorem is the end of the story. As Scott usually puts it, the fault-tolerance theorem puts the ball in the skeptics’ court, to find some way around it.

Even if you found straw men in the wild, you’re still tilting at scarecrows.

Comment #15 June 14th, 2008 at 10:17 am

An element that is missing from most discussions of noise in quantum computing is genchi genbutsu … “go and see for yourself.”

John von Neumann was an ardent practitioner of genchi genbutsu during the development of classical computing (as this audio recording testifies (*)), and IMHO the practice of genchi genbutsu will be even more necessary to the development of practical quantum computing.

What lessons does genchi genbutsu teach? Many lessons … humility in the presence of nature’s ingenuity is one … the necessity of respectful cooperation is another … each lesson is new and endlessly creative.

——-

(*) my best transcription of von Neumann’s lecture follows:

Comment #16 June 14th, 2008 at 12:23 pm

“As Scott usually puts it, the fault-tolerance theorem puts the ball in the skeptics’ court, to find some way around it.”

It’s absolutely true that the ball is in the skeptics’ court. To be clear, the question that Scott is addressing is, “Is quantum computation possible in principle, or has it been a big distraction to take it seriously?” The question is not, “Will useful quantum computers exist any time soon?” Because for that second question, skeptics have a very strong case.

Maybe Scott’s answer to objection #11 is too glib and too snarky. Even so, this corner of the debate can be summarized as follows:

Believer: We have a fault-tolerance theorem for such-and-such error models.

Skeptic: Yes, but your error models are not realistic.

Believer: Okay, what’s a better error model?

And at this point, the debate ends, because the skeptics have no good answers. Some of them feel no need to propose better error models. They see the debate solely in terms of believers defending their turf, and therefore expect the believers to do all of the homework. Some of them, to their credit, are willing to do homework, but they simply have not found error models that are both more realistic and a show-stopper for the theory of fault-tolerance.
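To make the Believer’s side of this exchange concrete, here is a toy Python sketch of the independent error model that standard fault-tolerance theorems assume (my own illustration; the function name and fixed seed are assumptions for illustration, not from any specific paper):

```python
import random

def apply_independent_noise(qubits, p, rng=random.Random(0)):
    """Toy version of the error model assumed by standard threshold
    theorems: each qubit, in each time step, suffers a random Pauli
    error with probability p, independently of all the other qubits."""
    errors = {}
    for q in qubits:
        if rng.random() < p:
            errors[q] = rng.choice(['X', 'Y', 'Z'])
    return errors

# With p below the fault-tolerance threshold, errors stay sparse
# enough for the code to correct faster than they accumulate.
sample = apply_independent_noise(range(1000), p=0.001)
```

The skeptics’ burden, in this framing, is to exhibit a concrete replacement for this function: a correlated model that is both physically motivated and fatal to fault tolerance.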

Of course, the quest for fault tolerance is not really over. Useful quantum computers do not yet exist. The threshold theorems are not as robust as they could be in a variety of ways. The believers do have a lot more work to do. But not as part of a debate with skeptics. In fact I would welcome a better explanation of why it’s so hard to build quantum computers. At the moment, that reality is largely just an empirical conclusion. Such an explanation would either make it easier to build quantum computers, or it would be evidence that we shouldn’t try. (Or maybe both: it could warn us away from some dead ends and therefore make it easier to follow the right path.) But again, skeptical research papers have contributed very little lately.

To my knowledge, the last really useful skeptical quantum computation paper was by Unruh. He did a basic calculation to conclude that Shor’s algorithm would be swamped by noise. This paper was very useful because it helped motivate the theory of quantum error correction.

Comment #17 June 14th, 2008 at 11:10 pm

Regarding Objection #2:

I’ve seen criticisms before that the “Extended Church-Turing thesis” is more or less a straw man put up by quantum computer scientists to help justify the importance of their work.

I don’t know whether this is true (and I somewhat doubt that most researchers would have those motives…) but I don’t think I’ve ever seen a pre-Shor’s algorithm paper that explicitly states some form of the ECTT. Do you know of one?

Comment #18 June 15th, 2008 at 12:24 am

Harrison: One could find plenty of pre-Shor articles and textbooks defending P and BPP as reasonable formalizations of “efficiently computable” (examples, anyone?), though it would be harder to find a defense of the Extended Church-Turing Thesis as such (people didn’t usually lay things out that explicitly). To me, though, the ultimate argument that we’re not just knocking down a strawman is the number of computer scientists and physicists who today defend the Extended Church-Turing Thesis (or at least consider it a live possibility), post-Shor!

Comment #19 June 15th, 2008 at 12:29 am

Greg Kuperberg says …

I will gently suggest that in the above dialog, neither Skeptic nor Believer has the right of it, for the simple reason that they have (in effect) colluded to end the debate: Skeptic has made a nearly content-free assertion, to which Believer has responded with a nearly content-free question (which can in fact be construed as a rhetorical question).

This collusion is unsurprising from a game-theoretic point of view, given that neither Skeptic nor Believer perceives much benefit in continuing the dialog.

My own opinion is that a well-defined and productive path forward is to seek motivations for the in-depth study of quantum noise other than quantum computing.

Comment #20 June 15th, 2008 at 2:46 am

Greg, I completely agree with you. I mentioned how Scott usually phrases it, because it surprised me that he didn’t this time.

Perhaps what I should have said is that none of the 11 items are arguments. If you treat them as arguments, they’re stupid. But some of them (most clearly the last) can be transformed into interesting questions. I have nothing against mentioning that people make stupid arguments, but it is not worth focusing on.

Comment #21 June 15th, 2008 at 7:50 am

“Immanuel Kant wrote an entire treatise demolishing it: On the Common Saying: ‘That may be right in theory but does not work in practice.’ ”

Is this serious or a subtle joke? (link, reference)

Actually there was a movement in American philosophy (Peirce, Dewey) called pragmatism that claimed more or less that what is important is not what is true but “what works”. Sidney Morgenbesser used to say about pragmatism: “It may be true but it does not work.”

Comment #22 June 15th, 2008 at 9:33 am

Gil: I tried to find the text of the treatise, and found several encyclopedias of philosophy (through Google Books only) that specified that such a treatise exists. Doing a Google search for the full title gave a page which claimed to be a translation of the essay. I also found the abstract to a journal article discussing the essay.

Comment #23 June 15th, 2008 at 12:14 pm

Douglas Knight says:

Agreed, but then can’t the same be said of the lecture’s 11 responses to those 11 arguments?

This is not to pick on Scott’s lecture in particular, but just to say that IMHO discussions of quantum information theory tend to deteriorate whenever they focus on the practical realities of quantum computing.

One clear reason for this is that our understanding of quantum noise—mathematical, experimental, and theoretical—is seriously deficient relative to our understanding of quantum dynamics.

No one can credibly claim that quantum dynamics are understood, and so isn’t it even less credible for folks to claim (explicitly or implicitly) that quantum noise is understood?

Particularly when the mathematics of quantum noise is so much subtler than the mathematics of quantum dynamics, and the experimental study of quantum noise is so much more primitive than the experimental study of quantum dynamics.

Comment #24 June 15th, 2008 at 2:08 pm

Thanks a lot Chris! (Maybe I should be less skeptical…)

Comment #25 June 15th, 2008 at 3:30 pm

Gil: No problem. When I was typesetting, it struck me as odd, so I went to look it up. I didn’t find the claimed translation ’till today, or I would’ve made it a hyperlink.

Comment #26 June 15th, 2008 at 11:17 pm

Do people make the “small amplitudes are unphysical” argument in any context other than quantum computing?

That is, does anyone suggest that this is a place where we ought to look to see if QM breaks down? Do they compute what standard experiments tell us about upper bounds on realizable small amplitudes and suggest experiments that would more directly measure this?

Comment #27 June 16th, 2008 at 2:28 am

Can argument #5 (the appeal to the holographic principle) be looked at as a more explicit version of #8 (the “quantum mechanics is only approximate” argument)?

It seems as if the holographic principle argument would disallow not just quantum computing, but large entangled quantum states in general. (After all, they’re storing 2^n bits, in some sense, even if they’re just sitting there!)

Comment #28 June 16th, 2008 at 6:24 am

Douglas Knight asks:

“Do people make the ‘small amplitudes are unphysical’ argument in any context other than quantum computing?”

The short answer is “yes”.

But thoughtful reading-between-the-lines is necessary, because (as you anticipated) these folks are not talking about quantum computing.

See (for example) Ashtekar and Schilling, arXiv:gr-qc/9706069, Geometrical formulation of quantum mechanics, for a discussion of quantum state-spaces having non-linear Kähler metrics. These manifolds simply don’t have enough dimensions to support the high-order correlations/small amplitudes of large-scale quantum computational processes, but they do support (projective versions of) the orthodox postulates of quantum mechanics.

In an Ashtekar-Schilling quantum universe, computers having only a few qubits would work fine, but as the number of qubits increased, so would observed error rates. These errors might at first be attributed to non-fundamental technical noise … it might conceivably take quite a while until the fundamental geometric origin of the observed quantum noise was appreciated.

From the Ashtekar-Schilling point-of-view—with which I mostly agree—it is perfectly consistent to regard the Kählerian geometry of Hilbert space as an appropriate topic for mathematical, theoretical, and experimental investigation, just like Riemannian geometry.

In fact, these investigations are not only legitimate, they are fun … not least because they give you an excuse to read Terence Tao’s outstanding mathematical blog!

For folks who like quantum mechanics, general relativity, information theory, and practical engineering applications — Kähler manifolds are IMHO a small slice of paradise.

Comment #29 June 16th, 2008 at 8:58 am

As a further remark—and to respect Scott’s classification—the above Ashtekar-Schilling point-of-view is a specific example of Scott’s argument #8: “Quantum mechanics is just an approximation to some deeper theory” that specifically implies both #4: “Small amplitudes are unphysical” (since in a small-dimension Kählerian state-space at least one amplitude must be large) and #5: “Exponentially large states are unphysical” (since the Kählerian state-space supports only a polynomial number of tangent vectors).

But Ashtekar-Schilling quantum mechanics has the very important (and somewhat counterintuitive) property of remaining interesting even when Scott’s argument #8 is inverted to read: Ashtekar-Schilling quantum mechanics is just an approximation to orthodox linear quantum mechanics. The point is that Ashtekar-Schilling QM can be simulated (in principle) with polynomial space and time resources. So to the extent that A-S QM is a good approximation to most quantum systems, then most quantum systems can be efficiently simulated—which is a very important result indeed!

That is why A-S QM researchers should not be called skeptics … they should be called optimists.

Comment #30 June 16th, 2008 at 12:11 pm

Not to rain on anyone’s quantum-mechanical parade here, but Alan Turing — who I regard as one of the sharpest thinkers of the 20th century, along with, say, Einstein, Gödel and (to a slightly lesser extent) Keynes — offered an easy explanation as to why it’s not gonna be easy going from bits to qubits:

“There is a remarkably close parallel between the problems of the physicist and those of the cryptographer. The system on which a message is enciphered corresponds to the laws of the universe, the intercepted messages to the evidence available, the keys for a day or a message to important constants which have to be determined. The correspondence is very close, but the subject matter of cryptography is very easily dealt with by discrete machinery, physics not so easily.”

Comment #31 June 16th, 2008 at 3:36 pm

Cynthia, I don’t understand why that quote says anything whatsoever about the difficulty of “going from bits to qubits.” Turing was talking about the difficulty of learning about physics using a classical computer, not the difficulty of building a quantum computer.

Comment #32 June 16th, 2008 at 4:21 pm

I always interpreted Turing’s quote as a repackaging in Crypto terms of a classic Huxley quotation.

Huxley, as “Darwin’s Bulldog”, influenced my scientific career a century later, and had something to say about Education which still fascinates me,

the punchline being the final sentence:

“Suppose it were perfectly certain that the life and fortune of every one of us would one day or other depend upon his winning or losing a game of chess. Don’t you think that we should all consider it to be a primary duty to learn at least the names and the moves of the pieces; to have a notion of a gambit and a keen eye for all the means of giving and getting out of check? Do you not think that we should look with a disapprobation amounting to scorn upon the father who allowed his son, or the State which allowed its members, to grow up without knowing a pawn from a knight? Now, it is a very plain and elementary truth that the life, the fortune and the happiness of every one of us, and, more or less, of those who are connected with us, do depend upon our knowing something of the rules of a game infinitely more difficult and complicated than chess. It is a game which has been played for untold ages, every man and woman of us being one of the two players in a game of his or her own. The chess board is the world, the pieces the phenomena of the universe, the rules of the game are what we call the laws of nature. The player on the other side is hidden from us. All we know is that his play is always fair, just and patient. But, also, that he never overlooks a mistake or makes the smallest allowance for ignorance. To the man who plays well the highest stakes are paid with that sort of overflowing generosity with which the strong shows delight in strength. And one who plays ill is checkmated without haste, but without remorse. My metaphor will remind some of you of the famous picture in which Retzsch has depicted Satan playing at chess with man for his soul. Substitute for the mocking fiend in that picture a calm, strong angel who is playing for love, as we say, and would rather lose than win, and I should accept it as an image of human life. Well, now what I mean by education is learning the rules of this mighty game. In other words, education is the instruction of the intellect in the laws of nature; and the fashioning of the affections, and of the will, into harmony with those laws.”

Comment #33 June 16th, 2008 at 5:51 pm

Alright, Scott. Perhaps what Turing said is right in that it will take more than a classical computer to uncover the quantum nature of gravity, but he wasn’t so right in saying that it’ll take a quantum computer to uncover the other three forces in nature.

Comment #34 June 16th, 2008 at 6:02 pm

Cynthia, he didn’t say that either (certainly not in the passage quoted)! Turing’s text is pretty clear; I don’t think he was concealing hidden meanings like Nostradamus.

Comment #35 June 16th, 2008 at 6:28 pm

Since I basically agree with Cynthia, I’ll supply a quote from the above Ashtekar/Schilling article that makes (what I take to be) Cynthia’s case more explicitly:

And I will add my (heuristic) observation, that past generalizations of physics (and the accompanying new mathematical tools) have sooner-or-later proved very useful in engineering … yes, even general relativity … I am speaking to all you GPS users!

The same will likely prove true of quantum mechanics … and (IMHO) that era of engineering-adaptation is happening right now … the proof being that modern-day engineers read the QIT literature (and this blog) … and care about the issues debated … and respect especially the many-leveled reasons why students are attracted to this field!

Comment #36 June 17th, 2008 at 1:28 am

John, Thanks for the tip on GPS needing GR. I had missed that. As is usually the case when some basic facts are sought, Wikipedia has a nice summary.

Comment #37 June 17th, 2008 at 2:14 am

John Sidles,

Most of what you say sounds false.

A generalization is just as expensive to compute as its special case!

Why do you say the relevant manifolds don’t have enough dimensions? If we observe n qubits, then the A-S manifold should have 2^n dimensions, even if it isn’t projective space. If you claim the universe has few qubits, then you should be predicting that macroscopic superposition isn’t going to work. I don’t see how you’d get from A-S to that, but that’s a clear enough prediction, that you should focus on it.

Finally, A-S is a nonlinear deformation of QM. One expects these to be more powerful, not less! It encompasses one of Weinberg’s variations. Is it the one that allows superluminal signaling?

Comment #38 June 17th, 2008 at 7:15 am

Douglas Knight says:

“If we observe n qubits, then the A-S manifold should have 2^n dimensions, even if it isn’t projective space.”

Definitely not! For example, a Kronecker product of single-qubit states has 2n dimensions, not 2^n. Geometrically speaking, it is a Kähler manifold on which the orthodox rules of quantum mechanics can be (projectively, locally, efficiently) realized. Section IV of Ashtekar-Schilling discusses this case at some length.

In practical quantum computations the higher dimensional generalizations of Kronecker product manifolds (like matrix product states / Hartree-Fock states, density functional theory, etc.) can be computed very efficiently … for all you Mathematical Intelligencer fans, a fun introduction is Mohlenkamp and Monzon 27(2), p. 65-9. These product-sum manifolds have enough dimensions, and a sufficiently efflorescent geometry, to support plenty of quantum mechanical dynamics … pretty much all of chemistry, atomic physics, and condensed matter physics, for example.

The Ashtekar-Schilling article is excellent, but IMHO it would have been even stronger if it had included a section (in the style of Mike and Ike) on measurement operations, POVMs, Choi’s theorem, trajectory unravelings, drift-and-diffusion (etc.) … these building-blocks of modern QIT are just as readily accommodated in geometric quantum mechanics as they are in linear quantum mechanics … the main difference is that the required computational resources are explicitly polynomial in space and time.
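As a back-of-the-envelope check on the dimension counting in this exchange, here is a small Python sketch (my own illustration of the 2n-versus-2^n contrast; the function names are hypothetical):

```python
def product_state_real_dims(n_qubits):
    # A normalized single-qubit state, modulo global phase, is a point
    # on the Bloch sphere: 2 real parameters. A Kronecker product of n
    # such states therefore lives on a manifold of only 2n dimensions.
    return 2 * n_qubits

def full_state_space_real_dims(n_qubits):
    # The full projective space CP^(2^n - 1) has 2*(2^n - 1) real
    # dimensions: 2^n complex amplitudes minus normalization and phase.
    return 2 * (2 ** n_qubits - 1)

# The two counts agree for one qubit and diverge exponentially after.
assert product_state_real_dims(1) == full_state_space_real_dims(1) == 2
assert product_state_real_dims(300) == 600
```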

Comment #39 June 18th, 2008 at 6:50 am

Statistician and magician Persi Diaconis used to say that if you cannot explain a magic trick by something hidden up the magician’s sleeve, or by the magician having an accomplice in the audience, you should consider the possibility that the magician both has something hidden up his sleeve and an accomplice in the audience. So I think if we want to imagine a scenario where quantum computers fail, several of the issues raised by Scott (all related to “noise” and well within QM) may be relevant.

Overall, Greg’s point of view in comment 16 is reasonable. Unlike the earlier claims regarding noise that should have been addressed, I am not aware of a skeptical claim that requires an urgent response, or that should change a person’s a priori beliefs on this matter.

Comment #40 June 18th, 2008 at 12:22 pm

I really liked this lecture. Pity you missed QEC 07. Robert Alicki gave a very skeptical talk (slides here).

Anyway, there was one point you raised discussing the holographic principle which I have an issue with:

I believe the Kruskal extension gives a different prediction: from the point of view of an observer falling through the event horizon, the information continues to fall towards the singularity. Such an observer doesn’t pass a layer of high information density at the event horizon. It is only an external observer who will observe the buildup of information at the event horizon.

Comment #41 June 18th, 2008 at 4:38 pm

With digital error correction, I have no issue; in a short time there is a small probability of one qubit flip, etc.

It is the analogue error correction that concerns me, where errors are continuous, and so in a short time all qubits will suffer errors, albeit small ones. All correction schemes I am aware of are unable to tolerate errors on all qubits at the same time.

Comment #42 June 18th, 2008 at 8:06 pm

Anwar, the Threshold Theorem specifically assumes that errors happen on all qubits simultaneously (at some constant rate per qubit per time step).

I think you’re missing the basic point of quantum error-correction: that because of the linearity of quantum mechanics, qubit errors can be decomposed into linear combinations of X and Z errors, which are then handled separately in a “digital” way. Say whatever else you want about it, but it’s nothing at all like errors in a classical analog computer. The point is not an obvious one, but it’s been understood for 12 years, so that in this particular lecture I could take for granted that the students had seen it.
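The decomposition Scott describes can be seen directly in a few lines of numpy; the rotation angle and axis below are arbitrary illustrative choices.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X  = np.array([[0, 1], [1, 0]], dtype=complex)
Y  = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z  = np.diag([1.0, -1.0]).astype(complex)
PAULIS = {"I": I2, "X": X, "Y": Y, "Z": Z}

def pauli_coeffs(E):
    """Expand a 2x2 operator as a_I*I + a_X*X + a_Y*Y + a_Z*Z."""
    return {name: np.trace(P.conj().T @ E) / 2 for name, P in PAULIS.items()}

# A "continuous" error: a tiny over-rotation about an arbitrary axis.
theta = 1e-3
nx, nz = 0.6, 0.8                 # arbitrary unit axis in the X-Z plane
E = np.cos(theta) * I2 - 1j * np.sin(theta) * (nx * X + nz * Z)

a = pauli_coeffs(E)
# a["I"] is ~1 while a["X"], a["Z"] are O(theta): the continuous error is a
# linear combination of discrete Pauli errors, so correcting X and Z errors
# "digitally" suffices.
recon = sum(a[name] * P for name, P in PAULIS.items())
```

Syndrome measurement then collapses this superposition of errors onto a definite (correctable) Pauli error, which is the sense in which quantum errors become digital.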

Comment #43 June 19th, 2008 at 1:24 am

One issue that came up in comments to my work regarding QC and noise models, and I’d love to hear more opinions about is the following:

Suppose you create 2 entangled photons (say, an EPR pair) and let them move apart uninterrupted.

Which of the following is true:

1) There may be some correlated errors when they are created but at any later time the additional errors will be uncorrelated.

2) We may expect additional correlated errors at all later times (induced by the process leading to the creation of these photons).

3) Talking about the precise incremental (or infinitesimal) error at a time is a convenient mathematical tool but does not have a precise meaning.

Also, for actual experiments with 2 entangled photons (where they move apart in some controlled way), is 2) correct, or can we expect option 1)?

Comment #44 June 19th, 2008 at 5:05 am

Dear Scott,

>into linear combinations of X and Z errors, which are then handled separately in a “digital” way.

my understanding is that you need to measure X component of a spin 1/2, for example, to do this. When measuring it with an instrument, how precise the orientation of the instrument should be?

Do you need better precision as the length of the computation increases?

Thank you in advance for answering beginner’s question.

Regards,

Comment #45 June 19th, 2008 at 8:09 am

One of the main challenges (and frustrations) of QIT is the confluence of scientific and mathematical specialties … each of which speaks a different language … has different goals … and which (in general) does not cite articles from other specialties.

As an example, I’ll work through a chain of articles that begins with an article that Scott’s lecture cites, Dyakonov’s *Is Fault-Tolerant Quantum Computation Really Possible?* (arXiv:quant-ph/0610117). Dyakonov makes a seemingly reasonable suggestion (we will discuss at the end why it is in fact not so reasonable):

Hmmm … surely someone has done this? Well, partly. With the caveat that I’m no expert on quantum error correction, the nearest approach I know is Oreshkov and Brun’s *Continuous quantum error correction for non-Markovian decoherence* (arXiv:0705.2342).

Is Oreshkov and Brun’s analysis sufficient to answer Dyakonov’s criticisms? Well … believers might say “yes” and skeptics might say “no”. The case for “yes” is that Oreshkov and Brun’s analysis is consonant with the Threshold Theorem … as is necessarily the case … cuz duh, it’s a *theorem*. The case for “no” is that Oreshkov and Brun consider only bit-flip error correction using a three-qubit code … their model does not try to correct *simultaneous* bit-flip and phase errors … to the best of my knowledge no one has yet carried through Dyakonov’s program for this (very important) case.

Why hasn’t a full-scale Oreshkov–Brun analysis/simulation been carried to completion, to answer Dyakonov’s objections more fully? One reason is implicit in Fig. 2 of Oreshkov–Brun … the correction of even the simplest bit-flip errors (while ignoring phase errors) is impressively complicated … far more complicated than the simplified analysis of the Threshold Theorems would lead us to naively expect.

The Oreshkov-Brun analysis thus occupies a middle region of complexity and realism … their noise-and-correction model is too complicated for mathematicians and theorists to be completely happy with … but not realistic enough for experimentalists to be completely happy with.

And just to be clear: I greatly admire researchers like Oreshkov and Brun, who are brave enough to explore this not-too-popular middle ground of realism and complexity … IMHO such research is absolutely vital to the well-being of the QIT community.

I will let Gil Kalai speak for the simplicity-lovers … and I will mention that Gil shows a becoming modesty in not directing folks’ attention to his own recent preprint *Detrimental Decoherence* (arXiv:0806.2443). That is a very interesting article, Gil!

Instead I will speak up for the complexity-lovers … and my text will be the noise model of yet another recent arXiv preprint, *Surface-induced heating of cold polar molecules* (arXiv:0806.2913). The point here is that realistic models of thermal electromagnetic noise *are* available, and they include all the realistic elements that skeptics like Dyakonov would like to see taken into account, most notably nontrivial correlations in space and time. And these noise models also include (implicitly) elements that (AFAICT) are too often ignored in QIT treatments, like wave-function renormalization and AC Stark shifts … there are fluctuation-dissipation-entanglement theorems that link all of these phenomena inextricably together … and they are ubiquitous in *all* real-world noise and measurement processes.

Scott’s lecture refers to these complexities as “diabolical”, but I think this is not quite fair … the maxim “as simple as possible, but not simpler” applies here … the plain fact is, quantum noise is not as simple as the theorem-provers would like it to be. Facing up to this reality, and finding good paths for moving forward, is a major challenge for the QIT community.

A very important question is, who would want to read a more detailed analysis of quantum noise that did full justice to Dyakonov’s challenge? Such an article might proceed along the lines of Oreshkov and Brun … but it would have to be (say) *two hundred pages long* … this (IMHO) would be about the shortest possible article that could do the job … and it would set the stage for subsequent system-level analyses that necessarily would be even longer.

Is there any person, or any group, in the QIT community who has the passion, the time, and the resources to do this kind of system-level analysis? Or is QIT destined to become predominantly a community of theorem-provers, whose noise models are too simple to be relevant to real-world quantum system design? Or is there some third alternative?

Comment #46 June 19th, 2008 at 9:02 am

As far as I could understand, Dyakonov’s challenge is whether a simulation will verify the threshold theorem under the assumptions of the threshold theorem. (Thanks for mentioning my paper, John.)

Comment #47 June 19th, 2008 at 9:32 am

Gil, to extract the most benefit from Dyakonov’s challenge, I don’t think it should be interpreted too legalistically (if that is a word).

In particular, IMHO it would be very beneficial to respond to Dyakonov’s challenge using continuous measurement models … for the very good reason that instantaneous projection is not physically realizable (even in principle).

To cite just one mechanism (among many) that would have to be analyzed, it is well-known that continuous measurement approaches projective measurement only in a limit in which the integrated phase noise from the AC Stark mechanism diverges … thus the aggregate POVM is in general not a pure projection, but rather is (effectively) a projection accompanied by an additional (inexactly known) unitary that acts solely on the projected subspace.

I mention this effect not to argue on the side of the skeptics, but rather because these real-world effects are unavoidable even in principle, and as the Oreshkov-Brun article illustrates, they are highly nontrivial to analyze.

Comment #48 June 19th, 2008 at 10:26 am

Hi Scott,

I think you are at FOCS’08 PC. When do you announce the result? I know that the meeting was on June 12.

Thanks

Comment #49 June 19th, 2008 at 10:57 am

> Do you need better precision as the length of the computation increases?

By the Threshold Theorem, in principle you don’t. You just need sufficiently good constant precision (say 10^{-6}). Systematic errors below the threshold are handled just as well as random errors.

Comment #50 June 19th, 2008 at 11:02 am

> I think you are at FOCS’08 PC. When do you announce the result?

I could tell you, but I’d have to kill you afterward.
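To put rough numbers on Scott’s “constant precision suffices” answer in comment #49, here is the textbook concatenation estimate; the threshold value used below is purely illustrative, since real thresholds are code- and noise-model-dependent.

```python
# Illustrative numbers only: if the physical error rate p lies below an
# assumed threshold p_th, concatenating a distance-3 code k times
# suppresses the logical error rate doubly exponentially:
#     p_L(k) ~ p_th * (p / p_th) ** (2 ** k)
p_th = 1e-4      # hypothetical threshold; real values depend on the code
p    = 1e-6      # constant physical precision, as in Scott's 10^-6

rates = [p_th * (p / p_th) ** (2 ** k) for k in range(5)]
for k, p_L in enumerate(rates):
    print(f"level {k}: logical error rate ~ {p_L:.1e}")
```

The point is that a fixed physical precision, held constant as the computation grows, buys an arbitrarily small logical error rate at only polylogarithmic overhead.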

Comment #51 June 19th, 2008 at 4:12 pm

John Sidles, thanks for letting me know that I’m not totally out to lunch!

And Scott, please don’t get me wrong. I’m not saying that it’s gonna be impossible to build a fully-functioning quantum computer, but what I am saying is that it’s not gonna be a cakewalk to do so, much like it’s proving to be no cakewalk to find a viable theory of quantum gravity — or a cure for cancer, for that matter.

Comment #52 June 19th, 2008 at 4:47 pm

Here is a quick response to Gil’s question.

When the photon pair is created, the noise may be correlated; that is, we could describe the state of the two photons flying off by a superoperator (a linear map taking density matrices to density matrices) which cannot necessarily be written as a product of a superoperator on one part (call it photon A) and one on the other part (photon B). At later times, if the photons are flying off in free space, the ‘additional’ noise will typically be uncorrelated; that is, in a simple model, the noise could be described by a superoperator on photon A in tensor product with a superoperator on photon B. An exception would be the case where other mechanisms explicitly introduce such correlations; for example, another particle that interacts with photon A before flying off to interact with photon B.

Due to the linearity of quantum mechanics, the question whether correlated errors occur or not is in fact independent of the state of the photons. The superoperator describing the noise only depends on mechanisms by which other degrees of freedom can couple to these `modes’, not on what state, entangled or unentangled, we stick into the modes. Of course some states are more fragile than others; for example dephasing is an important source of noise for quantum information but acts trivially on classical bits.
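Barbara’s tensor-product description of uncorrelated noise is easy to check numerically; the sketch below (with an arbitrary depolarizing rate) verifies that independent local noise on an EPR pair factorizes into noise on A followed by noise on B.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def depolarizing_kraus(p):
    """Kraus operators for the single-qubit depolarizing channel."""
    return [np.sqrt(1 - 3 * p / 4) * I2] + [np.sqrt(p / 4) * P for P in (X, Y, Z)]

def apply_channel(rho, kraus):
    return sum(K @ rho @ K.conj().T for K in kraus)

# EPR pair (|00> + |11>)/sqrt(2)
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho = np.outer(bell, bell.conj())

p = 0.05                      # arbitrary illustrative noise rate
ks = depolarizing_kraus(p)

# (1) noise acting jointly, as products of local Kraus operators ...
joint = [np.kron(Ka, Kb) for Ka in ks for Kb in ks]
rho_joint = apply_channel(rho, joint)

# (2) ... equals noise on A, then independently on B:
on_A = [np.kron(K, I2) for K in ks]
on_B = [np.kron(I2, K) for K in ks]
rho_seq = apply_channel(apply_channel(rho, on_A), on_B)
```

And since this is a statement about the superoperator, not the state, the same factorization holds whether the input is entangled or a product state — Barbara’s linearity point.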

Comment #53 June 20th, 2008 at 2:31 am

Many thanks, Barbara.

Comment #54 June 20th, 2008 at 4:18 am

> I’m not saying that it’s gonna be impossible to build a fully-functioning quantum computer, but what I am saying is that it’s not gonna be a cakewalk to do so…

Not the slightest hint of a disagreement there.

Comment #55 June 20th, 2008 at 5:12 am

> Systematic errors below the threshold are handled just as well as random errors.

Thank you very much for your reply! But I find this bit hard to swallow – I guess I am questioning the assumptions, not the Threshold Theorem itself. Is it fair to say that systematic errors are correlated almost by definition, in the sense that you never get the exact value by averaging? (Because otherwise at least some branches of metrology would reduce to simple averaging to obtain arbitrarily many digits – which is certainly not the case.) Then, if I define systematic errors to be ones that cannot be averaged out, can they still be handled by the quantum error correction procedures?

Thank you again, if you (or someone) have time to enlighten me further.

Comment #56 June 20th, 2008 at 9:24 am

Anon says:

> I guess I am questioning the assumptions, not the Threshold Theorem itself … Thank you again, if you (or someone) have time to enlighten me further.

Anon, there has been a notable burst of modesty on this thread … many posters (including me) have been somewhat reluctant to cite their own papers … perhaps this is because our present understanding of the role of non-Markovian noise is not very satisfactory.

Barbara Terhal and Guido Burkard discuss some of your questions in *Fault-Tolerant Quantum Computation For Local Non-Markovian Noise* (arXiv:quant-ph/0402104).

Here I think Terhal–Burkard have asked two good questions. AFAICT, the answer to the first question, “Is there a non-Markovian threshold result?”, is “Not yet.” And the reason is implicit in the second question. Boson bath models are among the simplest bath models that begin to realistically exhibit the (very important) real-world effects of quantum entanglement and wave-function renormalization. So if we can’t prove a threshold theorem for boson baths, we have very little hope of designing quantum computers that will work in the real world.

In the context of a bosonic bath, the same quantum matrix elements that control fluctuation and dissipation also control entanglement and renormalization; in consequence it is possible to write down fluctuation-dissipation-entanglement-renormalization theorems, such that if any one of the four is quantitatively given, the other three are determined (via, e.g., Stieltjes transforms).

Thus as soon as we describe the noise in a system as “non-Markovian”, we are compelled to embrace highly non-trivial phenomena involving dissipation (decoherence), entanglement, and renormalization too … that is why the analysis in these articles tends to be long and complicated!

In practice, renormalization-entanglement effects are observed to be large. In the context of our own quantum spin imaging experiments, it is mathematically convenient to view the cantilever as being a large-j qubit. Then the mere act of bringing the cantilever close to the sample (i.e., the bath) grossly renormalizes the energy levels of the cantilever, and simultaneously entangles the ground state of the cantilever with the bath ground state in a highly nontrivial way.

Thus (IMHO) the conclusion of the (2004) Terhal-Burkard article is correct in noting that our understanding of these non-Markovian effects is very far from being satisfactory … conceptually, analytically, or via numerical modeling and simulation.

Hopefully, the above summary will stimulate someone to review the non-Markovian QIT literature since 2004, and point everyone (including me) to some good articles!

Comment #57 June 20th, 2008 at 9:44 am

Oh, and anon … you specifically asked about systematic errors … these can be regarded as (zero-frequency) components of non-Markovian noise … hence the emphasis on the (more general) challenge of developing threshold theorems and quantum computer architectures that respect non-Markovian quantum bath effects.

Comment #58 June 20th, 2008 at 8:43 pm

I haven’t read this lecture yet, so maybe this was covered, but I would like to read anyone’s commentary on Robert Alicki’s proposition “Existence of scalable Quantum Memory (Computer) contradicts Bohr Correspondence Principle”.

Comment #59 June 21st, 2008 at 11:47 am

If it hasn’t been mentioned yet, I just want to note that Scott’s article titled “The limits of quantum computing” has just appeared in the German *Spektrum der Wissenschaft*. At last there is a quality article in this magazine, after a series of articles along the lines of “what happened before the big bang”. Two other notable points: first, this is another confirmation that Spektrum just rips off Scientific American; second, the huge lag with which it does so.

Comment #60 June 21st, 2008 at 12:08 pm

I thought that Spektrum was just a translated version of ScAm? Anyway, thanks for the great read, Scott – now I didn’t have time to wash the dishes. Thumbs up.

Anyway, I just came across this jaw-jaw transcript on “QC yes/no/maybe” on the arXiv, from 2003 via Bruce Schneier’s blog, dissing the possibility of practical QC. May be of interest.

Comment #61 June 21st, 2008 at 1:51 pm

Peter,

I think Spektrum has owned Scientific American for about the last 20 years or so. There was considerable grumbling at the time about the perceived drop in quality of SA. My view is that it costs a lot of money to put out a nice science magazine, and the old SA model is perhaps not viable anymore. Too bad.

Physics Today and Astronomy are by far the best science magazines out there. How do they do it? Both have a readership that buys a lot of expensive stuff; advertisers pay top dollar for this audience.

Science News, a standard in high school libraries for 75 odd years (well worth reading), recently switched to a slicker biweekly format. Presumably to help pay for this, they loosened the standards for what advertising they accept, resulting in ads for “improve your sex life” films, etc. This caused a bit of a dustup, and they have cut them out.

Evidently most science magazines accept ads for crackpot science manifestos. Of course, drawing the line between speculative and crackpot is about as hard as writing rules for what pornography is. Anyway, it appears that $M’s have been spent advertising a book called Null Physics. What’s the deal with that?

Comment #62 June 21st, 2008 at 10:17 pm

Peter: Thanks so much for pointing me to that! I actually had no idea that I’d written the cover story for a German-language magazine. (This feels a little bizarre to me, given that my knowledge of German is mostly limited to *ein, danke, ja, Bier, Apfelsaft, Wienerschnitzel, Entscheidungsproblem, Gedankenexperiment, Schloss Dagstuhl, Mensch*, and a few unpleasant historical words.) I’ll somehow have to get my hands on a copy of that issue…

Comment #63 June 22nd, 2008 at 12:03 am

Dear John,

> hence the emphasis on the (more general) challenge of developing threshold theorems

very glad to get a response from one of the MRFM pioneers!

If I understand correctly, you are essentially saying that there is at least a possibility that things like systematic error and/or low-frequency noise could be the show-stopper for large-scale quantum computation, or at least that this is an active field of research?

Thank you also for pointing to the literature! But I guess I have to begin with the basics…

Comment #64 June 22nd, 2008 at 12:55 am

Personally, I tend to think that QC are not feasible. But I do not have a firm belief in this: I tend also to think, for example, that there is no polynomial algorithm for factoring, perhaps with a similar conviction. (I have much stronger belief that P is not equal to NP, and that QM is the right framework for physics.)

Quantum error-correction and the threshold theorems indeed changed matters in the sense that they made a convincing case that noise can be dealt with. Some troubling issues of the nature and origin of noise in quantum systems remain unclear. (Noam Nisan, who was one of the earliest HU people to study quantum computation, commented that for him the assertion that quantum systems are noisy is very mysterious and deserves much better understanding.)

Here is another attempt to answer your question regarding systematic errors, anon. Indeed, from the mathematical side of the picture there are several things which look simpler. As Scott said, quantum error-correction and the threshold theorem are very robust to various types of systematic errors. Scott’s statement “systematic errors below the threshold are handled just as well as random errors” is almost correct. Under some natural assumptions, even if the X and Z errors are planned by an adversary who has unbounded computational power and can even know ahead of time what the future computation is going to be, fault-tolerance can prevail. This robustness of the threshold theorem tells us much about which systematic errors could make the threshold theorem fail.

The models for correlated noise that are damaging to QC are indeed strange. It is reasonable to say, “we do not expect such a behavior”, or “the threshold theorem gives a good reason to believe that we will overcome various other forms of noise once we know what they are”. And indeed just as QEC and the threshold theorem will be instrumental for building QC if those can be built, if QC cannot be built, in my opinion, the threshold theorem and QEC will be instrumental to the explanation why.

A convincing explanation (within QM) of why QC are not feasible (if they are not) can rely on several points that Scott has raised. In particular, it is possible that a behavior of noise which seems unrealistic applies to evolutions and states which are themselves unrealistic.

Comment #65 June 22nd, 2008 at 10:25 pm

Dear Gil,

thank you very much for your explanation! Indeed, the subject seems deep and interesting – I’d say mundane-looking things often prove to be very fundamental, just like the improvements in curve fits to the blackbody radiation spectrum…

I am afraid to disappoint you, but I guess my question is much more elementary. Let me put it this way. Consider two quantum computations, A and B, that use a “garden variety” quantum computer with a set of spin-1/2 particles. They differ because the orientation of one part of the instrument (say, a magnet) differs by 10^-6 radian. The angle is fixed throughout the computation and hence it is a systematic error. Naturally, the same magnet is used many times in the computation. The question is, do the computations A and B always give the same answer, no matter how long they may be? If they do not, how does the computer “know” (to personify a bit) which computation was intended?

I am not sure if anyone is still reading this thread, but I’d appreciate it very much if you (or someone) kindly give me a quick response to this!

Comment #66 June 23rd, 2008 at 12:51 am

anon,

It is my understanding that, provided the induced error from this magnet is low enough, both computers will output the same computational result, depending on details. Because of the ‘wishcasting’ involved in QECC, where the qubits are projected either onto errors or onto perfect states, very small systematics should just get corrected out.

On a related note, it’s fun to play the game of going through these lectures and trying to figure out exactly which questions from the floor are *me*: I was definitely there, and was amongst the more talkative students.

As I recall, Scott was very disappointed no one was willing to play the sceptic for this lecture, as he was disappointed in our “rationality” several times during the course of the lecture series.

Comment #67 June 23rd, 2008 at 1:01 am

Dear Anon,

Indeed the type of errors you asked about will be dealt with by fault-tolerance methods. The computer runs a slightly larger program where every qubit needed for the computer program (called a “logical qubit”) is represented by a group of qubits of the computer (“physical qubits”), and the type of errors you mentioned will not be noticed at all on the “logical qubits” (because of error-correction).

(I was referring to much stranger systematic errors which are hypothetical and may very well not exist.)
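Gil’s logical-vs-physical-qubit picture can be made concrete with the smallest example, the three-qubit bit-flip code; in this numpy sketch (numbers illustrative), a small systematic over-rotation on every physical qubit is “digitized” away by the syndrome measurement.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X  = np.array([[0, 1], [1, 0]], dtype=complex)
Z  = np.diag([1.0, -1.0]).astype(complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Logical |0> of the 3-qubit bit-flip code: one logical qubit spread
# over three physical qubits as |000>.
psi = np.zeros(8, dtype=complex)
psi[0] = 1.0

# A small *systematic* over-rotation applied to every physical qubit.
eps = 1e-3
Rx = np.cos(eps / 2) * I2 - 1j * np.sin(eps / 2) * X
psi = kron(Rx, Rx, Rx) @ psi

# Measure the stabilizers Z1*Z2 and Z2*Z3; the overwhelmingly likely
# outcome is the trivial syndrome (+1, +1) -- project onto it.
P1 = (np.eye(8) + kron(Z, Z, I2)) / 2
P2 = (np.eye(8) + kron(I2, Z, Z)) / 2
proj = P2 @ (P1 @ psi)
p_trivial = np.linalg.norm(proj) ** 2        # ~ 1 - 3*(eps/2)**2
corrected = proj / np.linalg.norm(proj)

fidelity = abs(corrected[0]) ** 2            # overlap with logical |0>
# The continuous eps-rotation is gone: fidelity ~ 1 up to O((eps/2)**6).
```

The nontrivial syndrome outcomes, which occur with probability O(eps^2), would each be fixed by an X flip on the flagged qubit; this code of course ignores phase errors, which a full code like Steane’s would also handle.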

Comment #68 June 23rd, 2008 at 1:24 am

Anon, I want to second (third?) what Devon Smith and Gil Kalai are saying … yes, (small) systematic errors will be corrected, along with (small) random errors.

Speaking as a person for whom formal methods are the *last* stop on the path to understanding … rather than the first … what works for me is to have at hand three pretty good books, three pretty good articles, and three pretty good lectures … and most important, to work through *plenty* of simple numerical and analytical examples.

For me, the formal theorems only deign to “speak” to me *after* I have worked through plenty of examples. And this is true for most people … as I recall, Gauss tabulated (by hand) the distribution of all primes up to three million (?).

The problem with quantum noise is, by the time you have worked through the above program, you’ve had to learn big chunks of condensed matter physics and field theory. This is a huge undertaking … which definitely I (for one) am *not* likely to complete. All who embark on this journey swiftly develop an immense appreciation for John Preskill’s on-line lectures on this subject.

Comment #69 June 23rd, 2008 at 2:46 am

Here is a provocative issue regarding noise that I will be happy to try on those of you not swallowed by the black hole created by the LHC in the next post.

Suppose you have an intended (pure state) evolution RHO(t), for t between 0 and 1, of our quantum computer.

And suppose you know that the actual evolution SIGMA(t) is very close to RHO for every t. (Very close, say, in terms of the Pauli expansion Scott mentioned above. Simply stated, at every time every qubit is very close to its intended state.)

On what can the errors depend? Can the errors at an intermediate time depend on the process leading to this state? Can the errors depend on the process leading from this intermediate state to the terminal state?

In my opinion, the answer is yes to both these questions. If you assume that you have a noisy evolution which is very close for the whole time to a prescribed evolution, then this assumption may create a dependence between the error at an intermediate time and the entire intended evolution.

A dependence on the previous evolution, expressing the environment’s memory of the process, was considered in various places; but in my view the errors can depend on the process before *and* on the intended process ahead, and this dependence need not be expressed by some “physical” memory of the environment. It just reflects the constraint that the process is very close at ALL TIMES to the intended evolution.

What do you think??

Comment #70 June 23rd, 2008 at 8:37 am

Gil asks:

> What do you think [of the following question]?

My own perspective is that the question, as asked, is not yet sufficiently well-posed to answer … and I will suggest a modification to make it well-posed.

Suppose we try to code up a few examples of the problem as stated. Immediately we have technical trouble with the starting assumption that “the actual evolution [of a pure state] SIGMA(t) is very close to RHO for every t.”

It’s that seemingly harmless—but actually quite troublesome—phrase “actual evolution” that causes the trouble. How exactly are we to interpret this phrase when we implement our trajectory-computing codes?

One approach is to write two codes whose output is a state trajectory that describes the quantum computation. Alice’s zero-noise code is boring … it outputs the same trajectory each time.

Bob’s finite-noise code is more interesting … it outputs a different trajectory each time … and that trajectory is always close to Alice’s trajectory (and here “close” can mean “has high fidelity relative to Alice’s trajectory at every point along the trajectory”).

We feel like we are making progress. But to disrupt our peace of mind, along comes Eve with a *third* finite-noise code. Like Bob’s code, Eve’s code outputs a different trajectory each time … but that trajectory is *not* close to Alice’s trajectory … occasionally it has large “jumps” that grossly deviate from Alice’s trajectory.

Nonetheless, Eve claims her code is computing (physically) the same noisy process as Bob’s … and of course she is right!

To further add to the confusion, an exponentially large number of additional coders show up, with each code yielding grossly different outputs from the others, and yet with each code claiming to produce results physically identical to Bob’s.

To reduce the confusion, we need to choose one of the following two alternatives.

*Alternative A:* Ask only questions whose answer is the same for all of the noisy trajectory simulations.

*Alternative B:* Provide criteria that single out one particular noisy trajectory simulation as “preferred.”

At first sight, Alternative A seems to be the most mathematically elegant. But if we embrace Alternative A, then it is not quite clear (to me) how to answer your question “On what can the errors depend?” in an implementation-independent way … since each of the noise codes will in general provide a physically different explanation.

For example, even for the very simplest class of noise simulations, in which Markovian noise acts independently on the individual qubits, some noise coders will say “the errors are induced by random classical noise”, some will say “the errors are induced by quantum jumps”, some will say “the errors are induced by covert measurement” … and they will all be right.
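The equivalence John describes — “random classical noise” vs “covert measurement” descriptions of the same channel — can be checked in a few lines; here are two Kraus unravelings of the p = 1/2 dephasing channel.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Z  = np.diag([1.0, -1.0]).astype(complex)
P0 = np.diag([1.0, 0.0]).astype(complex)   # |0><0|
P1 = np.diag([0.0, 1.0]).astype(complex)   # |1><1|

def channel(rho, kraus):
    return sum(K @ rho @ K.conj().T for K in kraus)

# Two unravelings of the SAME dephasing channel (p = 1/2):
flip    = [I2 / np.sqrt(2), Z / np.sqrt(2)]   # "random classical phase flip"
measure = [P0, P1]                            # "covert Z-basis measurement"

plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())             # |+><+|

rho_flip    = channel(rho, flip)
rho_measure = channel(rho, measure)
# The individual trajectories look completely different (|+>/|-> vs
# |0>/|1>), yet the averaged evolution is identical: both Kraus sets
# realize the same superoperator.
```

In general, any two Kraus sets related by a unitary mixing matrix describe the same channel, which is exactly why the “physical explanation” of the errors is not unique.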

When we consider the (physically real) possibility of non-Markovian noise models, the equivalent noise descriptions multiply almost without limit (this is where John Preskill’s on-line notes are invaluable).

Therefore, Gil, I suggest that perhaps your problem statement might be modified to explicitly embrace Alternative B, by singling out as the preferred noise unravelling the trajectory that has the highest algorithmic compressibility.

This criterion at least has the advantage of being mathematically well-posed … albeit the figure of merit (trajectory compressibility) is (of course) NP-hard to compute.

Alternative B has the practical advantage that we can hope to be able to simulate noisy systems whose Hilbert space is exponentially large, by the expedient of calculating always in the compressed representation. So perhaps we should not be too afraid that implementing Alternative B is NP-hard in the general case … it may be that for many practical cases of interest it can be solved nonetheless.

In fact, my own experience has been that it is highly instructive to read (e.g.) John Preskill’s notes … the quantum matrix product state (MPS) literature … the quantum chemistry literature … with algorithmic compressibility in mind.

The bottom line: if we modify your question to ask not about “the actual quantum evolution” but instead about “the most compressible quantum evolution” … then the problem is better-posed … and we can start having fun right away!

Comment #71 June 23rd, 2008 at 9:12 am

Gil, I forgot to mention two things: (1) we can look forward to Scott’s coming lecture on “quantum state learnability” because of its natural connexion to quantum state compressibility, and (2) Terence Tao’s blog has several posts that describe an on-going avalanche of creative work in sparse representation and compressive sampling theory … this work too has natural connexions to the notion of quantum compressibility.

There is plenty of excitement and creative ferment in these fields … taking part in this creative ferment is one of the most enjoyable aspects of embracing the ideas of algorithmic compressibility as being central to quantum noise analysis.

Comment #72 June 23rd, 2008 at 10:23 am

Gosh Gil … at the risk of boring folks … a small risk since this thread is moribund … I will now extend the above argument to establish that the answer to your two questions is “yes” … in the specific sense that there is a perfectly valid noise unraveling in which all causality is retrograde.

The coding of retrograde noise models is straightforward … it demands exponentially large memory, but who cares?

We first recall how Bob unravels prograde (causal) noise models. He scatters noise photon 1 off the qubits, measures the outgoing noise photon 1, scatters noise photon 2, measures noise photon 2 … scatters noise photon *n*, measures noise photon *n*.

This approach can be coded with very little memory storage, because each photon’s outgoing quantum state is measured-and-forgotten before the next photon is scattered.

But suppose Bob doesn’t care about code efficiency … suppose he first calculates and stores an (enormous) wave function consisting of the qubits and all the scattered noise photons, and only reduces this (enormous) wave function to a qubit trajectory as the very last step of his code.

Provided Bob is willing to store this enormous intermediate wave function, then there are exponentially many different ways in which Bob can choose to code the final reduction to a qubit trajectory … most of them being grossly non-causal!

For example, Bob is perfectly free to condition the measurement of noise photon *n* on the result of measuring noise photon *n*+1 … Bob is in fact free to enforce retrograde causality throughout his entire noise simulation, such that the first scattered noise photon is the *last* to be measured … or Bob is free to abandon notions of causality entirely, and measure his scattered noise photons in *any* random order.

It is particularly fun to consider models in which even-numbered photons are unraveled with prograde causality, and odd-numbered photons are unraveled with retrograde causality … the result is a mixed-causality quantum-noise model that is reminiscent of the mixed-causality Wheeler-Feynman models of electrodynamics.
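The order-independence underlying all of this is easy to check numerically in a toy setting (a sketch with a random 3-party pure state, not John's photon-scattering model): measuring two ancilla "photons" in either order yields exactly the same outcome-averaged qubit state.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random pure state of (qubit, photon1, photon2): 8 amplitudes.
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)
psi = psi.reshape(2, 2, 2)

def unravel(state, order):
    """Measure the two 'photons' (tensor axes 1 and 2) in the given order,
    returning the outcome-averaged (unconditional) qubit density matrix."""
    branches = [state]
    for axis in order:
        new_branches = []
        for v in branches:
            for outcome in (0, 1):
                proj = np.zeros_like(v)
                idx = [slice(None)] * 3
                idx[axis] = outcome
                proj[tuple(idx)] = v[tuple(idx)]   # project onto this outcome
                new_branches.append(proj)
        branches = new_branches
    rho = np.zeros((2, 2), dtype=complex)
    for v in branches:
        w = v.reshape(2, -1)        # qubit index x (photon outcomes)
        rho += w @ w.conj().T       # sub-normalized branch contribution
    return rho

# Prograde (photon 1 first) and retrograde (photon 2 first) orderings
# yield exactly the same averaged qubit state ...
rho_pro = unravel(psi, order=(1, 2))
rho_retro = unravel(psi, order=(2, 1))
assert np.allclose(rho_pro, rho_retro)

# ... namely the partial trace over both photons.
rho_pt = np.einsum('iab,jab->ij', psi, psi.conj())
assert np.allclose(rho_pro, rho_pt)
```

Of course, the *conditional* trajectories differ wildly between orderings; only the ensemble average is order-independent, which is exactly why so many unravelings of one noise model coexist.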

My conclusion, Gil, is that you are absolutely right … in fact, there is no mathematical or physical requirement that noise models in QIT embody any notion of causality whatsoever.

Comment #73 June 23rd, 2008 at 11:11 pm

Many thanks to the people who kindly took time to respond to my question. I take it that simple systematic errors, while correlated with each other, are not “strange” enough to derail the quantum computation… I think I have to carefully read the relevant papers. Now I know they are worth reading!

It is very counterintuitive (at least to me) that, while the amount of e.g. phase shift during QC could be arbitrarily close to zero (though polynomially) if I am not mistaken, QECC allows for constant systematic errors that may not average to zero. Perhaps someone can (or maybe I should, if I manage to understand all) write up “quantum error correction for dummies”?

Comment #74 June 25th, 2008 at 6:31 am

Thanks, John, for the detailed remarks. (I may try to ask you about some little details off line.)

It looks like many of Scott’s points, especially numbers 5, 7, and 9, offer a lot more to discuss.

Comment #75 July 1st, 2008 at 7:34 am

Here are a few more remarks on “QC skepticism”.

1. For me the main dividing line is not between those who regard QC as feasible and those who are skeptical, but between those who regard the feasibility of QC as an interesting scientific question and those who do not. Among people who regard this as an exciting scientific question I do not see much difference between having strong opinions one way or the other, or none at all. (Of course, there is a difference in the quality of various arguments.) The skeptics Scott mentioned in the lecture appear to dismiss the feasibility of QC as an uninteresting question. (Apart, perhaps, from Oded, who does not think skeptics themselves have an obligation to work on this problem just to prove their point.)

2. The one thing I strongly believe is that QM is sufficiently wide to accommodate both the feasibility and the non-feasibility of quantum computers. (Of course, it is easier to see how QM accommodates QC being feasible.)

3. Scott’s point 7 (which is related to learnability) is quite interesting. What can make an argument of the form “we have not seen anything like this before” strong? Part of the skeptical work is to try to express the concern “we have not seen anything like this before” so that “like this” will be as wide as possible. Some concerns regarding thermodynamics in several papers by Robert Alicki and coauthors (Alicki-Lidar-Zanardi and Alicki-Horodecki) are in this spirit. The slides from Alicki’s QEC07 lecture that Joe linked also present ideas in this general direction. I do not think claims of this nature are potential “show-stoppers” on their own, but they can be important nevertheless.

4. My favorite recent shot at a general-as-possible statement in the “we have not seen anything like this before” direction is the following conjecture: “In every description of a noisy quantum system at a state ρ, the error E ‘tends to commute’ with all unitary operators U that stabilize ρ (so that UρU* = ρ).”

“Tends to commute” means a (possibly small) bias towards commutativity.

Although this conjecture is somewhat informal, I will be happy to learn about counterexamples and potential counterexamples.

5. Another conjecture referring to feasible states for noisy quantum computers (polynomiality of a certain entropy-based function) was briefly discussed by Scott and me in the post “reasons to believe II”. It is in the spirit of Scott’s Shor/Sure challenge. Scott offered in remark 60 “cluster states” as possible counterexamples, but on further private discussion it appears that they are not counterexamples.

6. A combination of points 9 and 11 from Scott’s list that can be damaging goes like this: We know (and this was mentioned many times) that if the rate of errors (say for a single qubit) scales up with the number of qubits then we are in trouble; but with the standard models of noise there is no conceptual reason to believe that this is the case. Such a reason may arise for correlated errors. For (very) correlated errors, the different mathematical ways to express “rate of noise” are no longer in agreement, and standard assumptions regarding rate of noise in terms of trace distance for the entire computer translate back to individual qubit errors that scale up with the size of the computer.

Comment #76 July 1st, 2008 at 1:03 pm

IMHO that was an outstanding post, Gil. I agree with all except #4, which maybe you could explain at greater length?

My sticking point was the following reasoning: “The ultimate noisy state density matrix is the (high temperature) identity matrix … for this state every unitary is a stabilizer … but there is no error E that commutes with every unitary … and so Gil’s principle does not apply to high-temperature states.”

It is precisely these high-temperature states that we are interested in efficiently simulating in our own MRFM work (i.e., when our quantum spin microscopes observe protein structure as encoded in the spin dynamics of a system of 10^4 spins).

Why take the trouble to simulate these large-scale spin systems, when their unperturbed density matrix is (trivially) the identity? How can there be anything “interesting” to say about these (seemingly trivial) quantum states?

The answer is that the act of MRFM observation, in combination with the pulse sequences we apply to flip the protein spins, drives this high-temperature state far from equilibrium.

Life would be easy if we could *define* the spin system to have nice spatial symmetries and Hamiltonians … if we were interested in proving QIT theorems we would immediately take this step … but this is not an option in practical quantum microscopy … we have to simulate (in detail) the disordered molecules and asymmetric spin Hamiltonians that mother nature gives us.

Therefore, Gil, from a (purely selfish) engineering point of view, your topic #4 would better serve our needs if it asked: What trajectory unravelings provide computationally efficient representations of thermal density matrices and their perturbations?

The point being (from Scott’s perspective) that these are the only unravelings that are “learnable”, and (from the QSE perspective) these are the only unravelings that are “simulatable.”

More broadly, might we devise a formalism in which “learnable” and “simulatable” quantum systems are one and the same class?

I would be *very* interested to hear Gil’s opinion—or anyone’s opinion—on this question.

Comment #77 July 2nd, 2008 at 2:07 am

John, doesn’t the operation that collapses everything to the equilibrium state commute with all unitary operators?

Comment #78 July 2nd, 2008 at 7:13 am

Gil asks:

John, doesn’t the operation that collapses everything to the equilibrium state commute with all unitary operators?

Physical intuition suggests that the answer is “no”. Here is an engineering-type reason why. We consider a spin-*j* particle, whose Hilbert space therefore has (2j+1) dimensions. Now we apply a random Markovian 3-vector magnetic field that couples to the particle via the usual spin operators {s1, s2, s3}. Physically speaking, this magnetic field randomly rotates the particle’s spin axis.

We expect that the sole equilibrium state of this system will be the identity state (to within a normalization), and this is not hard to prove. One interesting way to prove it is to unravel the randomizing magnetic field as an equivalent continuous measurement of the particle’s (three-vector) spin axis. The resulting unraveling acts to collapse an arbitrary input state into a coherent state that does a random walk on the Bloch sphere.

It is well known that a statistical ensemble of randomly-oriented coherent states has a density matrix proportional to the identity. So this is a concrete example of a process that evolves an arbitrary input state into a high-temperature state.
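For spin-1/2 this is easy to check with a quick Monte Carlo (a sketch; spin-1/2 coherent states are just pure states pointing along Bloch vectors):

```python
import numpy as np

rng = np.random.default_rng(2)

# Draw points uniformly on the Bloch sphere via normalized Gaussians.
n_samples = 200_000
xyz = rng.normal(size=(n_samples, 3))
xyz /= np.linalg.norm(xyz, axis=1, keepdims=True)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# A pure spin-1/2 state along unit vector n has rho = (I + n.sigma)/2,
# so the ensemble average is I/2 plus the mean direction dotted into
# the Pauli matrices.
m = xyz.mean(axis=0)
avg = np.eye(2, dtype=complex) / 2 + (m[0] * sx + m[1] * sy + m[2] * sz) / 2

# The directional part averages to zero, leaving the identity / 2.
assert np.allclose(avg, np.eye(2) / 2, atol=0.01)
```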

Now we consider an ensemble of such spin-*j* particles. An arbitrary input state (possibly including arbitrary quantum entanglement among the particles) unravels under the influence of the random measurements to a Kronecker product of coherent states. We know that such a Kronecker product has zero entanglement.

Now we consider any Hermitian operator that witnesses entanglement among the particles. By the above reasoning, we know that our collapsing operation destroys all such entanglement.

Therefore unitaries generated by entanglement witnesses cannot commute with the superoperator that describes the equilibrium collapse process.

Of course, the above is *not* a proof — just engineering hand-waving! It is really just a physical explanation of *why* operations that collapse to equilibrium need not commute with unitaries, and a starting point for constructing concrete examples.
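One concrete check worth recording (a toy numpy sketch, not from the discussion above): the *averaged* collapse-to-equilibrium map E(ρ) = Tr(ρ)·I/d does commute, as a superoperator, with conjugation by any unitary, simply because U(I/d)U* = I/d — consistent with Gil's question in #77. Any non-commutation of the kind described here must therefore live at the finer-grained level of specific unravelings, not the averaged map.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4  # e.g. two qubits

def collapse(rho):
    """Averaged collapse-to-equilibrium map: every state goes to I/d."""
    return np.trace(rho) * np.eye(d) / d

def random_density_matrix(d):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T                  # positive semidefinite
    return rho / np.trace(rho).real       # unit trace

def random_unitary(d):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(a)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))  # Haar-ish fix-up

# E(U rho U*) == U E(rho) U* for every unitary U.
for _ in range(50):
    rho = random_density_matrix(d)
    u = random_unitary(d)
    lhs = collapse(u @ rho @ u.conj().T)
    rhs = u @ collapse(rho) @ u.conj().T
    assert np.allclose(lhs, rhs)
```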

This example also helps us physically understand why coherent states are ubiquitously present in our noisy macroscopic world, while quantum entanglement is rare and has to be vigilantly guarded against noise.

Comment #79 July 2nd, 2008 at 12:26 pm

Hmmm … Gil … on further reflection I think that your point of view and mine are mutually consistent … probably this means I should read your preprint carefully and then do some explicit calculations … this won’t happen very fast, but perhaps over the next few weeks there will be some progress. Thanks for a great post!

Comment #80 July 4th, 2008 at 7:48 pm

FYI: yet another ignorant article about QC in a respected newspaper (telegraph.co.uk). Some well-known researchers are quoted, so I wonder if the journalist ever discussed with them what he was going to write…

Comment #81 July 4th, 2008 at 8:09 pm

Good grief, there are so many different *kinds* of mistakes here! Maybe the author had had a bet with someone that he could break the record for orthogonal pieces of misinformation about a single topic. I think this one is completely new:

“Instead of searching the internet for key words attached to images and video – which is how YouTube or Google work – quantum computers would be able to search for the images and video themselves.”

Comment #82 July 4th, 2008 at 8:15 pm

For the sake of my sanity, I’m not even going to click the link.

Comment #83 July 4th, 2008 at 11:18 pm

Greg: Following Scott’s example, I’m not following the link either, but that quote you gave is something else. I mean, yeah, it’d be cool to search inside a video, but what in the hell does that have to do with QI/QC? Where’d they pull that one out of?

Comment #84 July 5th, 2008 at 1:08 am

Chris & Greg: It’s pretty simple. Images can be recast as labeled graphs. Looking for features in images can be recast as graph matching problems (e.g., Maximum Independent Set). These types of problems can be solved using AQC. As an example you can see http://arxiv.org/find .

Comment #85 July 5th, 2008 at 1:10 am

Or you could go to the hopefully more informative link http://arxiv.org/abs/0804.4457 .

Comment #86 July 5th, 2008 at 4:07 am

My watch can solve MIS instances as well.

Comment #87 July 5th, 2008 at 4:16 am

Actually, it is overall a good and interesting article. Much better and more informative than most other articles on QC in the popular press. Does anybody know what the “counterfactual computation” the article refers to is?

Comment #88 July 5th, 2008 at 4:51 am

Gil,

It’s probably in reference to this Nature article:

Here’s the supp. for a more detailed explanation of the experiment/optical circuit:

As far as I can tell, it’s simply an implementation of Grover’s search algorithm. The whole ‘counterfactual’ label confuses me.

Comment #89 July 5th, 2008 at 4:52 am

Hmm… link isn’t showing up. Well, the article is:

Counterfactual quantum computation through quantum interrogation. Nature 439, 949-952 (23 February 2006)

Comment #90 July 5th, 2008 at 9:42 am

The above-linked *Telegraph* article is a fine example of a principle that historian Howard McCurdy describes in the conclusion of his *Space and the American Imagination*.

To the extent that this was the *Telegraph* article’s objective, I agree with Gil that the article was not so bad … obviously it would have been stronger had it been able to state more concrete goals, more concrete outreach, and (especially) more concrete links to our planet’s urgent challenges.

Comment #91 July 5th, 2008 at 10:43 am

These two seem related:

and

The latter belongs to Joseph Goebbels. So, where is the borderline between “imagination” and a lie? Doesn’t the fact that the article in question was probably sponsored by one (in)famous company trying to raise capital on its “quantum computer” (since the company’s CTO is so well aware of what the journalist meant in his bizarre statement about the image search) suggest that, at the very least, it’s not pure quixotic imagination?

Comment #92 July 5th, 2008 at 10:51 am

Geordie wrote:

“Images can be recast as labeled graphs. Looking for features in images can be recast as graph matching problems.”

Unless you actually spoke with the journalist, I find it hard to believe from the context that this is what he meant at all, because the next paragraph quotes Anton Zeilinger talking about searching a phone book by number “as an example” of this. Describing Grover’s algorithm is a good idea, but the preceding paragraph just seems like a desperate attempt to make searching an unindexed database sound sexy.

Also from the article:

“The field of video gaming could also be transformed. By exploiting the multiple states of qubits, quantum video game consoles could generate a truly random aspect to gameplay, producing a more realistic experience. They could also generate new types of games that rely upon betting against which state the qubits will take.”

Would a QC researcher have said anything remotely like this?

“Typical personal computers calculate 64 bits of data at a time. A 64-qubit quantum computer would be about 18 billion billion times faster.”

Nice that someone knows how to calculate 2^64, but the average reader will feel awfully let down when a 64-qubit computer is actually built.

Comment #93 July 5th, 2008 at 11:35 am

For the purpose of single-player video games, PRNGs are quite good enough, in my experience. I have played more video games than I care to admit, and I have never wanted my experience to be more “quantum.”

All joking aside, it really bothers me that one of the most consistent, surprising and beautiful areas of physics gets used to justify so much woo, superstition and nonsense. I mean, here it is, some very bright and talented people are working very hard to flesh out one of the most subtle models of computation yet devised, and journalists treat it as both more powerful and less exciting than it really is. When we talk about quantum computation, it isn’t interesting because it does the same things faster, but because it gives us new ways of accomplishing tasks that are asymptotically faster. With respect to Grover’s algorithm, it isn’t about being 100× faster or something, it is about doing things in a way that involves fundamentally less work! That is both much more subtle and more awe-inspiring than what so many sell quantum computation as.
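To make the asymptotic point concrete (with deliberately made-up constants): even if each Grover iteration cost a million times more than one classical database probe, c·sqrt(N) still overtakes C·N past the crossover N = (c/C)^2.

```python
import math

# Hypothetical per-step costs, chosen purely for illustration.
classical_cost_per_item = 1.0
quantum_cost_per_iteration = 1.0e6   # quantum steps a million times pricier

def classical_steps(n):
    # Unstructured classical search probes O(n) items.
    return classical_cost_per_item * n

def grover_steps(n):
    # Grover's algorithm needs O(sqrt(n)) iterations.
    return quantum_cost_per_iteration * math.sqrt(n)

# Crossover where c*sqrt(n) = C*n, i.e. n = (c/C)^2.
crossover = (quantum_cost_per_iteration / classical_cost_per_item) ** 2
assert crossover == 1.0e12

# Below the crossover the classical search wins; above it, Grover wins.
assert classical_steps(1e10) < grover_steps(1e10)
assert classical_steps(1e14) > grover_steps(1e14)
```

The constant-factor handicap only moves the crossover; it can never erase the asymptotic win, which is exactly the sense in which the sqrt(N) speedup is "robust".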

Comment #94 July 5th, 2008 at 12:25 pm

Re #78: “quantum entanglement is rare” — I’ve been told by Dr. George Hockney, who did Lattice Gauge theory at Fermilab and UCLA, then Quantum Computing at JPL, that quantum entanglement is common. But, in many hours of discussion, I do not understand his explanation.

I’ve had a small number of refereed papers on Quantum Computing, in minor venues (American Society of Engineering Education proceedings, International Conference on Complex Systems). Many people reading this comment understand QC better than I. So what, most clearly stated, are the reasons to believe that entanglement is rare, versus common?

Given that one can prepare a set of N particles, for N greater than 2, such that no pair of them is entangled but the set of N is entangled; and given that whether or not two particles are entangled can depend upon the observer; and given that the number of particles in an event can depend on the observer, how can the less-than-expert human understand?

I, for one, depend on Scott Aaronson’s lucid explanation, among other sources.

Comment #95 July 5th, 2008 at 12:26 pm

People are working on “searching for the images and video themselves” now, without quantum computers—and as far as I know, current state-of-the-art is pretty bad. That’s why Google hasn’t deployed it yet.

I only skimmed the paper Geordie linked above, but it looks like it’s about solving an optimization problem given some features. It seems to me the key to solving image recognition is in the right choice of features, not in optimization methods. So can an AQC do that any better than vision researchers?

Comment #96 July 5th, 2008 at 1:03 pm

lylebot: AQC enables a new class of heuristic algorithms for AI researchers to try on problems believed to be important for their field. A good solver that works well on new classes of problems can permanently increase the rate of progress in a field like AI. So while all the tool does is solve problems delivered to it by researchers, the hope is that eventually it will become more than this.

Chris: Ultimately what matters with technology is what it does, not what it is or how it works. This fact motivates the way technology is communicated to non-experts. While you care a lot about “subtle and awe-inspiring” issues, remember that a 10x speed-up on an important problem over today’s state of the art is a huge deal.

Comment #97 July 5th, 2008 at 1:16 pm

Sorry if this is a duplicate! A browser crash intervened … anyway this rewrite is better-spirited!

——-

Stas asks: Where is the borderline between “imagination” and falsehood? Stas, you have asked a key question (IMHO)… one which is at the heart of my own interest in QIT.

One answer that will occur to *Shtetl Optimized* readers is the orthodox scientific response. It is then very natural to ask,

The empirical answer is that one can encode math, science, and engineering into end-to-end models and simulations, and then use these simulations as trusted foundations upon which to build enterprises.

To appreciate the exponentiating power of this path, just listen to Intel’s fantastic “Chip Talk” podcasts! Intel’s cadenced technical pathfinding for the next generation of 32nm and 22nm hardware, as a platform for the next generation of teraflop computing, provides an outstanding (IMHO) example of technical imagination.

Of course, this path of technology-based federation has been trod ever since Robert Moray formed the Royal Society … which throughout its history has embraced the practical and federative aspects of science (as John Gribbin’s excellent history *The Fellowship* describes).

Nowadays the federative power of science and technology is expanding exponentially, thanks largely to the usual suspects: internet connectivity, global databases, Moore’s Law increases in computation power and storage.

In consequence, humanity’s opportunities for initiating federative enterprises are (arguably) greater *right now* than at any time since the late eighteenth century.

Many people’s interest in QIT (well, my interest at least) stems from recognition that the availability of practical algorithms for efficient quantum simulation is presently cadence-limiting for extending this federative activity to quantum systems. To an engineering mind, Scott’s lectures on quantum noise and quantum learnability suggest powerful new approaches for advancing in this direction.

So the good news is, quantum system engineers now are taking a passionate interest in each new theorem that the QIT community proves … and an even *more* passionate interest in the mathematical concepts and techniques that are used to prove these theorems.

Even more exciting—albeit somewhat disquieting too—is the inescapable reality that the fundamental theorems and mathematical tools of QIT are destined to fill a central role in 21st century politics, business, ownership, education, and ethics.

IMHO, QIT researchers should not be too upset about this … any more than Robert Boyle was upset that his gas laws were used to create design rules for steam engines.

Comment #98 July 5th, 2008 at 2:23 pm

Geordie: One problem is that while, indeed, a 10× speedup on a critical problem is a very big deal, it also isn’t robust. What I mean by that is that you can lose that speedup if the error-correction overhead is too high, or if you can’t “clock up” your quantum computer to be as fast as a classical computer. On the other hand, O(sqrt(n)) will *eventually* be better than O(n), no matter how much smaller those constants are, and so that sense of a speedup is better defined and more robust against unexpected difficulties.

Since this is in response to a lecture on skepticism of quantum computing, let me try putting it slightly differently: if we “sell” quantum computing as being “ten times as fast” as classical computing (whatever that may even mean), then it’s a much easier claim to disprove. We do ourselves a disservice, I think, when we oversimplify what quantum computing is. Frankly, I think that we can sell the idea that quantum computers let us solve the same problems using new kinds of techniques that are in some fundamental sense faster. Not as in “PowerPCs are faster than x86s,” but as in multiplication *a la russe* is faster than repeated addition. We can illustrate the point, I think, by pointing out that quantum computing allows us to take advantage of the structure of some problems in a very subtle sense.

Comment #99 July 5th, 2008 at 6:37 pm

“The field of video gaming could also be transformed. By exploiting the multiple states of qubits, quantum video game consoles could generate a truly random aspect to gameplay, producing a more realistic experience. They could also generate new types of games that rely upon betting against which state the qubits will take.”

“Would a QC researcher have said anything remotely like this?”

Of course! The last 8 words could totally vaguely fit into a lecture on Bell’s Inequality!

I’m amazed that this guy managed to convince Deutsch and Zeilinger that he had any understanding of what they were saying. (Assuming, of course, it was a two-way conversation…)

Note to self: If a journalist ever interviews me for a math/science article, don’t let them quote me until I’m damn sure they know what I’m talking about. Or maybe university departments should hire PR people to convey scientific advances in understandable but correct terms.

Comment #100 July 5th, 2008 at 9:32 pm

Geordie, are you having any success implementing these AQC algorithms yet?

Comment #101 July 6th, 2008 at 1:31 am

Feynman on honesty

http://www.ar-tiste.com/feynman-on-honesty.html

Comment #102 July 6th, 2008 at 1:31 am

Kaori, you can check out our latest experimental paper to see what we’ve published to date, a link to it is http://dwave.files.wordpress.com/2008/07/20080704_d-wave_lz_1q_experiment.pdf.

Comment #103 July 6th, 2008 at 1:34 am

Ugh, I am having no luck with links today. Try the above link without the period at the end.

Comment #104 July 6th, 2008 at 4:19 am

Geordie,

Same here with the links. Neat paper/result. Best of luck scaling your systems up!