## Reasons to believe II: quantum edition

At Greg Kuperberg’s request, I’ve decided to follow my Ten Reasons To Believe P!=NP with…

Thirteen Reasons Why I’d Be Surprised If Quantum Computing Were Fundamentally Impossible

So that there’s no question about exactly where I stand, I’ll start out by repeating, for the ten billionth time, the Official Scott Aaronson Quantum Computing Position Statement.

- It’s entirely conceivable that quantum computing will turn out to be impossible for a fundamental reason.
- This would be much more interesting than if it’s possible, since it would overturn our most basic ideas about the physical world.
- The only real way to find out is to try to build a quantum computer.
- Such an effort seems to me at least as scientifically important as (say) the search for supersymmetry or the Higgs boson.
- I have no idea — none — how far it will get in my lifetime.

I now offer thirteen arguments to support the above views.

**The Obvious Argument.** Quantum mechanics has been the foundation for all non-gravitational physics since 1926. Hoping that it would “just go away” has been one of the most consistently losing strategies in the history of science. If physicists and engineers didn’t take quantum mechanics seriously as a description of the world, they wouldn’t have been able to invent the laser, transistor, or classical computer. For that matter, they wouldn’t be able to explain why all the atoms in the universe don’t instantly disintegrate. Now, if you start with quantum mechanics, and write down the model of computation that directly flows from it, what do you end up with? BQP: Bounded-Error Quantum Polynomial-Time.

**The Experimental Argument.** Ten years ago, one wouldn’t have been able to do much more than mount a general defense of quantum mechanics. But by now, liquid-NMR quantum computers have been built that not only factored 15 into 3 x 5 with small probability of error, but also searched 8-item databases. I’ve seen some of the machines that performed these staggering computational feats right here in Waterloo; they look like big-ass cylinders with the word “Bruker” on them. Seriously, while liquid-NMR (at least for now) doesn’t seem to be scalable, there’s been lots of recent work on solid-state NMR, photonics, and ion traps, all of which (if I’m not mistaken) are up to at least 3 qubits. While I don’t think the experimentalists are anywhere close to succeeding, these are smart people who haven’t been sitting on their asses (or if they have, then no doubt hard at work at a lab table or something).

**The Better-Shor-Than-More Argument.** Why do skeptics always assume that, if quantum mechanics turns out to be only approximate, then whatever theory supersedes it will reinstate the Extended Church-Turing Thesis? Why isn’t it just as likely, a priori, that the new theory would yield even more computational power than BQP?
This isn’t merely a logical point: to the extent that people have tried to propose serious alternatives to quantum mechanics (where “serious” means “agreeing with known experiments”), those alternatives often do involve more computational power than BQP.

**The Sure/Shor Argument.** If you believe quantum mechanics is going to break down before nontrivial quantum computing becomes possible, then you must believe there’s some point where it will break down — some level of size, or complexity, or whatever, at which it will cease to be a useful description of the world. What is that point? In other words, where is the line — possibly a fuzzy, asymptotic, resource-dependent line — that puts the quantum states that have already been observed on one side, and the quantum states that arise in Shor’s factoring algorithm on the other? In a paper I wrote three years ago, I called such a line a “Sure/Shor separator,” and challenged skeptics to come up with some example of what it might be. I even tried to get the ball rolling by studying such separators myself. My idea was that having a Sure/Shor separator could motivate further research: once they knew where the “barrier” was, the experimentalists could set to work trying to cross it; then, if they succeeded, the skeptics could come back with a new barrier, and so on. Unfortunately, no skeptic has yet risen to the challenge. It’s not hard to see why: if you start with the many-particle entangled states that have already been observed (for example, by the Zeilinger group and by Ghosh et al.) and then throw in a few closure properties, you quickly end up with — well, the set of all quantum states.
Coming up with a “reasonable” set of states that includes Sure states but doesn’t include Shor states turns out to be an extremely hard problem.

**The Linearity Argument.** In my experience, at least 70% of all objections to quantum computing boil down to the idea that a quantum computer would be a “souped-up analog computer” — a machine that would store information not in voltage differences or the positions of pulleys, but instead in exponentially-small amplitudes. From this idea it follows readily that, just as “old-school” analog computers have always run up against scalability problems, so too will quantum computers. To see why the analogy fails, think about classical probabilities. If you flip a coin a thousand times, you’ll end up with a probability distribution over outcomes that requires real numbers of order 2^{-1000} to describe. Does it follow from this that classical probabilistic computers are really analog computers in disguise, or that classical probability theory must be a mere approximation to some deeper, underlying theory? Of course not — for, unlike voltages or pulleys, probabilities evolve in time by means of norm-preserving linear transformations, which are insensitive to small errors. Well, quantum amplitudes also evolve by means of norm-preserving linear transformations, and this is what makes them behave like probabilities with respect to error, and not like the state variables of an analog computer.

**The Fault-Tolerance Argument.** Among the many nontrivial consequences of this linearity, there’s one that probably counts as a separate argument: the Threshold Theorem. This theorem states that even if a quantum computer is subject to noise, we can still use it to do universal computation, provided we have parallel processing and a supply of fresh qubits, and provided the error rate is at most ε per qubit per time step, for some constant ε>0 independent of the length of the computation.
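To give a feel for why a threshold exists at all, here is a toy classical analogue — a repetition code with majority voting, not the actual quantum fault-tolerance construction. Once the per-copy error rate is below 1/2, adding redundancy suppresses the logical error rate rapidly; above it, redundancy only makes things worse.

```python
from math import comb

def logical_error(p, n):
    """Probability that a majority of n copies fail (n odd), when each
    copy independently fails with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Below the (classical) threshold, redundancy suppresses errors rapidly:
for n in (1, 3, 5, 7):
    print(n, logical_error(0.01, n))   # 0.01, ~3e-4, ~1e-5, ~3.4e-7

# Above threshold, adding redundancy makes things worse:
print(logical_error(0.6, 7))           # ~0.71, worse than 0.6
```

The quantum theorem is much harder (errors are continuous, and the correction circuitry is itself noisy), but the same below-threshold/above-threshold dichotomy is the heart of it.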
The original lower bound on ε was about 10^{-6}, but recently Knill and others have brought it up to 1-3% under plausible assumptions. Many quantum computing researchers talk about this theorem as the knight in shining armor who rode in unexpectedly to vindicate all their hopes. They’re entitled to do so, but to me, the theorem has always felt more like a beautiful, detailed working-out of something that couldn’t possibly have been false. (And not just because it’s a theorem.)

**The What-A-Waste Argument.** Why do I say that the threshold theorem “couldn’t possibly have been false”? Well, suppose quantum mechanics were an accurate description of reality, yet quantum computing was still impossible for some fundamental reason. In that case, we’d have to accept that Nature was doing a staggering amount of quantum computation that could never be “extracted,” even in principle. Indeed, even assuming that life is (and always will be) confined to the vicinity of one planet, the resulting computational waste would make the waste of 10^{11} uninhabited galaxies look like chickenfeed. I don’t deny that such a possibility is logically consistent, but my complexity-theoretic instincts rebel against it.

**The Non-Extravagance Argument.** In my opinion, if quantum computers could solve NP-complete problems in polynomial time, then there really would be grounds for regarding them as physically extravagant. Like coming up with theories that allow causality violations and superluminal signalling, coming up with models of computation that can simulate NP, #P, and PSPACE is just too easy. It’s not interesting. The interesting task is to come up with a model of computation that’s stronger than the usual ones (P, BPP, and P/poly), but not so strong that it encompasses NP-complete problems. If it weren’t for BQP, I don’t think I’d have any clear idea of what such a model could look like.
(Sure, we have problems and complexity classes below NP, but that’s different from a full-fledged model of computation.)

**The Turn-The-Tables Argument.** If building quantum computers that outperform classical ones is fundamentally impossible, then it must be possible to write classical computer programs that efficiently simulate any quantum system found in Nature. And yet, even though this way of looking at the question is perfectly equivalent, there’s a reason quantum computing skeptics avoid it. This is that, as soon as you frame the issue this way, they (the skeptics) are the ones who look like wild-eyed technological optimists — believing we’ll be able to simulate superconductors and quark-gluon plasmas on an ordinary desktop PC! The “staid,” “conservative” position is that such a simulation won’t be possible — or, equivalently, that the systems being simulated have more computational power than the PC doing the simulating.

**The Island-In-Theoryspace Argument.** String theorists have been ridiculed for claiming that string theory is “too beautiful to be wrong.” But as Peter Woit points out in his fascinating new book, this is not at all a bad argument. It’s a fine argument; the real question is whether string theory — with its perturbation series, ten dimensions of which six are compactified for unknown reasons, landscape of vacua, etc. — really is as beautiful as its proponents think it is. At the risk of breaking my vow, let me hasten to say that I’m in no position to judge. What I do know is that there’s something mathematically unique about quantum mechanics: how it takes advantage of special properties of the L_{2} norm that fail for other p-norms, how the parameter-counting for mixed states that works perfectly with complex numbers fails with real numbers and quaternions, and so on. Crucially, it seems all but impossible to change quantum mechanics while retaining its nice properties.
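The parameter-counting point can be checked in a few lines. Counting real parameters of an unnormalized mixed state (a Hermitian matrix over the complex numbers, a symmetric matrix over the reals), the count multiplies across subsystems only in the complex case — a sketch of the standard counting argument:

```python
def real_params(n, field):
    """Real parameters in an unnormalized n-dimensional mixed state:
    Hermitian matrices over C (n^2), symmetric matrices over R (n(n+1)/2)."""
    return n * n if field == "complex" else n * (n + 1) // 2

# For two qubits, the parameters should multiply across the subsystems
# ("local tomography") -- this works over C but fails over R:
for field in ("complex", "real"):
    joint = real_params(4, field)          # one 4-dimensional system
    product = real_params(2, field) ** 2   # two 2-dimensional subsystems
    print(field, joint, product, joint == product)
# complex: 16 16 True;  real: 10 9 False
```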
More so than general relativity or any other theory we have, quantum mechanics gives every indication of being an island in theoryspace.

**The Only-Game-In-Town Argument.** However one feels about the alternatives to string theory — loop quantum gravity, spin foams, twistors, and so on — at least each one has a “developer base,” a community of physicists who are actively trying to make it work. By contrast, I don’t know of any picture of the world in which quantum computing is impossible, that’s being actively developed by any research community anywhere. (Gerard ‘t Hooft and Stephen Wolfram are not research communities.) All the skepticism of quantum computing that I’m aware of is purely negative in character.

**The Historical Argument.** If the above arguments are sound, then why haven’t people already succeeded in building quantum computers? It’s been what, ten years already? Some historical perspective might be helpful here: in Samuel Johnson’s The History of Rasselas, Prince of Abissinia, written in 1759, Johnson has one of his characters give a correct explanation of why heavier-than-air flying machines should be physically possible, and then build a test plane that promptly plummets into a lake. Johnson was safe in ridiculing the idea; it would be another 144 years before Kitty Hawk. Closer to our topic, Darwin wrote in his autobiography about an eccentric loon of his acquaintance, who dreamed of building an engine to automate routine human thought. Though the loon — a certain Charles Babbage — hadn’t run afoul of any fundamental theory, his proposal to build a classical computer was a century ahead of its time. Since the 1600’s, science has often been generations ahead of technology.
History gives us no reason at all to assume that a technology will be discovered to be compatible with known laws of physics at about the same time as it becomes possible to implement.

**The Trademark-Twist Argument.** This last argument is the hardest one to articulate, but possibly the most compelling to my mind. In my view, Nature has been telling us, over and over and over, that our everyday intuitions will match the physical world if and only if we first apply a little “twist” to them. Often this twist involves an unusual symmetry, or switching from the L_{1} to the L_{2} norm, or inserting negative or complex numbers where our intuition says that only nonnegative real numbers would make sense. We see such a twist in special relativity, in the metric that’s not positive definite but instead has a (-1,1,1,1) signature. We see it in the -1 phase that the universe picks up when you swap a fermion with its identical twin. We see it in the fact that, to rotate an electron back to where it was, you have to turn it not 360^{o} but 720^{o}. We see it in the Dirac equation. We see it, of course, in quantum mechanics itself. And what is BQP, if not P=BPP with Nature’s trademark little twist?
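The L_{1}-versus-L_{2} twist can be made concrete numerically: stochastic matrices preserve the 1-norm of probability vectors, unitaries preserve the 2-norm of amplitude vectors, and in both cases linearity guarantees that small errors stay small. A minimal sketch:

```python
import numpy as np

# Classical: a stochastic matrix (columns sum to 1) preserves the 1-norm
# of a probability vector.
S = np.array([[0.9, 0.2],
              [0.1, 0.8]])
p = np.array([0.7, 0.3])
print(np.sum(S @ p))                    # 1.0

# Quantum: a unitary (here a real rotation) preserves the 2-norm of an
# amplitude vector.
theta = 0.4
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
psi = np.array([0.6, 0.8])              # 0.6^2 + 0.8^2 = 1
print(np.linalg.norm(U @ psi))          # 1.0

# Linearity: a small error in the state is not amplified by the evolution.
eps = 1e-6 * np.array([1.0, -1.0])
print(np.linalg.norm(U @ (psi + eps) - U @ psi))   # ~1.4e-6, same size as eps
```

This is exactly the contrast with an analog computer, whose nonlinear dynamics can blow a 10^{-6} error up into a completely different trajectory.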

Comment #1 September 8th, 2006 at 12:25 pm

How is number 10 connected to quantum computing? And I’m not sure that this is an argument in favor of something. Usually, I think of theories that are *robust* as being more believable.

Comment #2 September 8th, 2006 at 12:29 pm

Interesting. Thanks! I was going to list which of the arguments I agree with the most. But actually, it’s a complicated discussion and it’s just as well to consider your list as one long explanation.

Another “argument” concerning your bullets and some of your arguments: “Sometimes a cigar is just a cigar.” When you are faced with one major surprise in science, then it is very tempting to wish for some even more radical scientific theory. For example, if you have trouble believing that the Earth is round, you could say, “Well gee, now that we suppose that the Earth’s surface has a topology, what if it’s actually a torus? What if it’s a non-orientable surface of genus 5?” But often when you are surprised in one direction in science, there are no further surprises in that direction. The Greeks discovered that the Earth is round thousands of years ago, and today, modulo very minor quibbling, it should still be called round.

Likewise, no one has *any* good ideas for modifying quantum probability, either to make QC impossible or for any other purpose. Quantum probability is no more than non-commutative probability. It is at least as logically rigid as the supposition that planets are round, or indeed that the laws of physics have local rotational symmetry. Since no one has proposed any plausible modifications or extensions, it’s hard to say whether it would be “interesting”. By the same token, would it be “interesting” to “discover” that the periodic table is completely wrong?

For the same reason, I’m not sure that I would say that building a quantum computer is as scientifically important as the search for supersymmetry. I don’t mean to say that it’s less important; rather the comparison may be apples-vs-oranges, because there are barely plausible alternatives to supersymmetry. In other words, it is a traditional experimental question in the sense that there is some semblance of a null hypothesis. In the case of QC, I’m not sure, per your argument 4 and others, that there is any serious null hypothesis.

Comment #3 September 8th, 2006 at 12:32 pm

I just sent a copy of this to my friend Bill Gates

(just kidding)

Comment #4 September 8th, 2006 at 1:01 pm

anonymous:

> How is number 10 connected to quantum computing?

As I tried to make clear, for me “quantum mechanics *is* quantum computing” — or rather, those who believe quantum computing is impossible have the burden of explaining where our current understanding of quantum mechanics is wrong.

> And I’m not sure that this is an argument in favor of something. Usually, I think of theories that are robust as being more believable.

I completely disagree with you. The best theories are the brittlest theories, since those are (1) the least arbitrary, and (2) the easiest to rule out by experiment if they’re wrong.

Comment #5 September 8th, 2006 at 1:09 pm

Greg: Thanks!

While you don’t quite say this, your comment hints at the idea that it might not be worth trying to build a quantum computer, since it’s so obvious that it would work. To many others, of course, it’s not worth trying to build a quantum computer since it’s so obvious that it *wouldn’t* work. I can think of no better argument for doing it than the combination of these two arguments for not doing it.

Comment #6 September 8th, 2006 at 1:27 pm

Fascinating post Scott. I can think of one other motivation for trying to build a quantum computer. It’s not obviously physically impossible (in fact I would go further and say that it’s not obvious, at present, even what the major practical bottlenecks are), and yet it does seem to be an enormous engineering challenge. Which leads me to suspect that the solution will be quite beautiful.

Sean

Comment #7 September 8th, 2006 at 3:31 pm

> As I tried to make clear, for me “quantum mechanics is quantum computing” — or rather, those who believe quantum computing is impossible have the burden of explaining where our current understanding of quantum mechanics is wrong.

Although no refutation of QC looks particularly likely at the moment, by far the most serious possibility is some new law of thermodynamics that contradicts the hypotheses of fault tolerance. It wouldn’t change zero-temperature quantum mechanics, but it would be an extra mathematical theorem or principle at positive temperature. This points to your reason 4, the Sure/Shor separator. I think that more work could be done in this direction, not by restricting the set of pure states as in your paper, but rather by demanding mixed states. The hard part, for the skeptics, is that the demand must be forgiving enough to still allow classical computation and all known quantum condensed-matter systems such as lasers, superconductors, etc. This new law of thermodynamics should also be self-consistent, in the same sense that the Boltzmann distribution (for example) is mathematically consistent.

The hard part for the believers is to write a better discussion of how and whether the assumptions of quantum fault tolerance are realistic. I certainly believe in quantum fault tolerance, but I think that the explanations of this point aren’t very good.

Comment #8 September 8th, 2006 at 3:43 pm

> While you don’t quite say this, your comment hints at the idea that it might not be worth trying to build a quantum computer, since it’s so obvious that it would work.

It certainly is worth trying to build a quantum computer, but I’m not sure that it should count as a science experiment. As Sean says, it could also be viewed as an engineering challenge. Really the endeavor is in an interesting gray area between science and engineering — if it were sheer engineering, it would probably satisfy Moore’s Law.

But no, it is not like smashing particles together just to see what comes out. It is more like the projects to build the first nuclear reactor. It was not obvious that humanity was capable of making it work, but it was clear enough well ahead of the end whether any particular try would work.

It doesn’t faze me that other people are convinced that it can’t work. The last good skeptical paper as far as I know was the noise paper by Unruh, which led to the discovery of quantum error correction. Unless the skeptics start writing useful papers again, they are just the same as the doubting Thomases who thought that airplanes were impossible. Convincing them is a social question, not a science question. (It may be a respectable social question, but still…)

Comment #9 September 8th, 2006 at 6:32 pm

“Although no refutation of QC looks particularly likely at the moment, by far the most serious possibility is some new law of thermodynamics that contradicts the hypotheses of fault tolerance. It wouldn’t change zero-temperature quantum mechanics, but it would be an extra mathematical theorem or principle at positive temperature.”

Well I think this falls directly under Scott Aaronson’s Quantum Computing Position Statement (new law and all).

“The hard part, for the skeptics, is that the demand must be forgiving enough to still allow classical computation and all known quantum condensed-matter systems such as lasers, superconductors, etc.”

And then there is the real question which is whether the laws of physics actually work this way! Contriving worlds where the above is true isn’t really the problem is it? Experimentally demonstrating that we live in such a world is the important question. This, in my opinion, is the hard part for skeptics. They’re going to have to either go into the lab and do experiments or propose the experiments which will justify their beliefs, right? And even then they have to worry because the threshold theorem is not a static object. As we learn more about how it works, we learn how to adapt it to more and more situations.

I’m also curious about your statement that the challenge will come from thermodynamics. Isn’t thermodynamics nothing more than quantum theory applied to large systems? One would have to simultaneously derive new laws without modifying quantum theory and at the same time not contradict the threshold theorems.

Comment #10 September 8th, 2006 at 6:52 pm

> Well I think this falls directly under Scott Aaronson’s Quantum Computing Position Statement (new law and all).

That is sort-of right, Dave, except that skepticism is the least untenable if you accept the Schrodinger equation but posit that there is something missing in our understanding of thermodynamics or many-body theory.

> And then there is the real question which is whether the laws of physics actually work this way! Contriving worlds where the above is true isn’t really the problem is it?

I think that even contriving a credible new law of thermodynamics is plenty hard enough, because it needs to be mathematically consistent with itself and everything else that you think is true. What you are saying is, never mind drafting a credible proposal to land astronauts on Venus; “the hard part” is to actually do it. I agree that experimental vindication is even harder, but…

> Isn’t thermodynamics nothing more than quantum theory applied to large systems? One would have to simultaneously derive new laws without modifying quantum theory and at the same time not contradict the threshold theorems.

That’s right, and I certainly don’t expect skeptics to succeed. But I will allow that the task of solving the Schrodinger equation for a large system, or finding properties of solutions, is not rigorous, even if we suppose that the Schrodinger equation itself is completely safe. So it is just barely conceivable that some unknown demon lies there.

By contrast, replacing quantum probability itself with something “better” seems completely off the wall to me.

Comment #11 September 8th, 2006 at 7:36 pm

This is not directly related, but I thought this seems like a place with some Quantum guys, so maybe I can find an answer to some of my simple questions?

In BB84, why doesn’t Bob store the incoming qubits and wait till Alice announces the entire sequence of bases she used to prepare the data? Wouldn’t this be more efficient?

Comment #12 September 8th, 2006 at 8:38 pm

I’m both relieved and disappointed — I was *sure* this post would provoke a bitter flame war. Or are the combatants still sharpening their knives?

Comment #13 September 8th, 2006 at 10:10 pm

To question:

(I hope you don’t mind the aside, Scott.)

> In BB84, why doesn’t Bob store the incoming qubits and wait till Alice announces the entire sequence of bases she used to prepare the data? Wouldn’t this be more efficient?

Certainly in BB84 Bob could hold onto his qubits and wait for Alice to announce her bases before measuring them. This does require good quantum memory, though. Paul Kwiat has done some experiments implementing this idea.

However, in the end this can only improve the rate of sifted bits by a factor of two. If efficiency is that much of a concern, Alice and Bob can achieve the same goal by preparing and measuring in one of the bases almost all the time, and preparing and measuring in the other basis a small fraction of the time (choosing to do so randomly and independently). Almost every received qubit is now sifted, at the expense of needing to run for a long enough time to have good statistics in both bases. See Lo, Chau, and Ardehali for more details.
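The accounting above can be checked with a toy simulation of just the basis-sifting step (eavesdropping, error estimation, and privacy amplification are all ignored, and the bias parameter q is my own notation, not from the Lo-Chau-Ardehali paper):

```python
import random

def sift_fraction(n, q, rng):
    """Fraction of n qubits surviving sifting when Alice and Bob each
    independently pick the 'rare' basis with probability q."""
    kept = 0
    for _ in range(n):
        alice = rng.random() < q
        bob = rng.random() < q
        kept += (alice == bob)        # qubit kept only if the bases match
    return kept / n

rng = random.Random(0)
print(sift_fraction(100_000, 0.5, rng))    # standard BB84: about 0.5
print(sift_fraction(100_000, 0.05, rng))   # biased bases: about 0.905
```

The expected sifted fraction is (1-q)^2 + q^2, which approaches 1 as q shrinks — the source of the (at most) factor-of-two improvement over the unbiased protocol.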

–Jim Harrington

Comment #14 September 9th, 2006 at 1:25 am

> (I hope you don’t mind the aside, Scott.)

Not at all — thanks for taking care of it!

Comment #15 September 9th, 2006 at 4:12 am

Count me in as one of the doubting Thomases who simply can’t believe that physics violates the Extended Church-Turing Thesis.

That said, I find your Sure/Shor and island-in-theoryspace reasons quite alarming. If I may posit some very vague handwavy reasons why they may not apply, first, the direct one – quantum mechanics is a very weird theory, so weird it’s possible no one would have come up with it without having run into it, and it has extraordinarily bizarre computational properties. To find another theory with equivalently weird properties in the same well wouldn’t surprise me in the least. Feel free to find this argument totally unconvincing.

Second, and more plausibly, physics in general is filled with weird invariants, and it seems quite plausible that some subtle invariant blocks quantum computation, by making it require an exponential amount of… something. It sounds wishy-washy, but doesn’t seem too implausible by analogy to Maxwell’s demon, which gets foiled by information theory in a way which took a very long time to work out, with lots of misperceptions about quantum mechanics being involved along the way.

Of course, all of this just strengthens the basic point that this whole field is worth studying more, because any new fundamental invariant in physics would be an extraordinary discovery, and building a real quantum computer couldn’t help but clarify the issue one way or the other.

Comment #16 September 9th, 2006 at 4:56 am

> Count me in as one of the doubting Thomases who simply can’t believe that physics violates the Extended Church-Turing Thesis.

I have to admit that I was a bit querulous in my comments there. I just looked at your web page and I think that your resume is fantastic! (Assuming that you are the same Bram Cohen — why not register with Blogger?)

Anyway, in my view, in order to evaluate QC, it is crucial to learn enough quantum mechanics so that it isn’t “weird”. (Experience shows that this may not be sufficient, but it is necessary.) But you don’t need to learn quantum mechanics across the board. In a sense, QC doesn’t directly depend on physics at all, but only on a kind of probability that is used in quantum physics. It can be called either quantum probability or non-commutative probability. In any case you can learn it (for instance, from Nielsen and Chuang) with very little reference to atoms, physical forces, etc.

Comment #17 September 9th, 2006 at 9:07 am

This might be a little bit off topic, but what do we know about the difference between QC being possible and QC being usable in practice? If, say, the cost of producing a quantum computer grows very fast (exponentially?) with the number of qubits, can quantum computers still compete with (parallel?) classical computers whose cost grows linearly with computational power? How fast may the cost grow before quantum computers always become more expensive for large problems? Do we know enough about error correction and the like to predict a limit to the growth? What if the quantum computer gets exponentially slower for each qubit?

Comment #18 September 9th, 2006 at 9:09 am

Oops, sorry about the formatting.

Comment #19 September 9th, 2006 at 10:00 am

As someone who knows zilch about QC, but who used to know quite a lot about quantum mechanics, maybe I can stick my ill-informed oar in.

I have a prejudice against quantum computing. It comes from the fact(?) that there’s a lot of references to entanglement and Bell Inequalities in the QC literature. This raises a big red flag to me, and leads me to predict that the effort will come to a sorry end.

People looked at the EPR thought-experiment for 50 years and basically came up with “QM is right, and there’s no evidence for anything ‘behind’ it”. But — in its publicity at least — QC trades off the so-called weirdness of the EPR thought experiment. Didn’t I see a note somewhere with a graph of references to the original papers, which shows a big jump after 50 years as QC took off? I think so.

I guess my question is, quantum physics worked quite happily for 50 years by (in practice) saying that EPR thought experiments are interesting exercises but ultimately don’t tell us very much. And now QC is resurrecting them, but I don’t see there is anything to be gained from doing so. To the extent that QC has dug up Bell’s inequalities and the like, I suspect it is an effort of computing people who don’t really know the physics but are attracted by the idea that there is a free lunch somewhere in this quantum weirdness.

So, assuming I’m wrong, I’d at least like to suggest that QC people do what my first-year physics lecturer recommended 28 years ago, which is not to get dragged into the philosophy of QM, and just stick to solving problems.

Thanks for the opportunity to vent a little. And thanks for the glimpse into your world I get from your blog.

Comment #20 September 9th, 2006 at 12:47 pm

> If, say, the cost of producing a quantum computer grows very fast (exponentially?) with the number of qubits, can quantum computers still compete with (parallel?) classical computers whose cost grows linearly with computational power?

This is a good question which can be answered. By definition, we will only say that we have QC if the difficulty of constructing a quantum computer scales at most polynomially with the number of qubits. Exponential difficulty is basically where we are now; it’s the reason that quantum computers sort-of exist, but are just toys.

The gold standard of quantum fault tolerance, by the way, is quasilinear scaling rather than polynomial. In other words, the cost should be linear times some logarithmic factors. This is the most that you can hope for classically as well.

Quasilinear cost comes from a somewhat idealistic model that applies only to the extra computation that you need to correct errors. The cost overhead of geometric scaling (more and more VLSI, stacking chips, even cooling the server rooms) is polynomial. Despite all of the fancy methods with transistor layout, microcoding, and so forth, one of the present limits to classical computation comes just from overheating rooms by putting too much computer equipment into them. This is because of polynomial versus quasilinear scaling of overhead.

Comment #21 September 9th, 2006 at 1:13 pm

> People looked at the EPR thought-experiment for 50 years and basically came up with “QM is right, and there’s no evidence for anything ‘behind’ it”.

The topic in question is not the full physics of quantum mechanics, but more precisely non-commutative or quantum probability as revealed by the Copenhagen interpretation. And the conclusion is not merely that there is no evidence for anything “behind” it. Rather, evidence has accumulated that asking what is “behind” quantum probability is not a good research question.

(You might suppose that progress in science consists entirely of answering questions, but another important part of it is to retire questions that will never have good answers. For example, finding a practical Northwest Passage used to be a fundamental question in geography, until eventually people realized that it just isn’t a very good question.)

On the other hand, even though nothing much has come from asking what’s *behind* quantum probability, the great discovery of the past 10-20 years is that there is a great deal *in front of* quantum probability. That is what QC and also QI are all about. I like the subject exactly because it retires many old “quantum philosophy” papers as irrelevant, but elevates some of the remaining ones as important for new reasons.

I started in this area with a viewpoint similar to yours: I thought that QC was a combination of acceleration by a constant factor with quantum devices, and some woo-woo hype about quantum weirdness. But when I looked at the review papers by Preskill and Steane (Nielsen and Chuang had not yet been published), my prejudgment became untenable. Regardless of any “promises” about what QC will be, these papers were really getting somewhere. They were not written by people who just hype and blather.

Comment #22 September 9th, 2006 at 1:23 pm

Thanks for taking the time to respond. At some point I guess I need to get beyond my prejudices and read some of the serious stuff in the field. Thanks for the pointers in this line — survey works seem about the right level.

Rather, evidence has accumulated that asking what is “behind” quantum probability is not a good research question.

Exactly. That’s actually what I was trying to say.

Comment #23 September 9th, 2006 at 2:31 pm

[Only] wild-eyed technological optimists … [believe that] we’ll be able to simulate superconductors and quark-gluon plasmas on an ordinary desktop PC!

From a quantum system engineering point of view, the above is the most provocative statement in Scott’s post! Because the emerging quantum system engineering view is that generic complex quantum systems *can* be emulated with polynomial (desktop) resources.

Here by “generic quantum system” is meant any quantum system whose quantum trajectories are algorithmically compressible. Which is (roughly) to say, any quantum system that is *not* carrying out a quantum computation.

“Simulate superconductors and quark-gluon plasmas?” Sure! Because, why not? What grounds are there to think that these systems pose a fundamentally intractable computational problem?

Of course, by the year 2020, Scott’s “desktop computer” will no doubt run Apple’s OS XXX “Sabertooth” operating system, and have ~1024 hyperthreaded CPU cores, with each core running at ~50GHz clock speed.

These future desktop computers will then supply similar computational capability to the present-day supercomputers that simulate … precisely Scott’s “superconductors and quark-gluon plasmas!”

Furthermore, can’t equal or greater progress reasonably be foreseen in the efficiency of quantum simulation algorithms?

The confluence of these two trends — faster computers and efficient quantum simulation algorithms — is grounds for considerable optimism (which offsets the “negative criticism” of which Scott spoke) that present-day fundamental advances in quantum informatics are laying solid engineering foundations for future large-scale job-creating industries: quantum microscopy, spintronics, smart materials, synoptic biology.

Comment #24 September 9th, 2006 at 2:48 pm

It’s a shame that quantum informatics is laying the foundation for large-scale job-creating industries. I want job-destroying industries instead; I want people to have more free time.

Comment #25 September 9th, 2006 at 3:21 pm

John: I emphatically disagree that people who study quantum systems believe that polynomial resources are sufficient to simulate them. There are a large number of molecular simulation problems where it is clearly the case that the quantities that require calculating can’t be calculated with any of the approximate methods available today. One example: chemical reaction rate calculations (for example in combustion calculations). Another example: enzyme catalysis.

Comment #26 September 9th, 2006 at 3:30 pm

Dave said:

“Isn’t thermodynamics nothing more than quantum theory applied to large systems?”

I’m not sure that this is true – I thought there was a problem deriving the arrow of time from microscopically reversible dynamics (even quantum ones)? Shouldn’t the second law be thought of as a law of nature that’s additional to, and independent from, the postulates of quantum mechanics? So, maybe there is theoretical room for Greg’s “fourth law,” without modifying QM.

On a related note, it’s hard to see how a (fault tolerant) quantum computer could work in a universe in which quantum theory is true, but which has reached thermodynamic equilibrium – there would be no way to prepare fresh ancillas.

Sean

Comment #27 September 9th, 2006 at 3:39 pm

One comment on argument #6 (fault tolerance) – my understanding is that the thresholds and overheads that have been derived so far are all relative to particular error models (e.g. how errors are correlated in time and space). Is this necessarily the case? Could one formulate a more general statement about when fault tolerant QC is possible, that’s independent of the details of how the computer is coupled to its environment? To make a (possibly spurious) analogy with thermodynamics: we know in quite general terms when it’s possible to extract work from a heat bath (== threshold) and also how efficient that process of work extraction can be made (== overhead). All this is couched in terms that are independent of the specific details of how the engine is coupled to the baths (== error model ??).

It seems plausible to me that having a physical realisation of a quantum computer, connected to an actual physical environment, and controlled by noisy “knobs,” would help flesh out our understanding of the threshold results.

It’s worth noting that the role of experiment, in general, is not just to learn new “laws of nature”, but more generally to discover/elucidate things that are beyond our present understanding. Much of experimental condensed matter physics is like this – few people expect that e.g. high temperature superconductivity relies on physics beyond conventional quantum theory, but its discovery was a big surprise that is still not fully understood. I anticipate that as more and more sophisticated “toy” quantum computers are built over the next few years, there will be quite a few experimental surprises, which is another “basic physics” motivation for building them.

Comment #28 September 9th, 2006 at 3:41 pm

I have a prejudice against quantum computing. It comes from the fact(?) that there’s a lot of references to entanglement and Bell Inequalities in the QC literature. This raises a big red flag to me, and leads me to predict that the effort will come to a sorry end.…

So, assuming I’m wrong, I’d at least like to suggest that QC people do what my first-year physics lecturer recommended 28 years ago, which is not to get dragged into the philosophy of QM, and just stick to solving problems.

Tom S.: For me, a huge part of the appeal of quantum computing is that it lets you talk meaningfully about entanglement and other issues that date back to the beginning of QM, but *without* getting dragged into endless philosophical debates. Today we ask not, “What does entanglement really mean?” but instead, “What’s it *good* for? What can you *do* with it that you couldn’t do with only classical correlations?” The latter question is the sort on which nontrivial progress can be (and has been) made.

If you do take the time to get acquainted with this field, I think you’ll find that the landscape around fundamental issues in QM is very different than it was 28 years ago.

Comment #29 September 9th, 2006 at 5:22 pm

Greg Kuperberg said… It’s a shame that quantum informatics is laying the foundation for large-scale job-creating industries. I want job-destroying industries instead; I want people to have more free time.

I felt so too, until I spent considerable time visiting (trapped upon) a tiny third-world island near 8°36’51.98″N, 144°29’42.65″E (use Google Earth).

This island was paradise on one level — no money, no stores, no need that any person have a “job”. In consequence, the people were extraordinarily rich in time; there was little or nothing to do all day, except contemplate the infinite blue purity of the Pacific Ocean.

Here is what a feral professor looks like in this environment.

On the other hand, there were no physicians, no scheduled service to the island, no evacuation possible in an emergency: no easy way to leave at all, and for most people, nowhere to go if you *did* leave. In consequence, the suicide rate, especially among young people, was dismayingly high.

The island did have a solar-powered satellite-download internet link, though. It sure felt weird to be checking the arxiv server from a thatch hut … but I got used to it.

The island’s children would laugh uproariously whenever they saw that I could not easily, e.g., open a coconut with a lump of coral. In their culture, young people can learn every skill needed for adult life by the age of eight. Which is good, because the life expectancy is in the mid-40s.

The whole experience created in me a profound appreciation of the global need for job-creation. Which is why I study quantum information theory with an emphasis on large-scale job creation. For me, quantum mathematics are not an escape from mundane reality, but rather, a tool for mitigating some of its more distressing aspects.

Caveat: the human reality of third-world life is ultra-complex — the above account describes about one percent of the reality of daily life on these islands.

Comment #30 September 9th, 2006 at 5:41 pm

I’ve got a question for you, Scott. I think you can answer it without breaking your vow.

We all know what the formal definition of O(g(n)) is. But what is the formal definition of O(g(m,n)) where g is a function of two variables? Informally, if f(m,n) is in O(g(m,n)) we want f to be bounded above by g for sufficiently large (m,n). But how do we formalize “sufficiently large” in the multidimensional case?

Comment #31 September 9th, 2006 at 6:06 pm

Hi Johan,

How’s this: f(m,n)=O(g(m,n)) iff there exist a,b>0 such that f(m,n) ≤ a·g(m,n) + b for all m,n > 0.
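As a toy illustration (mine, not from the thread), the two-variable definition above can be checked mechanically on a finite grid. A finite grid can refute a candidate pair (a, b) but of course can never prove the bound for all m, n:

```python
# Sanity check of the two-variable big-O definition proposed above:
# f(m,n) = O(g(m,n)) iff there exist a, b > 0 with
# f(m,n) <= a*g(m,n) + b for all m, n > 0.

def witnesses_big_o(f, g, a, b, grid=50):
    """Check f(m,n) <= a*g(m,n) + b on a finite grid of positive (m, n)."""
    return all(f(m, n) <= a * g(m, n) + b
               for m in range(1, grid + 1)
               for n in range(1, grid + 1))

f = lambda m, n: 3 * m * n + 7        # plainly O(mn)
g = lambda m, n: m * n

print(witnesses_big_o(f, g, a=3, b=7))    # bound holds on the grid: True
print(witnesses_big_o(f, g, a=1, b=0))    # refuted, a=1 is too small: False

# The "bizarre" function discussed later in the thread fails for g = mn,
# because the definition quantifies over all positive m, including m = 1:
h = lambda m, n: m * n if m >= 2 else 2 ** n
print(witnesses_big_o(h, g, a=3, b=7))    # False
```

Note how this definition already rejects functions that blow up along a single slice such as m=1, which is exactly the distinction debated a few comments below.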

Comment #32 September 9th, 2006 at 8:14 pm

My understanding is that the thresholds and overheads that have been derived so far are all relative to particular error models (e.g. how errors are correlated in time and space). Is this necessarily the case?

Some of the error models are more robust than you might expect, in the following sense. An error model A dominates an error model B if, roughly speaking, you can simulate A by starting with B and adding more error. This extra error may be arbitrarily complicated, provided that it truly is extra error and not error correction that is merely described as error. If a particular method of error correction works for a particular error model A, it also works for any other error model that A dominates.

For example, depolarizing error dominates dephasing error and a lot of other things.

So the question is, can one type of error model be universal, in that it dominates all others? In a narrow sense, it is sometimes so. Independent depolarizing error does dominate a large class of error models that satisfy certain conditions.

But those conditions may not be realistic. It isn’t possible to make an error model which is universal for all purposes, either classically or quantumly. (Except for the specific reference to depolarization, these comments apply equally to classical and quantum error correction.) On the one hand, the error model has to have some locality or independence assumption of some kind in order to have error correction. On the other hand, the common denominator of all locality assumptions is no locality assumption at all.
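The single-qubit case of "depolarizing dominates dephasing" can be checked numerically. The sketch below (my illustration, with noise rates chosen for convenience) uses the fact that a Pauli channel is determined by how it shrinks the three Bloch-sphere axes, and asks whether the "extra noise" needed to turn dephasing into depolarizing is itself a legitimate channel:

```python
import numpy as np

# Domination in the sense above: write depolarizing noise as dephasing
# noise *followed by* extra noise, and check the extra noise is a
# genuine Pauli channel (all four probabilities nonnegative).
#
# A single-qubit Pauli channel with probabilities (pI, pX, pY, pZ)
# shrinks the Bloch axes by
#   lx = pI + pX - pY - pZ,  ly = pI - pX + pY - pZ,  lz = pI - pX - pY + pZ.
# Inverting that linear map:

def pauli_probs(lx, ly, lz):
    """Recover (pI, pX, pY, pZ) from the Bloch-axis contractions."""
    return np.array([1 + lx + ly + lz,
                     1 + lx - ly - lz,
                     1 - lx + ly - lz,
                     1 - lx - ly + lz]) / 4.0

# Depolarizing with total flip probability p shrinks every axis by 1 - 4p/3.
p = 0.06
s = 1 - 4 * p / 3          # 0.92 on all three axes

# Dephasing with phase-flip probability q shrinks only x and y, by 1 - 2q.
q = 0.02
d = 1 - 2 * q              # 0.96 on x and y; z untouched

# Extra channel needed so that (extra after dephasing) = depolarizing:
extra = pauli_probs(s / d, s / d, s)
print(np.round(extra, 5))

# All probabilities nonnegative and summing to 1, so the extra noise is a
# real channel: depolarizing(p=0.06) dominates dephasing(q=0.02).
assert np.all(extra >= 0) and np.isclose(extra.sum(), 1)
```

Pushing q much higher (try q = 0.1) makes one of the probabilities negative: this particular depolarizing channel only dominates sufficiently weak dephasing, which matches the caveat above that domination comes with conditions.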

Comment #33 September 9th, 2006 at 8:27 pm

Shouldn’t the second law be thought of as a law of nature that’s additional to, and independent from, the postulates of quantum mechanics?

I would say that the laws of thermodynamics are in addition to, but *not* independent from, the postulates of quantum mechanics. It isn’t very hard to outline a relationship between QM and thermodynamics. In quantum information, all entropy can be purified to entanglement. Under certain assumptions, life gets more and more entangled — proximate, easily observable qubits become more and more entangled with distant qubits. Entropy as witnessed by any local observer thus increases.

The problem is to either rigorously derive these plausible manifestations of QM, or describe ubiquitous conditions that will lead us to see these manifestations. It isn’t supposed to involve any real new laws of physics, but in practice it does, just because getting from few-state reversibility to many-state irreversibility is presently so far from rigorous.

A somewhat less abstract example of the same sticky situation is the ideal gas law. Is that a separate law of physics or not? It isn’t meant as one, because you can suppose that it holds for things like a dilute rain of classical spheres (ball bearings) in low gravity. But since no one can rigorously prove that, it is treated as a separate law.

Comment #34 September 9th, 2006 at 8:37 pm

Scott’s version is one perfectly valid interpretation of O(f(m,n)). Another reasonable interpretation is that it means a function g(m,n) such that for some N, f(m,n) ≤ N·g(m,n) whenever m,n ≥ N.

Comment #35 September 9th, 2006 at 8:41 pm

A somewhat less abstract example of the same sticky situation is the ideal gas law. Is that a separate law of physics or not? It isn’t meant as one, because you can suppose that it holds for things like a dilute rain of classical spheres (ball bearings) in low gravity. But since no one can rigorously prove that, it is treated as a separate law.

If this is true, it seems like a beautiful, important, and well-posed math problem: prove that a set of ball bearings in a rectangular box, subject to zero gravity and elastic collisions, will approximately converge to equilibrium in position space (integrated over time), for all but a set of initial conditions of measure zero.

Comment #36 September 9th, 2006 at 8:54 pm

One common example where you see f(m,n) is where f is the running time of a graph algorithm, m is the number of edges, and n is the number of vertices. In that case, the definition I gave is indeed standard.

In the bizarre case that (say) f(m,n)=mn when m>=2, but f(m,n)=2^n when m=1, I *don’t* think computer scientists would describe f as “polynomial in m and n.” But I can’t think of a single situation where that arises (can you, Greg?).

In the case that only one variable is asymptotic, computer scientists will say something like “we show that f(m,n)=O(poly(n)) for any fixed m.” Indeed, a whole subfield called parameterized complexity theory arose to deal with exactly this sort of situation.

Comment #37 September 9th, 2006 at 9:16 pm

If this is true, it seems like a beautiful, important, and well-posed math problem (about ball bearings)…

It is beautiful and important, but it’s not as well-posed as you might think. Because there are a lot of variations of the question, concerning the shape of the container, the shape and density of the bearings, and what exactly arises in the limit. A whole host of things is supposed to happen for many systems in the infinite limit, and no one can tell which formulation is better for what purpose. No one wants to miss a beautiful derivation by inadvertently emphasizing an inconvenient case. But no one wants to sell out to a cheap solution by inadvertently emphasizing a lucky case.

But yes, it is easy enough to write tidy statistical questions that no one can solve, and I am pretty sure that a hard sphere gas is an example.

I am rather more sure about an even simpler system, 2-dimensional percolation on most lattices. Two drunks play a game of Hex randomly on a large board with a roughly round border, until one drunk connects left and right or the other connects top and bottom. The board is not necessarily fair; the players’ regions are instead determined by 4 marked points on the approximate circle. The conjecture is that the probability that the left-right drunk will win is asymptotically a certain simple function of the cross ratio of the 4 marked points. The answer can be derived from basic “physics” of percolation.

Incredibly, this particular example was rigorously proven by Stas Smirnov some 5 years ago. But if you replace the hexagonal board pattern by anything else, for example squares and octagons, then the problem is still open. (The board pattern has to be trivalent to imply that one drunk will eventually win. But there are also non-trivalent versions of the question.)
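The "two drunks playing Hex" picture is easy to play with numerically. The toy Monte Carlo below (my sketch, not anything resembling Smirnov's proof) colors an n-by-n board with triangular-lattice adjacency at random and counts left-right crossings. On this symmetric board the answer should be exactly 1/2, by the Hex theorem (exactly one drunk wins) plus the symmetry between the two players; Smirnov's result concerns the asymmetric 4-marked-point version, which this sketch does not attempt:

```python
import random
from collections import deque

def crosses_left_right(n, rng):
    """Random two-coloring of an n-by-n Hex board; does one color cross?"""
    board = [[rng.random() < 0.5 for _ in range(n)] for _ in range(n)]
    seen = [[False] * n for _ in range(n)]
    q = deque((r, 0) for r in range(n) if board[r][0])
    for r, c in q:
        seen[r][c] = True
    while q:
        r, c = q.popleft()
        if c == n - 1:
            return True
        # Six neighbors of a cell on the triangular (Hex) lattice.
        for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0), (1, -1), (-1, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < n and 0 <= cc < n and board[rr][cc] and not seen[rr][cc]:
                seen[rr][cc] = True
                q.append((rr, cc))
    return False

rng = random.Random(1)
trials = 2000
hits = sum(crosses_left_right(11, rng) for _ in range(trials))
print(hits / trials)   # hovers near 0.5
```

Replacing the six-neighbor rule with four neighbors (the square lattice) breaks the "exactly one drunk wins" property, which is the trivalence caveat mentioned above.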

Comment #38 September 9th, 2006 at 9:18 pm

In the bizarre case that (say) f(m,n)=mn when m>=2, but f(m,n)=2^n when m=1, I don’t think computer scientists would describe f as “polynomial in m and n.” But I can’t think of a single situation where that arises (can you, Greg?).

I guess I can’t, which sort-of vindicates you. The distinction that I had in mind is so artificial that it doesn’t happen.

Comment #39 September 9th, 2006 at 10:21 pm

Question not directly related to QC but bothering me.

If I am not wrong, you give reasons to believe that QCs *are* possible to build, but won’t be more powerful than classical computers. But why do you use these two statements? They are so confusing!

Thirteen Reasons Why I’d Be Surprised If Quantum Computing Were Fundamentally Impossible

It’s entirely conceivable that quantum computing will turn out to be impossible for a fundamental reason.

Comment #40 September 9th, 2006 at 10:30 pm

I’ve never seen a suggestion that the ideal gas law is a separate law of physics, at least not in the sense of fundamental physical law. Its derivation (for point masses) is not difficult, at least to the level of rigour that most consider reasonable.

(I confess I’m not sure if there are issues with an ergodic hypothesis or something that might mess things up if you want to get all pure-math about it.)

As for the second law – I’m not sure of the status of its derivation from microscopic assumptions. I don’t think it needs any particular quantum foundations though (beyond the correspondence principle), just statistical mechanics. It’s basically a classical law after all.

Comment #41 September 9th, 2006 at 10:43 pm

If I am not wrong you give reasons to believe that QCs are possible to be built but won’t be more powerful than classical computers.

No — where on Earth do you get that idea? If scalable, universal quantum computers can be built, then (assuming factoring’s not in BPP) they will certainly outperform classical computers on at least one problem.

(If they didn’t, then they wouldn’t *be* scalable quantum computers in the sense I’m talking about.)

Comment #42 September 9th, 2006 at 11:58 pm

“If this is true, it seems like a beautiful, important, and well-posed math problem: prove that a set of ball bearings in a rectangular box, subject to zero gravity and elastic collisions, will approximately converge to equilibrium in position space (integrated over time), for all but a set of initial conditions of measure zero. ”

I seem to recall that in a limit where the mean free path remains finite (need to scale the size of the bearings as the number of bearings goes to infinity), Lanford (spelling?) proved that at short times you recover the Boltman equation, from which you can easily prove what you’re asking. An important assumption in this derivation is that the balls are initially not correlated. Also, you end up showing that the system is in equilibrium over all of phase space, not just position, I think.

Comment #43 September 10th, 2006 at 12:02 am

That’s “Boltzmann” not “Boltman.” Damnmyfasttypingfingers.

Comment #44 September 10th, 2006 at 12:22 am

OK, so we have a partial answer in the case of infinitely many ball bearings. What about *two* ball bearings? (One is trivial.)

Comment #45 September 10th, 2006 at 1:25 am

Scott: There is an entire branch of mathematics devoted to your beautiful, important, and well-posed math problem, and variants: ergodic theory. Unfortunately, it turns out to be very hard in general. I think Sinai solved it for two balls, and that three or more balls is still open.

Comment #46 September 10th, 2006 at 1:53 am

Thanks, Walt! That’s exactly what I wanted to know.

(Incidentally, we study “ergodic”-type questions all the time in computer science, but almost always for finite-dimensional Markov chains, where convergence is obvious and the goal is to upper-bound the convergence rate.)
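The finite-chain setting mentioned in the parenthetical above is easy to exhibit concretely. In this sketch (my toy example: a lazy random walk on a 6-cycle), convergence to the uniform stationary distribution is automatic, and the second-largest eigenvalue modulus of the transition matrix governs the rate — exactly the quantity one tries to upper-bound:

```python
import numpy as np

# Lazy random walk on a 6-cycle: stay put with prob 1/2, step left or
# right with prob 1/4 each. Laziness ensures aperiodicity, so the chain
# converges to uniform; total-variation distance decays geometrically.

n = 6
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i + 1) % n] = 0.25
    P[i, (i - 1) % n] = 0.25

pi = np.full(n, 1 / n)          # stationary distribution (uniform)
mu = np.zeros(n)
mu[0] = 1.0                     # start concentrated at a single vertex

tv = []                         # total-variation distance over time
for _ in range(60):
    tv.append(0.5 * np.abs(mu - pi).sum())
    mu = mu @ P

# Second-largest eigenvalue modulus controls the decay rate; for this
# chain the eigenvalues are (1 + cos(2*pi*k/6))/2, so it equals 0.75.
lam2 = sorted(np.abs(np.linalg.eigvals(P)))[-2]
print(tv[0], tv[20], round(float(lam2), 4))
```

The printed distances shrink roughly like 0.75^t, which is the sense in which the convergence rate, rather than convergence itself, is the interesting object.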

Comment #47 September 10th, 2006 at 5:34 am

Thanks for your answers. The exact interpretation did not really matter in the book I was reading but I was curious if there was a standard formal definition.

Comment #48 September 10th, 2006 at 6:52 am

Greg: Yes, I’m that Bram Cohen. I haven’t registered mostly because you don’t have to to comment on this blog. My resume isn’t really that great from a theory standpoint – I can’t prove squat, my wsat results were of the ‘try a zillion things and see what works’ variety, and my more well-known project has barely any calculations in it at all.

I’ve put the Nielsen and Chuang book on my amazon wishlist. It goes on the pile with understanding survey propagation and the style of proof used in the rigorous proof of the four color theorem. All very interesting and rewarding things to learn if I find myself with a few months with nothing to do, which might happen in a decade or so.

Comment #49 September 10th, 2006 at 1:14 pm

Ok, I don’t know QM very well, or quantum computing, but there is one thing that has always bothered me about QM, and that is how you achieve large-scale entanglement. To make a QC, don’t you need to somehow create an entangled state with 100 or 1000 qubits? Is this practically possible?

Comment #50 September 10th, 2006 at 2:38 pm

So are there two senses of quantum computing: one, fully scalable universal quantum computers (which *might* be more powerful than classical ones), and the other, machines just like classical computers but operating on quantum bits to show off technological expertise or feasibility?

What’s with the two seemingly inconsistent statements I asked about in my previous question?

Remember I don’t live and breathe either complexity theory or quantum computing but curious enough to ask you:)

Comment #51 September 10th, 2006 at 4:11 pm

Hey Nagesh,

Sorry if I got impatient — my fault.

So are there two senses of quantum computing: one fully scalable universal quantum computers (which might be more powerful than classical ones) and others just like classical computers but operating on quantum bits to show off technological expertise or feasibility?

Yes, except that the point of building non-scalable quantum computers is not just to show off; it’s also to *learn* things about decoherence and control that we’ll need to know if we’re ever going to achieve scalability.

Comment #52 September 10th, 2006 at 4:21 pm

To make a QC, don’t you need to somehow create an entangled state with 100 or 1000 qubits? Is this practically possible?

Depending on what you count as a “qubit,” entangled states with hundreds or thousands of qubits have already been demonstrated experimentally, mostly within the last decade. (I linked to some of the papers about this in Argument 4.) So all that’s left to do is to create an entangled state of thousands of qubits where each qubit is individually addressable, *and* the qubits can all be manipulated in parallel, *and* any pair of qubits can be brought together to interact, *and* the decoherence rate is tiny, *and* the qubits that decohere anyway can be swapped out for fresh ancillas, …

Comment #53 September 10th, 2006 at 5:33 pm

Bram:

I’ve put the Nielsen and Chuang book on my amazon wishlist.

Note that Section 1.1 is an excellent prose introduction to the field. The rest of the book is the more difficult brain-improvement regimen.

(Also, even if you don’t register with Blogger, you could link your name to your web page.)

Anonymous:

To make a QC, don’t you need to somehow create an entangled state with 100 or 1000 qubits? Is this practically possible?

It helps a lot to understand that entanglement is no more than a quantum generalization of correlation. If you set a billion bits of a computer to either all 0 or all 1, but you flipped a coin to decide which, then they are massively correlated. (Incredible! Powerful! Non-local!) Massive entanglement is not conceptually much different from that. However, it is computationally much more useful, in principle.

If you allow a general enough definition of entanglement, then massive entanglement was created by physicists in the past 100 years: superconductors, superfluids, and lasers might all count. In fact, it has existed for billions of years, because neutron stars exist and there are also naturally created masers in astronomy.

What Scott means when he refers to what has been created in the past 10 years is entanglement that plausibly resembles the organization of a quantum computer. In other words, potentially useful entanglement is more recent.
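The "quantum generalization of correlation" above can be made concrete in a few lines. In this toy illustration (mine, not from the thread), an n-qubit GHZ state gives exactly the same computational-basis statistics as the classical coin-flip mixture described earlier; the difference only appears when you measure in other bases:

```python
import numpy as np

# GHZ state (|00...0> + |11...1>)/sqrt(2) on n qubits, as a state vector.
n = 10
ghz = np.zeros(2 ** n)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

# Born rule in the computational basis: half the time all zeros, half
# the time all ones -- indistinguishable from the classical coin flip.
probs = ghz ** 2
print(probs[0], probs[-1])

# Now measure every qubit in the X basis instead (apply H to each qubit).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = ghz.reshape([2] * n)
for axis in range(n):
    state = np.tensordot(H, state, axes=([1], [axis]))
    state = np.moveaxis(state, 0, axis)
x_probs = state.reshape(-1) ** 2

# Only outcomes with an even number of 1s occur: a parity constraint
# that no classical mixture of bit strings reproduces in every basis.
parities = np.array([bin(i).count("1") % 2 for i in range(2 ** n)])
print(x_probs[parities == 1].sum())   # ~0: odd-parity outcomes forbidden
```

The computational-basis printout is the "massively correlated but classical" part; the vanishing odd-parity probability in the X basis is where the quantum generalization goes beyond it.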

Comment #54 September 10th, 2006 at 5:46 pm

So are there two senses of quantum computing: one fully scalable universal quantum computers (which might be more powerful than classical ones)

“might” should be “would be”, unless there is a general algorithm in BPP for period-finding.

and others just like classical computers but operating on quantum bits to show off technological expertise or feasibility?

Computing classically with qubits is like serving ketchup in a silver caviar bowl. You can do it, but it’s expensive and slow.

(But see, it is functional, unlike serving caviar in a ketchup bottle.)

Comment #55 September 10th, 2006 at 6:55 pm

Wow! With this intensive post and quick exchanges, one wonders, with Scott and Greg around, why we need quantum computers (or any computers) at all.

A few comments:

1 (to various points of Scott’s). It looks from Scott’s discussion that quantum computers are impossible only if quantum physics breaks down. (E.g., if the linearity of quantum physics breaks down.) I do not think this is the way most skeptics see it. (This applies to those skeptics who are “observers” and the few who try to translate skepticism about QC into actual research.)

2. Sure/Shor. Scott asks for a “line” that cannot be crossed. (Again, the issue is not a breakdown of quantum physics but a breakdown of the possibility of quantum computers.)

There are various rather concrete “lines” conjectured in the few papers that present “pessimistic” or “skeptical” views.

3. Fault-tolerance. All fault-tolerance theorems make assumptions on the noise which may well be unrealistic. In any case this seems to be the crux of the matter.

Most other points look rather neutral, in the sense that they neither support QC nor support the view that QC is not possible. This goes for the possibility of more than BQP (so what?), and for the possibility of simulating physics with digital computers if QC is impossible (so what?). (And these points are not avoided by skeptics.)

Historical precedents go both ways, as does the fact that there are no anti-quantum-computing research communities (what is the interpretation of this fact? again, it goes both ways).

Point 8 is nice! (not that strong though)

And point 10 makes this blog a grand unified blog (almost – a bit of Middle East politics is still missing from this post). (And you promised NOT to talk about string theory until Sept 27.)

A reason to doubt QC is that the highly entangled states like those of quantum computers are not seen anywhere in nature.

Comment #56 September 10th, 2006 at 7:27 pm

Thanks, Gil! Two quick responses:

There are various rather concrete “lines” conjectured in the few papers that present “pessimistic” or “skeptical” views.

Can you please give me *one* example of such a concrete line? I’ve been looking for one for years!

A reason to doubt QC is that the highly entangled states like those of quantum computers are not seen anywhere in nature.

Under the most obvious interpretation of “highly entangled state,” this is simply false. See the comments by me and Greg above for examples of such states.

(Of course, it’s possible that by “highly entangled state” you mean something more specific than “entangled state of hundreds or thousands of particles.” But then I’ll want to know just *what* you mean, which leads back to my question above.)

Comment #57 September 11th, 2006 at 1:02 pm

“I’m both relieved and disappointed — I was sure this post would provoke a bitter flame war. Or are the combatants still sharpening their knives?”

Oh, well, as for me, the answer is I *do* believe QC will be possible. I just don’t *WANT* it to be so.

I can state it better with my own argument:

The God-Hates-Me Argument: QC is possible for the very same reason P != NP: because otherwise it would be too good to be true.

“If building quantum computers that outperform classical ones is fundamentally impossible, then it must be possible to write classical computer programs that efficiently simulate any quantum system found in Nature.”

Uh?

Comment #58 September 11th, 2006 at 1:18 pm

This link is a little out of date:

http://xxx.lanl.gov/abs/quant-ph/0508218

but the authors believe that they introduce an architecture for robust and scalable quantum computation using both stationary qubits (e.g. single photon sources made out of trapped atoms, molecules, ions, quantum dots, or defect centers in solids) and flying qubits (e.g. photons). I suspect that progress will come more quickly than many people realize, but time will tell.

Comment #59 September 11th, 2006 at 10:37 pm

Scott, friends,

1. ” Can you please give me one example of such a concrete line? I’ve been looking for one for years! ”

An (almost) pure state which is not (almost) determined by the entanglement of small sets of elements is possibly something not seen in nature, but we do see plenty of such states in quantum algorithms, e.g. error-correcting codes on n qubits where every t qubits are independent, for t large. (I do not know if we see in nature any case where an (almost) pure state is not essentially determined by entanglements of pairs.)

A concrete suggested mathematical formulation (along with a few potential counterexamples) for this is proposed in my paper “How quantum computers can fail” on my homepage. The paper contains a few other concrete conjectures concerning the state of qubits and errors. (Also with newly added mathematical formulations.)

(It is not clear if an “obstruction” or a “clear line” is better expressed in terms of a single “forbidden” state rather than the variety of possible states. I am mainly interested in expressing such “lines” via pairs (states, errors) for noisy quantum computers.

Robust qubits based on highly entangled states will contradict most skeptical ideas (so this is a natural line to draw).

Some skeptical ideas will conflict with viable “superconductivity qubits”)

2. Discussing beliefs, and certainly firm beliefs, is a tricky business. Let’s try something I think most of us do not have strong beliefs about, and try to test our intuitions based on such an example.

Are noisy reversible quantum computers capable of factoring in polynomial time?

ABIN showed that they cannot do more than log-depth quantum circuits. This confirms the physics intuition, based on thermodynamics, that noisy reversible QC is useless (and indeed the proof has a thermodynamic flavor)… But log-depth quantum circuits plus a controlling PC allow for factoring in polynomial time. This suggests that perhaps even noisy reversible computers (without cold ancillas) can factor.

So what is your take on it?

3. I hope this interesting discussion (in fact also the one on R2BI) will span more than the usual lifetime of a discussion-for-a-post on a blog.

Comment #60 September 12th, 2006 at 1:57 am

Thanks so much, Gil! I just looked at that paper, and it’s vastly more detailed than what you emailed me in April. In Sections 4.4 and 4.5, you *do* propose a candidate Sure/Shor separator, which is exactly what I’ve been asking skeptics to do, and which is a genuine contribution that moves the debate forward.

It occurs to me that your conjecture on page 18, that

~ENT(rho;A) = O(poly(n)),

might already be ruled out experimentally by the so-called cluster states. A cluster state is basically a lattice of qubits in 1 to 3 dimensions, where each basis state has a phase of (-1)^z, where z is the number of pairs of adjacent qubits that are both 1. (See e.g. my multilinear formulas paper for details.) Aeppli et al. have given evidence that states very much like these arise in condensed-matter physics.
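As a quick sanity check of the definition above, here is a short NumPy sketch (the qubit count and bit conventions are my own illustrative choices) that builds a 1-D cluster state two ways — by applying CZ gates to |+>^n on a line of qubits, the standard circuit construction, and directly from the (-1)^z phase rule — and confirms the two constructions agree.

```python
import numpy as np

n = 5  # qubits on a 1-D line (illustrative size; the same rule extends to 2-D/3-D lattices)

# Circuit construction: start from |+>^n, then apply CZ to each adjacent pair.
state = np.full(2 ** n, 2.0 ** (-n / 2))
for i in range(n - 1):
    for x in range(2 ** n):
        if (x >> i) & 1 and (x >> (i + 1)) & 1:
            state[x] *= -1  # CZ flips the sign exactly when both qubits are 1

# Direct formula: the amplitude of basis state z is 2^{-n/2} * (-1)^z,
# where z counts the adjacent pairs of qubits that are both 1.
direct = np.empty(2 ** n)
for x in range(2 ** n):
    pairs = sum(((x >> i) & 1) & ((x >> (i + 1)) & 1) for i in range(n - 1))
    direct[x] = 2.0 ** (-n / 2) * (-1) ** pairs

print(np.allclose(state, direct))  # the two constructions agree
```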

I believe ~ENT(rho;A) will be exponential for cluster states. Can you prove or disprove that?

(Also, can you prove that ~ENT(rho;A) is superpolynomial for random coset states, or for the states arising in Shor’s algorithm?)

Let me know if you’d rather not think about these questions, and I’ll do it myself when I have time.

Comment #61 September 12th, 2006 at 7:47 am

Thanks for the clarification, Scott! Don’t worry about getting impatient early :) I can understand.

Comment #62 September 12th, 2006 at 8:28 am

Concerning R2B I

I do not agree with point 9.

” If P=NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in “creative leaps,” no fundamental gap between solving a problem and recognizing the solution once it’s found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss; everyone who could recognize a good investment strategy would be Warren Buffett.”

Let’s talk about the gap between following a step-by-step argument and providing such an argument. If this gap were a manifestation of P≠NP, we could expect the difficulty of finding a proof of a theorem to be exponential (or super-polynomial) in the size of the proof. I would expect a linear dependence.

(Namely, I see no evidence that the ratio is increasing in the size of the proof.)

My view is that P≠NP is an important problem in mathematics, but we should not over-stretch its meaning within mathematics, and certainly not beyond mathematics.

The question of whether people (even creative ones) can do things computers can’t (in principle) is interesting in itself, but there is certainly no evidence that even creative people can solve NP-hard problems, and that idea looks rather absurd to me. (Also, not everything related to insights from computational complexity is automatically related to P≠NP.)

“Let me know if you’d rather not think about these questions, and I’ll do it myself when I have time.”

Please, please, do think about these questions, by all means please do.