## “So You Think Quantum Computing Is Bunk?”

On Wednesday, I gave a fun talk with that title down the street at Microsoft Research New England. Disappointingly, no one in the audience *did* seem to think quantum computing was bunk (or if they did, they didn’t speak up): I was basically preaching to the choir. My PowerPoint slides are here. There’s also a streaming video here, but watch it at your own risk—my stuttering and other nerdy mannerisms seemed particularly bad, at least in the short initial segment that I listened to. I *really* need media training. Anyway, thanks very much to Boaz Barak for inviting me.

Comment #1 April 12th, 2013 at 6:20 pm

The “many worlds versus many words” bit was very awesome. I think “many words” is my new favorite interpretation of QM. Seems to have the smallest Kolmogorov complexity of any of the interpretations to me.

Comment #2 April 13th, 2013 at 4:42 am

Hi Scott,

You don’t need to be scientifically radical, or to have a revolution in quantum mechanics, to explain why QC might not work. In fact a rather modest interpretation will do: suppose Nature is a UNIQUE path in a “Many Worlds” universe, with the path/branches randomly selected by Nature (so that the wavefunction of the universe is “collapsing” randomly every ~10^-43 seconds, for example).

Now you have quantum superpositions but you don’t have “many worlds”, and since you have a universe evolving along a unique path, Nature can’t factorise large numbers any better than a classical computer can.

Comment #3 April 13th, 2013 at 5:29 am

@James #2

I seriously hope you meant it as a joke/parody of skeptics.

Comment #4 April 13th, 2013 at 8:35 am

Was this a job talk? Sure seems like it…

Comment #5 April 13th, 2013 at 10:33 am

Pangloss #4: No, I assure you a job talk looks very different.

Comment #6 April 13th, 2013 at 11:52 am

Nice lecture, Scott. Since my point of view is briefly mentioned in your lecture, let me add a link to my own recent presentation at MIT’s quantum information seminar entitled “Why quantum computers cannot work and how.”

Two remarks: First, I try to put on formal grounds not only assertions where Scott and I disagree but also assertions where we both seem to agree like: “It indeed seems possible that ultimately, “pure” BosonSampling will not scale beyond a few dozen photons. To go significantly beyond that, you would want quantum error-correction, but if you have that, you could probably just build a universal QC instead.”

Second, a non-trivial number of Kalais were positioned in the lecture hall to rapidly confront any threshold being passed or other troubles or abuse. As I said, the talk was nice and there was no need to put them into action.
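To make the BosonSampling remark quoted above concrete for readers: each output amplitude in BosonSampling is the permanent of an n×n submatrix of the interferometer’s unitary, and computing permanents exactly is #P-hard. Here is a minimal sketch of Ryser’s exponential-time formula (the standard classical algorithm; the code itself is an illustration added here, not from the talk):

```python
from itertools import combinations

def permanent(M):
    """Permanent of a square matrix via Ryser's inclusion-exclusion
    formula, O(2^n * n^2) time -- exponential, unlike the determinant."""
    n = len(M)
    total = 0.0
    # Sum over all non-empty column subsets S, with sign (-1)^(n-|S|)
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - k) * prod
    return total

# Sanity check: permanent of the all-ones 3x3 matrix is 3! = 6
print(permanent([[1, 1, 1]] * 3))  # -> 6.0
```

No polynomial-time classical algorithm for the permanent is known, which is exactly why scaling BosonSampling to many photons is believed to be classically hard.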

Comment #7 April 13th, 2013 at 1:57 pm

Thanks so much, Gil!

Comment #8 April 13th, 2013 at 2:07 pm

Gil:

Loved those slides! Great.

Especially loved the clarity of your:

(1) Does impossibility of QC mean breakdown of QM?

The short answer is: No.

(2) If computationally superior quantum computers are not possible, does it mean that, in principle, classical computation suffices to simulate any physical process?

The short answer is: Yes.

Comment #9 April 13th, 2013 at 2:53 pm

Hi Atilla #3

Joke? Parody? I would have referred to deterministic hidden variables or superdeterminism if I was looking for laughs. 🙂

No, I’m serious, it’s not so crazy is it? We allow Nature to do all the quantum jumps (that ever occur) and evolve the universe wavefunction via Schrödinger evolution every time a random jump occurs, so that the whole universe knows what’s just happened.

We just have to assume the evolution steps (i.e., each quantum jump) occur at a sufficiently fine time resolution (say in steps of ~Planck time on average) to be consistent with current experimental observations.

‘t Hooft suggests something similar but without the randomness (so I’m not clear how he gets superpositions at all); his evolution operator is just a gigantic permutation matrix operating on the entire universe state vector (this may be slightly oversimplifying, but I think it’s the gist).

My suggestion is really not so different from the many-worlds interpretation, but introduces discrete time evolution seeded by random quantum jumps (the explicit randomness enables us to get rid of the multiple branches and allows us to have a unique evolution path in the Hilbert space).

Also we now have no measurement-induced “collapse” (nature does all the collapsing, regardless of our existence)

Cheers
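To make the stakes of this kind of proposal concrete, here is a toy single-qubit numerical sketch (my own construction for illustration, not James’s exact model): if a fresh random phase “jump” hits the relative phase at every time step, interference fringes average away, while with no jumps they survive intact.

```python
import numpy as np

rng = np.random.default_rng(0)

def interference_signal(n_steps, collapse, n_runs=2000):
    """Toy Mach-Zehnder: prepare (|0>+|1>)/sqrt(2), optionally accumulate
    one random relative phase per time step, then recombine the branches.
    Returns the average probability of landing in the '0' output port."""
    probs = []
    for _ in range(n_runs):
        phase = 0.0
        if collapse:
            # a random "jump" of the relative phase at every time step
            phase = rng.uniform(0.0, 2 * np.pi, size=n_steps).sum()
        amp = 0.5 * (1 + np.exp(1j * phase))  # amplitude at the '0' port
        probs.append(abs(amp) ** 2)
    return float(np.mean(probs))

no_jumps = interference_signal(10, collapse=False)
with_jumps = interference_signal(10, collapse=True)
print(no_jumps)    # -> 1.0 (full constructive interference survives)
print(with_jumps)  # -> ~0.5 (fringes completely washed out)
```

The tension, then, is that experiments do see fringes over enormous numbers of Planck times, which is the point Scott presses below.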

Comment #10 April 13th, 2013 at 3:23 pm

James #2 and #9: If there were actually “dynamical collapses” roughly once per Planck time, then we wouldn’t see interference between different branches of the wavefunction over timescales much longer than the Planck time. But of course, we do see such interference (even, as I said in my talk, over ~15 minutes ≈ 10^46 Planck times!). And accounting for such interference is the entire reason why we need to describe Nature using complex vectors in Hilbert space in the first place. So it’s your proposal itself that collapses ~1 Planck time after being proposed. 🙂

Comment #11 April 13th, 2013 at 4:40 pm

Scott

lol

Imagine the universe consists of zillions of states, each represented by a complex number. Now suppose any of those states can randomly change complex phase. Now, whenever a state changes its complex phase you evolve the universe via a unitary operator (so the whole universe gets updated depending on what the previous single random change was).

There, you have a single path evolution with superpositions

WE DON’T KNOW WHAT THE EVOLUTION PATH IS UNTIL WE MEASURE IT!!!! (It is in a superposition of all possible evolutions)

So measurement just REVEALS knowledge about the state-vector/wave-function – nothing else

Of course you will get all the usual quantum interference effects – this model will only depart from standard QM at small time scales, which we currently can’t measure. But it also won’t allow exponential speed-up with QC algorithms – since the “actual” superpositions are constantly being destroyed.

The key point is that a fundamental random seeded evolution is equivalent to superpositions in MW, but that doesn’t mean the superpositions have an actual ontological existence, like in MW (well, they do, but only for minute time steps).

Cheers

Comment #12 April 13th, 2013 at 6:12 pm

James, if we indeed “get all the usual quantum interference effects,” then for that very reason we can do QC, which is conceptually just like the double-slit experiment, except with a large number of particles interacting in a complicated way. Nothing in a QC depends on what’s happening at the Planck scale—or rather, if it did, then the discovery of that would be one of the great revolutions in the history of physics.

Here’s a simple sanity check: what’s the simplest experiment for which your model predicts an outcome different from that predicted by standard quantum mechanics?

Comment #13 April 13th, 2013 at 7:03 pm

The simplest experiment must be one which relies on the ontological existence of the superpositions. So Quantum Suicide would probably be the simplest, but this is not practical. So in fact, we must look at another quantum scenario that requires the superpositions to exist – and that is Quantum Computing algorithms; I do not think there is a simpler test of the ontological status of superpositions.

In the probabilistically seeded discrete-time evolution model, the superpositions exist mathematically (so most predictions of the model will be the same as for MWI), but I think you are incorrect to say QC algorithms follow from the basic postulates of QM – they do not; e.g., QC doesn’t work with deterministic hidden-variable models which are consistent with the basic postulates of QM.

Do we get anything extra from the random seeded discrete time model to make it worth considering? Yes, we get a speed of light limit (but note that the time steps need not be constant like in a deterministic cellular automata model, they may just average ~10^-43 secs (or whatever))

Comment #14 April 14th, 2013 at 6:09 am

The quantum-Babbage analogy can be traced back to the early days of the quantum computing literature; e.g., in Andrew Steane’s highly-recommended 1997 survey Quantum Computing we find:

What have we learned since 1997?

———-

Post-1997 lesson learned #1: The STEM capabilities of Babbage’s era were entirely adequate to build the machines that Babbage envisioned. As inarguable evidence, at least two working Babbage engines have been constructed to Babbage’s design, using only nineteenth-century technologies (amazing video here).

———-

Post-1997 lesson learned #2: The STEM capabilities of the modern era are entirely inadequate to build the machines that the QIST Roadmaps envisioned. As inarguable evidence, over the last three decades, a level of global STEM investment that would have sufficed to build many generations of Babbage machines has not sufficed (arguably) to advance beyond even first-generation quantum computers.

———-

A modest suggestion: From a 21st-century perspective, the appropriate historical exemplar for quantum computer development is not Babbage’s quest to build a computing engine, but rather Isaac Newton’s quest to synthesize a Philosopher’s Stone (see, e.g., Newton’s Lapis Philosophicus cum suis rotis elementaribus). Newton’s pragmatical intent, and his experiment-oriented research program, and his informatic regard for alchemical principles, all are strikingly echoed in the QIST Roadmaps:

From a modern perspective, we appreciate the good-and-sound STEM reasons why Newton’s alchemical roadmap could not be pursued to completion (the atomic foundations of chemical processes, the symplectic foundations of separation processes, and the evolutionary foundations of biological processes all being largely absent from the STEM worldview of Newton’s generation).

Are there comparably good-and-sound STEM reasons why the QIST Roadmap cannot be pursued to completion? Is there comparably much still to learn regarding the fundamental mathematics and physics of chemical, separatory, and biological processes? In a nutshell, is our present appreciation of QC/QIT “alchemically” limited, in consequence of inadequate mathematical foundations and over-idealized dynamical assumptions? These are open questions!

As for the QC/QIT/Babbage analogy … well … it served a useful purpose in decades past, but nowadays it is obsolete. The clear STEM lesson of recent decades is that the QC/QIT/alchemy analogy is apropos.

Or at the very least, the lessons of Newton’s alchemical science are usefully thought-provoking for QC/QIT researchers! 🙂

Comment #15 April 14th, 2013 at 10:06 am

James #13: It’s not sufficient to give “Quantum Computing algorithms” as your answer. For as you know, quantum algorithms like Grover’s and Shor’s have been implemented, on up to 7 qubits, and the predictions of QM have always been perfectly confirmed. So the burden is on you to say: which quantum algorithm, and on what number of qubits? Where will this supposed breakdown of QM first become detectable, and why?

Comment #16 April 14th, 2013 at 10:10 am

John Sidles #14: I find it perfectly plausible that, in the year 2300 or whatever, some hobbyist will go back and implement quantum computing with “mere 1997 technology,” just to show it could’ve been done. Everything’s easier in retrospect, and the fact that someone built the Analytical Engine recently is only weak evidence that Babbage realistically could’ve built it.

Comment #17 April 14th, 2013 at 10:54 am

Hmmm … visitors to IBM’s archives learn that by 1853 Georg and Edvard Scheutz built and operated two Babbage-style difference engines … isn’t this prima facie proof that “Babbage realistically could’ve built [his engine]”?

Whereas in striking contrast, we now appreciate that — for reasons that seem simple now, but were far from obvious in the late seventeenth century — Isaac Newton’s alchemical roadmap could not realistically hope to transmute base metals into gold, or to extend life, or to cure diseases, or indeed to achieve any of the main objectives of the alchemical roadmap.

The QIST Roadmap’s goals are reasonably explicit:

Now it is natural to wonder whether the QIST vision of a “quantum computer-science test-bed” more closely resembles Babbage’s Engine or Newton’s Lapis Philosophicus? … to wonder whether the QIST obstructions reside mainly in engineering, to be overcome by ingenuity? … to wonder whether the QIST obstructions are showing us that our present understanding of quantum dynamics embodies “alchemical” misapprehensions, to be remediated (we hope!) by new foundations in mathematics and physical science?

Conclusion: Attractive arguments can be advanced to support all possible answers to these tough questions. And these uncertainties are very good news for young STEM professionals!

Comment #18 April 14th, 2013 at 11:22 am

John Sidles: Well, I understand that there were major differences between the Difference Engine and the Analytical Engine. Among other things, only the latter would’ve been Turing-complete.

Comment #19 April 14th, 2013 at 12:25 pm

I have a query about the many-worlds interpretation. It’s probably a misunderstanding on my part, but here seems like a good place to ask.

Let’s say you have a device which measures the spin of a particle. A dial on the device will point to “+” if the spin is up and “-” if the spin is down. After measurement you basically find that the reduced (internal environment of the device traced out) density matrix of the “equipment and particle” system has become almost exactly diagonal, with the two diagonal pieces:

“Particle spin up, Dial at +”

“Particle spin down, Dial at -”

The Many-Worlds interpretation takes this to represent two worlds. The two points I don’t understand are:

(a) What is the meaning of the probabilities on the diagonal if they aren’t both 1/2? Is it the probability of finding yourself in one of the universes? Surely that could only be 1/2 though, since there are only two?

(b) Why interpret it as two different worlds? When a density matrix becomes diagonal, you essentially have a superselection rule generated by the environment. Different values of the Dial observable cannot be placed in superposition, just like electric charge in QED. In QED the way states factor over the algebra of observables means that kets which are a linear combination of two different electrically charged states are actually just classical probabilistic mixtures.

Aren’t the dial values in the apparatus now just in a classical mixture and so shouldn’t we interpret the values along the diagonal of the density matrix in the same way as the 1/6 probability values for a fair dice roll?

I’ve probably expressed myself badly, hopefully somebody can clear up my confusion.
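(For concreteness, the decoherence scenario described in this question is easy to set up numerically. A toy three-qubit sketch, with names and basis ordering chosen here for illustration: particle ⊗ dial ⊗ environment, with the two environment states orthogonal. Tracing out the environment leaves an exactly diagonal reduced density matrix.)

```python
import numpy as np

# Basis ordering: |particle, dial, environment>, each a qubit, shape (2,2,2)
psi = np.zeros((2, 2, 2), dtype=complex)
# Measurement correlates everything: (|up,+,e0> + |down,-,e1>)/sqrt(2),
# with <e0|e1> = 0 (full decoherence)
psi[0, 0, 0] = 1 / np.sqrt(2)
psi[1, 1, 1] = 1 / np.sqrt(2)

# Reduced density matrix of (particle, dial): trace out the environment
psi_mat = psi.reshape(4, 2)          # rows: particle+dial, cols: environment
rho = psi_mat @ psi_mat.conj().T     # 4x4 reduced density matrix

print(np.round(rho.real, 3))
# Diagonal entries 0.5 at |up,+> and |down,->, all off-diagonal terms 0:
# formally the same object as a classical probabilistic mixture,
# which is exactly the question being asked above.
```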

Comment #20 April 14th, 2013 at 7:08 pm

DMcM #19: What you’re asking about are precisely two of the most common criticisms of MWI! I recommend reading any pro- or anti-MWI paper on the web (or maybe some commenter here feels like taking a swing). Briefly, what an MWI proponent would say to your questions is:

(a) The meaning of the probabilities is something like “fraction of world-volume.” Operationally, the probabilities simply give you the odds at which you should bet on ending up in one world as opposed to the other world.

(b) The reason MWI proponents want to interpret the branches as two different worlds is that the formalism of QM—the same formalism needed to explain the double-slit experiment, etc.—does seem to imply that both of the worlds “exist” in some sense (for example, the simplest digital computer simulation of QM would record an amplitude for both of the worlds). And in principle, one could even imagine an experiment in which two macroscopically-distinct worlds—with observers having registered different experiences, etc.—interfered with one another. This was David Deutsch’s point in the early 80s: he argued that, if such an experiment were successfully carried out, then “collapse interpretations” (to whatever extent they’re well-defined) would seem to be ruled out experimentally, and the ability of conscious beings to have superpositions of different memories and experiences would’ve been experimentally confirmed. (Not surprisingly, other people argue that even then, one still wouldn’t have established the reality of many worlds. Do you see what I meant by “Many Worlds vs. Many Words”? 🙂 )

Comment #21 April 15th, 2013 at 2:35 am

Minor bibliographical point: I had a vague recollection that I had heard something similar to the “many world vs many words ” quip before. It turns out Max Tegmark said something very similar in the abstract of one of his papers. Quote below:

Comment #22 April 15th, 2013 at 3:57 am

Scott #15

I mean any QC algorithm that demonstrates exponential (or any beyond-classical) speedup. Since with the small-qubit scenarios we can’t be sure that we have true “Quantum Computation” – there may be subtle details of the experimental setup that we have overlooked; i.e., small-qubit demonstrations are not a “proof” that large-scale QC is possible (even in principle).

If an exponential (or similar) speed up of a calculation beyond what is considered possible classically (I would be convinced by large factorisation, even if not proved to be outside P) is ever demonstrated, then, like the Monkees, I’m a Believer.

I actually do want QC to work, but I reckon, perhaps controversially, that evolution would have discovered it and implemented it, if it were possible. Same reason I don’t believe in levitation and telepathy. (Evolution has exhaustively probed reality at earth conditions for us, but obviously I realise evolution couldn’t make use of Bose-Einstein condensates, lasers etc.)

Comment #23 April 15th, 2013 at 4:04 am

My personal view as a man from the street: the situation with QC seems to be similar to the flight to Mars, the timeline is always somewhere in the near, not so near, then distant future. But while I am sure that eventually NASA will send someone to Mars (if the USA doesn’t bomb North Korea first with some of their “tactical” weapons), simply because they need to revive the dying enthusiasm for space/space missions/star wars and the like, QC with the time passing looks more and more like philosopher’s stone, with alchemists pronouncing strange formulas, calling the entire universe (or perhaps many universes) to help them, taking money from broke barons and counts in order to see at last the golden glitter among the ashes in the retort…

Comment #24 April 15th, 2013 at 7:10 am

Thanks as always for the slides. Could I trouble you to post these as .pdfs, though? Some of us don’t have Powerpoint on our machines. Thanks!

Comment #25 April 15th, 2013 at 8:24 am

Scott,

you mention that if quantum computation were impossible, there would exist efficient classical algorithms for simulating “realistic quantum systems”.

There has actually been a lot of activity in this area recently, at least if you take “realistic systems” to mean many-particle systems with local interactions. In that case, the low-energy states feature a small amount of entanglement, which allows an approximate, polynomial description of the system, again allowing low-energy initial states to be time-developed efficiently (at least for short timescales). See e.g. [1].

Of course, one could prepare high-energy excitations which would be hard to simulate. Is this what happens in Shor’s and Grover’s algorithms?

Comment #26 April 15th, 2013 at 8:29 am

Forgot the reference:

[1]: G. Vidal: Efficient classical simulation of slightly entangled quantum computation. DOI: 10.1103/PhysRevLett.91.147902
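(The core mechanism in [1] is the Schmidt decomposition: an SVD of the state’s coefficient matrix across a cut of the system. If only a few singular values are significant, the state admits a compressed description. A minimal numerical sketch, with a rank-2 example state invented here for illustration:)

```python
import numpy as np

# A 10-qubit state viewed across the middle cut: a 32 x 32 coefficient
# matrix. Build a slightly entangled state, of Schmidt rank 2 across the cut.
rng = np.random.default_rng(1)
a, b = rng.normal(size=32), rng.normal(size=32)
c, d = rng.normal(size=32), rng.normal(size=32)
psi = np.outer(a, b) + 0.1 * np.outer(c, d)
psi /= np.linalg.norm(psi)  # normalize the state

# Schmidt coefficients across the cut = singular values of the matrix
schmidt = np.linalg.svd(psi, compute_uv=False)
rank = int(np.sum(schmidt > 1e-12))
print(rank)  # -> 2: only 2 of 32 Schmidt coefficients are nonzero

# Keeping only the largest few Schmidt coefficients at every cut is
# essentially the compression step in matrix-product-state simulation.
```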

Comment #27 April 15th, 2013 at 8:53 am

arxiv link for the paper Audun has mentioned:

http://arxiv.org/abs/quant-ph/0301063

(for a moment there I thought it was by Gore Vidal)

Comment #28 April 15th, 2013 at 8:53 am

anonymous #23: If building a QC would be as easy (at least from a technological standpoint) as sending humans to Mars, then I guess this debate is over!

Your position seems to be not that we can’t do such things, but that we shouldn’t, since it “looks more and more like philosopher’s stone, with alchemists pronouncing strange formulas, calling the entire universe (or perhaps many universes) to help them, taking money from broke barons and counts in order to see at last the golden glitter among the ashes in the retort” [sic].

In that case, an obvious question arises that you don’t address: what about the people who built classical computers, or the Internet, or airplanes, or modern medicine? Should they also have turned back from their Faustian bargain?

Comment #29 April 15th, 2013 at 8:55 am

Clayton #24: If you go to the streaming video page, they made a PDF of the slides.

Comment #30 April 15th, 2013 at 9:12 am

James Gallagher #22: You’ve refuted your own “evolutionary argument” against QC, but to pile on additional examples — evolution also didn’t “discover” nuclear weapons (or any other use of nuclear energy), gunpowder, the wheel, … so that argument is 100% invalid, and shouldn’t even enter the discussion.

Regarding experiments with small numbers of qubits: well, whatever “subtle details of the experimental setup” were overlooked must have just so conspired that the experiments all perfectly confirmed QM! What do you make of the fact that experiments with thousands or millions of entangled particles—for example, involving Bose-Einstein condensates, high-temperature superconductors, Josephson junctions, and the like—also perfectly confirm QM, whenever we can actually carry them out? Not only have you not answered my “Sure/Shor separator” challenge, you haven’t yet even understood the challenge. (Which, of course, makes you no different from the vast majority of casual QC skeptics…)

Comment #31 April 15th, 2013 at 9:20 am

Maybe this is a really dumb question, but I’ve always wondered: why is classical computing possible? That is, Liouville’s theorem says that the action of a conservative Hamiltonian on a phase space preserves its volume. As a result, it seems like globally the universe cannot perform classical computations, since no information can be destroyed, and so you can’t delete bits. Locally, you can build a dissipative system, of course, but it seems a bit weird that a classical universe as a whole can only perform reversible computations?

Comment #32 April 15th, 2013 at 9:20 am

Audun #25 and #26: I’m familiar with Guifre’s beautiful results, which turned quantum computing ideas on their head, to push the boundaries of which quantum systems people could efficiently simulate using classical computers. Nothing in those results suggests a route to killing quantum computation, as I imagine Guifre himself would be the first to tell you. Most obviously, his results mainly apply to slightly-entangled 1-dimensional spin lattices—in 2 dimensions and higher, people already don’t know what to do! But even aside from that, no one knows of any physical principle that would necessarily keep you in the “slightly-entangled” regime that Guifre’s algorithms can handle. Indeed, I think that the same sorts of states that were counterexamples to the “tree size is a Sure/Shor separator” conjecture would also be counterexamples for Schmidt rank, matrix product states, and the other things Guifre considers.

Comment #33 April 15th, 2013 at 9:33 am

Scott #20, thanks for the explanation! I’ll read some more on the topic as you suggested.

Comment #34 April 15th, 2013 at 9:59 am

Neel #31: Why the universe can do classical computation is not a dumb question at all! Indeed, it’s something that QC skeptics typically take for granted, but shouldn’t.

(A related point, which I should’ve mentioned in my talk but didn’t: it turns out to be extremely hard to design a physically-plausible noise model that would only kill QC, and not also kill classical computation!)

If it’s reversibility that you’re worried about, then I’ll simply point out that it’s been known since the 80s that reversible computers can simulate non-reversible ones with only a constant-factor slowdown. Yes, it’s “weird” that the universe can only do reversible computations (at least if you consider the universe as a whole, rather than e.g. the part accessible to any one observer limited by the speed of light in an expanding universe), but I have nothing to add to your correct reasoning that leads to that conclusion! 🙂

Comment #35 April 15th, 2013 at 10:05 am

PS #21:

I had a vague recollection that I had heard something similar to the “many world vs many words ” quip before. It turns out Max Tegmark said something very similar…

Yeah, I’ve heard various people use that quip—I’m not sure whether it was Tegmark or someone else who invented it, but it certainly wasn’t me.

Comment #36 April 15th, 2013 at 10:06 am

Scott #30

I understand the challenge; I’m saying that most QM experiments do not rely on the “actual” existence/persistence of superpositions to explain the measured outcome of the experiment. Whereas exponentially sped-up QC algorithms definitely do depend on the “actual” existence/persistence of the superpositions.

The common argument says decoherence explains the problem with getting persistent superpositions, but I say no: Nature itself doesn’t have persistent superpositions (she wouldn’t be so wasteful as to try to evolve exponentially increasing information!). Mathematically they exist, and any QM experiment so far carried out has results which are a prediction of the mathematical framework – where the superpositions are only a mathematical tool for a calculation.

But for your exponentially sped-up QC algorithms you will need these superpositions to really exist/persist; even if you are just using the Copenhagen Instrumentalist Interpretation, you won’t get exponentially sped-up quantum algorithms if, as I suggest, the superpositions are just a mathematical construction. (Except in a very unlikely probabilistic scenario whereby the evolution happens to give the required outcome from an exponential number of possibilities.)

And btw, evolution did discover the wheel and even room-temperature engines (~gunpowder) at the microscopic scale, so stop dissing evolution, it’s much cleverer than you guys trying to create laboratory quantum computers

Comment #37 April 15th, 2013 at 10:29 am

Please allow me to commend to Shtetl Optimized readers Fernando Brandao and Michał Horodecki’s recent Exponential Decay of Correlations Implies Area Law (arXiv:1206.2947v2, 2012), which begins:

Brandao and Horodecki go on to reference earlier work by David DiVincenzo, Debbie Leung and Barbara Terhal, Quantum data hiding (arXiv:quant-ph/0103098v1, 2001), and by Patrick Hayden, Debbie Leung, and Andreas Winter, Aspects of generic entanglement (arXiv:quant-ph/0407049v2, 2004), which introduces the notion of quantum data hiding states.

It seems entirely credible (to me) that further development of these ideas — accompanied by further beautiful theorems of course! — may inexorably convey our understanding toward a world that is governed by

Of course, the more conservative members of the engineering/chemistry/biology/experimental physics communities already know that the QM/QIT/QED thermodynamical quenching postulates are true, in Feynman’s pragmatic sense that “a very great deal more truth can become known than can be proven!”

Comment #38 April 15th, 2013 at 10:49 am

James #36: I don’t know how you explain even (say) the double-slit experiment, or the Bell inequality, without invoking superposition in very much the same way that QC invokes them. Words like “ontological” or “really exist” are irrelevant to this discussion: since we’re talking about the outcomes of actual experiments, the only question is what role superpositions play in the theory we use to predict those outcomes. And this is where you’re caught in the horns of a dilemma:

(1) If superpositions are “just a statistical tool” for describing some deeper, “classical” layer of reality, then how do you explain known phenomena like Bell inequality violation, or the behavior of spin lattices or Bose-Einstein condensates—all of which appear to force the “classical” layer, if it exists, to be so weird that it’s basically just a restatement of quantum mechanics?

(2) If superpositions are more than such a statistical tool—if there is no “deeper, classical layer”—then how do you rule out QC?

My position is the following: before I could take QC skepticism seriously, I’d need an answer to either (1) or (2) that nontrivially engages my intellect, and deals with all the obvious objections that a physicist would raise.
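(The Bell-inequality phenomenon invoked in point (1) can be computed explicitly. Here is a minimal CHSH sketch for the singlet state with standard angle choices, added for illustration: any local “classical layer” is capped at a CHSH value of 2, while QM reaches 2√2.)

```python
import numpy as np

# Pauli matrices and the singlet state (|01> - |10>)/sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin(theta):
    """Spin measurement along angle theta in the X-Z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

def corr(theta_a, theta_b):
    """Correlation E(a,b) = <psi| A(a) (x) B(b) |psi>."""
    op = np.kron(spin(theta_a), spin(theta_b))
    return (singlet.conj() @ op @ singlet).real

# Standard CHSH angle choices: a=0, a'=pi/2, b=pi/4, b'=-pi/4
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = corr(a, b) + corr(a, b2) + corr(a2, b) - corr(a2, b2)
print(abs(S))  # -> 2.828... = 2*sqrt(2), exceeding the classical bound of 2
```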

Comment #39 April 15th, 2013 at 10:56 am

Incidentally, regarding Nature having invented “wheels” or even “gunpowder” (!) at the molecular scale: well, if those are the rules you want to play by, then Nature has also invented “quantum computing” at the molecular scale! See the recent work on the FMO photosynthetic complex, and its interpretation in terms of a quantum walk algorithm. Or the use of quantum entanglement in the internal compasses of European robins. These things are at least as much “quantum computers” as the metabolism going on in my stomach is a “gun”! 🙂

Comment #40 April 15th, 2013 at 11:07 am

Scott #38

Aye, there’s the rub.

There is no classical layer.

I’m saying Nature is fundamentally random, but she only evolves in a SINGLE probabilistically determined path. So all the other “branches” of the evolution don’t exist.

Fundamental randomness is a funny thing, and it was never properly introduced to QM, even with the Copenhagen Interpretation.

I’m explicitly inserting fundamental randomness at the tiniest level. In fact I’m saying the only reason the universe evolves at all is because of these microscopic random “jumps”, and then Nature does some bookkeeping by unitarily evolving the entire universe state vector, so we don’t just have chaotic random evolution.

What I’m saying is perhaps confusing; I believe it’s not just a trivial interpretation of things.

I just wanted to point out, that a small adjustment to our interpretation of QM might explain why QC can’t work while everything else seems to conform perfectly well to Standard Quantum Mechanics.

Of course this model satisfies Bell Violations – unless there is a branch in MWI which doesn’t.

An easier refutation of this model might come from showing the EM spectrum is continuous to a much finer resolution than Planck timescales, maybe from gamma-ray bursts or similar.

Comment #41 April 15th, 2013 at 11:49 am

James #36:

[..] so stop dissing evolution, it’s much cleverer than you guys trying to create laboratory quantum computers

At this point I can’t resist mentioning this, sort of half-jokingly: there actually is some evidence that European robins are better at maintaining entangled qubit pairs in a controlled manner than any expensive laboratory equipment at present.

Comment #42 April 15th, 2013 at 11:56 am

Scott & Attilla

that photosynthesis and Robin research is pretty stunning.

But I think it’s better as a refutation of Tegmark’s simple argument that the “warm” brain can’t do any interesting quantum stuff than as an argument for Quantum Computing – i.e., I don’t think “entanglement robustness” = “quantum computing”.

But I will be at least as happy as Scott if it’s proved that photosynthesis or Robins are doing QC! 🙂
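(For readers curious what a “quantum walk”, mentioned above in connection with photosynthesis, actually buys: a toy continuous-time quantum walk on a line spreads ballistically, with width growing like t, rather than diffusively like √t for a classical random walk. A generic illustrative sketch, not a model of photosynthesis or robins:)

```python
import numpy as np

n = 101                       # sites on a line; walker starts in the middle
A = np.zeros((n, n))
for i in range(n - 1):        # adjacency matrix of the path graph
    A[i, i + 1] = A[i + 1, i] = 1

# Continuous-time quantum walk: psi(t) = exp(-i A t) psi(0)
t = 20.0
evals, evecs = np.linalg.eigh(A)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
psi0 = np.zeros(n)
psi0[n // 2] = 1.0
p = np.abs(U @ psi0) ** 2     # position distribution at time t

sites = np.arange(n) - n // 2
quantum_std = float(np.sqrt((p * sites ** 2).sum()))
print(round(quantum_std, 1))
# The spread grows linearly in t (ballistic), far exceeding the
# ~sqrt(2*t) ~ 6.3 expected of a comparable diffusive classical walk.
```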

Comment #43 April 15th, 2013 at 1:08 pm

@James

We were playing by your rules, as Scott puts it; these weren’t arguments for QC, just a few funny remarks against your quite silly (ok, you can say “controversial”) evolutionary reasoning.

Personally, I’m getting the impression that discussing the correct spelling of my first name would make just as scientifically rewarding and profound a conversation as this one.

Comment #44 April 15th, 2013 at 1:12 pm

James #40: Sorry to keep harping, but setting aside the question of whether your model is true, you haven’t even explained your model. All the words about “fundamental randomness” and “only one branch being real” are useless to me.

Are you claiming that, whenever a measurement is made, the universe conspires to make it look like it was obeying QM—but if it has to perform too much computation in order to do so, then it will fail, and the conspiracy will be unmasked? If so, then exactly how much computation is too much? You still haven’t answered the question: will we start seeing this breakdown of QM at 10 qubits? 15? 20? And what will the breakdown look like? In other words: what exactly is it that you predict a QC will do, if not working as QM says it will work? Will it “crash the universe”? Or produce an error message from God? 🙂 Or will it just produce random garbage? What kind of random garbage?

Comment #45 April 15th, 2013 at 1:26 pm

Sorry, Attila, about the spelling.

Scott, you’re being ridiculous.

A minor modification to the interpretation of QM will not cause the universe to crash.

I have suggested that the discrete time evolution model can more easily be attacked by investigations of the continuity of the EM spectrum, rather than waiting for you bumbling guys to construct something that REALLY should do a quantum computation.

I don’t want your money, btw, I just want the truth.

Comment #46 April 15th, 2013 at 1:32 pm

In answer to your question, properly understood, a “QC” will fail at 2 qubits.

Comment #47 April 15th, 2013 at 3:18 pm

Sorry James, you’re hereby banned from this blog for 2 years, by reason of trollishness and evasiveness.

Hearing the sirens outside my window, responding to the explosions at the Boston marathon, really causes me to reevaluate my priorities, and debating ignoramuses on the blogosphere isn’t one of them.

Comment #48 April 15th, 2013 at 4:39 pm

I certainly didn’t mean to suggest that Vidal’s results do anything to contradict quantum computation; I am merely trying to understand their connection to the issues discussed here.

Note that similar techniques can be applied in order to simulate two-dimensional systems (search for PEPS, or “projected entangled pair states”).

As for what would keep you in the “slightly entangled regime”, it has been shown that ground states (and also, I think, low energy states) of local Hamiltonians can be represented by matrix product states, which means they are only slightly entangled. Of course, how time development enters this is a different story.
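Since “slightly entangled” can sound vague, here is a minimal numpy sketch (my own illustration for this thread — the function name and details are invented, not taken from Vidal or Verstraete–Cirac) of how successive SVDs split a state vector into matrix-product form, with the retained bond dimensions quantifying the entanglement across each cut:

```python
import numpy as np

def mps_bond_dims(psi, n, chi_max=None, tol=1e-10):
    """Sweep left to right, splitting an n-qubit state vector into
    matrix-product form via SVDs; return the bond dimensions kept."""
    dims = []
    M = psi.reshape(1, -1)                   # (virtual bond) x (rest)
    for site in range(n - 1):
        M = M.reshape(M.shape[0] * 2, -1)    # absorb one physical qubit
        U, s, Vh = np.linalg.svd(M, full_matrices=False)
        keep = s > tol                        # drop numerically zero Schmidt values
        if chi_max is not None:
            keep[chi_max:] = False            # optional hard truncation
        dims.append(int(keep.sum()))
        M = s[keep, None] * Vh[keep]          # push the weight to the right
    return dims

n = 4

# A product state |0000> is unentangled: every bond dimension is 1.
product = np.zeros(2**n); product[0] = 1.0
print(mps_bond_dims(product, n))   # [1, 1, 1]

# A GHZ state (|0000> + |1111>)/sqrt(2) needs bond dimension 2 at every cut.
ghz = np.zeros(2**n); ghz[0] = ghz[-1] = 1/np.sqrt(2)
print(mps_bond_dims(ghz, n))       # [2, 2, 2]
```

A slightly entangled state is precisely one where these bond dimensions stay small (or can be truncated with little error), which is what makes the MPS/PEPS simulations tractable.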

Comment #49 April 15th, 2013 at 4:49 pm

Yeah, I remember almost starting a fight with someone recycling bottles just after I’d put my baby daughter to bed. The protectiveness instinct passes once they’re a few years old.

Comment #50 April 15th, 2013 at 5:06 pm

The proof I mentioned is the article “MPS represent ground states faithfully” by Verstraete and Cirac (arXiv: http://arxiv.org/abs/cond-mat/0505140 ). Of course, it is limited to 1D. It makes no mention of excitations, but it seems plausible that something similar would apply to low-lying excitations, and numerics say yes.

I am very sad to hear about the bombs, by the way 🙁

Comment #51 April 15th, 2013 at 5:54 pm

That is true of the theorems, but on the other hand, it is striking that Verstraete and Cirac provide three pages of physical intuition (leading up to their Eq. 1) that applies in arbitrary dimension. It is natural to wonder whether physical principles that can be rigorously proved in one dimension may plausibly also hold in higher dimensions.

This provides an occasion to quote from Sergei Novikov’s The role of integrable models in the development of mathematics (1991). Nowadays Novikov’s maxim aptly describes the accelerating application of QIT methods to the simulation of physical dynamics.

Comment #52 April 16th, 2013 at 1:48 am

So if it is true that the low-energy spectra of realistic systems are only slightly entangled, this should mean that any quantum computational speedup must start with a highly excited state. Could this be a helpful perspective on the difficulties of building a QC?

Comment #53 April 16th, 2013 at 7:15 am

Audun #52: Except that’s not true in general. I’m told that one of Kitaev’s many great discoveries involves realistic 2D systems whose ground states exhibit “topological order” (i.e., are highly entangled in some sense). I don’t understand it, but hopefully someone else here can explain it.

Comment #54 April 16th, 2013 at 8:02 am

Duh, I forgot about those! Well, there goes my “unique perspective” 🙂

Comment #55 April 16th, 2013 at 12:33 pm

Scott #53 and Audun #54:

How realistic Kitaev’s topological systems are depends on who you ask.

You have to perform pretty clever tricks to physically realize Kitaev’s toric code Hamiltonian (which isn’t capable of universal quantum computation – it’s just a “protected” memory, which actually isn’t so protected because it’s unstable at any finite temperature). This usually involves all sorts of lasers and shuttling of particles around.

For his topological system capable of universal computation, I’m not sure any physical proposals exist. There’s been a lot of noise about the fractional quantum Hall systems, but no one has demonstrated the right type of non-abelian anyons, to my knowledge, nor the necessary control over them.

Comment #56 April 16th, 2013 at 5:11 pm

Hey Scott, just wanted to let you know that as a layman (electrical engineer) I find your arguments with ignoramuses very illuminating. I don’t blame you for getting frustrated with aggressively ignorant kooks, but it tends to prompt you to explain how the debate is framed at the topmost level, which I appreciate immensely.

Comment #57 April 16th, 2013 at 7:52 pm

I agree with Anon@56. Just do what you think is right, but keep on blogging, comrade. We’ll all be the better for it — hopefully. 😉

Comment #58 April 17th, 2013 at 3:05 pm

In #34, Scott says “(A related point, which I should’ve mentioned in my talk but didn’t: it turns out to be extremely hard to design a physically-plausible noise model that would only kill QC, and not also kill classical computation!)”

Interesting. Is this only because we don’t know enough about QC error correction to be able to prove that a certain noise definitely does not allow QC?

Comment #59 April 17th, 2013 at 4:08 pm

Steve Simon voices similarly broad claims in his (terrific!) on-line CSSQI video lecture Topological Quantum Computing: “You can argue that all noise processes are local!”

However, these broad claims are far more commonly encountered in QIT talks, and in blog comments, than in the peer-reviewed literature. For example, no such claims are advanced in the much-referenced survey upon which Steve Simon’s talk is based — much-referenced because it’s terrific! — namely Nayak, Simon, Stern, Freedman and Das Sarma, “Non-Abelian Anyons and Topological Quantum Computation” (http://arxiv.org/abs/0707.1889).

The reason is simple: Nature provides plenty of physically-plausible noise models that are problematic for QC precisely because they are generically non-local.

Disc drives provide an example that is both familiar and instructive: the classical memory is composed of thermodynamically stable magnetic domains in the platter. As the read-sensor flies overhead, each platter-bit transiently “sees” magnetic images of adjacent platter-bits in the conduction band of the read-sensor, and is (non-locally) perturbed by those images.

Fortunately — or rather, in virtue of careful design — the platter-memory bits are self-correcting, in that the platter itself constitutes a thermal reservoir that continuously reads-and-corrects each platter bit. Yet even at the classical level, non-local memory errors are ubiquitously present — in both magnetic and electrostatic memories — so much so that the Unix command “memtest all 1” runs more than a dozen tests that assess vulnerability to both non-local and pattern-dependent noise.

Needless to say, similar interactions dynamically entangle photon sources, photon detectors, and optical interferometer modes, such that none can be assessed in isolation from the other two. This is why assessing/demonstrating the scalability of n-photon coherent sources is comparably difficult to assessing/demonstrating the scalability of n-qubit memories, or of n-qubit computations.

Can quantum memories be as robustly and scalably self-healing as classical memories, in principle and (hopefully someday!) in practice? That is an open question, regarding which the invited speakers at QStart 2013 no doubt will have much to say.

Comment #60 April 29th, 2013 at 12:53 pm

The quantum Babbage analogy goes back at least to late April 1994, in the first public talk I gave about the factoring algorithm, at the ANTS conference. Specifically, in his introduction Len Adleman compared me to both Charles Babbage and Leonardo da Vinci (probably the most flattering introduction I will ever receive).

To be specific, he showed the plans for Babbage’s analytical engine, and showed one of Leonardo da Vinci’s drawings, and said that Babbage’s invention was a hundred years ahead of its time, but da Vinci’s was conceptually impossible.

Comment #61 April 29th, 2013 at 12:57 pm

Slight revision (my memory is faulty). The ANTS conference was at Cornell, May 6-9, 1994.

Comment #62 April 29th, 2013 at 3:28 pm

Anon #56 and Mike #57: Thanks so much for the encouragement!

Comment #63 April 29th, 2013 at 3:33 pm

jonas #58:

Is this only because we don’t know enough about QC error correction to be able to prove that a certain noise definitely does not allow QC?

I don’t know but I very much doubt it. The way I think about it, the reason why it’s so hard to kill QC without also killing classical computation is that, to get universal QC, there are basically only two requirements:

(1) A universal set of classical operations.

(2) A single “quantum” operation, such as the Hadamard gate.
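To make the split between (1) and (2) concrete, here is a two-gate density-matrix sketch (my own toy illustration, stated in the computational basis — not Scott’s code): full dephasing leaves classical basis states, and hence classical logic, untouched, while erasing exactly the coherence a Hadamard creates.

```python
import numpy as np

def dephase(rho):
    """Pure dephasing channel: zero the off-diagonal terms of a density
    matrix (full dephasing in the computational basis)."""
    return np.diag(np.diag(rho))

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
X = np.array([[0.0, 1.0], [1.0, 0.0]])          # classical NOT gate

ket0 = np.array([[1.0, 0.0], [0.0, 0.0]])       # |0><0|

# (1) Classical operations survive: NOT maps |0><0| to |1><1|, and
# dephasing does nothing, since basis states have no off-diagonals.
rho_classical = dephase(X @ ket0 @ X.conj().T)
print(rho_classical)    # [[0, 0], [0, 1]] -- still a definite bit

# (2) The quantum operation is destroyed: H|0> is an equal superposition,
# and dephasing collapses it to the maximally mixed state.
rho_quantum = dephase(H @ ket0 @ H.conj().T)
print(rho_quantum)      # [[0.5, 0], [0, 0.5]] -- a fair coin, no coherence
```

The contrivance Scott mentions is visible here too: `dephase` only spares classical logic because the NOT gate happens to act in the same basis the noise singles out.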

And yes, it’s possible to design noise models (for example, pure dephasing noise) that target only (2) without targeting (1). But because they require specifying the basis in which the classical computation will be allowed to take place unmolested, such noise models tend to be pretty contrived. Most noise models will kill either both (1) and (2) (in which case they don’t even allow classical computation), or neither (in which case they allow QC).

Comment #64 May 7th, 2013 at 4:43 am

Re Scott #63: OK, thank you for the explanation.

Comment #65 May 11th, 2013 at 11:28 am

To answer your question:

I believe that QC experiments will begin to fail with higher probability as the number of qubits increases. The reason they will fail with higher probability is that the fraction of the possible “unitary” evolution paths where the discrete Fourier transform etc. will be useful becomes smaller and smaller as more qubits are involved. I’m suggesting that this is a fundamental limitation of Nature, not of engineering. I think this will be very hard to demonstrate convincingly until a few hundred qubits are reached, since at low numbers of qubits there is not much effect probabilistically on the predicted outcomes of the quantum algorithms by assuming that superpositions are not persistent in reality.

I.e., at low qubit numbers the probability is very high that the algorithms will work (the universe evolves on a path where the algorithm is effective) – and any failures will be so rare as to be indistinguishable from experimental error.

Now, you might say that this is too vague and not much of a claim, but if I were able to claim anything stronger, then I would certainly have to be wrong, as it would imply that a lot of well-established QM results shouldn’t work in practice either.

That’s why I suggested that any “discrete” time* evolution model could better be attacked by observations of things like the recent huge gamma-ray burst GRB 130427A.

(*Note that time is continuous, and so are the amplitude phases.)

But perhaps a quantitative calculation of the probability of failure with respect to qubit number for a specific algorithm would be possible – then I guess people would be more interested.

Comment #66 May 15th, 2013 at 8:46 pm

If your figures are right, you’re mistaken about what to look at. A gamma-ray burst tends to produce photons in the 10–100 GeV range; this is around 17 orders of magnitude below the Planck energy, and a couple of orders below the output of a high-end particle accelerator. But as hard as it is to build, say, a 512-qubit quantum computer, it is still child’s play compared to building a Planck-scale particle accelerator.

That said, in the unlikely event that I am understanding your model correctly, the best thing to look at would be Quantum Zeno Effect experiments, since in your theory wavefunctions will collapse even in the absence of observation.
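Since Quantum Zeno experiments are suggested as the relevant test, a quick reminder of the standard effect (a textbook toy calculation, my own sketch — not a simulation of the commenter’s model): a qubit rotating from |0⟩ toward |1⟩ is frozen in place by sufficiently frequent projective measurement.

```python
import numpy as np

def survival_probability(theta, n):
    """Rotate a qubit from |0> by total angle theta, projectively
    measuring in the |0>/|1> basis n times along the way.
    After each step of theta/n, the chance of still being found in |0>
    is cos^2(theta/(2n)); the n measurement outcomes multiply."""
    return np.cos(theta / (2 * n)) ** (2 * n)

theta = np.pi   # unwatched, this rotation takes |0> to |1> with certainty
for n in [1, 10, 100, 1000]:
    print(n, survival_probability(theta, n))
# The survival probability climbs toward 1 as measurements become frequent:
# the qubit is "frozen" by observation.
```

In the commenter’s picture, where collapse happens spontaneously every ~10^-43 s without any observer, one would expect Zeno-like freezing (or at least anomalous decay statistics) even in undisturbed systems, which is why such experiments are a natural place to look.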

Comment #67 May 18th, 2013 at 12:06 am

Whoops! It’s the input to a high-end accelerator that is in the 1-10 TeV range, not the output; the latter is probably lower.

But not enough lower that it would matter to my argument, I think.

On the other hand, it seems as if cosmic rays can sometimes reach 10^21 eV, only about 7 orders of magnitude below the Planck energy.

Comment #68 May 29th, 2013 at 3:36 pm

Your mannerisms were fine, good talk.