Lots of people asked me to comment on a much-discussed new preprint by Matthew Pusey, Jonathan Barrett, and Terry Rudolph (henceforth PBR), “The quantum state cannot be interpreted statistically”. (See here for an effusive *Nature News* article, here for the predictable Slashdot confusion-fest, here for a related *Cosmic Variance* guest post by David Wallace, and here for a spiteful rant by Lubos Motl that hilariously misunderstands the new result as “anti-quantum-mechanics.”)

I recommend reading the preprint if you haven’t done so yet; it should only take an hour. PBR’s main result reminds me a little of the No-Cloning Theorem: it’s a profound triviality, something that most people who thought about quantum mechanics already knew, but probably didn’t *know* they knew. (Some people are even making comparisons to Bell’s Theorem, but to me, the PBR result lacks the same surprise factor.)

To understand the new result, the first question we should ask is, what exactly do PBR *mean* by a quantum state being “statistically interpretable”? Strangely, PBR spend barely a paragraph justifying their answer to this central question—but it’s easy enough to explain what their answer is. Basically, PBR call something “statistical” if two people, who live in the same universe but have different information, could *rationally disagree* about it. (They put it differently, but I’m pretty sure that’s what they mean.) As for what “rational” means, all we’ll need to know is that a rational person can never assign a probability of 0 to something that will actually happen.

To illustrate, suppose a coin is flipped, and you (but not I) get a tip from a reliable source that the coin probably landed heads. Then you and I will describe the coin using different probability distributions, but neither of us will be “wrong” or “irrational”, given the information we have.

In quantum mechanics, *mixed states*—the most general type of state—have exactly the same observer-relative property. That isn’t surprising, since mixed states include classical probability distributions as a special case. As I understand it, it’s this property of mixed states, more than anything else, that’s encouraged many people (especially in and around the Perimeter Institute) to chant slogans like “quantum states are states of knowledge, not states of nature.”

By contrast, *pure* states—states with perfect quantum coherence—seem intuitively much more “objective.” Concretely, suppose I describe a physical system using a pure state |ψ>, and you describe the same system using a different pure state |φ>≠|ψ>. Then it seems obvious that *at least one of us* has to be flat-out wrong, our confidence misplaced! In other words, at least one of us should’ve assigned a mixed state rather than a pure state. The PBR result basically formalizes and confirms that intuition.

In the special case that |ψ> and |φ> are *orthogonal*, the conclusion is obvious: we can just *measure* the system in a basis containing |ψ> and |φ>. If we see outcome |ψ> then you’re “unmasked as irrational”, while if we see outcome |φ>, then I’m unmasked as irrational.

So let’s try a slightly more interesting, non-orthogonal example. Suppose I describe a system S using the state |0>, while you describe it using the state |+>=(|0>+|1>)/√2. Even then, there are *some* measurements and outcomes of those measurements that would clearly reveal one of us to have been irrational. If we measure S in the {|0>,|1>} basis and get outcome |1>, then I was irrational. If we measure in the {|+>,|->} basis (where |->=(|0>-|1>)/√2) and get outcome |->, then you were irrational. Furthermore, if S is any qubit that obeys quantum mechanics, then it *must* have a decent probability either of returning outcome |1> when measured in the {|0>,|1>} basis, *or* of returning outcome |-> when measured in the {|+>,|->} basis.
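For the numerically inclined, here is a minimal numpy sketch (my addition, not PBR’s) verifying that last claim: for *any* qubit state ρ, the probability of outcome |1> in the {|0>,|1>} basis plus the probability of outcome |-> in the {|+>,|->} basis is at least 1 − 1/√2 ≈ 0.29, namely the smallest eigenvalue of |1><1| + |-><-|.

```python
import numpy as np

# Projectors onto the two "incriminating" outcomes:
# |1><1| (the Z-basis outcome that unmasks the |0> believer) and
# |-><-| (the X-basis outcome that unmasks the |+> believer).
ket1 = np.array([0.0, 1.0])
ketminus = np.array([1.0, -1.0]) / np.sqrt(2)
P1 = np.outer(ket1, ket1)
Pminus = np.outer(ketminus, ketminus)

# For ANY density matrix rho, P(1) + P(-) = tr(rho (P1 + Pminus)),
# which is lower-bounded by the smallest eigenvalue of P1 + Pminus.
bound = min(np.linalg.eigvalsh(P1 + Pminus))
print(round(bound, 4))  # prints 0.2929, i.e. 1 - 1/sqrt(2)

# Sanity check on random pure states: the bound always holds.
rng = np.random.default_rng(0)
for _ in range(1000):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    rho = np.outer(psi, psi.conj())
    total = np.real(np.trace(rho @ (P1 + Pminus)))
    assert total >= bound - 1e-9
```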

So, are we finished? Well, PBR don’t discuss the simple argument above, but I assume they wouldn’t be satisfied with it. In particular, they’d probably point out that it only unmasks one of us as irrational for *some* measurement outcomes—but who can say what the measurement outcome will be, especially if we don’t presuppose that the quantum state provides a complete description of reality?

What they want instead is a measurement that’s guaranteed to unmask someone as irrational, *regardless* of its outcome. PBR show that this can be obtained, under one further assumption: that “rational beliefs behave well under tensor products.” More concretely, suppose two people with different knowledge could rationally describe the same physical system S using different pure states, say |0> or |+> respectively. Then if we consider a new system T, consisting of *two independent copies of S*, it should be rationally possible to describe T using any of the *four* states |0>|0>, |0>|+>, |+>|0>, or |+>|+>. But now, PBR point out that there’s a 2-qubit orthonormal basis where the first vector is orthogonal to |0>|0>, the second vector is orthogonal to |0>|+>, the third vector is orthogonal to |+>|0>, and the fourth vector is orthogonal to |+>|+>. So, if we measure in *that* basis, then *someone* will get unmasked as irrational regardless of the measurement result.
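Here is a minimal numpy sketch of that construction (my addition; the four entangled vectors below are, up to ordering, the ones given in the PBR paper for this special case), checking both that they form an orthonormal basis and that each basis vector is orthogonal to the corresponding product state:

```python
import numpy as np

s2 = np.sqrt(2)
k0 = np.array([1.0, 0.0])
k1 = np.array([0.0, 1.0])
kp = (k0 + k1) / s2  # |+>
km = (k0 - k1) / s2  # |->

kron = np.kron
# The four candidate "tensor product beliefs" about T:
beliefs = [kron(k0, k0), kron(k0, kp), kron(kp, k0), kron(kp, kp)]

# The entangled basis from the PBR paper (specialized to |0> vs |+>):
# the i-th vector is orthogonal to the i-th belief above.
basis = [
    (kron(k0, k1) + kron(k1, k0)) / s2,
    (kron(k0, km) + kron(k1, kp)) / s2,
    (kron(kp, k1) + kron(km, k0)) / s2,
    (kron(kp, km) + kron(km, kp)) / s2,
]

B = np.array(basis)
assert np.allclose(B @ B.T, np.eye(4))  # an orthonormal 2-qubit basis
for b, v in zip(basis, beliefs):
    assert abs(b @ v) < 1e-12            # i-th outcome has probability 0
print("every outcome rules out one of the four beliefs")
```

So whichever of the four outcomes occurs, one of the four beliefs assigned it probability 0, and its holder is unmasked as irrational.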

More generally, given any physical system S that you and I describe using different pure states |ψ> and |φ>, PBR define a new system T consisting of k independent copies of S, where k is inversely proportional to the angle between |ψ> and |φ>. They then construct a projective measurement M on T such that, *whichever* of M’s 2^k possible outcomes is observed, *one* of the 2^k possible “tensor product beliefs” about T gets unmasked as irrational. And that’s it (well, other than a generalization to the noisy case).

So, will this theorem finally end the century-old debate about the “reality” of quantum states—proving, with mathematical certitude, that the “ontic” camp was right and the “epistemic” camp was wrong? To ask this question is to answer it.

(**Clarification added for Lubos Motl and anyone else unwilling or unable to understand:** The answer that I intended was “no.” I don’t think the battle between the “ontic” and “epistemic” camps can *ever* be won, by its nature. Nor has that particular battle ever interested me greatly, except insofar as some interesting mathematical results have come out of it.)

I expect that PBR’s philosophical opponents are already hard at work on a rebuttal paper: “The quantum state *can too* be interpreted statistically”, or even “The quantum state *must* be interpreted statistically.”

I expect the rebuttal to say that, yes, *obviously* two people can’t rationally assign different pure states to the same physical system—but only a fool would’ve ever thought otherwise, and that’s not what anyone ever meant by calling quantum states “statistical”, and anyway it’s beside the point, since pure states are just a degenerate special case of the more fundamental mixed states.

I expect the rebuttal to prove a contrary theorem, using a definition of the word “statistical” that subtly differs from PBR’s. I expect the difference between the two definitions to get buried somewhere in the body of the paper.

I expect the rebuttal to get blogged and Slashdotted. I expect the Slashdot entry to get hundreds of comments taking strong sides, not one of which will acknowledge that the entire dispute hinges on the two camps’ differing definitions.

There’s an important lesson here for mathematicians, theoretical computer scientists, and analytic philosophers. You want the kind of public interest in your work that the physicists enjoy? Then *stop being so goddamned precise with words!* The taxpayers who fund us—those who pay attention at all, that is—want a riveting show, a grand Einsteinian dispute about what is or isn’t real. Who wants some mathematical spoilsport telling them: “Look, it all depends what you mean by ‘real.’ *If* you mean, uniquely determined by the complete state of the universe, and *if* you’re only talking about pure states, then…”

**One final remark.** In their conclusion, PBR write:

… the quantum state has the striking property of being an exponentially complicated object. Specifically, the number of real parameters needed to specify a quantum state is exponential in the number of systems n. This has a consequence for classical simulation of quantum systems. If a simulation is constrained by our assumptions—that is, if it must store in memory a state for a quantum system, with independent preparations assigned uncorrelated states—then it will need an amount of memory which is exponential in the number of quantum systems.

The above statement is certainly true, but it seems to me that it was already demonstrated—and much more convincingly—by (for example) the exponential separations between randomized and quantum communication complexities.

Also to be expected is an article that presents a painfully contrived crypto-setting, where the PBR result is “the quantum solution, which cannot be done classically…”, etc. The most impressive part of this article will be the fact that, no matter how much you torture them, its authors will keep on insisting that it is actually a very natural and relevant problem that they solved.

Nice explanation. Now I need to read the darn paper!

I think your explanation of the roots of sloganeering in paragraph 5 is a tad off, though. Mixed states — like classical probabilistic states — are more or less unambiguously “states of knowledge”. I don’t know of much disagreement about that, although there are a few high-profile papers (like Deutsch on CTCs) that fail to recognize it.

The debate is really about whether PURE states are “real” or just “states of knowledge”. The sloganchanters at PI (or, at least, their leaders) are asserting that pure states are states of knowledge, and not (I think) motivated by any observations about mixed states.

The effect of PBR on that movement will be interesting to observe. A couple of years ago, I thought I had Rob stumped with an unambiguous-discrimination example, but he found a hidden variable model of it. PBR is a stronger case of the same thing, and I’m not at all sure that Rob (or somebody) won’t come up with a similar reconciliation — i.e., a mapping from pure states to probability distributions over a hidden variable that reproduces exactly the same behavior. Which would imply that “…probability distributions cannot be interpreted statistically,” which would sort of cast doubt on PBR’s definition of “interpreted statistically”.

On the other hand, if you *can’t* do that, then PBR’s argument is a very compelling indication that pure states aren’t just distributions over hidden variables!

You forgot one,

I expect Lubos to write a rant against me

This was a controversy? Who out there thought pure states were states of knowledge? I thought that was resolved decades ago.

I’ll read the paper, but I get a cloud of ignorance in my head about what “a new system T, consisting of two independent copies of S” means in the real world. Is that possible? Or are we just talking about a copy in the sense of the same quantum state, e.g. spin up, spin down, or mixed, that we could create for qubits?

I don’t think your reading of “statistical” as being able to rationally assign different states is quite right. We already knew (and yes, I know I am falling into your trap) that two agents cannot rationally assign different pure states from the Brun, Finkelstein and Mermin result (see http://arxiv.org/abs/quant-ph/0109041 or http://arxiv.org/abs/1110.1085 for my version of it). This gives a well-defined operational meaning to the ability to rationally assign different states.

Now, I could easily imagine a world in which the BFM result holds, but the PBR result does not. This is because, if you are allowing hidden variables then there is potentially more information available in reality than you can access operationally. Therefore, you might be forced into agreement on the basis of operationally available information, but not if you only knew the ontological state.

The psi-epistemicist response to PBR is quite straightforward. Basically, the result does not rule out any proposal that we were taking seriously in the first place. For the neo-Copenhagenists (e.g. Quantum Bayesians, Zeilinger’s, etc.) there is no underlying state of reality beyond the quantum predictions, so the result is irrelevant to their program and they can continue as before. Those of us who are realist, e.g. Rob Spekkens and myself, have more of a problem and we must deny one of the assumptions of the theorem. However, Bell’s theorem, Kochen-Specker, Hardy’s Ontological Excess Baggage theorem, and a host of results by Alberto Montina have already given us enough problems with the usual framework for ontological models that we had already abandoned it as a serious proposal a long time ago. Spekkens thinks that the ultimate theory will have an ontology consisting of relational degrees of freedom, i.e. systems do not have properties in isolation, but only relative to other systems. Personally, I can’t make much sense of that beyond a rephrasing of many worlds, so I favor a theory with retrocausal influences instead. Neither of these proposals is ruled out by the PBR theorem.

That said, I do think the PBR result is the most significant result in quantum foundations for several years. It was an important open question as to whether psi-epistemicism was possible within the standard framework for ontological theories and that has now been answered in the negative. However, as I said, this only confirms intuitions that we (both psi-ontologists and psi-epistemicists) already had.

Guess what? No, don’t guess and just have a look at this abstract:

http://pirsa.org/11050028/

Robin #2: OK, I imagined the argument was basically that

(a) mixed states are clearly states of partial knowledge,

(b) mixed states are the most fundamental type of quantum state, ergo

(c) “quantum states are states of knowledge”, with a pure state |ψ> being the degenerate special state of *maximal* knowledge (at least with respect to an orthogonal basis containing |ψ>).

Let me put it this way: *if* what the epistemic camp believed is overturned by the PBR theorem, *then* what they believed is so *obviously* wrong that they shouldn’t have needed such a theorem to set them straight! And therefore, being charitable, I’m going to proceed on the assumption that they meant something else.

Matt #6: Sorry, our comments crossed each other—thanks for the clarifications! (And especially, for confirming my suspicion that “the [PBR] result does not rule out any proposal that we [psi-epistemicists] were taking seriously in the first place.”)

As I said, I would’ve strongly preferred if PBR had given a careful discussion of what they mean by “statistical” and what they *don’t* mean (and for which meanings the “statistical interpretation” can be trivially ruled out even *without* their theorem, etc. etc.), rather than breezing past these issues in a few sentences.

Markk #5: “Two independent copies of S” just means that you apply the same physical state preparation procedure twice, for example in two separate laboratories.

So the paper is shit. Why should I waste an hour reading it? I’d rather finish working my way through Marx’s collected works!

You couldn’t have possibly read the paper or you’re unfamiliar with basic facts about QM or basic terminology used by those who want to replace QM by something else, if I avoid the term crackpot.

Quantum mechanics is definitely probabilistic. It means that it can only make probabilistic predictions of measurements which can’t be made more unambiguous, not even in principle, and it says nothing about the “real state” of a system prior to the measurement. These are completely basic and universal properties of any quantum mechanical theory that are not open to any interpretation. The probabilistic framework of quantum mechanics is undoubtedly valid.

Even the very abstract of the Pusey paper presents these things very clearly and argues – totally absurdly – that the statistical interpretation (by Max Born) is wrong.

On the other hand, when you say that the paper is trying to resolve the battle between the “ontic” and “epistemic” camps, what you fail to understand is that, as you can see in the first paragraph of e.g.

http://arxiv.org/abs/0706.2661

both of these two camps are composed of advocates of hidden-variable models, if I avoid the term crackpots for the second time. To say the least, the hidden variable models have been falsified for more than 40 years and they are by definition incompatible with the basic principle of quantum mechanics: both of these adjectives are anti-quantum-mechanics. So the paper surely can’t prove that ontists or epistemists are right because both of these groups have been known to be wrong for 40+ years (and I would really say for 85+ years).

Check my blog for updates (at the end).

To me the claim that mixed states are states of knowledge while pure states are not is a little puzzling because of the fact that it is not possible to uniquely recover what aspects of the mixed state are subjective and what aspects are objective.

The simple case is this: Let’s work with a spin-1/2 particle, so there are states:

|0>

|1>

|+> = (|0> + |1>)/√2

|-> = (|0> - |1>)/√2

The mixed state corresponding to 50% |0> + 50% |1> is the SAME as the mixed state corresponding to 50% |+> + 50% |->. In both cases, the density matrix (or whatever it’s called) looks like this:

0.5 |0><0| + 0.5 |1><1|
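As a quick numerical sanity check (my addition, not part of the original comment), here is a minimal numpy sketch verifying the point: both 50/50 mixtures give the same density matrix, I/2, so no measurement whatsoever can distinguish the two preparations.

```python
import numpy as np

k0 = np.array([1.0, 0.0])
k1 = np.array([0.0, 1.0])
kp = (k0 + k1) / np.sqrt(2)  # |+>
km = (k0 - k1) / np.sqrt(2)  # |->

def proj(v):
    """Projector |v><v| onto a real unit vector v."""
    return np.outer(v, v)

rho_z = 0.5 * proj(k0) + 0.5 * proj(k1)  # 50% |0> + 50% |1>
rho_x = 0.5 * proj(kp) + 0.5 * proj(km)  # 50% |+> + 50% |->

# Both preparations yield the maximally mixed state I/2, so they are
# operationally indistinguishable.
assert np.allclose(rho_z, np.eye(2) / 2)
assert np.allclose(rho_x, rho_z)
print(rho_z)
```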

Scott, I agree that a better discussion of statistical would have been desirable, or at least a replacement of the word “statistical” with “psi-epistemic”. However, I suspect the authors were compactifying for the purposes of submitting to a big name journal and seeking a title that would have wide appeal. I recommend this paper:

http://arxiv.org/abs/0706.2661

for a more thoroughgoing discussion of the background to the question that the PBR paper solves.

Also, I disagree with you that the type of theory that PBR rules out is obviously wrong. For a single system of arbitrary dimension, I believe it is possible to construct a psi-epistemic theory of this type. Hardy, Spekkens and Barrett constructed such a theory several years ago, but it was never published because it was rather ugly and contrived. This shows that it is essential to have the independence condition for product states and to have entangled measurements in order to get the result. Alternatively, one could consider dynamics and multiple time-steps to derive psi-ontology, as Montina did in an earlier paper:

http://arxiv.org/abs/1008.4415

So… does this paper lend credence to Bohm and MWI more so than any other potential “solution” to the measurement problem?

It’s fun and instructive to read Pusey et al., “The quantum state cannot be interpreted statistically” (arXiv:1111.3328, henceforth QS-CBIS), concurrently with Negrevergne et al., “Benchmarking Quantum Control Methods on a 12-Qubit System” (PRL 2006, henceforth BQCM12-QS). A natural question is: What changes to the real-world quantum circuit of BQCM12-QS (Fig. 1) are required to emulate the ideal quantum circuit of QS-CBIS (Fig. 2), so that the ingenious quantum mechanical predictions of arXiv:1111.3328 can be tested?

On first reading, it seems that three changes are required. Least difficult — but by no means trivial — is that the 13C and 15N nuclei of the l-histidine molecules of BQCM12-QS need to be hyperpolarized; this is feasible via existing techniques of dynamic nuclear polarization (DNP). Second, the individual spins of the 13C and 15N nuclei must be read out individually rather than collectively; such single-molecule readouts are feasible in principle and practical instrumentation is developing rapidly (ongoing developments in hyperpolarization and imaging technology are motivated largely by applications in medicine and materials science).

The third and toughest technical challenge is dynamically instantiating the single gate that in QS-CBIS Fig. 2 is labeled “R_\alpha”, because this particular gate’s generator (that is, its effective Hamiltonian) — though algebraically simple — does not correspond to any of the local spin Hamiltonians that nature so generously provides.

Thus we end up confronted with a generic challenge of quantum complexity theory: how can we physically represent unitary transforms that generate high-order non-local quantum entanglement? The experience of recent decades has been that the practical obstructions are very great; researchers like Gil Kalai (arXiv:1106.0485) and others now are wondering what (if anything) this difficulty tells us about the mathematical structure of nature’s state-spaces.

rrtucci #3:

“You forgot one, I expect Lubos to write a rant against me”

Bravo! Your prediction actually came true before any of mine. 😀

Joe Miller #11:

“So the paper is shit. Why should I waste an hour reading it?”

Uh, I didn’t say that, and I don’t think it! I thought my assessment was pretty clear: “it’s a profound triviality, something that most people who thought about quantum mechanics already knew, but probably didn’t *know* they knew.”

(Naturally, though, if the foundations of quantum mechanics is not one of your interests, then you shouldn’t spend even the hour it takes to read it.)

Peter Shils #15:

“So… does this paper lend credence to Bohm and MWI more so than any other potential ‘solution’ to the measurement problem?”

I apologize for not being clear enough. No, I don’t think that this paper lends particular credence to *any* interpretation: the disagreements between the Bohmians, MWIers, Copenhagenists, etc. are just not the sort of thing that can be cleared up by a technical result of this kind. So you can go right on believing whichever QReligion you like the best—*unless* your religion actually held for some reason that the same physical system could be rationally described using two different pure states. And in that case, as I said before, you should have known better even before this paper came out!

Kudos to you Scott for not deleting the screeds of people like Lubos Motl. He hasn’t mastered either the English language or QM but his lunacy is pretty entertaining.

Off-Topic: Hi Scott, what about this article: http://www.technologyreview.com/computing/38833/?p1=Mag_story0

Anything serious, or just another D-Wave lookalike blurb…?

Here is video of Lubos singing Queen’s Bohemian Rhapsody:

http://www.youtube.com/watch?v=S9qbZSVSFAM

And Bon Jovi’s Always:

http://www.youtube.com/watch?v=P2SL1GquC4A&feature=related

He can also sing Frank Sinatra:

http://www.youtube.com/user/caasn#p/u/19/l5f-gqLsIec

And REM’s Losing My Religion:

http://www.youtube.com/user/caasn#p/u/27/WLS2oUjA2qs

Also, Lubos has a knack for insulting people, misquoting them, and playing dirty during arguments. For example, these actions have led physicist Sabine Hossenfelder to write this post:

http://backreaction.blogspot.com/2007/08/lubo-motl.html

Raistlin #21: No, that looks like another perfectly-reasonable experimental QC result getting a bit oversold by the press. Unlike in the D-Wave case, the research group isn’t claiming to have something “commercially-viable” already.

Gene #22: I find it deeply ironic that Sheldon, from *The Big Bang Theory*, is said to be modeled after Lubos. Normally art exaggerates life for comic effect—but in this case, Sheldon strikes me as downright *empathetic* compared to Lubos, with a vastly greater ability to understand other people’s viewpoints.

Raistlin #21: Your attention is directed toward three dovetailed quantum computing articles in that *Science* issue:

(1) Mariantoni et al., “Implementing the Quantum von Neumann Architecture with Superconducting Circuits,”

(2) Lanyon et al., “Universal Digital Quantum Simulation with Trapped Ions,” and

(3) DiVincenzo’s accompanying commentary “Toward Control of Large-Scale Quantum Computing.”

David DiVincenzo’s commentary in particular is uncompromisingly optimistic:

Seeking historical precedents (with the help of Google Books) we can search the 1945-80 literature relating to “fusion power” and “atomic power” — in vain — for similarly uncompromising avowals that fusion and/or nuclear power technologies “unquestionably” would solve humanity’s energy problems.

We thus appreciate that in their unquestioning faith in quantum computing’s eventual practical feasibility, at least some quantum computing researchers rank among the most optimistic scientists in history. Are they right? Who knows!

It’s fun too to search for usages of “unquestionably” in the Inspec, MathSciNet, and Arxiv databases. It turns out that “unquestionably” is a rather uncommon word in these databases, perhaps because generic math/science/engineering assertions can be improved by omitting it.

Yet even this rule has exceptions: to my mind one such exception is John Preskill’s assertion in his still-relevant 1997 survey *Quantum Computing: Pro and Con* (arXiv:quant-ph/9705032) that “arguments of this sort are unquestionably useful.”

Scott, I’m ill-at-ease that you called Motl “on the autism spectrum”. Of course I understand you chose this phrasing to avoid “moron” and other offensive terms. However autism is a disorder that does not abolish anyone’s ability to be kind, humble, and self-conscious of her/his cognitive limitations. By being under-offensive to a single person who arguably would deserve it, you made your post offensive to many who do *not* deserve it.

Thank you for this otherwise very interesting post.

Jiav #26: You’re absolutely right, and I changed the post. I apologize for any offense I caused.

As someone who might be on the autism spectrum myself, it seems to me that Lubos’s problem is a toxic *combination* of autism with ordinary assholery. In other words, he’s unable to understand what any researchers with views sufficiently different from his own are saying, and *being* unable, he then leaps to the conclusion that they must be saying whatever they are because they’re communist imbecile garbage bent on destroying science.


Umm, D-Wave sold a quantum computer for $10 million; it was installed at USC’s ISI, and it’s being used now by dozens of scientists from around the world. I’d call that commercial. Also, apparently they are seeing linear run-time scaling on average-case NP-hard problems, with runtimes of microseconds, on problems defined over up to 100 binary variables. I think it would be great if you acknowledged this, so that your fanboys who appear to have trouble thinking for themselves could have a less biased view of what could be the most important development in computing in the past 50 years.

Just wanted to congratulate you on the title, which is the shortest summary I’ve seen of the generic physicist’s take on the issue of interpretation (by and large my own personal take as well, though I seem to be unable to commit to it).

I left a comment on Matt Leifer’s beautiful post about PBR—and since it represents a (small) change in my views, I thought I should crosspost it here.

Matt, thanks so much for this beautiful exposition! I particularly appreciated learning the backstory behind the PBR result (which they seem to have simply assumed—wrongly, in my case!—that their readers would already know), as well as your explanation that there are really three relevant camps here:

(1) psi-epistemicists who think that a quantum state represents ordinary statistical uncertainty about some underlying “real” object (which isn’t itself a quantum state),

(2) psi-epistemicists who deny that there’s any such “real” object (i.e., Copenhagenists and QBayesians), and

(3) psi-ontologists (like Everettians and Bohmians), who think that the quantum state is the “real” object (or at least part of it).

However, after reading your discussion based on this classification, I have to confess that I’ve edged slightly toward Lubos’s view (if not, of course, his vitriol!).

When I read the PBR paper, my reaction was: hey, supposing different people were to describe the same physical system using different pure states, this sort of entangled measurement would provide a neat and very convincing way to show that at least one of the people was flat-out wrong, not just insufficiently informed. And crucially, much like with the Bell or Kochen-Specker theorems, the procedure can be described directly in terms of measurements and their outcomes, without having to presuppose that the formalism of QM tells you all and exactly what’s happening behind the scenes (which of course would trivialize the desired conclusion). I could certainly imagine such a procedure being good for something.

Now you’re saying, no, it’s much bigger than that: the real import of this result is to rule out the epistemic/ontic hybrid view (1). In response, I agree that it’s nice to rule out (1) so cleanly … but I also reiterate my conviction that Bell, Kochen-Specker, and many other results already told us that view (1) can’t and shouldn’t be taken seriously! In other words: even before PBR, if you’d explained to me that people were pursuing (1) as an actual research program (rather than just a mathematical strawman to be knocked down), I would’ve said: “why?! don’t we already know that any ‘cure’ of that kind would be vastly worse than whatever disease is worrying you?”

So, that’s the tiny extent to which I agree with Lubos. I differ from him, of course, in appreciating (and wanting to understand) a nice proof even for something I’d already taken as obvious!

Curious programmer #30: Selling one experimental machine—to a military contractor that was apparently *required* to spend a large sum on Canadian-made equipment because of a contract with the Canadian government—seems “commercial” only in the loosest sense of the word! D-Wave might even sell more such machines to the “pointy-haired bosses” of the world—but if they want the Dilberts to get interested, they’ll probably first need to demonstrate a clear advantage over simulated annealing and other classical optimization algorithms. (I.e., an advantage that you don’t have to squint to see.) I myself will get more interested once they clear the milestone of demonstrating even 2-qubit entanglement.

First of all, thanks to Scott for explaining the paper by Pusey et al. Another friend mentioned this paper, and I did skim it, but did not give it enough time to figure out their real point. Their basic construction is very simple; it was only their terminology that I found confusing.

At least in one common use of the word “epistemic”, I think that I am in the epistemic camp. Which is to say, I accept a neo-Copenhagen interpretation, basically Copenhagen plus Bayes plus mixed states. So yes, I think that mixed states are more fundamental for these particular questions than pure states.

It is an interesting remark that two Bayesian observers of quantum systems cannot rationally assign them unequal pure states. Now that I understand that that is what the paper says, the paper at least isn’t crazy. But that doesn’t make it all that exciting, and it certainly doesn’t support the conclusion that the “wavefunction” (which is dusty and misleading terminology for a pure or vector state of a quantum system) isn’t statistical. For one reason, because two observers certainly can rationally assign two different density operators to a quantum system, as long as the density operators do not have disjoint support.

For another reason, because the argument only applies if the two observers are disentangled relative to each other, i.e., if they can share classical (meaning non-quantum) information with each other. This is not the case if they are in separate galaxies that disappear into each others’ cosmological event horizons. In this scenario, the holographic principle says more-or-less the opposite of what Pusey et al say.

As for Lubos Motl, I agree with what he says about the physics — sort of, because his posting is marred by some serious errors in his descriptions of other people. He cites a paper that defines something called a “psi-epistemic hidden variable model”, and then says that the “epistemic camp” is “composed of advocates of hidden variables”. He further says that “apparently” Scott is “an anti-quantum-mechanical crank”. As a description of people, this is nonsense. Just because one paper calls one type of hidden-variable model “psi-epistemic”, that doesn’t mean that the “epistemic camp” advocates hidden variables. On the contrary, the word “epistemic” has simply been used to refer to the Copenhagen interpretation or the probabilistic interpretation.

In particular, Scott isn’t any more of an anti-quantum-mechanical crank than Max Born was. I haven’t ever seen him advocate hidden variables. Moreover, I don’t know what Lubos has contributed to the topic of quantum probability other than a vitriolic repetition of textbook explanations. By contrast, Scott has a number of important results in quantum probability, especially if you take it to include quantum computation as a subfield.

I guess that what I call neo-Copenhagen in reference to myself is what Matt Leifer calls “QBayesian”. Yeah, basically the “QBayesian” viewpoint makes a lot of sense to me, although I’m not that happy with the neologism. Maybe one can say “quantum Bayesian”?

The fundamental state of a quantum object is a density operator or, more generally, a dual vector on its algebra of observables. It is a state in the Bayesian sense. That about sums up my view of it. All that I would add is that I don’t endorse this view to throw a hat into the philosophy ring; I like it because it’s the best way for me to understand the science and mathematics.

One other remark: Pusey et al explore a quantum version of a well-known aspect of classical Bayesianism. Two classical Bayesians cannot rationally assign classical states (or distributions) with disjoint support to the same object. Disjoint support means exactly that each one believes something that the other one thinks is impossible.
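The classical fact can be made concrete with a toy calculation (the distributions below are hypothetical numbers, just for illustration): if two observers’ distributions have disjoint support, then whatever outcome actually occurs, at least one of them had assigned it probability 0 — exactly the irrationality criterion from the post.

```python
# Toy illustration (hypothetical numbers): two Bayesians' distributions
# over the same four outcomes. Disjoint supports mean that, whatever
# outcome occurs, at least one observer assigned it probability 0.

alice = {"a": 0.5, "b": 0.5, "c": 0.0, "d": 0.0}
bob   = {"a": 0.0, "b": 0.0, "c": 0.7, "d": 0.3}

def supports_intersect(p, q):
    """True iff some outcome gets nonzero probability from both observers."""
    return any(p[x] > 0 and q[x] > 0 for x in p)

# With disjoint supports, every outcome refutes somebody:
assert not supports_intersect(alice, bob)
assert all(alice[x] == 0 or bob[x] == 0 for x in alice)

# Soften Bob's certainty slightly and the assignments become compatible:
bob2 = {"a": 0.1, "b": 0.0, "c": 0.6, "d": 0.3}
assert supports_intersect(alice, bob2)
```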

The wrinkle is that in the quantum case, “intersecting support” should be taken in the strict sense, with the extra assumptions that Pusey et al adopt. In other words, if one observer believes density operator ρ and the other one simultaneously believes σ, then ρ and σ should have a common pure state |ψ> in their supports. It isn’t enough that the supports of ρ and σ are non-orthogonal.
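This “strict sense” condition can be checked numerically: the supports of ρ and σ are subspaces, and they contain a common pure state iff the subspaces intersect nontrivially, i.e. dim(S_ρ) + dim(S_σ) − dim(S_ρ + S_σ) > 0. A sketch with NumPy (the example states are mine, not from the thread):

```python
import numpy as np

def support_basis(rho, tol=1e-10):
    """Orthonormal basis (columns) for the support of a density operator."""
    vals, vecs = np.linalg.eigh(rho)
    return vecs[:, vals > tol]

def share_pure_state(rho, sigma, tol=1e-10):
    """True iff the supports of rho and sigma intersect nontrivially, i.e.
    some pure state lies in both: dim(S_rho)+dim(S_sigma)-dim(S_rho+S_sigma) > 0."""
    A, B = support_basis(rho, tol), support_basis(sigma, tol)
    joint = np.hstack([A, B])
    return A.shape[1] + B.shape[1] - np.linalg.matrix_rank(joint, tol) > 0

# Qubit example: |0><0| and |+><+| are non-orthogonal, but their supports
# are two distinct rays, so they share no common pure state.
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.outer(ket0, ket0)
rhop = np.outer(ketp, ketp)
assert not share_pure_state(rho0, rhop)   # non-orthogonality is not enough

# Mix each with a little of the maximally mixed state: both supports become
# the whole space, so common pure states exist and the assignments coexist.
I2 = np.eye(2) / 2
assert share_pure_state(0.9 * rho0 + 0.1 * I2, 0.9 * rhop + 0.1 * I2)
```

The |0>/|+> pair is exactly the kind of case the distinction is about: non-orthogonal supports, yet no shared pure state.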

Greg Zuckerberg said

“Maybe one can say ‘quantum Bayesian’?”

Not bad!! But it would sell better among computer scientists if it had “social network” in it.

How about the “quantum Bayesian network”?

Actually, the rumor that Sheldon Cooper is modeled after Lubos Motl is not well attested. (In fact, I haven’t found anything earlier than a 2008 blog thread in which Lubos himself introduced the idea.) The Wikipedia page says that Cooper is modeled after a coworker of the show’s writer Bill Prady, when Prady worked as a computer programmer many years ago.

http://en.wikipedia.org/wiki/Sheldon_Cooper

Re: #35: Having seen this paper, I suspect that the term “QBayesian” is indeed a contraction of “quantum Bayesian.”

Dr. Aaronson, you write:

“But this isn’t what PBR obtain—in fact, I think I have an easy proof that this is impossible to obtain.”

(I suspect the proof is wrong; otherwise, you would have found it already.) If I understand correctly, the result that PBR actually get is this:

To explain why I think this result is different from the one you stated, I’ll go to one of your examples. You assume:

You conclude:

But how does the conclusion follow from the assumption? The two people in the assumption (let’s call them Alice and Bob) have different information. Alice, based on the information she has, describes the system using |0>. Bob, based on the information he has, describes the system using |+>. Since Alice’s information is different from Bob’s, how can we conclude it’s rationally possible for Alice to describe the composite system T using (say) the state |0>|+>? Similarly, how can we conclude it’s rationally possible for Bob to describe the composite system T using the state |0>|+>?

Following this line of reasoning, I get the feeling that when the PBR result is rephrased in your language, it comes out something like this:

But this looks suspiciously similar to the old Brun, Finkelstein and Mermin result. So now I’m more skeptical that PBR have actually proven something new. (In particular, the quantum “intersecting supports” result that Dr. Kuperberg mentions in #36 was proven by BFM.)

[…] lots of headlines. Most people my colleagues at Nature spoke to were quite enthusiastic, whereas Scott Aaronson didn’t seem to see that much of a surprise. Matt Leifer has an informative, quite detailed […]

[…] both of which rely on phenomena that aren’t possible in classical physics. Physics bloggers Scott Aaronson, Lubos Motl, David Wallace and Matt Leifer have posted detailed reviews with different opinions. […]

Greg Kuperberg #34, 35, 36, 38: Thanks for the comments!

“I haven’t ever seen [Scott] advocate hidden variables.”

In the interest of full disclosure, I got extremely interested in hidden variables at one point in grad school, and I even wrote a whole long paper called Quantum Computing and Hidden Variables, which studies the computational complexity of sampling an entire hidden-variable trajectory in discrete hidden-variable theories sort of like Bohmian mechanics. The main results were that, by sampling trajectories, you could solve Graph Isomorphism in polynomial time (and indeed, all problems in the class SZK), and you could also do Grover search in ~N^{1/3} queries, but not fewer. In other words, the hidden-variable model can do a bit more than standard quantum computing, but it still probably doesn’t let you solve NP-complete problems in polynomial time. To this day, this represents almost the only model I know that seems to generalize BQP, but only “slightly” so.

While you might have trouble believing this, the hidden-variable work—and specifically, my desire to prove an oracle separation between the hidden-variable quantum computing model and ordinary BQP—provided the original impetus for the collision lower bound.

However, partly as a result of this work, I came to regard all hidden-variable theories as basically creative mathematical stories that one can tell about quantum mechanics. Sometimes those stories have led to very interesting research questions—and that arguably makes them worth thinking about. But, much like in the case of religions, even if we wanted to believe one of the stories, it’s doubtful that we could ever have scientific grounds to accept one of them in preference to all the others.

I say old chap, for grossly exaggerated, disingenuous science press releases from the UK, nobody can hold a candle to Imperial College. If the Cameron government wants to cut wasteful spending, they should take an ax to Imperial College.

There is nothing terrible in and of itself about taking an interest in hidden variables, or even denying quantum probability outright, or even being a young-Earth creationist, *provided* that you can stay honest and make good use of unorthodox opinions. It is a remarkable fact that John Bell, of all people, was at heart a quantum probability denialist. Bell really took the hard road, because he never fully changed his mind; he just admitted that he was wrong in the narrow sense. More conservatively, there are times when it helps to have been on the other side, or to have been open to the other side, to understand the issues. For instance, there is the famous case of Richard Muller, who until recently was a global warming denialist, but at heart is an honest physicist. I wouldn’t say that Muller’s study of global warming is the most important one out there, but it is decent scientific work, and he is also now in a better position than many people to understand and explain global warming denialism.

I think that it is interesting, as an abstract result, to show that a class of hidden variable models would yield even more computational power than BQP. The only problem is that, as you say, the models are too creative, or in many cases too vague, for a specific result of this type to be all that satisfying.

Anyway, for full “disclosure”, I came to be interested in quantum probability from a kind of denialism, and at a later date the same thing happened with quantum computation. When I took quantum mechanics as an undergraduate, I thought that what was being discussed was a cloud model, not a probability cloud but a material cloud of some kind. The course was too easy, and it was only in the last three weeks that I realized that I did not understand what a fermion was. I didn’t understand it because the idea of a JOINT wave function or vector state is (in my opinion) untenable if you think of a wave function as a physical cloud. So I didn’t realize that joint wave functions are the actual model, and therefore I was not able to antisymmetrize a wave function.

Then at Davis I heard about quantum computation, and at first I was sure that it was some sort of slogan for a constant factor speedup with quantum devices. But, just to make sure, I read Preskill’s notes on quantum error correction and algorithms. In fact in an initial stage I was convinced largely by argument by authority. I didn’t quite understand what was being said, but it was clear that the material was written by serious people.

I had (briefly) been skeptical of QC because it just did not occur to me that the universe could have more computational power than a 3D classical cellular automaton. I was never in the camp of people who think that the universe actually is a cellular automaton — not after learning special relativity. However, I figured that there must be important cutoffs that come to the same thing if quantum field theory makes sense. There are indeed bounds on the ability of space to store information. In fact the ultimate bound from quantum gravity is even worse, it’s a two-dimensional bound rather than a three-dimensional bound. Also on the transistor scale, there is a quasi-two-dimensional bound coming from the need to remove heat. But the intuitive 3-dimensional bound is roughly correct. The surprise is that it is a bound on quantum information, which still leaves more computational power than classical information.

I enjoyed reading your explanation and comments about the PBR result. Reading your “final remark”, where you refer to PBR’s claim that an exponential amount of information is needed to classically “simulate” quantum systems, and then your reference to known quantum/classical communication complexity separations, got me wondering: Can one actually deduce a “new” quantum/classical communication complexity separation directly from the PBR result?

It doesn’t seem straightforward to me.

Greg Kuperberg #45:

Your last paragraph is rather difficult to unravel, but looks very interesting. Have you (or someone else) explained your thoughts in more detail somewhere?

Would you have good pointers on the ideas you mention?

Thanks.

Hello Greg,

Yes, I am curious too to know a little more about your last paragraph. Care to share a little more?

Thanks in advance

-Ungrateful

Richard Cleve #46: Good question! No, I don’t see how to get a communication complexity separation from PBR, and would be extremely interested if someone managed to do that.

But at least, I believe I now understand what PBR were saying in the quoted paragraph. Their result says that, under their assumptions, the “ontic state” λ uniquely determines the quantum state |ψ>, and therefore must include |ψ> as a component. So, if you thought there might be some clever choice of λ that could take exponentially fewer classical bits to encode than |ψ>, then you’re wrong.

Needless to say, I didn’t think that—among many other reasons, because of the communication complexity separations, which demonstrate the same fact very convincingly (as Barrett himself seems to agree, in the talk linked to in comment #7).

Upthread:

“If the Cameron government wants to cut wasteful spending, they should take an ax to Imperial College.”

<headdesk>

Greg #45: Just one quick followup.

“I think that it is interesting, as an abstract result, to show that a class of hidden variable models would yield even more computational power than BQP. The only problem is that, as you say, the models are too creative, or in many cases too vague, for a specific result of this type to be all that satisfying.”

Actually, if you look at my paper, you’ll see that the algorithms for solving the collision problem in O(1) queries (and hence graph isomorphism, etc. in polynomial time), as well as for solving the Grover problem in O(N^{1/3}) queries, are extremely simple and hardly need any specific properties from the hidden-variable model. So I fully expect that those algorithms, or variants of them, would work for any “reasonable” hidden-variable model that anyone wrote down (“reasonable” of course being a relative term here). And as for the Ω(N^{1/3}) lower bound, that doesn’t use any specific properties of the hidden-variable model at all—it’s just a hidden-variable analogue of the BBBV lower bound.

Re comments #47 and #48: This is not something that I ever completely thought through, and I can’t give you great references. If you want expanded or maybe just clearer remarks, I can try:

(1) It is reasonable to assume an upper bound on energy density for a computer. The computer cannot reasonably use stronger and stronger forces or higher and higher temperatures. For the most part it is limited to the electromagnetic force between electrons, and room temperature. With an upper bound on energy density, you should expect an upper bound on the logarithm of the number of states available per unit volume of space. At the same time, signals cannot travel faster than light. This limits the power of a computer to something comparable to a 3D cellular automaton. However, it can be a quantum cellular automaton.

(2) Even with a constant upper bound on energy density, a computer that is too large at constant density would collapse into a black hole. So the density of a very large computer would have to be small, and the correct upper bound on storage is then some number of bits or qubits per surface area rather than per volume. This is related to something called the holographic principle in physics that I haven’t studied. In any case practical computers are obviously very far away from this gravity upper bound.

(3) On the other hand, transistors in real computers are not very far away from melting. Even though many computers look 3-dimensional, most of the geometry of a computer is within each chip of the computer, and that geometry is almost completely 2-dimensional. One reason for that is the photolithography used to make the chips. But another reason is that there is no way to carry away the heat from a 3-dimensional block of transistors. Without that problem you could sandwich many chips together in a sort-of 3-dimensional pile. The heat problem effectively limits real computers to the power of 2-dimensional cellular automata. However, this 2-dimensional geometry is mostly used to simulate a RAM machine. It cannot be an efficient simulation, but it is what happens in practice, since most higher-level languages create a RAM machine environment for software. It’s also a pain to design algorithms for a 2D computational grid rather than for a RAM machine.

There is something in the very beginning of this problem that I don’t get:

“suppose I describe a physical system using a pure state Psi, and you describe the same system using a different pure state Phi != Psi”

Why would I (or you) describe a system by a pure state if I’m not sure of it? This sounds pretty irrational to me. I mean, pure states are defined as the common eigenvectors of a complete set of commuting observables (CSCO), aren’t they? Therefore, it is only after having measured all these observables that I can really assign a pure state to my system.

I think I don’t get what the problem is… let alone its solution…

Fernando #53: The key point is that PBR don’t want to presuppose that quantum mechanics provides the right description of nature (if you do, then the desired conclusion is indeed obvious, as you say). Rather, much like in Bell’s Theorem, they want to allow for the possibility that there are “hidden variables” of which ψ provides some incomplete statistical description. And they then want to derive a contradiction, using only certain measurements that can in fact be performed in quantum mechanics (but not presupposing that the outcomes of those measurements will be governed by the Born probabilities and no additional information besides that).

Greg Kuperberg says:

“But another reason is that there is no way to carry away the heat from a 3-dimensional block of transistors.”

Currently that is mainly an engineering problem. You could consider having a cube with heat-transporting gas or something. There are ways to go.

But otherwise, yes:

-> http://en.wikipedia.org/wiki/Von_Neumann-Landauer_limit

-> http://en.wikipedia.org/wiki/Reversible_computing

A philosophical question of some interest is, why doesn’t Nature get warm when it computes so much at every point in space?

Actually, Yatima, it is more than an engineering problem. The rate at which a region of space can be cooled scales as the surface area, whereas the heat produced scales as the number of irreversible gates. For a 2D array these scale in the same way, but for a 3D array the heating scales as the volume (R^3) whereas the cooling scales as the surface area of a bounding box (R^2). Clearly you need to balance the rate at which heat is produced with the rate at which it is removed, and hence you have a scaling problem with 3D arrays. This is entirely independent of the cooling mechanism.
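The scaling argument can be sketched with toy numbers (all three constants below are made up for illustration, not real device figures): heat production for a 3D block grows as R^3 while removable heat grows as R^2, so there is a maximum sustainable size, whereas for a 2D array both grow as R^2 and the ratio is size-independent.

```python
# Toy numbers (all hypothetical) for the volume-vs-surface scaling argument:
GATE_POWER = 1e-9      # watts dissipated per irreversible gate (assumed)
GATE_DENSITY = 1e12    # gates per cubic metre (assumed)
COOLING_FLUX = 1e5     # removable watts per square metre of surface (assumed)

def heat_balance_3d(R):
    """(heat produced, heat removable) for a cube of side R metres."""
    produced = GATE_POWER * GATE_DENSITY * R**3   # scales as volume
    removable = COOLING_FLUX * 6 * R**2           # scales as surface area
    return produced, removable

# Setting produced = removable gives the maximum sustainable side length:
R_max = 6 * COOLING_FLUX / (GATE_POWER * GATE_DENSITY)

p, r = heat_balance_3d(0.5 * R_max)
assert p < r   # below R_max the cube can be cooled
p, r = heat_balance_3d(2 * R_max)
assert p > r   # above R_max heat production outruns cooling

# A 2D array has no such ceiling: heat and cooling both scale as R^2,
# so growing the array leaves the produced/removable ratio unchanged.
```

Whatever the constants, the crossover exists for any 3D block; better cooling only moves R_max, it doesn’t remove it, which is Joe’s point.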

Engineering or not, it’s quite striking that real computers are very close to two-dimensional, and yet they are mostly used in a RAM machine mode with an emulation of complete circuit connectivity.

Scott, if you don’t believe the hidden variable interpretations are “true” in the end, does that mean you are more sympathetic to MWI?

Follow up to the original paper:

http://arxiv.org/abs/1111.6304

Scott, does this impact your previous conclusions?

Mike #59: No, not really, since I never took issue with PBR’s “factorizability” assumption in the first place. It looks nice though.

Here is a comment that Matt Leifer posted regarding the newest paper:

“As far as I can see, this paper is a fairly straightforward extension of PBR, but I only think that one of the weakened constraints is conceptually interesting. The original proof required a factorizability condition, i.e. for product states you have a Cartesian product of ontic state spaces and the distribution is independent over the factors. This can be replaced by a “local compatibility” condition, which is just the condition that if lambda is a possible ontic state for a single copy of a bunch of different states, then n copies of lambda is possible for any tensor product of n states chosen from that set. This drops the independence part of the assumption. Why this is true is very easy to see, since this is the only property of factorizability used in the original PBR result.

Hall also claims to have weakened this further to a condition of “compatibility”. This is supposed to go beyond reductionist models, which say that each system has its own individual ontic properties and the properties of composite systems are simply the collection of properties of all the parts. Hall tries to go beyond this by allowing the ontic state space of two systems to be arbitrarily different from the cartesian product of the ontic state spaces of the individual systems. I don’t think this has been achieved, since one still needs to know how the properties of the global system are related to the properties of the subsystems. Hall says that if we know that lambda is compatible with some states of one system, then we need only know that lambda is compatible with n-fold products of those states. However, since the state spaces are completely distinct, I don’t think that it makes sense to consider lambda as a possible ontic state for both a subsystem and the full composite system. This is not the case in the original theorem, or in the version with local compatibility, in which case the state on the global system is n copies of lambda rather than just one. Therefore, I don’t think that this part of the paper makes much sense.

Hall also points out that the probability distribution over the ontic state need not be independent of the choice of measurement, since only one measurement is considered for each pair of states. Whilst this is true, and perhaps interesting because it places a constraint on certain types of retrocausal theory, it does not allow the original PBR conclusion to be drawn. If another choice of measurement were made then the distributions could overlap and the quantum state would be epistemic. It is this loophole that I hope to exploit in developing an epistemic retrocausal theory. Perhaps this is worth saying, but it is certainly not groundbreaking.”

Scott,

Apparently a third paper on this issue appeared and somehow I missed it 😉

http://arxiv.org/PS_cache/arxiv/pdf/1111/1111.6597v1.pdf

The abstract reads:

“Given the wave function associated with a physical system, quantum theory allows us to compute predictions for the outcomes of any measurement. Since, within quantum theory, a wave function corresponds to an extremal state and is therefore maximally informative, one possible view is that it can be considered an (objective) physical property of the system. However, an alternative view, often motivated by the probabilistic nature of quantum predictions, is that the wave function represents incomplete (subjective) knowledge about some underlying physical properties. Recently, Pusey et al. showed that the latter, subjective interpretation would contradict certain physically plausible assumptions, in particular that it is possible to prepare multiple systems such that their (possibly hidden) physical properties are uncorrelated. Here we present a novel argument, showing that a subjective interpretation of the wave function can be ruled out as a consequence of the completeness of quantum theory. This allows us to establish that wave functions are physical properties, using only minimal assumptions. Specifically, the (necessary) assumptions are that quantum theory correctly predicts the statistics of measurement outcomes and that measurement settings can (in principle) be chosen freely.”

Perhaps you saw this already.

Since this discussion is still going on, I wanted to share one of my favorite paragraphs by P.A.M. Dirac. (The Principles of Quantum Mechanics, 4th Ed, Pg 9)

“Some time before the discovery of quantum mechanics people realized that the connexion between light waves and photons must be of a statistical character. What they did not clearly realize, however, was that the wave function gives information about the probability of one photon being in a particular place and not the probable number of photons in that place. The importance of the distinction can be made clear in the following way. Suppose we have a beam of light consisting of a large number of photons split up into two components of equal intensity. On the assumption that the intensity of a beam is connected with the probable number of photons in it, we should have half the total number of photons going into each component. If the two components are now made to interfere, we should require a photon in one component to be able to interfere with one in the other. Sometimes these two photons would have to annihilate one another and other times they would have to produce four photons. This would contradict the conservation of energy. The new theory, which connects the wave function with probabilities of one photon, gets over the difficulty by making each photon go partly into each of the two components. Each photon then interferes only with itself. Interference between two different photons never occurs.

The association of particles with waves discussed above is not restricted to the case of light, but is, according to modern theory, of universal applicability.”

But why should photons annihilate?

And does it make sense to distinguish between photons?

Last but not least: “Each photon then interferes only with itself.” This statement is definitely wrong!!

It seems to depend on where you look at it from.

A lot of people want to define ‘photons’ as some sort of ‘excitations’ of waves. If you do, you can then start ascribing all sorts of properties to them, not only a ‘frequency’.

“According to the wave model of light, the speed of the electrons should be related to the intensity of the light. But that’s not what happens. In reality the speed of the electrons depends only on the frequency of light, and the light intensity determines the number of electrons that fly off.” (about the photoelectric effect.)

To me a photon is a photon, not a wave though.

I still fail to see how this paper provides anything but a more confused version of the EPR/Bell inequality result.

As for the philosophical issue, that result already required you to reject either locality, realism about the physical properties of an object before measurement, or the assumption that we could choose experimental conditions (measurement angle) without having that choice spied on by the experiment we were about to perform.

Admiring lurker here – I just want to say, Lubos Motl is simply a troll, and it’s time everyone in the extended community of those interested in QM from a computational perspective simply put him on auto-ignore. I’ve had the unpleasant experience of dealing with him in another venue and it occurred to me at some point that he is a fanatic, simply wants attention, and doesn’t have any real intellectual honesty or firepower to back up his constant stream of ad-hominem abuse. These days, I let him talk to the hand.

[…] (a class of hidden-variable theories that was also the subject of the recent PBR Theorem, discussed previously on this blog). My talk also included material from my old paper Quantum Computing and Hidden […]