With two looming paper deadlines, two rambunctious kids, an undergrad class, program committee work, faculty recruiting, and an imminent trip to Capitol Hill to answer congressional staffers’ questions about quantum computing (and for good measure, to give talks at UMD and Johns Hopkins), the only sensible thing to do is to spend my time writing a blog post.

So: a bunch of people asked for my reaction to the new Nature Communications paper by Daniela Frauchiger and Renato Renner, provocatively titled “Quantum theory cannot consistently describe the use of itself.”  Here’s the abstract:

Quantum theory provides an extremely accurate description of fundamental processes in physics.  It thus seems likely that the theory is applicable beyond the, mostly microscopic, domain in which it has been tested experimentally.  Here, we propose a Gedankenexperiment to investigate the question whether quantum theory can, in principle, have universal validity.  The idea is that, if the answer was yes, it must be possible to employ quantum theory to model complex systems that include agents who are themselves using quantum theory.  Analysing the experiment under this presumption, we find that one agent, upon observing a particular measurement outcome, must conclude that another agent has predicted the opposite outcome with certainty.  The agents’ conclusions, although all derived within quantum theory, are thus inconsistent.  This indicates that quantum theory cannot be extrapolated to complex systems, at least not in a straightforward manner.

I first encountered Frauchiger and Renner’s argument back in July, when Renner (who I’ve known for years, and who has many beautiful results in quantum information) presented it at a summer school in Boulder, CO where I was also lecturing.  I was sufficiently interested (or annoyed?) that I pulled an all-nighter working through the argument, then discussed it at lunch with Renner as well as John Preskill.  I enjoyed figuring out exactly where I get off Frauchiger and Renner’s train—since I do get off their train.  While I found their paper thought-provoking, I reject the contention that there’s any new problem with QM’s logical consistency: for reasons I’ll explain, I think there’s only the same quantum weirdness that (to put it mildly) we’ve known about for quite some time.

In more detail, the paper makes a big deal about how the new argument rests on just three assumptions (briefly, QM works, measurements have definite outcomes, and the “transitivity of knowledge”); and how if you reject the argument, then you must reject at least one of the three assumptions; and how different interpretations (Copenhagen, Many-Worlds, Bohmian mechanics, etc.) make different choices about what to reject.

But I reject an assumption that Frauchiger and Renner never formalize.  That assumption is, basically: “it makes sense to chain together statements that involve superposed agents measuring each other’s brains in different incompatible bases, as if the statements still referred to a world where these measurements weren’t being done.”  I say: in QM, even statements that look “certain” in isolation might really mean something like “if measurement X is performed, then Y will certainly be a property of the outcome.”  The trouble arises when we have multiple such statements, involving different measurements X1, X2, …, and (let’s say) performing X1 destroys the original situation in which we were talking about performing X2.

But I’m getting ahead of myself.  The first thing to understand about Frauchiger and Renner’s argument is that, as they acknowledge, it’s not entirely new.  As Preskill helped me realize, the argument can be understood as simply the “Wigner’s-friendification” of Hardy’s Paradox.  In other words, the new paradox is exactly what you get if you take Hardy’s paradox from 1992, and promote its entangled qubits to the status of conscious observers who are in superpositions over thinking different thoughts.  Having talked to Renner about it, I don’t think he fully endorses the preceding statement.  But since I fully endorse it, let me explain the two ingredients that I think are getting combined here—starting with Hardy’s paradox, which I confess I didn’t know (despite knowing Lucien Hardy himself!) before the Frauchiger-Renner paper forced me to learn it.

Hardy’s paradox involves the two-qubit entangled state

$$\left|\psi\right\rangle = \frac{\left|00\right\rangle + \left|01\right\rangle + \left|10\right\rangle}{\sqrt{3}}.$$

And it involves two agents, Alice and Bob, who measure the left and right qubits respectively, both in the {|+〉,|-〉} basis.  Using the Born rule, we can straightforwardly calculate the probability that Alice and Bob both see the outcome |-〉 as 1/12.

So what’s the paradox?  Well, let me now “prove” to you that Alice and Bob can never both get |-〉.  Looking at |ψ〉, we see that conditioned on Alice’s qubit being in the state |0〉, Bob’s qubit is in the state |+〉, so Bob can never see |-〉.  And conversely, conditioned on Bob’s qubit being in the state |0〉, Alice’s qubit is in the state |+〉, so Alice can never see |-〉.  OK, but since |ψ〉 has no |11〉 component, at least one of the two qubits must be in the state |0〉, so therefore at least one of Alice and Bob must see |+〉!
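Both the 1/12 figure and the "proof's" premises are easy to check numerically. Here's a sketch with numpy (qubit ordering Alice ⊗ Bob; variable names are mine, not from the paper):

```python
import numpy as np

# Hardy state |psi> = (|00> + |01> + |10>)/sqrt(3), ordered Alice (x) Bob
psi = np.array([1, 1, 1, 0]) / np.sqrt(3)

plus = np.array([1, 1]) / np.sqrt(2)    # |+>
minus = np.array([1, -1]) / np.sqrt(2)  # |->

# Born rule: P(Alice and Bob both measure |->) = |<-,-|psi>|^2
p_both_minus = abs(np.kron(minus, minus) @ psi) ** 2
print(p_both_minus)  # approx 0.0833 = 1/12

# Conditioned on Alice's qubit being |0>, Bob's (normalized) state is exactly |+>,
# so Bob "can never" see |->:
bob_given_0 = psi.reshape(2, 2)[0]
bob_given_0 = bob_given_0 / np.linalg.norm(bob_given_0)
print(abs(minus @ bob_given_0) ** 2)  # approx 0
```

So each conditional statement is individually correct, and yet the joint probability of two |-〉 outcomes is nonzero.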

When it’s spelled out so plainly, the error is apparent.  Namely, what do we even mean by a phrase like “conditioned on Bob’s qubit being in the state |0〉,” unless Bob actually measured his qubit in the {|0〉,|1〉} basis?  But if Bob measured his qubit in the {|0〉,|1〉} basis, then we’d be talking about a different, counterfactual experiment.  In the actual experiment, Bob measures his qubit only in the {|+〉,|-〉} basis, and Alice does likewise.  As Asher Peres put it, “unperformed measurements have no results.”

Anyway, as I said, if you strip away the words and look only at the actual setup, it seems to me that Frauchiger and Renner’s contribution is basically to combine Hardy’s paradox with the earlier Wigner’s friend paradox.  They thereby create something that doesn’t involve counterfactuals quite as obviously as Hardy’s paradox does, and so requires a new discussion.

But to back up: what is Wigner’s friend?  Well, it’s basically just Schrödinger’s cat, except that now it’s no longer a cat being maintained in coherent superposition but a person, and we’re emphatic in demanding that this person be treated as a quantum-mechanical observer.  Thus, suppose Wigner entangles his friend with a qubit, like so:

$$\left|\psi\right\rangle = \frac{\left|0\right\rangle \left|FriendSeeing0\right\rangle + \left|1\right\rangle \left|FriendSeeing1\right\rangle}{\sqrt{2}}.$$

From the friend’s perspective, the qubit has been measured and has collapsed to either |0〉 or |1〉.  From Wigner’s perspective, no such thing has happened—there’s only been unitary evolution—and in principle, Wigner could even confirm that by measuring |ψ〉 in a basis that included |ψ〉 as one of the basis vectors.  But how can they both be right?
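To see concretely what Wigner's confirming measurement would show, here's a toy numerical sketch, under the wildly unrealistic assumption that the friend's entire memory is one qubit: if only unitary evolution occurred, projecting onto |ψ〉 succeeds with certainty, whereas an objective collapse would cut the success probability to 1/2.

```python
import numpy as np

# |psi> = (|0>|FriendSeeing0> + |1>|FriendSeeing1>)/sqrt(2),
# with the friend's memory modeled (very crudely) as a second qubit
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Wigner measures in a basis containing |psi>. If only unitary evolution
# happened, the projection onto |psi> succeeds with probability 1:
p_unitary = abs(psi @ psi) ** 2
print(round(p_unitary, 9))   # 1.0

# If the friend's observation objectively collapsed the lab to |0>|Seeing0>
# or |1>|Seeing1> (probability 1/2 each), the projection succeeds only half the time:
branch0 = np.array([1, 0, 0, 0])
branch1 = np.array([0, 0, 0, 1])
p_collapse = 0.5 * abs(psi @ branch0) ** 2 + 0.5 * abs(psi @ branch1) ** 2
print(round(p_collapse, 9))  # 0.5
```

In other words, the two stories make different operational predictions for Wigner, which is exactly why the thought experiment has teeth.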

Many-Worlders will yawn at this question, since for them, of course “the collapse of the wavefunction” is just an illusion created by the branching worlds, and with sufficiently advanced technology, one observer might experience the illusion even while a nearby observer doesn’t.  Ironically, the neo-Copenhagenists / Quantum Bayesians / whatever they now call themselves, though they consider themselves diametrically opposed to the Many-Worlders (and vice versa), will also yawn at the question, since their whole philosophy is about how physics is observer-relative and it’s sinful even to think about an objective, God-given “quantum state of the universe.”  If, on the other hand, you believed both that

1. collapse is an objective physical event, and
2. human mental states can be superposed just like anything else in the physical universe,

then Wigner’s thought experiment probably should rock your world.

OK, but how do we Wigner’s-friendify Hardy’s paradox?  Simple: in the state

$$\left|\psi\right\rangle = \frac{\left|00\right\rangle + \left|01\right\rangle + \left|10\right\rangle}{\sqrt{3}},$$

we “promote” Alice’s and Bob’s entangled qubits to two conscious observers, call them Charlie and Diane respectively, who can think two different thoughts that we represent by the states |0〉 and |1〉.  Using far-future technology, Charlie and Diane have been not merely placed into coherent superpositions over mental states but also entangled with each other.

Then, as before, Alice will measure Charlie’s brain in the {|+〉,|-〉} basis, and Bob will measure Diane’s brain in the {|+〉,|-〉} basis.  Since the whole setup is mathematically identical to that of Hardy’s paradox, the probability that Alice and Bob both get the outcome |-〉 is again 1/12.

Ah, but now we can reason as follows:

1. Whenever Alice gets the outcome |-〉, she knows that Diane must be in the |1〉 state (since, if Diane were in the |0〉 state, then Alice would’ve certainly seen |+〉).
2. Whenever Diane is in the |1〉 state, she knows that Charlie must be in the |0〉 state (since there’s no |11〉 component).
3. Whenever Charlie is in the |0〉 state, he knows that Diane is in the |+〉 state, and hence Bob can’t possibly see the outcome |-〉 when he measures Diane’s brain in the {|+〉,|-〉} basis.

So to summarize, Alice knows that Diane knows that Charlie knows that Bob can’t possibly see the outcome |-〉.  By the “transitivity of knowledge,” this implies that Alice herself knows that Bob can’t possibly see |-〉.  And yet, as we pointed out before, quantum mechanics predicts that Bob can see |-〉, even when Alice has also seen |-〉.  And Alice and Bob could even do the experiment, and compare notes, and see that their “certain knowledge” was false.  Ergo, “quantum theory can’t consistently describe its own use”!

You might wonder: compared to Hardy’s original paradox, what have we gained by waving a magic wand over our two entangled qubits, and calling them “conscious observers”?  Frauchiger and Renner’s central claim is that, by this gambit, they’ve gotten rid of the illegal counterfactual reasoning that we needed to reach a contradiction in our analysis of Hardy’s paradox.  After all, they say, none of the steps in their argument involve any measurements that aren’t actually performed!  But clearly, even if no one literally measures Charlie in the {|0〉,|1〉} basis, he’s still there, thinking either the thought corresponding to |0〉 or the thought corresponding to |1〉.  And likewise Diane.  Just as much as Alice and Bob, Charlie and Diane both exist even if no one measures them, and they can reason about what they know and what they know that others know.  So then we’re free to chain together the “certainties” of Alice, Bob, Charlie, and Diane in order to produce our contradiction.

As I already indicated, I reject this line of reasoning.  Specifically, I get off the train at what I called step 3 above.  Why?  Because the inference from Charlie being in the |0〉 state to Bob seeing the outcome |+〉 holds for the original state |ψ〉, but in my view it ceases to hold once we know that Alice is going to measure Charlie in the {|+〉,|-〉} basis, which would involve a drastic unitary transformation (specifically, a “Hadamard”) on the quantum state of Charlie’s brain.  I.e., I don’t accept that we can take knowledge inferences that would hold in a hypothetical world where |ψ〉 remained unmeasured, with a particular “branching structure” (as a Many-Worlder might put it), and extend them to the situation where Alice performs a rather violent measurement on |ψ〉 that changes the branching structure by scrambling Charlie’s brain.
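The point can be checked directly with the Born rule (a numpy sketch; ordering Charlie ⊗ Diane): once Alice's {|+〉,|-〉} measurement of Charlie is actually accounted for, Diane's conditional state given Alice's outcome |-〉 is exactly |1〉, which is not orthogonal to |-〉, so Bob sees |-〉 half the time, contrary to step 3.

```python
import numpy as np

psi = np.array([1, 1, 1, 0]) / np.sqrt(3)   # Charlie (x) Diane
minus = np.array([1, -1]) / np.sqrt(2)

# Alice measures Charlie in {|+>,|->}. Condition on her outcome |->:
diane_unnorm = minus @ psi.reshape(2, 2)       # (<-| (x) I)|psi>
p_alice_minus = diane_unnorm @ diane_unnorm    # 1/6
diane = diane_unnorm / np.sqrt(p_alice_minus)  # exactly |1>

# Step 3's inference ("Bob can't see |->") now fails:
p_bob_minus = abs(minus @ diane) ** 2          # 1/2, not 0
print(round(p_alice_minus * p_bob_minus, 6))   # 0.083333 = 1/12, as promised
```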

In quantum mechanics, measure or measure not: there is no if you hadn’t measured.

Unrelated Announcement: My awesome former PhD student Michael Forbes, who’s now on the faculty at the University of Illinois Urbana-Champaign, asked me to advertise that the UIUC CS department is hiring this year in all areas, emphatically including quantum computing. And, well, I guess my desire to do Michael a solid outweighed my fear of being tried for treason by my own department’s recruiting committee…

Another Unrelated Announcement: As of Sept. 25, 2018, it is the official editorial stance of Shtetl-Optimized that the Riemann Hypothesis and the abc conjecture both remain open problems.

### 157 Responses to “It’s hard to think when someone Hadamards your brain”

1. Quantum theory - Nature Paper 18 Sept | Page 2 | Physics Forums Says:

[…] is a recent comment by Scott Aaronson on Frauchiger's and Renner's paper: https://www.scottaaronson.com/blog/?p=3975   Lord Jestocost, Sep 25, 2018 at 4:09 […]

2. Usman Nizami Says:

Sir, how can a great mathematician like Michael Atiyah come up with a false proof of the Riemann hypothesis? He has been making mistakes in recent years. Is that because of his age? Does no one do good math in old age? What do you think? Atiyah has said, “Solve the Riemann hypothesis and you become famous. If you are famous already, you become infamous.”

And what is the problem with Inter-universal Teichmüller theory? Shinichi Mochizuki is a serious mathematician. How should one handle such complicated work?

3. Usman Nizami Says:

So Michael Atiyah’s proof of the Riemann hypothesis is wrong.

4. Peter Says:

Wow, they really went out of their way to rehash a pretty basic reformulation of psychologism, I guess this time with quantum mechanics. Maybe I’ve just had to read and reread too much Frege over the years but the second I saw “it must be possible to employ quantum theory to model complex systems that include agents who are themselves using quantum theory” in the abstract I knew exactly what the rest of the paper was going to be arguing for.

I think that your technical arguments against their conclusion are sound and do a great job of showing part of what the issue is. I would love to see them engage with the philosophy on this topic and defend why they think quantum mechanics opens the door to psychologism, rather than implying it ambiently and just focusing on a technical physical argument.

5. Jay Says:

So clear, thanks!

Unrelated question about entropy, computation and neural nets.

Because of adiabatic computing we could theoretically perform a computation as long as we want for free. Because of Landauer’s principle we know that measuring the result (and preparing the computation) must always incur some entropy cost.

Now, non-linearities are considered a key ingredient for deep learning (actually for any neural net, deep or shallow). However, because of the lines above, it seems that deep learning could theoretically rely on linear transforms only, and delay the true non-linearities until we want to measure the resulting net. Are there known caveats? For example, one could imagine that measuring the error gradient might require either a true non-linearity (and some entropy cost) or keeping track of all the examples (at no entropy cost, but at the cost of increased computation length or memory requirements). Do you know the answer, or where you would start looking for one?

6. Scott Says:

Peter #4: But this is actually not a question that can be resolved with verbal philosophy. The reason their paper was published in Nature is that they give a technical argument for why the axioms of QM, together with certain reasonable-looking auxiliary postulates, lead to a contradiction. That demands a response that meets their argument on the battlefield, for example by identifying an error, rejecting one of the postulates, or (as I think I did) identifying and rejecting an additional unstated postulate.

7. David Says:

Peter #4

“just focusing on a technical physical argument”

Well, they are physicists, not philosophers.

8. DarMM Says:

Scott,

Bob (I think I have the F,F’,W,W’ to Alice, Bob, Charlie, Diane map right! :)), when reasoning using the initial state, would conclude that he and Alice will get the outcome pair (okay, okay) 1/12 of the time. However, when reasoning about Charlie’s reasoning (concerning Diane), he would conclude this is impossible.

The bulk of the paper concerns Bob “transferring” Charlie’s conclusion to himself, via a special case of the Born Rule (P(E) = 1) and the assumption of consistent reasoning between agents.

However, your point is that one of the steps in the transference involves hearing about an Alice measurement. An Alice measurement is the exact scenario that would invalidate Charlie’s initial reasoning, and thus Bob should discard Charlie’s original reasoning about Diane.

Is this remotely correct?

9. Scott Says:

DarMM #8: Yes, that sounds right to me.

10. DarMM Says:

Scott #9: Thanks, kind of you to answer.

11. John K Clark Says:

The inability of the mathematical community to figure out whether the proofs of the abc conjecture and the Riemann hypothesis are valid makes me wonder if that could have implications for another unsolved problem: is P = NP? If they are not equal, then you’d expect it to be fundamentally easier to check a proof than to find one; but then why are world-class mathematicians unable to check them? If I have a valid proof of the abc conjecture, but it would take you as much brain power to understand it as it would for you to find a proof of it on your own, have I accomplished anything of value? Would there be any point in your reading it?

John K Clark

12. Daniel Says:

This seems like yet another example of someone trying to pound a round peg (quantum mechanics) into a square hole (classical measurement theory) and getting confused. The contradiction is with the antiquated assumptions of the authors, not with quantum mechanics.

13. Scott Says:

John #11: No, it’s much more prosaic than that. According to Erica Klarreich’s excellent Quanta article, Scholze identified the issues with Corollary 3.12 of Mochizuki’s paper not long after the paper came out, but held off on going public because he hoped someone else would do it, and it sounds like the same may have been true for other experts. As for Atiyah’s claimed half-page proof of RH, it looks like people were pointing out serious issues with it within hours.

14. Neel Krishnaswami Says:

Hi Scott, can you explain to me what people are trying to achieve with “quantum interpretations”? (This sounds more hostile than I mean it — I’m actually genuinely confused.)

To my understanding, undergrad QM consists of basically four claims:

1. States of an experimental system are described by elements of a Hilbert space.
2. Observables are self-adjoint operators of that Hilbert space.
3. The time evolution of an experimental system is described by the Schrodinger equation.
4. Observing an experimental system is described by Born’s law.

The repetition of “experimental system” here means that we are talking about controlled experiments: a system which is isolated from the rest of the universe, with the sole interaction being the process of observation. (This ideal is hard to achieve in practice, but experimental physicists and engineers are getting ever better at it.)

Since (a) Born’s law has a statistical, probabilistic character unlike the other three parts of QM, and (b) observation is a primitive concept of quantum mechanics, it is reasonable to wonder if it can be derived. That is, can we model both an observer and an experiment as a larger, interacting quantum system, and then derive the Born rule for the observer from an analysis of the joint system? (Basically, this is what I understand the measurement problem to be.)

This seems like a really great, natural question to me. If we can, that’s great news! We now know where Born’s law comes from, in the same way that statistical mechanics explains where thermodynamics comes from. If we can’t, that’s even better, because it means that there must be some deep physics we have overlooked, which make Born’s law work out!

But, it seems like to answer that question, we have to do the work: someone has to come up with a physically reasonable model of a coupled system consisting of an experiment and observer, and then actually solve the equations. Only then can the model be compared with the empirical fact of Born’s rule.

Quantum interpretations (like MWI) mostly seem like an assertion that if you wave your hands enough, you don’t need to do this work, a claim I find deeply dubious. But plenty of smart people — people who have thought much more about QM than me! — seem to think that there is something serious going on with these interpretations. So there must be something I’m missing.

15. Peter Says:

Scott #6, David #7

I feel as though both of you missed my point.

“I would love to see them engage with the philosophy on this topic and defend why they think quantum mechanics opens the door to psychologism, rather than implying it ambiently and just focusing on a technical physical argument.”

I said I would love *to see* them do this. As in, it would make me happy if they did this. So saying “well, they wrote a technical paper” isn’t a counter to what I said. I didn’t say they should be reprimanded and banished from academia for doing what they did; I said I would love it if they had engaged one of the glaring issues in their argument instead of only presenting a technical argument. I am not going to say that every physics paper requires half of it to be about philosophy; it doesn’t in the slightest. But why is my saying “I wish they had engaged the philosophical issues here,” with regard to a paper that makes claims about questions at the very heart of the grey area between physics and philosophy, reductionism and its ilk, refutable by pointing out that they aren’t philosophers?

Secondly, Scott, your second example of what should be done is exactly what I’m saying should be done. One of their postulates is that “contrary to philosophical consensus, this revivification of psychologism is valid.” That is a presupposition they used in their argument; it is not a conclusion drawn from their other presuppositions. My bringing this up is exactly what you described as “identifying and rejecting an additional unstated postulate.” So I agree completely that we should engage them on their own battlefield. But their battlefield is a grey area, and part of their assumptions involves opening the door to an almost universally dismissed school of thought, and as I said, I would have loved it if they had engaged that part of their argument.

Refuting psychologism isn’t about verbal philosophy. The rejection of a reduction of physical laws to psychological entities and then drawing conclusions about the nature of those laws given various postulates about those psychological entities isn’t just a matter of toying with language. Dismissing someone drawing universal physical and metaphysical conclusions based off of the internal psychological states of agents isn’t just ‘dismissing their language games as pseudo-problems.’

16. Scott Says:

Neel #14: Yes, what people are trying to accomplish with “interpretations” is basically just to understand how unitary evolution and measurement can coexist in the same universe, and to put it crudely, how the universe knows when to apply one rule and when the other. On the spectrum of possible positions, MWI is the extreme that answers this question by holding that only unitary evolution has any fundamental ontological status; “measurement” is just an approximate concept that observers use to describe their experience of not knowing which branch of the wavefunction they’re going to find themselves in (i.e., indexical uncertainty). You don’t have to like it — many people don’t! — but it’s certainly a natural point in possibility space; if Hugh Everett hadn’t proposed it then someone else would have.

17. Joe Says:

Scott, when are you speaking at JHU? I can’t find any information about the talk online.

18. Kevin Van Horn Says:

Forgive my ignorance, but how are the {|0>, |1>} and {|+>, |->} bases related? You mentioned the Hadamard transformation; is that what relates them?

19. Scott Says:

Kevin #18: Yes.

20. Aula Says:

Scott #13: It seems to me that Atiyah’s claimed proof of RH is in some ways a rerun of Deolalikar’s claimed proof that P!=NP. In each case, the claimed proof both has/had obvious elementary errors (Atiyah appears to have forgotten some basic facts about complex analysis) and is/was trying to prove too much (there are functions that don’t satisfy an analogue of RH but have all those properties of the Riemann zeta function that Atiyah uses), so it’s no wonder that people started to pick apart both attempts almost immediately.

21. Craig Gidney Says:

> *in my view it ceases to hold once we know that Alice is going to measure Charlie in the {|+〉,|-〉} basis*

I would describe this slightly differently. Consider how you would actually go about implementing a measurement in the {|0 BobSawAndThoughtAbout0〉+ |1 BobSawAndThoughtAbout1〉, |0 BobSawAndThoughtAbout0〉- |1 BobSawAndThoughtAbout1〉} basis. I would do it as follows:

Step 1: Uncompute those pesky “BobSawAndThoughtAbout” qubits. As in literally reverse time for Bob, so he unthinks his thoughts.
Step 2: The relevant information is now factored into a single qubit. Perform a {|+〉,|-〉} basis measurement on that qubit.
Step 3: Recompute the “BobSawAndThoughtAbout” qubits.

The reason that the measurement is a problem is that it forces us to uncompute Bob thinking about his initially-valid conclusion, then recompute him thinking the same things, even though the conclusions are no longer valid (because the initial qubits are no longer in the |00〉+|01〉+|10〉 state).

In order for Bob’s thoughts to actually be correct, he has to think something like “If this is before the uncomputation and recomputation, and I saw 0, then Alice is definitely in the + state. But if this is after the recomputation, I don’t know what Alice’s state is.”
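[Editorial sketch of the three steps above, in a toy model where the "SawAndThoughtAbout" register is a single memory qubit written from |0〉 by a CNOT; the variable names are illustrative, not from the comment:]

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# (|0>|Saw0> + |1>|Saw1>)/sqrt(2): memory qubit written from |0> by a CNOT
psi = CNOT @ np.kron(np.array([1, 1]) / np.sqrt(2), np.array([1, 0]))

# Step 1: uncompute the memory (run the CNOT backwards -- "unthink the thought")
after_uncompute = CNOT @ psi                  # (|0> + |1>)/sqrt(2) (x) |0>

# Step 2: the relevant information now sits in one qubit; a {|+>,|->}
# measurement is a Hadamard followed by a computational-basis measurement
after_hadamard = np.kron(H, I2) @ after_uncompute
print(np.round(after_hadamard, 6))            # |00>: outcome "+" with certainty

# Step 3: recompute the memory qubit (a no-op here, since the qubit is |0>)
recomputed = CNOT @ after_hadamard
```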

22. Craig Gidney Says:

Hm, actually, I think this “recompute with different thoughts” paradox has a classical analogue.

1. Alice and Bob are loaded into separate reversible classical computers.
2. We flip a coin to generate a random bit, then give Alice and Bob each a copy of that bit.
3. Suppose w.l.o.g. that the random bit is 0.
4. We run the computer for a bit. Alice thinks “My bit is 0, therefore Bob’s bit is 0.” This conclusion is valid.
5. We uncompute Bob back to the initial state, flip the bit we gave him, and recompute.
6. Bob’s bit must be in state 1. But using “transitivity of knowledge” it must be in state 0 because Alice validly concluded that his bit was in state 0. Paradox.

Obviously the mistake here is assuming that Alice’s conclusions about Bob’s state after step 3 must also apply to Bob’s state after step 5. It’s much easier to see the problem here than in the quantum case because the perturbation of Bob is directly described (we flip his bit), instead of hiding behind an anti-commuting measurement.
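[A throwaway editorial sketch of the six steps, with plain Python standing in for the reversible computers:]

```python
# Toy version of the reversible-computer story above.
random_bit = 0                                 # step 3: w.l.o.g. the coin came up 0
alice_bit, bob_bit = random_bit, random_bit    # step 2: both get a copy

# Step 4: Alice (validly, at this point) concludes Bob's bit equals hers
alice_conclusion_about_bob = alice_bit
assert bob_bit == alice_conclusion_about_bob

# Step 5: uncompute Bob, flip his bit, recompute
bob_bit = 1 - bob_bit

# Step 6: the "paradox" is just applying a stale conclusion to a new history
assert bob_bit != alice_conclusion_about_bob
print("Alice's old conclusion no longer describes Bob's current state")
```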

23. Scott Says:

Joe #17: Info for my JHU talk is here. It’s on Thursday at 10:30am.

24. Edward Measure Says:

Re Atiyah and the RH: Many years ago, two semi-famous physicists (SFP) were listening to Eddington, then an old man, expound one of the highly questionable theories of his old age. SFP1 to SFP2: “Is that going to happen to us?” SFP2: “Don’t worry – a genius like Eddington may go nuts, but guys like you just get dumber and dumber.”

25. Harry Johnston Says:

Neel #14:

That is, can we model both an observer and an experiment as a larger, interacting quantum system, and then derive the Born rule for the observer from an analysis of the joint system?

I believe so, sort of. But if I understand correctly, when you ask the question, “what is the probability that I will observe that the other observer got result X” you have to use the Born rule to answer it.

That’s not a trivial result. It means that it doesn’t matter when you apply the Born rule – you can apply it to the original measurement, or to your observation of the other observer, and you’ll always get the same answer, and that’s important. But it doesn’t really count as an independent derivation of the Born rule.

Obviously, the “actually solve the equations” bit doesn’t explicitly model a conscious observer, or even a real-world measuring device, but a simplified model of one. I believe one way you might model a measuring device is by allowing the state of the system being measured to interact with a thermodynamic reservoir, which introduces decoherence. You can sum over the microstates of the reservoir to turn the pure quantum state into a density matrix, and that gives you your classical probabilities – but you’re implicitly using the Born rule when you do that.
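[A minimal editorial sketch of the reservoir story above, with a single "environment" qubit standing in for the reservoir; the Born rule enters implicitly when probabilities are read off the diagonal:]

```python
import numpy as np

# System qubit in superposition a|0> + b|1>
a, b = np.sqrt(0.3), np.sqrt(0.7)
system = np.array([a, b])

# "Measuring device / reservoir": a CNOT copies the basis info into a second qubit
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
joint = CNOT @ np.kron(system, np.array([1, 0]))

# Density matrix of the joint pure state, then trace out the environment
rho = np.outer(joint, joint.conj())
rho_sys = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(np.round(rho_sys, 3))  # diag(0.3, 0.7): the off-diagonal coherences are gone
```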

I’m told that Everett actually did the necessary math way back in 1957.

[Epistemic status: mostly guesswork. I haven’t personally read the relevant articles, and I don’t know whether Everett used the approach I suggest above or something different.]

26. ppnl Says:

Scott #16

” Yes, what people are trying to accomplish with “interpretations” is basically just to understand how unitary evolution and measurement can coexist in the same universe, and to put it crudely, how the universe knows when to apply one rule and when the other.”

Why doesn’t decoherence answer this in a simple and obvious way?

You put a system in a box that isolates it and unitary evolution applies. If you allow a tiny interaction with the outside then the Born rule applies in a tiny way. You allow strong thermodynamic interactions and the Born rule is applied many many times in tiny increments and the system looks classical.

There is no unitary or Born rule. There is just the amount of quantum information leaking out. The Born rule is an expression of the lack of information and so by definition must be random.

27. Andrei Says:

Scott,

My take on these thought experiments involving “isolated” systems is that they are not possible, even in practice.

There is no way you can build a box such that an outside observer cannot find out, without opening it, whether there is a dead cat or a live cat inside. One can, for example, measure the electric, magnetic, and gravitational fields produced by the particles that make up the cat. These fields have unlimited range and cannot be completely blocked by any box, whatever material the box is made of.

In other words, all observers, no matter where they are have access to the same amount of information about the system under consideration, so all those paradoxes disappear.

Thank you,

Andrei

28. mjgeddes Says:

Both metaphysics and foundations of mathematics need to be clarified to understand quantum mechanics and crack the Riemann hypothesis.

Coming back to the ‘3 Worlds’ of Penrose (Physics, Mathematics and Cognition) , my current view is that 2 of them *are* fundamental (Physics and Mathematics) and 1 is *not* fundamental- Cognition is emergent and composed of the other two primitives, having no reality over and above the other two.

Budding philosophers of meta-physics that are realists about physical reality and abstract objects often want to unify mathematics and physics somehow, and the first ideas that leap to mind are that ‘all is math’ (modern platonism) or ‘all is information’ (‘it from qubit’). But this is precisely where all the confusion starts! The ‘it from qubit’ and Tegmark multiverse ideas try to have the physical world emerging from a foundation of information/math, but it just doesn’t make sense. This had me so confused for a long, long time. The fault is with the ideas! These ideas have really confused everyone and led them seriously astray!

Rejecting ‘it from qubit’ and Tegmark multiverse, I settled on the only other sensible possibility : physical and mathematical existence are mostly separate. I came to the conclusion that *both* physical and mathematical existence are co-foundations of reality, but neither is primary. One doesn’t ’emerge’ from the other. Whilst there may be some over-lap, most mathematical (abstract) objects *aren’t* physically realized. There just isn’t a single foundation of reality – there are *two* foundations.

So what is the nature of ‘information’ in this picture? I think it’s where physical and mathematical existence *do* overlap. Computation is the portion of mathematics that *is* physically realized. But *not* everything physical is computation. The ‘it from qubit’ project errs in the claim that ‘everything is computation/information’. I think the mistake is to stretch the definition of ‘computation’ to the point of it being meaningless. My new view is that only goal-directed systems that form symbolic representations of reality qualify as ‘computation’ – computers, brains and minds. The *mind* is made of information. Computers and brains too. But most of physical reality is not. So, in this picture, Cognition is synonymous with information (the portion of math that is physically realized).

This clarification of metaphysics should hopefully lead to a clarification of quantum mechanics and quantum computing.

As to mathematics, perhaps it has a ‘dual-foundation’ as well. The main candidate for the foundation of math is ‘set theory’, but it appears to me that ‘arithmetic/number theory’ qualifies as a genuine rival. According to Wikipedia, it appears that most of classical mathematics can be derived from second-order arithmetic, with no need for sets. Just as mathematical/physical existence could form a dual-foundation of reality in metaphysics, what if sets/numbers form a dual-foundation for mathematics? Perhaps mathematics just doesn’t have a single foundation either.

The Riemann hypothesis can be approached purely from number theory and sets dispensed with. John Baez in a recent tweet mentioned a hypothetical ‘F1’ (finite field with one element), which looks very intriguing – if such a thing existed it would completely revise abstract algebra (by dispensing with the need for sets), and lead to a solution to RH.

29. fred Says:

Scott, when you write

“In quantum mechanics, measure or measure not: there is no if you hadn’t measured.”

How is that different from the first claim of superdeterminism that there’s really no such thing as counterfactuals, ever?

30. fred Says:

Scott, what do you make of the claim in the paper that

“We conclude by suggesting a modified variant of the experiment, which may be technologically feasible. The idea is to substitute agents \bar{F} and F by computers.”

Does it mean there’s really a chance that this entire setup could be done practically?

31. fred Says:

Andrei #27

“My take on these thought experiments involving “isolated” systems is that they are not possible, even in practice.
[..]These fields are of unlimited range and cannot be completely blocked by any box, whatever the material this box is made of.”

Black holes!

32. Scott Says:

fred #29: The two things have nothing to do with each other. To violate the Bell inequality doesn’t actually require any counterfactual reasoning, involving “if I had measured this state in this basis” (even though I didn’t). All it requires is repeating the Bell experiment many times, while randomly varying the measurement bases. To deny the possibility of doing that requires denying that it’s ever possible to make any random choices at all, which is much much crazier than anything we’re talking about here.

33. Scott Says:

fred #30: Yes, you could even do the experiment today, not with conscious beings in superposition but at least with qubits (i.e., the original Hardy’s Paradox). But even if you were able to do the experiment with conscious beings, it would still tell you nothing whatsoever that you didn’t already know. All you’d find is that the measurement outcomes are exactly the ones predicted by quantum mechanics — so in particular, that the “impossible” outcome occurs with probability 1/12. But that would still leave the task of identifying the fallacious assumption in the argument for why that outcome was “impossible.”
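[For concreteness, the 1/12 figure can be checked in a few lines of linear algebra. This is a minimal sketch using one standard presentation of the two-qubit Hardy state, with the “impossible” outcome being both qubits measured as |−⟩ in the Hadamard basis; the exact state and bases are an assumption here, matching the usual textbook form rather than any particular paper.]

```python
import numpy as np

# One standard form of the Hardy state: (|00> + |01> + |10>) / sqrt(3)
# (note: no |11> component)
psi = np.array([1.0, 1.0, 1.0, 0.0]) / np.sqrt(3)

# Hadamard-basis state |-> = (|0> - |1>) / sqrt(2);
# the "impossible" outcome is both qubits reading |->
minus = np.array([1.0, -1.0]) / np.sqrt(2)
outcome = np.kron(minus, minus)

# Born rule: probability is the squared overlap
p = abs(np.vdot(outcome, psi)) ** 2
print(p)  # 0.0833... = 1/12
```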

34. Amir Says:

Scott #33: You sound like a mathematician – assuming that once we have a consistent theory, experimental results will agree with it. I’m sure physicists would insist on actually doing the experiment, for instance to disprove spontaneous collapse theories.

35. Andrei Says:

fred #31:

I am not sure if you intended it as a joke, but from the point of view of an external observer the cat will never pass the event horizon, so it will never be inside the “box”.

It is also questionable if black holes do have an interior where one can make experiments.

36. Harry Johnston Says:

@Andrei #27, as far as I’m aware the purpose of the box in the Schrödinger’s cat thought experiment is just to make it clear that the experimenter is not looking directly at the cat during the experiment. It isn’t necessary for the observer to have no way to tell whether the cat is dead or alive, so long as they don’t actually go ahead and make the necessary measurements.

In other words, it isn’t access to the information that counts, it’s the information you actually choose to collect. Do you have some reason to believe differently?

37. Andrei Says:

Scott #33:

“To violate the Bell inequality doesn’t actually require any counterfactual reasoning, involving “if I had measured this state in this basis” (even though I didn’t). All it requires is repeating the Bell experiment many times, while randomly varying the measurement bases. To deny the possibility of doing that requires denying that it’s ever possible to make any random choices at all, which is much much crazier than anything we’re talking about here.”

I think that you may find superdeterminism more acceptable if you agree with the following steps:

Step 1: When thinking about classical physics, forget about Newtonian billiard balls that travel in straight lines and only interact by direct collisions. Think about field theories like general relativity or classical electromagnetism. Such theories have the type of contextuality required to understand QM built in.

Step 2: A Bell/EPR experiment, just like any other experiment, is nothing but a particular case of charged particles interacting with other charged particles. The particle source, the detectors, the human experimenters or whatever you may use are made of such charged particles (electrons and quarks). According to classical electromagnetism these particles are continuously influencing each other’s motion as the result of their associated electric and magnetic fields. Such influence never stops because of the unlimited range of those fields.

Step 3: the hidden variable (say the polarization of the entangled photons) is expected to depend on the electric and magnetic fields acting at the location of the particle source. So, if you agree on Step 2 you would accept that the hidden variable does depend on the detectors’ settings, which is all superdeterminism is about.

38. Harry Johnston Says:

Scott #33, while I’m personally reasonably confident that yes, you would get the result predicted by QM, aren’t Frauchiger & Renner claiming otherwise? (I haven’t read the entire paper, but the abstract says “This indicates that quantum theory cannot be extrapolated to complex systems, at least not in a straightforward manner.”)

… of course, I guess they’re talking about their version of the experiment, where the entangled states are agents, not the experiment that we can actually perform right now.

39. Scott Says:

Amir #34: No, it would be batshit insane to believe that “once we have a consistent theory, experimental results will agree with it”—since there are many consistent theories that disagree with each other (as any ‘mathematician’ surely understands…!).

But we’re not talking here about some arbitrary theory that happens to be consistent—we’re talking about quantum mechanics, about which any proposed experimental test has to be evaluated for novelty in light of a century of previous tests (every single one of which QM has passed).

Googling it just now, there’s indeed a whole literature on experimental tests of Hardy’s paradox, reporting exactly the results (Pr=1/12, in the example in my post) that QM predicts. So, would an experimental test of Frauchiger-Renner need to check what happened if we replaced the qubits by conscious observers? If so, how would we know when they were conscious? Or would it be enough to replace the qubits by AI’s? If so, then why couldn’t a qubit already count as an “AI” for our purposes—extremely rudimentary, of course, but recording the relevant information and therefore good enough to test the prediction? But if so, then the experiments on Hardy’s paradox have already tested Frauchiger-Renner, and shown that the results are just the ones predicted by QM.

40. Scott Says:

Andrei #37: No, superdeterminism is still crazy. Of course humans and their detectors are made of the same charged and uncharged particles as everything else. But science has only ever worked because it’s possible to isolate some things in the universe from other things—not perfectly, but well enough. And this is completely uncontroversial in cases other than the Bell inequality, which shows that the superdeterminists aren’t consistent about their own theory. E.g., a political poll of 1000 households reveals 48% support for a certain candidate, plus or minus 5%. Why isn’t anyone free to believe that the real number is actually 1%, or 99%, because the pollster’s auto-dialer is also made of subatomic particles governed by physics, so maybe it was predetermined since the Big Bang that the dialer would overwhelmingly pick the numbers of the candidate’s supporters (or opponents)? I’ll tell you why: because, absent some positive reason to believe it and some account of how it happened, that would be stupid!

Yet with the Bell inequality, and only with the Bell inequality, we’re asked to believe that a cosmic conspiracy exactly like the above one is in force—and, the craziest part of all, this conspiracy does not lead to faster-than-light communication, or any of the other world-changing effects that it could just as easily lead to and that we might expect, but only to the precise results that QM already predicted for us without any need for such a conspiracy, like winning the CHSH game 85% of the time (and not 86%). Occam’s Razor yearns to slice.

41. Scott Says:

Harry Johnston #38: OK, point taken. But if one has any experience with experiments in the foundations of QM, one knows full well what’s going to happen next. Namely: some experimental group will do a slightly souped-up test of Hardy’s Paradox, of course getting just the results that QM predicts, and will then market it in Science or Nature as “the first experimental probe of the logical contradiction at the heart of QM … who could’ve imagined that the ‘impossible’ outcome would occur with probability 1/12?” And then the science journalists will wet themselves with excitement. It’s all nearly as predictable as QM itself! 🙂

42. Andrei Says:

Scott: #40

“science has only ever worked because it’s possible to isolate some things in the universe from other things—not perfectly, but well enough.”

There are two main reasons why it is possible to treat a subsystem (say the Solar system) of a large system (our galaxy) in isolation.

1. Physics is local. The state of a subsystem is completely described by a specification of positions and velocities of particles and the field magnitudes. But of course, this does not imply that the subsystem is independent from the large system because the local fields are themselves a function of position/momenta of all particles in the whole system.

2. If there is a large distance between the subsystem and the rest of the particles, one can ignore, to some extent, the fields produced by them. However, it is important to notice that this approximation only works if you are interested only in the relative motions of the particles inside the subsystem. For example, you can ignore the galactic field if you want to calculate the trajectory of the Earth around the Sun. But if you are interested in the relative motion of the Earth and some planet in another region of the galaxy, you need to know the galactic field. Assuming that the two distant planets move independently would lead to false predictions, as there would then be no reason why they should orbit the galactic center.

In a Bell test we are in a similar situation as two distant planets in a galaxy. We are not interested in the internal evolution of each detector and of the particle source but in their relative evolution. We are interested in the correlations of distant measurements. So, in this case, ignoring the mutual EM interactions leads to wrong predictions.

” it’s possible to isolate some things in the universe from other things—not perfectly, but well enough. And this is completely uncontroversial in cases other than the Bell inequality, which shows that the superdeterminists aren’t consistent about their own theory.”

The choice one makes depends a lot on the available options. There is no need to appeal to superdeterminism in the case of the political poll because we can explain those results within the limits of accepted physics. There is nothing surprising about them. On the contrary, Bell’s theorem gives us only difficult options, like:

1. non-realism

If you agree with non-realism will you be consistent enough to apply this non-realist view to the political poll?

2. non-locality

If you agree with non-locality will you be consistent enough to apply this non-local view to the political poll?

The purpose of any interpretation of QM is to recover QM’s predictions. So, unless you think there is a conflict between QM and the political poll there is no reason to expect a superdeterminist to have a different take on it as opposed to a Bohmian or QBist.

43. John Sidles Says:

Scott remarks “If one has any experience with experiments in the foundations of QM, one knows full well what’s going to happen next …”

This sentence has many possible completions, of which the following completion is suggested as consonant with the traditional excellences of Shtetl Optimized discourse:

…  scientists will employ ingenious new theoretical insights, and ingenious new experimental techniques, to add more decimal places to measurements of the fine structure constant α

This particular continuation is motivated by the extraordinary success of ongoing efforts to measure α within ever-smaller error bounds. The theoretical, experimental, and social reasons for this α-success brightly illuminate (for me anyway) two recent quantum computing preprints, namely, “How many qubits are needed for quantum computational supremacy?” (arXiv:1805.05224v2, mainly out of MIT/CalTech) and “Fluctuations of energy-relaxation times in superconducting qubits” (arXiv:1809.01043v1, mainly by the Google/Martinis group).

In brief, the two chief technical paths to higher-precision measurements of α directly parallel the two chief technical paths to demonstrating quantum supremacy. The first path emphasizes coherent quantum dynamics, as exemplified by “g-2″/”geonium” experiments (e.g., arXiv:0801.1134), and by low-noise qubit arrays (e.g., arXiv:1709.06678v1). The second path emphasizes error-corrected/topologically-protected quantum dynamics, as exemplified by quantum metrology triangles (QMTs, e.g., arXiv:1204.6500), and by proposals for scalable quantum error correction (e.g., arXiv:1801.00862).

In a nutshell, a better appreciation of the realities of α-measurement techniques is helpful in evolving better-informed views regarding the scalable viability of the extended Church-Turing Thesis, versus the scalable feasibility of Quantum Supremacy demonstrations.
——
PS: Michael Atiyah’s recently proposed α-theory inexplicably considers none of these subtly interlocking quantum electrodynamical issues … which is one more overarching reason to agree with the Shtetl Optimized editorial policy, that Atiyah’s theory probably is not right.

44. Ted Says:

Scott’s argument at the end of the main post (emphasizing the importance of distinguishing between human experimenter’s thoughts in situations in which certain experiments are or are not performed) reminds me a lot of Guy Blaylock’s article “The EPR paradox, Bell’s inequality, and the question of locality” (Am. J. Phys. 78, 111 (2009), arXiv:0902.3827). Blaylock argues that the real takeaway of Bell’s theorem isn’t the failure of local realism in quantum mechanics, as is often claimed, but actually the failure of counterfactual definiteness – e.g. he claims that Many-Worlds completely respects local realism. (To clarify, he defines “causality” to mean “no FTL communication” and “locality” as the stronger requirement “no establishment of spacelike correlations, whether or not they can be used for communication”. In this terminology, all interpretations of QM are causal, but not all are local – e.g. Many-Worlds is but Copenhagen isn’t.)

45. ppnl Says:

Harry Johnston @36

” … as far as I’m aware the purpose of the box in the Schrödinger’s cat thought experiment is just to make it clear that the experimenter is not looking directly at the cat… ”

No, that’s just wrong. The whole point of Schrödinger’s cat is to try to isolate when a measurement is made. Say you put the cat in a box and watch it by closed-circuit TV. Well, you are watching, so the wave collapses, right? OK, try again, but do not look at the TV; only record it so that you can watch it later. Wave collapse? OK, what if you chop up the CD containing the recording? Well, you could in principle reassemble the CD. So let’s try burning it. In a deep quantum sense the information is still out there, and so in theory the quantum wave must still collapse. Think of the black hole information paradox, where not even feeding the disk to a black hole destroys the information.

Now think of all the other ways information could leak out of the box. Do you smell decomp? How about the sound of the cat’s heart beating? What is that scratching noise? What about body heat radiated that you cannot see but affects the air around the box?

No, the whole thing makes no sense unless you are talking about total isolation in the box.

The difficulty of making quantum computers is exactly the difficulty of building Schrödinger’s cat type boxes around every logic gate in a computer. And then allowing the boxes to interact with each other without interacting with the rest of the world.

If it were just about not directly looking, quantum computers would be easy.

46. AM Says:

Scott,
Just by looking at Mochizuki’s web page: http://www.kurims.kyoto-u.ac.jp/~motizuki/IUTch-discussions-2018-03.html

1) Scholze and Stix: “We, the authors of this note, came to the conclusion that there is no proof”.
Well, don’t expect to see something like: “Let’s assume Mochizuki’s Corollary 3.12 holds true, then by the above argument XYZ it leads to a contradiction.” In fact, in their write-up you won’t find even a single theorem/lemma by the authors!

2) Nevertheless, Mochizuki does a good job addressing (in a VERY accessible way!) their imprecise arguments.

Scott, you always present yourself as a careful and polite thinker, but it seems unfair to judge a mathematical breakthrough based on a popular science article.
In Ivan Fesenko’s words: “[It] almost entirely consists of ignorant opinions of a small group of closely related people who have been active in negatively talking about IUT publicly but have their research track record in the subject area empty and are not known to have applied serious efforts to study even anabelian geometry.”

Thanks. Alex

47. Lorenzo Maccone Says:

Hi Scott, thanks for your blog post, which is really nice! The connection to Hardy’s paradox is nontrivial but explains the argument very neatly. Thanks! (Please keep up your blog)

48. Renato Renner Says:

Hi Scott

There are currently many who are blogging about our result, so I started to use my little quantum random number generator on my desk: outcome “0” means I don’t react to it, outcome “1” means I write a reply. For your blog the outcome was “1”!

Now, we are already in a situation that involves superposed agents, namely me who wrote this reply and you who are reading it. (I am assuming that you do not believe in objective wavefunction collapses and hence accept assumption Q of our paper.) You would now probably say that this does not prevent us from reasoning as usual, but that we would be getting in trouble if our brains were subject to measurements in the Hadamard basis. So far I would definitely agree. And I would certainly subscribe to the claim: “It is hard (if not impossible) to think after someone has Hadamarded your brain.”

But this brings me to the core of my reply. Note the small difference between my claim and your title. While I agree that it is hard to think *after* someone has Hadamarded your brain, I do not see any reason to deny that we can think *before* the Hadamarding.

Talking more technically, the reason why, as you noted, I do not endorse your scenario (the “Wigner’s friendification of Hardy’s paradox” or maybe the “Hardyfication of Wigner’s paradox”) is that it neglects a key element: Your simplified argument completely ignores the timing. But, clearly, it makes a difference whether I think before or after my brain is Hadamarded. In our argument, we were therefore careful to ensure that, whenever one agent talks about the conclusions drawn by another agent, he does so *before* any Hadamarding.

This should be apparent from Table 3 of our paper, which essentially summarises our entire argument. Take, for example, the reasoning by agent \bar{W}. He reasons around time 0:23 about the conclusions drawn at time 0:14 by agent F. The key fact to notice here is that both relevant agents, i.e., F and \bar{W}, are in a similar situation as we are (hopefully) now when reading this text. While, from an outside viewpoint, they may be in a superposition state, no Hadamard has been applied to them.

The only way I can hence make sense of your claim that we are using an additional implicit assumption in our argument (the chaining of statements) is that you are questioning the step that, in Table 3, corresponds to going from the third column to the fourth (the “further implied statement”). Did I get this right? (All the other steps are explicitly covered by our three assumptions, Q, C, and S.)

Before concluding, and since you mentioned this several times in your blog, let me stress that “consciousness” does not play any role in our argument. The agents may as well be computers, which are programmed with rules corresponding to our assumptions Q, C, and S, and which they use for their reasoning (summarised in Table 3). So, when we talk about “agents”, we just mean “users” of quantum theory. After all, the question we are asking is: “Can quantum theory be used to consistently describe users of the same theory?” This question has little to do with consciousness (which is why we tried to avoid this term).

49. Harry Johnston Says:

@ppnl #45, I’m not entirely sure whether we’re disagreeing about the physics or just the philosophy around it. But see the Quantum Eraser Experiment.

50. DarMM Says:

If \bar{F} gets tails, then he knows (given the state he prepares) that the L lab will evolve into the fail state. Thus W will measure fail definitely.

If agent F measures z=+1/2, F can conclude that \bar{F} knows r=tails.

Already here I’m a bit confused.

F himself would think that since he sees z=+1/2 he and his lab are not in a determined state in the fail,okay basis. From that he would think W could get either fail or okay.

However since z=+1/2 => r=tails, he could reason that \bar{F} is certain W will get the fail result. However he would know that \bar{F}’s conclusions about his lab result from \bar{F} reasoning about him using a superposed state.

Is there not already a contradiction at this point between F and \bar{F}’s reasoning? F would reach one conclusion about W based purely on his own z=+1/2 result and a different one when reasoning via \bar{F}’s superposed treatment of his lab.

Or (more likely) am I missing something?

51. Scott Says:

Harry Johnston: ppnl is 100% correct. The purpose of the box, in the Schrödinger’s cat experiment, is to isolate the cat from the entire rest of the universe, not merely from some particular observer. Any interaction with the environment could have the effect of entangling the cat with the environment, and thereby changing the local state of the cat from a pure state (i.e., a superposition) to a mixed state (i.e., either dead OR alive), which is a completely different situation.

52. fred Says:

Scott #40

“science has only ever worked because it’s possible to isolate some things in the universe from other things—not perfectly, but well enough.”

Well, I guess it also depends on what we mean by “works”.

In the case of gravitational mechanics, isolation is difficult.
Even in the simple case of the three-body problem, there’s no closed-form solution, and predictions become hard because of numerical instabilities (chaos).

Another example is accounting for the effect of an electron on itself.

53. fred Says:

I have a dumb question for anyone’s who understand black holes and the holographic principle.

Assuming a black hole forms from a collapsing star, does all the material of the star cross its own event horizon as the collapse progresses?
What about the very first few particles (or clumps of such particles) where the collapse initiates? Aren’t they always inside the black hole? But if so, how would their information ever end up encoded on the BH surface?

54. fred Says:

Btw, those sorts of difficulties related to “everything is connected to everything else” (like chicken-and-egg issues between fields and particles) or “an infinite amount of effects need to be accounted for” (like when summing all the possible paths in Feynman QED)… make me really question the general claim that the universe is so “elegantly” mathematical.

Either there’s some clever type of mathematics we’ve not discovered yet, or the physical universe basically has infinite resources (aka magic)… or we need to understand better what’s going on at the Planck level.

55. sf Says:

Dear Scott,
Grazie mille for this extremely clear, intuitive explanation. On the one hand you make perfectly clear what was not kosher with the paradox, but, on the other, one has to worry whether we have enough defences to be able to catch other potential traps on the fly, before we fall into them. It often seems one has to get to the absurd punch line each time, before backtracking and spotting the flaw.

It seems to me that if one wants to formalize the Born rule a bit, one of the rules would have to say that you can apply it by repeating an experiment and counting outcomes, but there’s no messing around with the internals of the experiment during repetitions. The ‘measurement’ involves the input set-up or preparation of a quantum state, just as much as it involves the meter reading at the output. The references to ‘knowing’ what Alice et al know are trying to get around this Bohr type censorship of “messing around with the internals”. The only admissible ‘knowledge’ should be in reference to states where everything in the system has collapsed. If this doesn’t change anything about our view of QM, it could provide for some subtleties about what ‘knowledge’ can mean in a QM world. In fact, maybe ‘knowledge’ should be grounded in notions of prediction, and repeated experiments.

One issue I wonder about; is it worth trying to formalize QM ‘measurement’ in terms of a Turing machine which does the ‘measurement’ and counts outputs in a series of trials? ie the ‘measurement process’ should consist of more than just a one shot experiment; something more like an ensemble (over time). Is it necessary or useful to have the series of repetitions defined, as part of the context for Born’s rule? I guess this is trying to make the notion of probability more operational/empirical.

Turing-like ‘states’ are somehow conceptually very different from QM states; they are in some sense designed, with a way to control them, and so that they recur ‘as desired’.

If this doesn’t relate directly to consciousness, it can illustrate how notions like ‘access’ and grounding aka embodiment, have to be involved, to have a sensible discussion of any notion of consciousness.

56. Sniffnoy Says:

AM #46: That’s not how refuting a proof works? Scholze and Stix aren’t claiming Corollary 3.12 is false, they’re claiming that its proof is invalid. You don’t show a proof is invalid with a proof of your own, you do it by pointing out the error. Yes, obviously if someone has made a false claim, then proving the falsity of their claim is a good thing to do, but it’s a fundamentally separate thing to do, in that it doesn’t tell you where the error is. (And the error could be in your own proof. Or, technically, in neither, because there’s an inconsistency in mathematics.) In short, writing proofs and refuting proofs are not the same sort of thing, so it does not make sense to complain that, in their refutation, Scholze and Stix do not include a proof of their own.

57. Scott Says:

Renato #48: Thanks; I’m glad that my blog post was one of the lucky ones to earn a reply from you! 🙂

I acknowledge how much care and attention you devote in your paper to the issue of timing. But I contend that, no matter how we formalize the statements in question, and what it means for the agents to “know” the statements, there will be some place where we illegitimately cross the temporal Rubicon between before and after Charlie’s brain gets scrambled by a measurement in the Hadamard basis. Somewhat flippantly, I might say: we know this must be the case, because the end result contradicts the predictions of QM! 🙂 More seriously: at two nearby stages of (my version of) your argument, we conclude that Diane’s brain is in the state |1⟩, and then that Diane’s brain is in the state |+⟩. So, I can isolate where I get off your train to somewhere between the former statement and the latter one…

Incidentally, point taken about the word “consciousness.” But that leads to an interesting question: you say it’s not important if Charlie and Diane are “conscious”; all that matters is whether they’re “agents using quantum mechanics.” But if so, then couldn’t we treat even a single qubit as a “QM-using agent,” in the same sense that one qubit could be said to “measure” another qubit when they’re entangled? In that case, would you agree that the experimental tests of Hardy’s Paradox have already tested your paradox as well?

58. Job Says:

What would be the consequences of this result?

Does it conflict with Quantum Computing in any way? Is that why Scott found it both interesting and annoying?

59. fred Says:

mjgeddes #28

“My new view is that only goal-directed systems that form symbolic representations of reality qualify as ‘computation’ – computers, brains and minds. The *mind* is made of information. Computers and brains too. But most of physical reality is not.”

We can be more specific by noting that “high level” properties of physical systems are the basis of the symbols. By “high level” I mean from a statistical mechanics point of view, such as temperature, shapes, etc … of big clusters of atoms.

“Symbolic” means that we see the spontaneous appearance of small and stable systems with microstates that are extremely sensitive to the macro states of some much larger and/or distant systems. E.g. a thermostat is a small system where a few atoms are very sensitive to the average temperature in the room it’s in. Similarly, a few neurons in the brain of a cosmologist are very sensitive to the shapes of very distant and gigantic clumps of atoms (galaxies). This “sensitivity” can be also interpreted as an isomorphism between the properties of two systems of very different size (a massive reduction is going on).
The micro states of the small systems are also somewhat robust to their own internal noise/randomness. Like, all PCs running a given software have the same values in their registers, even though they’re all different at the atomic level. Resilience to QM randomness is the main property of a “computation”.

But this picture is not enough to understand one thing:
Information is relative – e.g. there’s no single answer to the question “how many circles are there in this room?”, or, if we look at an arbitrary system of atoms, we can’t answer questions like “how much software is running in there?”.
It’s the same difficulty “information integration” theories are running into. They try to extract objective/universal information metrics from systems, but you just can’t do it, because information is relative (the same thing is pointed out by Kolmogorov complexity metrics).

In other words, we can only consider/recognize the dictionary of symbols in ourselves and in all our human artifacts, but we can’t necessarily recognize the existence of such mappings in external systems (who’s to say that a city isn’t conscious?).
Another way to see this difficulty is to note that a dictionary (whether an actual book with definitions of words, or an actual brain with connections between all its stored symbols) is really a collection of circular relationships. The definition of every single word/symbol relies on other words/symbols. If it’s all circular, how does it get bootstrapped?
Yet, we, as conscious beings, do experience a somewhat grounded/specific interpretation of our world.

What’s missing in the scientific approaches to understanding the emergence of consciousness (like information integration) is that they fail to recognize the existence of implicit symbols that are irreducible and beyond their reach: those symbols cannot be expressed in terms of words or broken down by reductionism; they are simply beyond the reach of the scientific method. So the dictionary of our mind isn’t all circular but ends in basic symbols that are either the content of consciousness or consciousness itself. Like “blue” or “pain”… not the words or sounds of the words, but the bottom experience of “blue” that is instantly recognizable to us, and no amount of extra physical facts about it (blue is this wavelength, it excites certain cells in the eye, …) is ever going to add anything to the bottom mystery that is experiencing blue.

60. fred Says:

Scott #57

“in the same sense that one qubit could be said to “measure” another qubit when they’re entangled?”

Isn’t this a bit like reducing something subtle like the halting problem (about the power of Turing Complete machines running other TC machines) to noting that a billiard ball hitting a couple other billiard balls is some sort of classical computing operation too, so it should be enough to cover everything? 😛

61. Ilya Shpitser Says:

It was great chatting with you, Scott!

62. Harry Johnston Says:

Scott #51, I’ll take your word for it, I guess. But we can perform a two-slit experiment with electrons, right? And we still get an interference pattern despite the fact that the state of the electromagnetic field, if measured sufficiently accurately, would allow us to determine which slit the electron went through. I don’t see how the Schrödinger’s cat experiment is any different.

63. Scott Says:

Harry Johnston #62: The dilemma you point out also confused me when I was first learning the subject. It has an actual resolution. Yes, you can do the double-slit experiment with an electron, and yes, that temporarily sets up a superposition over two different configurations of the electromagnetic field. However, that does not mean that any record gets created in the external world about which slit the electron went through. Indeed, if superposing the electron suddenly created records arbitrarily far away in the universe, that would be faster-than-light communication! Rather, the differing field configurations mean only that there’s the potential for a record to be created—if, for example, we put a charged particle in the field, the closer the better, and a record is created of its displacement. The interaction between the superposed electron and the charged particle would be mediated by an exchange of virtual photons, which has some amplitude to happen and some amplitude not to happen. If we succeed in observing interference between the two different paths of the electron, then that very fact tells us that the total amplitude for all the Feynman diagrams that would’ve led to an external record being created was small.

And again: if maintaining a system in superposition were as easy as personally forgetting its state, then building a scalable quantum computer would be child’s play.
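[Scott’s point here — that superposing the electron only creates the *potential* for a record, and that observed interference certifies no record was made — can be illustrated with a toy calculation. The following sketch is an editor’s illustration, not anything from the thread: couple the electron’s path to a single “environment” qubit, and the fringe visibility comes out proportional to the overlap ⟨E0|E1⟩ of the two environment states.]

```python
import numpy as np

# Toy model of "record created = interference lost": an electron in a
# superposition of two paths, coupled to one "environment" qubit that
# may record which path was taken:
#   |psi> = (|path0>|E0> + |path1>|E1>) / sqrt(2)
# The interference term at the screen is proportional to |<E0|E1>|:
# no record (<E0|E1> = 1) leaves full fringes; a perfect which-path
# record (<E0|E1> = 0) kills them.

def fringe_visibility(record_strength):
    """record_strength = 0: no record created; = 1: perfect record."""
    theta = record_strength * np.pi / 2
    E0 = np.array([1.0, 0.0])
    E1 = np.array([np.cos(theta), np.sin(theta)])  # rotated away from E0
    return abs(E0 @ E1)  # visibility = |<E0|E1>|

print(round(fringe_visibility(0.0), 3))  # 1.0   (full interference)
print(round(fringe_visibility(1.0), 3))  # 0.0   (record created, no fringes)
print(round(fringe_visibility(0.5), 3))  # 0.707 (partial record, partial fringes)
```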

64. mjgeddes Says:

fred #59,

Yes, I agree with your first couple of paragraphs. Information processing is an emergent property and anyone who thinks reality at base is composed of a string of 0s or 1s (or the qubit generalization) is out to lunch 😉

I don’t think consciousness and the nature of symbols is ‘beyond the ‘scientific method’ though. It’s just that a full understanding needs to go beyond physics to the world of mathematics. Mathematics, it seems to me, is precisely all about how knowledge is represented or ‘coded’. It does indeed seem that one needs to begin with some irreducible ‘base codes’ or ‘base representations’ if one wants to grasp how minds work. Physical states alone can’t provide an understanding of that. That’s why I’m a mathematical realist – I ascribe actual reality to abstract objects – I think they exist ‘out there’ and are not just a language we use or invent. The combined might of physics *and* mathematics, I believe, should be enough to provide a full explanation of consciousness.

Draw a 2-d graph with ‘mathematical existence’ along the x-axis, and ‘physical existence’ along the y-axis. I think these two types of existence are orthogonal in the sense that you can’t reduce one to the other or dispense with either if you want a full explanation of reality. You need both.

Now physical existence is all about the structural properties of things: particles, fields (including forces) and space-time. Mathematical existence is about abstract patterns, or how knowledge is represented: sets, numbers and functions. One could consider that each has its own ‘arrow of time’: physics time (physics) and logical time (mathematics). Then the graph shows the progression of logical time (x-axis) and physics time (y-axis).

Both ‘arrows of time’ can be unified by the concept of ‘complexity’. ‘Physics time’ is about the complexity of the physical states of a system, whereas ‘logical time’ is about the complexity of ‘mathematical proofs’. Then I can define cognition (inc. mind, ‘computation’ and ‘information’) as a composite (emergent) property built up from both physics time and logical time.

Thinking of the graph as ‘the dimensions of existence’, then in a real sense, one can say that a new ‘layer’ of reality is being generated as physics time and logical time ‘progress’. The base layers are physical and mathematical existence, and cognition (inc. consciousness) is the emerging new layer! Think of cognition (inc. consciousness) as the ‘high entropy’ state of existence. Both physics and logical time ‘point’ towards the emergence of cognition.

65. Renato Renner Says:

I also felt lucky when I saw that my quantum random number generator chose your blog. 🙂 But now, after rethinking the consequences that future Hadamards can have, at least according to your interpretation, I am afraid that the result of my random number generator may not even exist …

More seriously: I would expect that anyone who claims that our argument is flawed should be able to localise the flaw. So, here is the challenge: Read Table 3 (in the order of increasing superscripts, which specify at what time they are made) and identify the first entry you disagree with. This should be a rather easy task: each entry corresponds to a well-defined statement that you (if you were an agent taking part in the experiment and had observed the outcome indicated in the second column, labelled “assumed observation”) should either be willing to make or not.

Having said this, I would of course never try to impose homework on you, Scott. 😉 Therefore, starting from your analysis of your simplified “Alice-Bob-Charlie-Diane” argument, I tried myself to reverse-engineer what you would say about our original thought experiment. This reverse-engineering is certainly not unique (partially because you dropped all timing information). However, I found that the only statement to which your concern that we “illegitimately cross the temporal Rubicon” may apply, at least remotely, is the very first of the table, i.e., \bar{F}^{n:02}, for it relates an observation at time n:01 to an observation at time n:31. But my conclusion would then be that you just disagree with Assumption Q (which would of course be fine).

Unrelated to this: Your question about experimental tests is indeed an interesting one. I’ll comment on it later (to avoid making this comment even longer).

66. sf Says:

It may also be interesting to consider the Frauchiger – Renner paper in terms of
Scott’s definition of
“State”:
In physics, math, and computer science, the state of a system is…
https://www.edge.org/response-detail/27127

which subtly gets around a lot of non-trivial technical difficulties.

There is a problem coming from ‘knowing’ being both a physical fact about brains (in QM here), and at the same time a property of observers, more associated with their classical features. The definition of “State” that Scott suggests requires one to choose a minimal description, eliminating redundancies, but also avoiding potential internal conflicts.

This said, the notions of “State” that are used in physics and computer science are not necessarily (or a priori?) compatible; at the least there’s some coarse-graining to go from the continua of physics to the discrete world of computer science, which may involve some quotienting operation, or an additional notion of identity/equivalence. Giving more global entities some kind of ontological status then seems to cause problems, as in the mind-body problem. The point may be that the equivalence relation should have some physical basis, but it’s usually regarded as an abstract construct of a meta-theory.

67. asdf Says:

Neel #14: “That is, can we model both an observer and an experiment as a larger, interacting quantum system, and then derive the Born rule for the observer from an analysis of the joint system?” I think that is what decoherence theory set out to do, unsuccessfully though.

https://plato.stanford.edu/entries/qm-decoherence/#ConApp

68. ppnl Says:

asdf #67:

” The measurement problem, in a nutshell, runs as follows. Quantum mechanical systems are described by wave-like mathematical objects (vectors) of which sums (superpositions) can be formed (see the entry on quantum mechanics). Time evolution (the Schrödinger equation) preserves such sums. Thus, if a quantum mechanical system (say, an electron) is described by a superposition of two given states, say, spin in x-direction equal +1/2 and spin in x-direction equal -1/2, and we let it interact with a measuring apparatus that couples to these states, the final quantum state of the composite will be a sum of two components, one in which the apparatus has coupled to (has registered) x-spin = +1/2, and one in which the apparatus has coupled to (has registered) x-spin = -1/2. The problem is that, while we may accept the idea of microscopic systems being described by such sums, the meaning of such a sum for the (composite of electron and) apparatus is not immediately obvious. ”

Well, no, I think it is pretty obvious. Yes, the measuring apparatus can be seen as being in a superposition of states after the measurement. But if the measuring apparatus is in contact with the rest of the universe, then it is in a decohered state. That means any observer also in contact with the rest of the universe will see it as simply a classical object in a classical state. There is no way to tell the difference between decoherence and wave collapse, even in principle, so they are effectively the same thing. In a weird way, we can do away with wave collapse entirely and simply see it as a consequence of decoherence.

Any macroscopic object looking out at the universe, be it human or measuring apparatus, will see a decohered universe. That means it will seem to have a coherent past and to follow largely classical rules.

Another way to look at it is all the order in the universe is composed of a vast pattern of superposed states as viewed from inside that superposition. The superposed cat knows very well if he is alive or dead if you ask him. But you have to be in the box in order to ask.

Lemme see if I got this…

State |ψ>, and Alice and Bob will measure the first and second qubits of this state in the basis {+,-}…

There are three components to the state |ψ>, and in them:

1. |00>: If Alice were to measure in {0,1} and Bob were to measure in {+,-}, Bob would measure |+>. If Bob were to measure in {0,1} and Alice were to measure in {+,-}, Alice would measure |+>.
2. |01>: If Alice were to measure in {0,1} and Bob were to measure in {+,-}, Bob would measure |+>. If Bob were to measure in {0,1} and Alice were to measure in {+,-}, it is uncertain what Alice would measure.
3. |10>: If Alice were to measure in {0,1} and Bob were to measure in {+,-}, it is uncertain what Bob would measure. If Bob were to measure in {0,1} and Alice were to measure in {+,-}, Alice would measure |+>.

Frauchiger and Renner then turn these counterfactual subjunctive “were to measure in {0,1}”s into actual measurements by having Charlie and Dianne do their own measurements in the {0,1} basis on the first and second qubits, branching the universe into (1), (2), and (3).

1. In this branch Charlie knows that Bob will measure |+> were he to get around to measuring before decoherence of the second qubit takes place, and Dianne knows that Alice will measure |+> were she to get around to measuring before decoherence of the first qubit takes place.
2. In this branch Charlie knows that Bob will measure |+> were he to get around to measuring before decoherence of the second qubit takes place.
3. In this branch Dianne knows that Alice will measure |+> were she to get around to measuring before decoherence of the first qubit takes place.

In each branch, Charlie and Dianne write down, respectively, “I have measured qubit 1 in the {0,1} basis, and I may know that Bob will measure |+>” and “I have measured qubit 2 in the {0,1} basis, and I may know that Alice will measure |+>”.

Frauchiger and Renner then apply the fact that Charlie and Dianne have measured and the principle of the excluded middle to conclude that it is logically impossible—no matter how the branching has taken place—for both Alice and Bob to simultaneously measure |->.

If we then opened the boxes and decohered Charlie and Dianne, we would be done.

But…

Frauchiger and Renner then apply quantum erasers to Charlie and Dianne. The quantum eraser leaves their “I have measured…” messages intact and visible. But the quantum erasers recombine the branches, and the restored coherent state is still (or again?) |ψ>.

And then when Alice and Bob do their measurements in the {+,-} basis, 1/12 of the time we find |- ->.

Is that what is going on here?

And is the point that either the principle of the excluded middle or the standard use of the subjunctive must fail for QM to be true?
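[For what it’s worth, the 1/12 figure and the conditional claims above are easy to check numerically. The following is an editor’s sketch, using the obvious qubit encoding, not code from the thread:]

```python
import numpy as np

# The state from the comment, |psi> = (|00> + |01> + |10>)/sqrt(3),
# in the computational basis, ordered |00>, |01>, |10>, |11>:
psi = np.array([1.0, 1.0, 1.0, 0.0]) / np.sqrt(3)

plus  = np.array([1.0,  1.0]) / np.sqrt(2)   # |+>
minus = np.array([1.0, -1.0]) / np.sqrt(2)   # |->

# Probability that Alice AND Bob both find |-> in the {+,-} basis:
amp = np.kron(minus, minus) @ psi
print(round(abs(amp) ** 2, 4))  # 0.0833, i.e. 1/12

# Conditional claim for the |00>, |01> components: if qubit 1 is found
# to be |0> in the {0,1} basis, qubit 2 collapses to (|0>+|1>)/sqrt(2),
# i.e. |+>, so Bob would then never see |->:
post = psi[:2] / np.linalg.norm(psi[:2])  # qubit-2 state given q1 = 0
print(round(abs(minus @ post) ** 2, 4))   # 0.0
```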

70. ppnl Says:

Harry Johnston #62

” But we can perform a two-slit experiment with electrons, right? And we still get an interference pattern despite the fact that the state of the electromagnetic field, if measured sufficiently accurately, would allow us to determine which slit the electron went through. I don’t see how the Schrödinger’s cat experiment is any different. ”

Well, a dead cat produces the smell of decomposition. We could measure that smell and thus know whether the cat is dead or alive, right? Except the smell is locked inside the box, and no information can pass through the walls of the box.

The electron has an electric field, yes. But that field is locked in a kind of box with the electron. That box is created by the distance between the electron and anything the field could interact with. Remember, the field is very weak and the electron is traveling very fast. The chance of any particular electron interacting with anything is small.

If you do put a detector close enough to the electron to measure the field, then its wave will collapse. You have opened the box. And it will collapse even if you never look at the detector; its just being there is enough to open the box.

71. Schmelzer Says:

Andrei #42:
There would be no problem accepting “non-locality”, given that it is only a violation of Einstein causality; even perfectly local theories with a maximal speed of information transfer of 1.001c would have to be named “non-local”.
Moreover, the “non-locality” is a well-defined one, described by explicit equations which we already use (namely the formula for the Bohmian velocity, which appears as the velocity in the continuity equation for the probability flow which is one half of the Schroedinger equation).
Essentially all one has to do to preserve realism and causality is to go back to the Lorentz ether interpretation of relativity. It has a preferred frame, which can be used to make faster than light causal influences compatible with classical causality. The generalization of the Lorentz ether to gravity is simple: Consider the Einstein equations of GR (or the equations of whatever covariant metric theory of gravity you prefer) in harmonic coordinates, and interpret the harmonic condition as continuity and Euler equations for the Lorentz ether.
Instead, superdeterminism is much worse. arxiv:1712.04334 looks at it also from the point of view of Bayesian probability – which, following Jaynes, is simply the logic of plausible reasoning. There, the superdeterminism loophole does not even exist: once we have no information about, say, how a die is faked, we have to assume equal probability for all outcomes. This is sufficient to rule out superdeterminism too, by the way. Not as a hypothesis about reality (real dice may be faked), but as a consequence of the fact that, having no information about this big conspiracy, we cannot take it into account in plausible reasoning.

72. Harry Johnston Says:

OK, I’ve read the paper, and there’s something I don’t understand: from the point of view of W and W¯, doesn’t L become entangled with L¯ when F measures the spin S? And doesn’t that mean that when W¯ makes their measurement, it affects not only the state of L¯ but also the state of L? F¯ doesn’t seem to me to be taking that into account when predicting w.

(I’d also note that the measurement W¯ makes is forcing L¯ into a particular superposition of the F¯-measured-heads and the F¯-measured-tails eigenstates, even if it was originally in one or the other, which is effectively equivalent to winding back that measurement and therefore presumably thermodynamically impossible. But does that depend on your interpretation of what a measurement is?)

[@ppnl: yes, agreed. One point I’d overlooked was that an electron moving in a straight line doesn’t produce electromagnetic radiation. But I also hadn’t thought the QM math through properly, as was obvious once Scott pointed it out. Careless of me. Sorry.]

73. Jochen Says:

It seems to me that Scott raises a valid question. If I understand correctly, one could also think about this as follows: suppose \bar{W} surmises that the state of the lab L is |1/2>, i.e., makes the statement “I am certain that F knows that z=+1/2 at time n:11.” This statement is equivalent to saying “If I were to measure L, I would obtain |1/2> with certainty.”

Before \bar{W} makes any measurement, we have a state with components |h>|-1/2>, |t>|-1/2>, and |t>|1/2>, where |h> and |t> refer to the states of Lab 1, including the knowledge of F, and |1/2> and |-1/2> refer to the states of Lab 2, including the knowledge of \bar{F}.

If we rewrite this in the {|OK>,|Fail>}-basis of \bar{W}, we get something with components |OK>|1/2>, |Fail>|-1/2>, and |Fail>|1/2>, right? So after \bar{W} measures and obtains |OK>, only the component |OK>|1/2> survives. Hence, if she were to measure L, she would obtain |1/2>.

But if we now re-express that state in the {|h>,|t>}-basis, then we would no longer have a state in which F is certain that W would obtain w=fail, since that state now has components |h>|1/2> and |t>|1/2>. In essence, the transformation by \bar{W} on the lab has “erased” the prior knowledge of F.

So the statement \bar{W}(n:22) is correct, in that it refers to the certainty F had at n:11; but the statement \bar{W}(n:23) does not follow. Rather, due to the knowledge obtained in her measurement, \bar{W} is no longer certain that F would predict that W observes ‘fail’; she knows that if she were to ask F now, they’d reply that they have no idea what W will observe.
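[Jochen’s basis rewrite can be verified directly. The following is an editor’s numerical sketch; the qubit encodings and the sign convention for |OK>, |Fail> are assumptions:]

```python
import numpy as np

# Encodings (assumed): Lab 1 |h> = (1,0), |t> = (0,1);
# Lab 2 |-1/2> = (1,0), |+1/2> = (0,1).
h, t = np.array([1.0, 0.0]), np.array([0.0, 1.0])
down, up = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# State with components |h>|-1/2>, |t>|-1/2>, |t>|+1/2>:
psi = (np.kron(h, down) + np.kron(t, down) + np.kron(t, up)) / np.sqrt(3)

# \bar{W}'s measurement basis on Lab 1 (sign convention assumed):
ok   = (h - t) / np.sqrt(2)
fail = (h + t) / np.sqrt(2)

# Amplitudes in the {OK,Fail} x {-1/2,+1/2} basis; the |OK>|-1/2>
# component vanishes, so conditioned on "ok" Lab 2 is in |+1/2>:
for lab1, n1 in [(ok, "OK"), (fail, "Fail")]:
    for lab2, n2 in [(down, "-1/2"), (up, "+1/2")]:
        print(f"|{n1}>|{n2}>: {np.kron(lab1, lab2) @ psi:+.3f}")
```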

74. Schmelzer Says:

Renner #65:
Given that dBB is quite obviously self-consistent, all one has to do is to trace what happens in dBB, and trace the “inconsistency” down to every detail. This is missed in the paper. You have vaguely discussed dBB, with the conclusion “One possibility could be to … add the rule that its predictions are only valid if an experimenter who makes the predictions keeps all relevant information stored.” Even if true: If this is sufficient to restore self-consistency, it was never violated.

Let’s look at figure 3 and ask ourselves what makes Alice think that $$w\neq ok$$. It is the consideration of only a part of the experiment: she knows the initial state she gives to Bob, and then considers only the final experiment W. This reduction is quite explicit:

“While the time information contained in the plot $$s^E$$ is thus more coarse-grained than that of $$s^{F1}$$, it is still sufficient to apply the laws of quantum theory.”

Fine, indeed, it is. But given that the reduced picture omits the experiment A, which changes the state of F2, the reduced scenario is simply not applicable to the full experiment. Yes, this change of the state of F2 by the measurement of F1 by A is that spooky action at a distance which is associated with entanglement, and the Bohmian trajectories of both laboratories will be surrealistic ones, but this does not make them inconsistent.

The same logical error I see in the derivation is sufficient to produce a classical analogue. Alice tells Bob she is a communist. Bob will tell Diana what he knows about Alice’s political views. Knowing that Bob is an honest guy, Alice will surely think that Diana thinks she is a communist. Everything is fine, no contradiction, given that Alice at that time is a communist. In the classical world, somebody who is once a communist remains a communist forever.

Now we add the analogue of quantum theory, where Charlie can give Alice Solzhenitsyn to read, observing her reaction to measure whether she is a communist. With some probability of “tunneling”, she becomes anti-communist, and so Charlie thinks she is anti-communist.

Alice does not talk to Bob after this, so the prediction remains: Alice will surely think that Diana thinks she is a communist. So different people may hold different views about whether Alice is a communist or not.

But Bohm claims that we are in the age of the internet, and once Diana asks Bob what Alice thinks, he has, as an honest person, to answer “she told me she is a communist long ago, but sorry, wait, I will check whether this is still correct”, and then give Diana the actual information about Alice’s political views. And we again have a world without different people having different views about what Alice believes.

Now, apply your argument to this Bohmian internet theory. You consider, for Alice, the reduced version (she tells Bob she is a communist; Bob answers Diana’s question). This is a consistent story, and you can apply it. Since there is no Solzhenitsyn reading in this story, Alice remains a communist, and Bob will tell Diana that Alice is a communist even if he checks the Bohmian internet. But this reduction gives a different result than the full story. Does it follow that the theory is somehow inconsistent?

75. sf Says:

Not only do ’we’ know, as Scott points out, “that Alice is going to measure Charlie in the {|+〉,|-〉} basis, which would involve a drastic unitary transformation (specifically, a “Hadamard”) on the quantum state of Charlie’s brain”, BUT EVEN CHARLIE KNOWS THIS.

The Frauchiger-Renner paper says:
“We analyse the experiment from the viewpoints of the four agents, F, F’, W, and W’, who have access to different pieces of information (cf. Fig. 2). We assume, however, that all agents are aware of the entire experimental procedure as described in Box 1, and that they all employ the same theory“.
(F, F’, W, and W’, are C,D,A,B, here)

So Charlie, Diane even know that they are each just part of an entangled, superposed state which will soon be collapsed in a different basis. It is only when they neglect this in their collective reasoning that the paradox survives. In fact, assumption (Q) is structured to have them selectively neglect such things, but it’s not clear that this makes sense, insofar as it contradicts the quote above.

We may be able to weaken the assumptions on how much C,D, know, so that they can still reason coherently and unknowingly apply the Born rule to get predictions that are invalid from the viewpoint of higher level observers, such as A,B, or us, but this is essentially coming back to Wigner’s friend.

Re: #75

State |ψ>, and Alice and Bob will measure the first and second qubits of this state in the basis {+,-}…

There are three components to the state |ψ>, and in them:

1. |00>: If Alice were to measure in {0,1}, then Alice would know that if Bob were then to measure in {+,-}, Bob would measure |+>. If Bob were to measure in {0,1}, then Bob would know that if Alice were then to measure in {+,-}, Alice would measure |+>.

2. |01>: If Alice were to measure in {0,1}, then Alice would know that if Bob were then to measure in {+,-}, Bob would measure |+>.

3. |10>: If Bob were to measure in {0,1}, then Bob would know that if Alice were then to measure in {+,-}, Alice would measure |+>.

Frauchiger and Renner then turn these counterfactual subjunctive “were to measure in {0,1}”s into actual measurements by having Charlie and Dianne do their own measurements in the {0,1} basis on the first and second qubits, branching the universe into (1), (2), and (3).

1. In this branch Charlie knows that if the wave function has collapsed—if the universe has branched—Bob would measure |+> were he to get around to measuring before decoherence of the second qubit takes place. In this branch Dianne knows that if the wave function has collapsed—if the universe has branched—Alice would measure |+> were she to get around to measuring before decoherence of the first qubit takes place.

But Charlie and Dianne know that **even though they have done their measurements** they are still in their boxes, and hence the wave function has not yet collapsed—the universe has not yet branched.

Thus Charlie and Dianne know that when it comes time for Bob and Alice to do their measurements in {+,-}, there will be contributions not just from the $\frac{|00>}{\sqrt{3}}$ component of $|\psi>$, but from the $\frac{|01>}{\sqrt{3}}$ and $\frac{|10>}{\sqrt{3}}$ components of $|\psi>$ as well.

And so they do not know that if Bob were to measure in {+,-} Bob would measure |+> and that if Alice were to measure in {+,-} Alice would measure |+>.

Instead, they know that they are uncertain about what Bob and Alice will measure. They know that the fact that they have obtained definite results in the {0,1} basis has (or will have had) no consequences for the true wave function, which remains the original $|\psi>$, and will remain $|\psi>$ until Alice and Bob do **their** measurements.

2. Similar…

3. Similar…

?

And is the lesson that:

(A) Many-worlds does not have a problem if agents properly understand what the branching structure of the universe will be when decoherence occurs.

(B) Other approaches have a big problem, because not even conscious and certain measurement by Turing-class intelligences justifies a move from the quantum-superposition to the classical-probabilities level of analysis.

?

77. Andrei Says:

Harry Johnston #36:

“@Andrei #27, as far as I’m aware the purpose of the box in the Schrödinger’s cat thought experiment is just to make it clear that the experimenter is not looking directly at the cat during the experiment. It isn’t necessary for the observer to have no way to tell whether the cat is dead or alive, so long as they don’t actually go ahead and make the necessary measurements.

In other words, it isn’t access to the information that counts, it’s the information you actually choose to collect. Do you have some reason to believe differently?”

Sorry, I didn’t notice your post so I am answering it only now.

Let’s consider the case of a double-slit experiment. We know that by placing a particle detector at one slit, the interference pattern disappears. It does not matter whether you actually notice the detector, or whether you look at its output. It is the presence of the detector that changes the behavior of the incoming particles, not your knowledge.

In the case of a cat in a box, I can always place a detector outside the box to determine whether the cat is alive or not (by detecting its gravitational field with a very sensitive accelerometer, for example). We are still limited by the uncertainty principle, but for a macroscopic object such as a cat that is irrelevant. So the box is useless: there is no way to place a cat in superposition just by putting it in a box.

78. sf Says:

> 1. |00>: If Alice were to measure in {0,1}, then Alice would know…

The measurements are done on the superposition (or sum) of the three components, not separately on each component. You correctly insist on the ordering of the two measurements here, which wasn’t so clear in the previous version, #69.

Jumping to the end, it’s hard to find one definitive way of looking at things here. There are many viewpoints with interesting insights. So far it’s not clear that there’s any consensus on any approach.

Re #78

Touché…

80. Andrew Says:

Quick question — what is the functional difference (if any) between infinite hidden variables and infinite parallel universes? Specifically, if a “complex” system inhabits a world of infinite parallel universes, _or_ an infinite number of “hidden” variables are at play in determining the state of said system, is there any real difference that would manifest itself in the math between those two “interpretations” of what’s “actually” happening beyond what we can observe? I presume the answer is “no”, otherwise we could devise a test to determine which is “true”, but perhaps I’m missing a large chunk of something.

81. Andrei Says:

Schmelzer #71:

“superdeterminism is much worse. arxiv:1712.04334 looks at it also from the point of view of Bayesian probability – which, following Jaynes, is simply the logic of plausible reasoning. There, the superdeterminism loophole does not even exist: once we have no information about, say, how a die is faked, we have to assume equal probability for all outcomes. This is sufficient to rule out superdeterminism too, by the way. Not as a hypothesis about reality (real dice may be faked), but as a consequence of the fact that, having no information about this big conspiracy, we cannot take it into account in plausible reasoning.”

OK. Three points here.

1. I will argue that, under acceptable assumptions, Bell’s statistical independence assumption fails, so the “die is faked”.

2. I will show that it is not reasonable to ascribe equal probability to all outcomes for a faked die.

3. I will show that the argument presented in your paper against superdeterminism is question-begging, so it fails.

1.

Let’s assume that the quantum world is actually described by a classical, local, field theory. I will use classical electromagnetism as an example.

One can observe that such a theory does not allow one to split a system (say, a source of entangled particles + particle detectors) into independent subsystems. The fields in any region depend on the distribution/momenta of all particles, and the trajectory of each particle depends on the local fields. In fact, for a system of point charges (and in our universe charge is indeed quantized) one can show that the field in any infinitesimal region is unique for a given particle distribution/momenta. So, at least at the level of detector/source microstates, Bell’s statistical independence assumption is demonstrably false. It is possible that statistical independence would be restored if all those microstates were correctly counted, but there is no reason to ascribe a high probability to that happening.

In conclusion we have good reasons to believe that the “dice is faked/loaded”.

2.

If we know that a dice is faked/loaded, we know for sure that ascribing equal probability to all outcomes is the worst possible strategy, because it has zero chance of succeeding. That pretty much follows from the definition of the words. It is better to ascribe a high probability to one value chosen at random; then you have a 1/6 chance of winning.

3.

As far as I understand, your rejection of superdeterminism is based on the null hypothesis:

“Will A increase or decrease the probability of B? Without any information, we have no reason to prefer one of the two answers. The only rational answer is the one which makes no difference – that A is irrelevant for B, that both are independent of each other, P(A ∧ B) = P(A)P(B).”

This is all nice, but the same line of reasoning can be used to reject non-local connections as well. Not only do we not know whether a measurement at one location has an instantaneous effect at another, distant location; everything we know about physics seems to preclude such behavior.

Your argument is question-begging because it ascribes a very low probability to superdeterminism (and I agree with that) but does not provide us with any reason to ascribe a higher probability to non-local connections. In fact, the situation is exactly the opposite. When confronted with a new type of correlation whose cause is not known, the most reasonable assumption is that we are in the presence of a past, even if unknown, cause, not in the presence of a non-local cause. This is the standard way to approach new phenomena in science. Occam’s razor strongly favors a mechanism that does not require new physics, just another, albeit convoluted, case of Bertlmann’s socks, over a new entity like a real wave-function which cannot even exist in our 4D spacetime but nevertheless can move particles around.

82. John Sidles Says:

To avoid the need to discuss “minds”, it suffices to have the four agents of the game each certify their predictions by placing a record of those predictions in a “Newcomb Box” (as we will call them). As usual in prediction games, once the prediction-contents of a Newcomb Box have been initialized, all participating agents promise not to subsequently alter the contents of that Newcomb Box by quantum dynamical interactions. We will call agents that obey these rules “Newcomb Agents”.

Working through the dynamical details, and in particular, unraveling projective measurements as Lindbladian dynamical processes, resolves the paradox as follows (as it seems to me anyway): Newcomb Agents are not allowed to “Hadamard brains” (in Scott’s happy phrase), because the projective unravellings that generate Hadamarded brains necessarily include (disallowed) Hamiltonian interactions between Newcomb box-contents and measurement reservoirs.

Specifically, the Newcomb Box certificates that were originally deposited by agents F and F-bar are subsequently altered by the Lindbladian dynamical processes that generate the projective measurements of agents W and W-bar. Hence, with particular reference to Table 3 of the Frauchiger/Renner article, the Newcomb certificates associated with the deductions of the first two rows (as generated by agents F and F-bar) are necessarily altered dynamically, ex post facto and contrary to the rules of prediction games, by the Lindbladian generators of the projective observation processes of the second two rows (as imposed by agents W and W-bar).

In a nutshell, by the usual rules of prediction games, agents W and W-bar are cheating. Yet their cheating method is so delightfully non-obvious (for me at least) that the Frauchiger/Renner analysis acquires the character of a magic trick; a trick that initially astounds us, and subsequently — once the mechanism of the trick is understood — brightly illuminates some of the way(s) that we humans think about predictive processes (and even predict them).

In summary, a follow-up analysis, in which deductive inferences are certified using Newcomb Boxes, will conclude — as seems likely to me anyway — that “Quantum agents generally cheat at prediction games, but when everyone plays fair, no contradictions arise.”

83. Renato Renner Says:

Schmelzer #74:

It is good that you mention the de Broglie-Bohm theory (dBB). It serves as an excellent example to illustrate how our thought experiment is different from the simplified version that Scott proposed.

As you write, one can indeed apply dBB to our thought experiment and trace in detail what happens. What you will find (our paper does not go into detail here, but it is rather straightforward to do the calculation) is that dBB contradicts statement \bar{F}^{n:02} of Table 3. In other words, according to dBB, the implication “r=tails ==> w=fail” is not correct.

Conversely, if one applies dBB to Hardy’s paradox, the result is different. Here the statement corresponding to “r=tails ==> w=fail” is always valid.

So, how can it be that dBB gives a different result when applied to our thought experiment rather than to Hardy’s? The reason is, roughly speaking, that Hardy’s paradox is based on “counterfactual” reasoning, i.e., the different statements are established in different runs of the experiment with different measurement choices. In contrast, in our thought experiment, all measurement outcomes by all agents are obtained in one single run (and hence, in dBB, represented by the corresponding Bohmian particle positions in that single run). One can therefore reason about them without referring to counterfactuals.

84. David Thornton Says:

Dr. Renner, please see the entry of 26 September on Lubos Motl’s blog ‘The Reference Frame’, where he refutes your proof.

85. fred Says:

David #84
Motl’s short conclusion (in case you don’t want to deal with all the profanity stuff on the blog) :

“Their invalid “proof” that the “Copenhagen Interpretation” requires to abandon “C” boils down to their incorrect assumption that it doesn’t matter, from some observers’ viewpoints, whether an observable was measured (by someone else). But the measurement of a quantity whose outcome isn’t certain at the moment always changes the situation – and it changes the situation from all observers’ viewpoint.”

86. fred Says:

So, if the “alive” version of a Schrödinger’s Cat that’s in the isolated box happens to measure a qubit, it (the collapse) “affects” all observers outside the box, so a measurement is a “universal”/”absolute” event?

But yet this collapse of the qubit’s wave function would also have to be in superposition with the case where there was no collapse (the cat is dead and couldn’t measure the qubit)?

Doesn’t seem that obvious to me…

87. Jochen Says:

Renato Renner #83:
“In other words, according to dBB, the implication “r=tails ==> w=fail” is not correct.”

I’m not totally convinced one can make this inference in ordinary quantum theory. Sure, if \bar{F} assumes their observation collapses the state, that’s OK, but it seems to me that, in order to predict W’s observations, they should apply QM from W’s perspective, which will in the end correctly yield that W could obtain either OK or Fail…

88. fred Says:

The paper is fundamentally flawed because its assumption that one can assemble a group of agents who all agree on how to apply QM isn’t a given at all… haha.

89. Renato Renner Says:

David #84:

Next to my random number generator that determines which blogs I should comment on (see #48), I have an even more elaborate device that tells me which ones I should not even read. 🙂

90. ppnl Says:

fred #86

” So, if the “alive” version of a Schrödinger’s Cat that’s in the isolated box happens to measure a qubit, it (the collapse) “affects” all observers outside the box, so a measurement is a “universal”/”absolute” event? ”

It isn’t clear what precisely you are asking here. But OK, first of all: the cat interacts with a qubit. It does not matter if the cat is dead or alive when it does so. In order for that interaction to affect observers outside the box, information about it has to get outside the box, and that is exactly what the box is intended to prevent.

The cat is constantly interacting with qubits inside the box. After all the whole point is to have the cat interact with an unstable particle (qubit) in order to place itself into superposition.

And wave collapse is not universal. We view the cat as being in superposition. But from the cat’s point of view we may be in superposition. In order for us and the cat to agree there must be information flow across the walls of the box.

Think of wave collapse as a very bad social disease. Any contact at all and you spread the cooties. The box protects the cat from our cooties. And protects us from the cat’s.

91. Renato Renner Says:

Jochen #87:

If r=tails then, according to the protocol, agent \bar{F} must prepare the state as “spin right”. The Born rule is then applied to this state. So, I do not think anything about “state collapses” needs to be assumed here. (Agent \bar{F}’s conclusion is of course only correct if one knows that she indeed saw r=tails. But this is taken into account in the chain of reasoning; see in particular the second row of Table 3, i.e., agent F’s reasoning.)

92. Dandan Says:

2. Whenever Diane is in the |1〉 state, she knows that Charlie must be in the |0〉 state (since there’s no |11〉 component).
3. Whenever Charlie is in the |0〉 state, she knows that Diane is in the |+〉 state.

As I see it, this is already a contradiction – Diane in the |1〉 state knows that Charlie knows that Diane is in the |+〉 state.

I think the problem here is that Diane’s “discovery of herself” is a measurement whose result is unknown to Charlie. But if Charlie knows how Diane measures herself (that is, in the {|0〉,|1〉} basis), she can update her beliefs from |+〉 to a mixed state of |0〉 and |1〉. This mixed state is still not equal to |1〉, but this discrepancy is justifiable.

Anyway, this whole thing is very “thought-provoking”. Thank you for the awesome exposition!
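Dandan’s reading of steps 2 and 3 can be checked directly on the three-term state (|00〉 + |01〉 + |10〉)/√3 from Scott’s simplified argument. A minimal NumPy sketch (the basis ordering and the Charlie-first labeling are illustrative choices):

```python
import numpy as np

# Hardy-type state (|00> + |01> + |10>)/sqrt(3); first qubit Charlie,
# second qubit Diane; amplitude order |00>, |01>, |10>, |11>.
psi = np.array([1.0, 1.0, 1.0, 0.0]) / np.sqrt(3)

# Condition on Diane being |1>: keep the |01> and |11> amplitudes.
charlie_given_diane_1 = np.array([psi[1], psi[3]])
charlie_given_diane_1 /= np.linalg.norm(charlie_given_diane_1)
print(charlie_given_diane_1)  # Charlie's state is [1, 0], i.e. |0> (no |11> term)

# Condition on Charlie being |0>: keep the |00> and |01> amplitudes.
diane_given_charlie_0 = np.array([psi[0], psi[1]])
diane_given_charlie_0 /= np.linalg.norm(diane_given_charlie_0)
print(diane_given_charlie_0)  # equal amplitudes: Diane's state is |+>
```

Both conditionals come out as stated in the post; the tension Dandan points to lies in chaining them, not in either one separately.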

93. ppnl Says:

Andrei #81:

” Let’s assume that the quantum world is actually described by a classical, local, field theory. ”

That would mean that any supposed quantum computer would actually be a classical computer, right?

If they develop large-scale quantum computers that give an exponential speedup over classical computers, would that disprove super-determinism?

#89 seems wise…

95. Jochen Says:

Renato Renner #91:
“So, I do not think anything about “state collapses” needs to be assumed here.”

Well, the thing is, \bar{F} can reason through the setup in exactly the way we can, to derive that \bar{W} will get the fail-result. And it seems to me that they must know that this is what occurs, in the same way that Wigner’s original friend must know that a suitable experiment performed on their entire lab will show interference. It wouldn’t do to claim that, since the friend observed a certain outcome, they must now predict the non-existence of interference in such an experiment.

But then, by the same token, it seems to me that \bar{F} ought to reason that the state from the point of view of \bar{W} has components |h>|-1/2>, |t>|1/2>, and |t>|-1/2>, which after W’s measurement is projected onto |OK>|1/2>, which is an equal superposition of |h>|1/2> and |t>|1/2> (see my comment #73 above). Consequently, \bar{F} shouldn’t predict the fail-outcome.

They should only predict ‘fail’ if the state is equal to |-1/2> + |1/2> to \bar{W}—but it’s not, and \bar{F} knows this.

96. Jochen Says:

“Well, the thing is, \bar{F} can reason through the setup in exactly the way we can, to derive that \bar{W} will get the fail-result.”

This was supposed to be “…may get the OK-result”.

97. sf Says:

With apologies to Tim Hardin, crossing over to the silly side:

If I were a particle,
and you were a wavelet,
would you measure me any way,
would you be my A-gent?

Would you still find me
Carryin’ the charge I gave
in our Feynman diagram

98. Ahron Maline Says:

It seems to me that the authors of the paper have failed to include, in their list of assumptions, the crucial one: that one who observes a measured value may conclude that the state, after measurement, is an eigenstate corresponding to that value. Without this, the first line of Table 3 in the paper is clearly wrong: when agent Fbar measures r=tails, he has no right to treat the right-polarized atom he is sending as “the true state”, and use that to make predictions about what W will measure. He knows perfectly well that his other self, who measured r=heads, is sending a down-polarized atom, which may affect W’s result. This reasoning by Fbar only makes sense with what amounts to a collapse assumption. Of course this contradicts the assumption that QM can be applied to systems that include observers.

On the other hand, Fbar is justified, without additional assumptions, in concluding that “if I ever see W’s result, it will be w=fail”. Making predictions from one’s observations to one’s own future observations must be valid in all interpretations, although the justification may vary. But here, Wbar’s measurement ruins the prediction: by Hadamarding Fbar’s brain, he makes it impossible for any continuously-aware version of Fbar to become aware of W’s result. Therefore there is no problem if W in fact measures w=ok.

99. Andrei Says:

ppnl #93:

“If they develop large scale quantum computers that give exponential speed up over classical computers would that disprove super-determinism?”

Not at all. The purpose of superdeterminism is to provide a local explanation for QM, not to deny QM. So, if faster computers are possible according to QM they would be just as possible according to a superdeterminist interpretation of QM.

Sure, such computers would be classical computers, but they would be a different kind of classical computer, because they would make use of a different classical effect: entanglement. An electric engine and a petrol engine are both classical, but they need not have the same performance.

100. Schmelzer Says:

Andrei #81:

The objective Bayesian interpretation derives the probabilities from the available information. So, it forces us to accept 1/6 if we have no information which makes a difference between the six outcomes. So, even if we know the dice is faked, as long as we don’t know in which direction it is faked, 1/6 remains the only rational choice.

Your argument (2) presupposes that probabilities have some reality, following the frequency interpretation. But this is not what matters in the logic of plausible reasoning. What matters there is logical consistency with the available information. So, if you don’t use 1/6 for every outcome, you could just as well have used any other set of numbers and obtained a different distribution – but your information would be unchanged, so your probabilities should be unchanged too.

(1) has essentially the same problem. Your proof (3) fails: “Not only do we not know whether a measurement at one location has an instantaneous effect at another, distant location; everything we know about physics seems to preclude such behavior.” Sorry, no, we have a theorem that without such instantaneous influence we would have Bell’s inequality.
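Schmelzer’s point, that ignorance of the loading direction symmetrizes the predictive distribution back to 1/6, can be illustrated with a quick simulation (the 0.5 loading weight and the uniform choice of loaded face are illustrative assumptions, not from the comment):

```python
import random

random.seed(0)
trials = 200_000
counts = [0] * 6
for _ in range(trials):
    # A heavily loaded die: one face, chosen uniformly at random
    # (and unknown to the bettor), gets probability 0.5; the rest get 0.1 each.
    loaded = random.randrange(6)
    weights = [0.1] * 6
    weights[loaded] = 0.5
    face = random.choices(range(6), weights=weights)[0]
    counts[face] += 1

freqs = [c / trials for c in counts]
print(freqs)  # each frequency is close to 1/6 ≈ 0.1667
```

Each run of the inner loop uses a badly loaded die, yet the marginal over the unknown loading is exactly (1/6)(0.5) + (5/6)(0.1) = 1/6 per face, which is the objective-Bayesian point: with no information about the loading direction, 1/6 is the calibrated assignment.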

101. Schmelzer Says:

Renato Renner #83:

So, dBB in your version gives a consistent trajectory, without counterfactual reasoning.

Where is the inconsistency, then? It appears in the reasoning of Alice thinking that w≠ok. She is reasoning about a counterfactual experiment – one where Charlie does not measure anything, and therefore does not distort Bob’s state.

102. Scott Says:

Andrei #99: You’ve given the logical and correct answer to the question, but that’s different from the answer that the “chief superdeterminist,” Gerard ‘t Hooft, gives! ‘t Hooft is on record predicting that it will never be possible to build a quantum computer that outperforms a classical computer, because of the classical cellular automaton that he believes underlies Nature. On the other hand, he also believes that this CA is able to account for the Bell inequality violations, because superdeterminism! So I’ve always wondered: why doesn’t ‘t Hooft point out that superdeterminism could just as easily account for the successful running of Shor’s algorithm (as, in fact, it could account for anything whatsoever)?

103. Andrei Says:

Schmelzer #100:

” So, even if we know the dice is faked, as long as we don’t know in which direction it is faked, 1/6 remains the only rational choice.”

This is only true for a single run. If you throw the dice only once, you could ascribe equal probability. But if you repeat the experiment many times, the equal-probability assumption is a certain loser for a loaded dice. Given the fact that a Bell test comprises many measurements, we are not in the single-run case. So, the reasonable assumption is that the hidden variable and the settings of the detectors are not independent parameters.

“(1) has essentially the same problem.”

I do not understand your point here. In (1) I provided evidence that a field theory such as classical electromagnetism implies that the “dice is loaded”; in other words, such theories are superdeterministic, according to Bell’s definition. This is about the mathematical structure of the theory, not about prior probabilities.

“Your proof (3) fails: “Not only do we not know whether a measurement at one location has an instantaneous effect at another, distant location; everything we know about physics seems to preclude such behavior.” Sorry, no, we have a theorem that without such instantaneous influence we would have Bell’s inequality.”

You are just proving my point here, about begging the question. Bell’s theorem does not prove the existence of an instantaneous influence. It allows you to choose between such an influence and superdeterminism. So, what you need to prove here is that taking the non-local option is much more reasonable than the superdeterminist option. With zero evidence for non-local influences (outside the issue of entanglement, which is the subject of our debate) and plenty of evidence for physical systems that are not independent of each other, your job is quite difficult.

104. Andrei Says:

Scott,

The exact quote from ‘t Hooft is (from his paper, The Cellular Automaton Interpretation of Quantum Mechanics):

“Yes, by making good use of quantum features, it will be possible in principle, to build a computer vastly superior to conventional computers, but no, these will not be able to function better than a classical computer would do, if its memory sites would be scaled down to one per Planckian volume element (or, in view of the holographic principle, one memory site per Planckian surface element), and if its processing speed would increase accordingly, typically one operation per Planckian time unit of 10^−43 seconds.”

So, he is not saying that it “will never be possible to build a quantum computer that outperforms a classical computer”, but that it will never be possible to outperform a Planck-classical computer. It seems to me that his view is not necessitated by a classical foundation of QM but by the discrete structure of space and time of the CA. A continuous space-time background would not impose any such limitation, so a classical computer could be equivalent to a quantum one.

105. Jochen Says:

Ahron Maline #98:
“when agent Fbar measures r=tails, he has no right to treat the right-polarized atom he is sending as “the true state”, and use that to make predictions about what W will measure.”

Yes, this is what worries me, too. It strikes me as being akin to Wigner’s friend concluding that there won’t be any interference if an experiment is performed on the whole lab, where at most they can say that they don’t know.

It doesn’t strike me as that different from the question of whether two events are simultaneous in special relativity: just that they are to you doesn’t necessarily mean that they are to every observer. In fact, the notion is meaningless without specifying a frame of reference.

In the same way, ‘the state’ of a quantum system may differ according to different observers, and you can’t generally assume that the state you assign to a system is the one assigned to it by every observer. In particular, this seems to be the case with systems including you as a proper part.

Wigner’s friend can’t decide whether Wigner will observe interference, as this depends on whether they measured a system in an eigenstate of the measurement, or in a superposition. Likewise, \bar{F} can’t decide the outcome of \bar{W}’s measurement, as this, too, depends on whether the coin was in an eigenstate of ‘tails’ or in the superposition specified in the paper. Given the knowledge of the initial state, however, they should conclude that \bar{W} may observe either outcome.

106. fred Says:

Scott #102

“because of the classical cellular automaton that he believes underlies Nature.”

It’s the same idea that Wolfram had, right?

While those ideas may be flawed, the real issue is that no one has any robust theory for what’s going on at the Planck scale, no?
I.e., coming up with a discrete structure (made of “cells”) for space and time that also fits special and general relativity?
(I don’t know enough about string theory to tell if it solves this.)

107. fred Says:

About quantum supremacy, and whether the universe is actually a simulation or not (i.e., whether it’s an actual QC or a classical computer simulating a QC really poorly):

Both “classical” and “quantum” computers are digital machines, i.e. reality is described as numbers.

But reality itself appears analog (until we know WTF is going on at the Planck scale) and magically super-hyper-parallel.

Reality doesn’t seem to care or struggle to make stuff happen at various scales (from super clusters of galaxies down to the proton), while digital machines need more and more resources to manipulate things consistently at both small scales and large scales.

Reality doesn’t seem to care whether it’s solving a two-body problem or a trillion-trillion-body problem; it doesn’t need to pretend there’s such a thing as “isolated systems” to make things happen – every point in space sees the superposition of all the fields (gravity, EM) created by all the particles in the visible universe, no compromise (but apparently that’s still not enough information flowing into a given point of space to cause black holes to spontaneously appear? :P)

108. bcg Says:

Andrei #37 (but generally): How does the underlying field determine the angles the researchers choose, such that the researchers are (unknowingly!) conspiring to only present Bell inequality violations? If it’s “I know it must be complicated, and I don’t know how, but it does,” that’s fair, but it seems like a hard sell. If it’s, “That’s not what I’m saying,” then we’re not talking about Bell inequalities.

109. ppnl Says:

Andrei #99

” Sure, such computers would be classical computers, but they would be a different kind of classical computer, because they would make use of a different classical effect: entanglement. An electric engine and a petrol engine are both classical, but they need not have the same performance. ”

But you seem to be postulating a classical process that violates the extended Church/Turing thesis (ECT). If entanglement is a classical process then it should be limited in the same way that all other classical processes are. Transistors are faster than electric relays but they only allow a poly increase in speed. Ditto every other classical process.

Can you show me a CA that allows a violation of ECT? Do you believe there is such a thing? Does ‘t Hooft postulate any such thing?

Andrei #104

‘t Hooft: “Yes, by making good use of quantum features, it will be possible in principle, to build a computer vastly superior to conventional computers, but no, these will not be able to function better than a classical computer would do, if its memory sites would be scaled down to one per Planckian volume element (or, in view of the holographic principle, one memory site per Planckian surface element), and if its processing speed would increase accordingly, typically one operation per Planckian time unit of 10^−43 seconds.”

Neither memory size nor processing speed seems relevant. It is the exponential speedup with size that matters. There are on the order of 2^620 Planck unit volumes in the universe. Now imagine a classical computer with that many processors or memory elements that performed an operation in 10^−43 seconds. There seem to be simple problems that this computer could not solve in the age of the universe. But a not overly large quantum computer could. That exponential speedup for some specific problems is hard to beat.
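ppnl’s budget is easy to total up explicitly. A rough back-of-the-envelope in Python, using the comment’s figures of 2^620 Planck volumes and one operation per ~10^−43 s; the 1024-bit search space is an arbitrary illustration of a “simple” brute-force problem that exhausts the budget:

```python
from math import log2

processors = 2**620        # Planck-volume memory sites, per the comment
ops_per_sec = 1e43         # one operation per Planck time (~10^-43 s)
age_universe_s = 4.3e17    # ~13.8 billion years, in seconds

# Total elementary operations this computer could ever perform.
total_ops = processors * ops_per_sec * age_universe_s
print(f"log2(total ops) ≈ {log2(total_ops):.0f}")  # prints: log2(total ops) ≈ 821

# Exhaustively searching a 1024-bit space already exceeds that budget:
print(2**1024 > total_ops)  # prints: True
```

So the whole universe, run as a maximally parallel classical machine, is good for roughly 2^821 operations, and any task needing exponentially many more steps than that stays out of reach no matter how fast the clock.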

Now as far as I know (and this is just a hobby for me so…) there are only two ways around this. Either BQP is equal to BPP or there is a classical process that violates ECT. I don’t think the smart money is on either of these. By a lot. Showing either would make you famous, totally aside from any relevance to super-determinism. In fact I doubt that it would convince anyone that super-determinism is true.

My understanding – and my understanding is limited – was that ‘t Hooft was claiming that on some scale quantum computers would fail to deliver the exponential speed increase due to the limits of the underlying classical process. This would allow him to avoid the problem with ECT and BQP=BPP. But I have never seen him or anyone else address this directly.

110. Renato Renner Says:

Ahron #98 and Schmelzer #101 and Jochen #105:

“when agent Fbar measures r=tails, he has no right to treat the right-polarized atom he is sending as “the true state”, and use that to make predictions about what W will measure.”

To make sure I understand your concern, consider a “truncated” variant of the thought experiment in which agent Fbar is never subject to a measurement (i.e., Wbar does nothing to her). While you deem the first row of Table 3 incorrect in the case of the original thought experiment, I guess that you would agree with that row in the case of the truncated experiment, right?

If yes, then you probably have in mind a restricted version of Assumption (Q) which, along the title of Scott’s blog, we may define as follows:

(Q_noHadamard): Like Assumption (Q), but the rule is only applicable under the condition that the brain of agent A (who prepared the system S in state psi at time t_0) is never going to be subject to a Hadamard (or any other non-trivial operation).

If one replaces (Q) by (Q_noHadamard) in our analysis of the thought experiment then I would certainly agree that the contradiction disappears (see my comment #48).

111. mjgeddes Says:

fred #107

Yes indeed,

The more I really think on this, the more I realize how absurd the idea of ‘physics as computation’/‘physics as simulation’ really is, and it’s amazing that anyone fell for this nonsense. Two very smart people (Stephen Wolfram and Gerard ‘t Hooft) are proposing a specific model of computation (the cellular automaton) as the foundation of physics, which is even more implausible!

You only need to look at the mathematical foundations of physics to see that it has *nothing* whatsoever to do with the theory of computation. Firstly, physics is mostly based on differential equations, which need a continuum just for starters. Secondly, no fruitful new physics has *ever* come from the theory of computation… its explanatory power as regards physics is nada, zip. Thirdly, the theory of computation has abstracted away all the structural details of physics – many possible kinds of underlying physics could be compatible with the *same* theory of computation; therefore ToC simply can’t explain these physics details.

As to the idea of reality as simulation (pan-computationalism), this is just a modern, souped-up version of idealism/solipsism, which can easily be dismissed. Without a physical model of how ‘computation’ is supposed to work, the whole notion of ‘a simulation of the universe’ is simply meaningless. Which is just to say: computation needs some underlying hardware, and unless one specifies how this hardware is supposed to work, there’s no coherent theory there. Elon Musk is an example of a really smart guy who fell for this latest incarnation of old recycled nonsense.

112. Andrei Says:

bcg #108:

“How does the underlying field determine the angles the researchers choose, such that the researchers are (unknowingly!) conspiring to only present Bell inequality violations?”

If I understand your view correctly you imply that superdeterminism requires:

1. In reality Bell inequality is not violated (QM is wrong)
2. There is a conspiracy preventing us from measuring some of the pairs, so that we are under the (false) impression that QM is right.

I reject both claims above.

I think that the violation of Bell inequality is a true feature of Nature. My (qualitative) explanation for this fact is the following:

There is an infinity of states (positions and velocities of electrons and quarks, electric and magnetic field magnitudes) that the source and the detectors can be in. But for each physically possible state there is an infinity of states that are not physically possible. Example: take a real, existing state, move a single electron in the source by 1 nm, and leave everything else unchanged. This new state is forbidden because the electric field at any location (including the location of the detector) no longer corresponds to the charge distribution.

So, in order to estimate the prediction of classical electromagnetism for a Bell test, one needs to count only the states that are physically possible and discard the others. My hypothesis is that when the states are properly counted, the violation of the inequality would result.
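Andrei’s 1 nm example is easy to illustrate numerically. A toy Coulomb-field sketch in Python (the two-electron configuration, the positions, and the 1 m detector distance are made-up illustrative values): displacing one electron by 1 nm shifts the electrostatic field at the detector, so in this classical picture the detector’s field encodes the source microstate.

```python
import numpy as np

K = 8.99e9      # Coulomb constant, N·m²/C²
Q = -1.602e-19  # electron charge, C

def field_at(point, charges):
    """Total electrostatic field at `point` from point charges [(pos, q), ...]."""
    E = np.zeros(3)
    for pos, q in charges:
        r = point - pos
        E += K * q * r / np.linalg.norm(r)**3
    return E

detector = np.array([1.0, 0.0, 0.0])           # detector 1 m from the source
charges = [(np.array([0.0, 0.0, 0.0]), Q),
           (np.array([0.0, 1e-6, 0.0]), Q)]    # two electrons near the source

E1 = field_at(detector, charges)

# Move the first electron by 1 nm along x; the field at the detector changes.
charges[0] = (np.array([1e-9, 0.0, 0.0]), Q)
E2 = field_at(detector, charges)

print(np.array_equal(E1, E2))  # prints: False
```

The shift is tiny (a relative change of order 10^−9 here) but nonzero, which is all the argument needs; whether properly counting such constrained microstates actually reproduces the Bell violations is, of course, the open hypothesis.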

You are perfectly justified in disbelieving this hypothesis, but such a disbelief is not enough to rule out the theory. If you claim that the theory cannot violate the inequality, you need to actually count those states or come up with a different argument. Bell’s theorem in its present embodiment, which assumes that the source states and the detector states are independent, is of no use here.

113. Jochen Says:

Renato Renner #110:
I would not want to be that specific in reformulating the assumption. Rather, I would maintain that one can’t in general make predictions about a system of which one is a proper part; furthermore, to me, that’s just applying quantum theory. Otherwise, you already get in trouble in Wigner’s original thought experiment, where the friend might falsely predict the absence of interference.

There’s also work by Thomas Breuer to the effect that the assumption that one can always predict the results of measurements on systems that include oneself yields contradictions; see, e.g., “The Impossibility of Accurate Self-Measurements”.

114. Andrei Says:

ppnl #109:

“you seem to be postulating a classical process that violates the extended Church/Turing thesis (ECT).”

First, I must confess that my knowledge in this field is very limited, so if I say something stupid please accept my apologies.

OK, let me present the extended Church/Turing thesis:

http://www-inst.eecs.berkeley.edu/~cs191/fa08/lectures/lecture17.pdf

“The extended Church-Turing thesis is a foundational principle in computer science. It asserts that any ”reasonable” model of computation can be efficiently simulated on a standard model such as a Turing Machine or a Random Access Machine or a cellular automaton.”

The lecture continues:

“But what do we mean by ”reasonable”? In this context, reasonable means ”physically realizable in principle”. One constraint that this places is that the model of computation must be digital. Thus analog computers are not reasonable models of computation, since they assume infinite precision arithmetic. In fact, it can be shown that with suitable infinite precision operations, an analog computer can solve NP-Complete problems in polynomial time. And an infinite precision calculator with operations +, x, =0?, can factor numbers in polynomial time.”

So, it seems to me that a classical process that uses infinite precision (such as the evolution of a system of charged particles described by classical electromagnetism) is an example of an analog, not digital computer, and such a system can in principle be used to solve the problems that quantum computers are able to solve in comparable time.

“Can you show me a CA that allows a violation of ECT? Do you believe there is such a thing? Does ‘t Hooft postulate any such thing?”

I am not going to defend ‘t Hooft’s model as it does not seem to be the most promising path. I would rather go with a theory like stochastic electrodynamics (just classical electrodynamics with a special type of initial state – a real EM field, called the zero-point field, that plays the role of QED vacuum). This theory has passed some non-trivial tests, like giving a classical explanation for:

Planck’s law
Nature volume 210, pages 405–406 (23 April 1966)

Debye law
Phys Rev A. 1991 Jan 15;43(2):693-699

electron’s spin
J. Phys.: Conf. Ser. 504 012007

“My understanding – and my understanding is limited – was that ‘t Hooft was claiming that on some scale quantum computers would fail to deliver the exponential speed increase due to the limits of the underlying classical process. This would allow him to avoid the problem with ECT and BQP=BPP. But I have never seem him or anyone else address this directly.”

It seems to me that he is saying exactly that:

” these will not be able to function better than a classical computer would do, if its memory sites would be scaled down to one per Planckian volume element”

115. Renato Renner Says:

Jochen #113:

“Rather, I would maintain that one can’t in general make predictions about a system of which one is a proper part; furthermore, to me, that’s just applying quantum theory. Otherwise, you already get in trouble in Wigner’s original thought experiment, …”

I definitively agree that one cannot make predictions about a quantum system of which one is a part. In fact, avoiding the need for such self-predictions was a main design principle of our thought experiment (see the discussion on the top of the right column on page 5 of our article).

Scott has a good mini-thought experiment in his book (QCSD) regarding counterfactuals and conditional probabilities and observers in superposition, and it illustrates the subtle error of reasoning valid in classical but not quantum scenarios that this slightly more complex paradox uses. As another blogger pointed out, Dr. Renner used the same setup to argue for MWI over single-world interpretations. It doesn’t do that either. This “paradox” is just standard QM, which works for everything: people, particles, or universes.

117. fred Says:

mjgeddes #111

“As to the idea of reality as simulation (pan-computationalism), this is just a modern, souped-up version of idealism/solipsism, which can easily be dismissed. Without a physical model of how ‘computation’ is supposed to work, the whole notion of ‘a simulation of the universe’ is simply meaningless”

Although I agree that from a physical description of reality point of view, “reality as a simulation” doesn’t get us very far, because we can’t say anything about the “hardware” (since we’re trapped in the system), there’s a different more practical approach to this:

– consciousness is the only thing that can’t be denied – whether we’re brains in vats, whether the world is a simulation, whether the world is made of particles that are mathematical singularities, whether this is all a dream… the rock bottom truth is our experience of being.

– our view of external reality, i.e. the information in our sensory data streams (and all the contents of our consciousness) is computable.

– the dynamical evolution of external reality (i.e. physics) is such that life spontaneously appears, and life eventually creates computers, lots of them, and those computers keep getting better and better. Basically, not only is the universe computable, but eventually it will organize itself into computers.

– the content of our consciousness can be generated from a computer (aka virtual reality), not only in a way that’s indistinguishable from the external reality, but in ways that transcend external reality (from an evolutionary point of view).

If you buy all this, it’s then not hard to see where this will lead us.
And the unlikely situation is the one where we would not already be several recursions deep inside a collection of realities within realities.

118. JimV Says:

mjgeddes at 111 says “Firstly, physics is mostly based on differential equations, which needs a continuum just for starters.”

That turns out not to be the case. Engineering uses differential equations for lots of things that are actually discrete, such as stress, strain, and vibration of materials, which are actually made out of discrete molecules (and, in the case of alloys, grains that can be seen under low-power magnification). Fluid flow has some famous differential equations, all used to describe what are actually discrete H2O molecules.

Calculus is a great approximation to discrete systems. Finite-difference equations also work, and have the same types of solution as differential equations of the same form.
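The claim that finite-difference equations have the same types of solutions as the corresponding differential equations is easy to check numerically. A minimal sketch (the decay constant, step size, and tolerance are arbitrary choices for illustration): the ODE dy/dt = −y has the exponential solution e^(−t), while its forward-difference analogue y_{n+1} = y_n(1 − h) has the geometric solution (1 − h)^n, and the two agree in the small-h limit.

```python
import math

# Continuous model: dy/dt = -y, with solution y(t) = exp(-t).
# Discrete analogue (forward difference): y_{n+1} = y_n * (1 - h),
# whose solution is the geometric sequence y_n = (1 - h)^n.

h = 0.001        # step size
steps = 1000     # integrate up to t = h * steps = 1.0

y = 1.0
for _ in range(steps):
    y *= (1 - h)  # finite-difference update

exact = math.exp(-1.0)        # continuous solution at t = 1
print(y, exact)               # both approximately 0.368
assert abs(y - exact) < 1e-3  # the discrete solution tracks the exponential
```

As h shrinks, the geometric solution (1 − h)^(t/h) converges to e^(−t), which is the sense in which the difference equation and the differential equation share the same family of solutions.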

Meanwhile Zeno, Democritus, and more recently George Ellis have pointed out that the universe makes more sense in finite, discrete form. With infinities and infinitesimals you get Hilbert’s Hotel and other paradoxes.

119. sf Says:

Dear Renato,
I wonder if it would be possible to formulate your result as a theorem about the impossibility of extending the Feynman diagram method (for calculating amplitudes), or if it would get too messy? The idea is that Assumption (Q) corresponds to adding some kind of extra operations to Feynman diagrams, ie the agents correspond to certain ‘subclusters’ in the diagram, and operations on them correspond to the inferences/communications between the agents. This might give a language where the various viewpoints in this discussion could be unified as some theorem about what can and cannot be done legitimately (as inference methods) within the context of Feynman diagrams.

One motivation; Feynman in some sense refined Bohr’s approach by allowing one to embed QM experiments inside others, in a nested structure, but this required replacing the measurement step by a branching interaction structure. Your paper is also embedding QM experiments inside others, so the Feynman diagram method seems like a natural way to look at things here.

120. Jochen Says:

Renato Renner #115:
“In fact, avoiding the need for such self-predictions was a main design principle of our thought experiment”

But surely, a measurement on the state of Lbar is a measurement on the state of Fbar? I mean, after that measurement, I know exactly what state Fbar is in, don’t I?

Anyway, my main point remains: Fbar is only justified in assuming that Wbar obtains ‘fail’ if they’re justified in assuming that the state of the coin is ‘tails’. But I don’t think that’s the case, as the total state of Fbar and the coin is an entangled one.

W, in other words, would seem to be perfectly justified in reasoning that Fbar’s knowledge of QM and the measurement setup leads them to conclude that they aren’t able to predict Wbar’s measurement outcome—which, of course, is exactly what turns out to be the case.

121. fred Says:

Isn’t this expected after all?

Physics assumes systems can be isolated, but the world is really fully interconnected and deterministic (or deterministic with some pure randomness, whatever that means).

So, strictly speaking, there’s really no such thing as one part of a system making a prediction about another part of the system.
And a system can’t simulate itself fully, a part of the whole by definition has less resources than the whole.

Also, the “prediction” itself and all its apparatus have been instigated by the initial conditions of the system they belong to. They’re no more independent than any other sequence of events in the system (one wonders why a deterministic system comes up with the notion of “predictions” and “counterfactuals”, it’s not like the “predictions” are going to alter the system’s evolution in some magical independent way… just like other human-made concepts, e.g. “free will”).

Given all this, it’s really not surprising to find that there are limits to the application of physics and contradictions are bound to happen as you try to apply it all recursively.

122. Harry Johnston Says:

To answer my own question #72, from F’s perspective S is in a known state so there is no entanglement between Lbar and L.

It still seems like an impropriety to allow Wbar to perform a measurement that requires the ability to predict the exact microstates of Fbar’s measuring device. Wouldn’t the simplest way to eliminate the apparent paradox be to modify assumption S to explicitly require that a measurement be thermodynamically irreversible, and assumption C to explicitly require that all measurements be thermodynamically possible?

123. ppnl Says:

Andrei #114:

There is no difference between an analog computer and a digital computer. Or more precisely the only difference between them is an abstraction layer. If you zoom in on a digital memory cell past the abstraction layer what you see is an analog computer programmed to emulate a digital memory. Turns out that is the easiest problem for analog computers to solve.

Why emulate digital computers? Analog computers have a problem with noise. An analog computer without digital emulation will never calculate hundreds of digits of pi, because you could never measure the output that accurately. And a fly fart on the other side of the galaxy would disrupt the calculation anyway. Digital computers have calculated trillions of digits of pi. In almost every instance the gain you get from the ability to control noise far outweighs the penalty from the emulation layer. Flies are free to fart all they want.

In short a digital computer can be seen as a particular kind of analog computer algorithm that allows you to control noise. But what that means is that anything that allows you to build better analog computers also allows you to emulate better classical computers. An infinite precision analog computer would allow us to build an infinitely fast digital computer. In a deep sense these are the same thing.
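The point that digitization is essentially a noise-control strategy layered on analog hardware can be seen in a toy simulation. A hedged sketch (the noise level, threshold, and iteration count are invented for illustration): a value copied repeatedly through a noisy channel drifts away, while the same channel with a snap-to-logic-level step after each copy preserves the bit exactly.

```python
import random

random.seed(0)

def run(value, steps, digitize):
    """Repeatedly pass a value through a noisy 'wire'."""
    v = value
    for _ in range(steps):
        v += random.gauss(0, 0.05)  # analog noise on every operation
        if digitize:
            # Snap to the nearest logic level: this is the abstraction
            # layer that turns the analog device into a digital one.
            v = 0.0 if v < 0.5 else 1.0
    return v

analog = run(1.0, 10_000, digitize=False)
digital = run(1.0, 10_000, digitize=True)

# The raw analog value performs a random walk away from 1.0, while the
# thresholded ("digital") value stays exactly 1.0, since per-step noise
# essentially never crosses the 0.5 threshold margin.
print(analog, digital)
assert digital == 1.0
assert analog != digital
```

The digital run pays a small cost (the extra thresholding step) but keeps the stored bit exact over arbitrarily many operations, which is the trade-off described above.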

But infinite precision is a chimera. That would imply infinite information density and your computer would collapse into a black hole dragging the rest of the universe with it. The universe contains a finite number of atoms, finite energy and finite space. As a result it also contains a finite amount of information. This prevents any infinite precision analog computer. Else we would have to junk 99% of all we think we know about physics. There is no simple fix to get around this.

A quantum computer isn’t faster than a classical computer in the sense of more operations per second or more information density. A quantum computer changes the definition of information and computation in a way that probably allows new algorithms that are exponentially faster on some problems than any possible classical algorithm. That means a comparatively modest quantum computer could outperform any classical computer made with the resources of the entire universe. And necessarily also any analog computer as they have the same limits.

And remember a quantum computer is only faster on a tiny subset of problems. On most things it is no better than classical. It just isn’t generally faster.

124. bcg Says:

Andrei #112

> So, in order to estimate the prediction of classical electromagnetism for a Bell test one needs to only count the states that are physically possible and discard the others.

The probability of non-quantized spin being exactly +1/2, or exactly -1/2, is zero. Forget violating BI; how can classical electrodynamics even *obey* BI? This is regurgitated Wikipedia here, so if this is moonman talk, please be kind.
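For readers wanting to see concretely what obeying versus violating the Bell inequality means, here is a sketch of the standard CHSH version (this is the textbook setup, not anything specific to the classical-EM models discussed above): enumerating all deterministic local strategies confirms the classical bound of 2, while the quantum singlet-state correlator E(a,b) = −cos(a−b) reaches 2√2.

```python
import itertools
import math

# CHSH quantity: S = E(a,b) + E(a,b') + E(a',b) - E(a',b').
# A deterministic local strategy assigns a fixed outcome +/-1 to each of
# Alice's two settings (a0, a1) and Bob's two settings (b0, b1).
best = 0
for a0, a1, b0, b1 in itertools.product([-1, 1], repeat=4):
    S = a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
    best = max(best, abs(S))

print(best)        # the classical (local-realistic) bound
assert best == 2

# Quantum mechanics on the singlet state gives E(a, b) = -cos(a - b);
# with suitable measurement angles S reaches 2*sqrt(2) (Tsirelson's bound).
def E(a, b):
    return -math.cos(a - b)

a0, a1 = 0.0, math.pi / 2
b0, b1 = math.pi / 4, -math.pi / 4
S_quantum = abs(E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1))
assert abs(S_quantum - 2 * math.sqrt(2)) < 1e-9
```

Mixtures of deterministic strategies cannot exceed the maximum over the pure ones, which is why the bound of 2 applies to any local hidden-variable model.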

125. Renato Renner Says:

Scott,

I am afraid that it will look as if I’m hijacking your blog if I continue answering the many comments. But I appreciate the variety of reactions that have appeared here, ranging from something like “it has already been done” (either by Wigner or by Hardy) to “it’s really not surprising to find that there are limits to the application of physics” and to “it is fundamentally flawed”. 😉

For now it’s probably better if I remain silent until someone manages to not merely claim the existence of a flaw but to also localise it (which I think is not too much to ask, for our argument is described step-by-step in the article). All the more because I do not know how to reasonably reply to indirect allegations of the type “I have a simplified version of your thought experiment where no contradiction arises, so your argument must be flawed somewhere”.

Anyway, my conclusion so far (on the superficial level of the blog’s title) is: Yes, it is hard to think after someone Hadamarded your brain, but certainly not before!

126. Andrei Says:

ppnl #123:

“But infinite precision is a chimera. That would imply infinite information density and your computer would collapse into a black hole dragging the rest of the universe with it. The universe contains a finite number of atoms, finite energy and finite space. As a result it also contains a finite amount of information. This prevents any infinite precision analog computer. Else we would have to junk 99% of all we think we know about physics. There is no simple fix to get around this.”

I disagree. Nature works with infinite precision because space and time are continuous. This is true in QM and it is also true in general relativity. The magnitudes of fields, the locations of particles, etc. are real numbers with an infinite number of digits. If you introduce a discretisation you run into problems, like violations of symmetries and conservation laws. This is a significant problem for ‘t Hooft’s CA model, and he is aware of it. It might be that a solution to this problem exists, but, regardless, our current understanding suggests space-time is continuous.

GR does not predict that a black hole appears any time the distance between two objects is not represented by a rational number, so it is clear that there has to be a mistake in your reasoning. My guess is that the mistake originates in a confusion between the information contained in the system itself and the information that can be extracted by an experimental procedure. For example, if you want to measure the location of a particle with infinite accuracy you need a photon of zero wavelength, or infinite frequency, and such a photon would carry an infinite amount of energy. It is that energy that creates the black hole.

So, a classical, analog computer “computes” with infinite precision, and the only limit is imposed by our ability to measure the initial state (input) and the final state (output).

“A quantum computer changes the definition of information and computation in a way that probably allows new algorithms that are exponentially faster on some problems than any possible classical algorithm.”

OK, let’s be precise about what “classical” means here. For the purpose of our discussion a classical theory is a local and realistic theory. Classical electromagnetism, or general relativity are examples of such theories. Now, do you have a proof for your above statement? Can you prove that a computer that is based on local, realistic physics cannot achieve the same performance as a quantum one?

127. Andrei Says:

bcg #124:

“The probability of non-quantized spin being exactly +1/2, or exactly -1/2, is zero. Forget violating BI; how can classical electrodynamics even *obey* BI?”

Quantization might be achieved classically as well.

Please take a look at this paper:

Emergence of quantization: the spin of the electron (A M Cetto1, L de la Peña2 and A Valdés-Hernández) – J. Phys.: Conf. Ser. 504 012007

The article can be read here:

http://iopscience.iop.org/article/10.1088/1742-6596/504/1/012007/pdf

128. fred Says:

ppnl #123

“But infinite precision is a chimera. That would imply infinite information density and your computer would collapse into a black hole dragging the rest of the universe with it. The universe contains a finite number of atoms, finite energy and finite space. As a result it also contains a finite amount of information.”

But information density, holographic principle, Hawking radiation… that’s still all entirely speculative, no?
The only thing we’ve ever observed (indirectly) about black holes is their gravity effects (orbits of neighboring stars and gravity waves), and the quantization of space is a difficult nut to crack (https://en.wikipedia.org/wiki/Doubly_special_relativity)

Dear Professor Renner:

With respect to your “it’s probably better if I remain silent until someone manages to not merely claim the existence of a flaw but to also localise it”, I’m not sure that it’s a flaw, but I would like to ask your reaction to this response:

Frauchiger-Renner write: “At time n:01 \bar{F} observes tails and thinks: Statement \bar{F}^{n:02}: ‘I am certain that W will observe w = fail at time n:31.'”

In fact, even though at time n:01 \bar{F} observes tails, \bar{F} is not certain that W will observe w = fail at time n:31, for in fact it is not certain that W will observe w = fail at time n:31.

What \bar{F} does think, using quantum mechanics, is, rather, this:

>I have just made a measurement and observed tails. If this measurement of mine were to have branched the multiverse—decohered the state—then I would be certain that W will observe w = fail at time n:31. But I know, looking forward into the future, that my measurement has not branched the universe—decohered the state—or, rather, I know that my brain will be Hadamarded in the future to reconstitute the state, so that nothing will remain in the universe of my measurement of “tails” other than the fact that everyone will agree that I made it.

>Because I know quantum mechanics, I know that the reconstitution of the state |\psi> requires that my reasoning now, pre-Hadamard, take account not just of what I have observed but of what my ghostly self on the branch that observed heads observed, just as their reasoning must take account not just of what they have observed but of what they see as the ghostly me on this branch that observed tails observed. And when I take account of that, properly using quantum mechanics, I see that even though I know—right now—that the wave function is |0+>, quantum interference with |1+> + |1-> raises the possibility that W will not observe w = fail at time n:31.

But I am not sure whether I have identified a flaw in your argument. Perhaps I have merely understood what your argument is…

Yours,

Dear Ahron:

With respect to “On the other hand, \bar{F} is justified, without additional assumptions, to conclude that ‘if I ever see W’s result, it will be w=fail…’”

Should that be:

>\bar{F} is justified, without additional assumptions, to conclude that “if any version of me that retains a memory or any other signal of the fact that I measured tails ever sees W’s result, it will be w=fail…”

?

Yours,

131. Andrei Says:

Renato Renner #125:

“For now it’s probably better if I remain silent until someone manages to not merely claim the existence of a flaw but to also localise it (which I think is not too much to ask, for our argument is described step-by-step in the article).”

OK, let me try.

You write:

“One observer, called agent F, measures the vertical polarisation z of a spin one-half particle”

and

“The other observer, agent W, has no direct access to the outcome z observed by his friend F. Agent W could instead model agent F’s lab as a big quantum system”

The assumption here is that it is possible to shield the contents of the lab from an observer situated outside the lab. My question is simple: how would you do it? How would you build a lab so that no information about the interior can leak outside? More to the point, how can you stop the gravitational, electric, or magnetic fields that are correlated with the contents of the lab from being measured from outside?

Thanks!

132. Jochen Says:

Renato Renner #125:
“For now it’s probably better if I remain silent until someone manages to not merely claim the existence of a flaw but to also localise it”

Let me just say that I really appreciate your willingness to comment on your work online, but I also understand if it gets a bit… tiresome. Also, even though I don’t think I quite agree with the conclusions of your paper, that doesn’t mean I think it’s not worthwhile—far from it, I think it already has stimulated and galvanized much thinking on the matter, and will continue to do so, which is, to me, definitely a good thing. As the (alleged) Bohr quote goes, the opposite of a profound truth may well be another profound truth.

That said, to me, it’s becoming more and more clear exactly where I disagree, and I’m gonna use the benefit of having access to a keyboard again to try and explain myself as best I can (for whatever it’s worth). Perhaps I’ll later try and write up something more formal, if I feel my argument contributes anything to the discussion.

First of all, I’m gonna agree with you that there is indeed a contradiction that arises from your assumptions Q, S, and C. However, I disagree that assumption Q reasonably follows from using quantum theory.

To localize my disagreement, I do not think that Fbar(n:02) is a statement that Fbar can reasonably make. The reason for this is, essentially, that the inference from observing ‘tails’ to the total state being |t>(1/sqrt(2)(|-1/2> + |1/2>)) does not follow.

To break this down, Fbar has two lines of reasoning available to them:

A: “I have observed ‘tails’; hence, I prepare the state 1/sqrt(2)(|-1/2> + |1/2>). Nothing Wbar does changes anything about this, as the total state is the tensor product |tails>[1/sqrt(2)(|-1/2> + |1/2>)]. W’s measurement hence yields ‘fail’, as |ok> = 1/sqrt(2)(|-1/2> – |1/2>) is orthogonal to the state reduced to Lbar.”

B: “I have observed ‘tails’. However, I know that the coin was prepared in the state |init> = 1/sqrt(3)|heads> + sqrt(2/3)|tails>. Hence, the total state now is |Ψ> = 1/sqrt(3)|heads>|-1/2> + sqrt(2/3)|tails>1/sqrt(2)[|-1/2> + |1/2>] = 1/sqrt(3)(|heads>|-1/2> + |tails>|-1/2> + |tails>|1/2>). Written in the {|okbar>,|failbar>} basis, this is |Ψ> = 2/sqrt(6)|failbar>|-1/2> + 1/sqrt(6)|failbar>|1/2> – 1/sqrt(6)|okbar>|1/2>. This is an entangled state.

After Wbar observes the outcome ‘okbar’, the total state will be |okbar>|1/2> = 1/sqrt(2)[|heads>|1/2> – |tails>|1/2>]. But this isn’t orthogonal to |ok>, so W may obtain w = ok.”
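The algebra in line of reasoning B can be checked numerically. A minimal sketch treating the coin and the spin as two qubits, under the sign conventions implied by the amplitudes quoted above (|okbar〉 = (|heads〉 − |tails〉)/√2 and |ok〉 = (|−1/2〉 − |1/2〉)/√2 are assumed conventions):

```python
import numpy as np

# Coin basis: heads = [1,0], tails = [0,1].
# Spin basis: minus = |-1/2> = [1,0], plus = |1/2> = [0,1].
heads, tails = np.array([1.0, 0.0]), np.array([0.0, 1.0])
minus, plus = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Total state from reasoning B:
# |Psi> = 1/sqrt(3) (|heads>|-1/2> + |tails>|-1/2> + |tails>|1/2>)
psi = (np.kron(heads, minus) + np.kron(tails, minus)
       + np.kron(tails, plus)) / np.sqrt(3)

okbar = (heads - tails) / np.sqrt(2)   # Wbar's 'okbar' outcome on the coin
failbar = (heads + tails) / np.sqrt(2)
ok = (minus - plus) / np.sqrt(2)       # W's 'ok' outcome on the spin

# The amplitudes in the {okbar, failbar} basis match the ones quoted above.
assert np.isclose(np.kron(failbar, minus) @ psi, 2 / np.sqrt(6))
assert np.isclose(np.kron(failbar, plus) @ psi, 1 / np.sqrt(6))
assert np.isclose(np.kron(okbar, plus) @ psi, -1 / np.sqrt(6))

# After Wbar obtains 'okbar', the post-measurement state is |okbar>|1/2>,
# whose spin factor is NOT orthogonal to |ok>: W may see w = ok.
post = np.kron(okbar, plus)
assert not np.isclose(np.kron(okbar, ok) @ post, 0.0)

# By contrast, reasoning A's product state |tails>(|-1/2>+|1/2>)/sqrt(2)
# IS orthogonal to every state whose spin factor is |ok>.
state_A = np.kron(tails, (minus + plus) / np.sqrt(2))
spin_overlap = state_A.reshape(2, 2) @ ok  # project the spin factor onto |ok>
assert np.allclose(spin_overlap, 0.0)
```

So the entangled state (reasoning B) assigns probability 1/2 to w = ok given Wbar's ‘okbar’, while the product state (reasoning A) assigns it probability 0, which is exactly the discrepancy under discussion.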

It seems to me that the preferable line of argumentation is B. For one, it has the advantage that it yields the right conclusion, and Fbar must know this in the same way that we do. But more importantly, it doesn’t make the (IMO) unjustified assumption that just because Fbar has made an observation, they can conclude what the total state is for another observer.

Because in the end, this is a self-measurement: |heads> and |tails> include Fbar’s observation of the coin as ‘heads’ or ‘tails’, and while there’s originally no component |heads>|1/2>, this is there after Wbar’s measurement. So, if we were to tell Fbar after the measurement of W that the outcome was ‘ok’, they’d say: “Of course, that’s very well possible; after all, I observed ‘heads’ at the start!”. So in that sense, no contradiction ever arises—even though one may squirm a little at the ‘undoing’ of Fbar’s knowledge/measurement result.

But this isn’t even something unique to quantum mechanics: even classically, given the power of re-wiring your brain, I can make you believe that a coin came up ‘heads’ even if you observed it coming up ‘tails’. “I shall then suppose, not that God who is supremely good and the fountain of truth, but some evil genius not less powerful than deceitful, has employed his whole energies in deceiving me…”

In the general case, we can’t assume Fbar knowing the initial state of the coin, so this seems to prohibit the argumentation B. But then, it doesn’t follow that A is the right way to think. Rather, Fbar ought to reason that they have no way of knowing whether the state, after preparing the spin, is |tails>[1/sqrt(2)(|-1/2> + |1/2>)]: there could well be entanglement between the state of the coin (and consequently, the state of Fbar after observing the coin) and the state of the spin. And by Breuer’s argumentation I have cited above, Fbar cannot decide whether that’s the case. Consequently, they aren’t entitled to derive any conclusions regarding the total state; but this, likewise, blocks the inference of Fbar(n:02).

This may seem a troubling conclusion in itself, since it implies that there are some things about the total state of the universe (if it’s meaningful to speak of such an entity) that we can’t know; but after all, we can at least derive our own future experiences, and, at least according to what we ever can get to know, this won’t yield any contradictions.

133. fred Says:

Jochen #132

“This may seem a troubling conclusion in itself, since it implies that there are some things about the total state of the universe (if it’s meaningful to speak of such an entity) that we can’t know”

Why is this troubling or even surprising?
By definition, isn’t the universe mostly made of “stuff” we can’t fully resolve?

– The part can’t contain (and know) the whole.
– Perfect knowledge of something is duplication, and the no-cloning theorem says it can’t be done.
– Short of duplicating a system, we can probe it, and such action is always destructive.
– The instrument and the object studied can’t be truly independent (for every action there is a reaction), so knowing everything about an object would require knowing everything about yourself perfectly as well.

The only time we are truly in control is when we’re talking about bits, because they’re relative to us, we made them, they live entirely in our minds!

134. Ahron Maline Says:

Renato Renner #110

Sorry for the delayed response, especially now that you want to withdraw from the discussion. Nevertheless, I’d like to try to further clarify my point.
What you called “Q-no-Hadamard” is indeed a fair description of the “bottom line” as to when “QM predictions” based on observations are expected to be correct. That is why there is no paradox, and I suspect that Scott intended something similar.
However, I am making a stronger statement: I believe that your work, which is framed as establishing a new no-go combination of assumptions, does not in fact do so. Your assumption Q is not a sensible position to take, regardless of the other two, and so ruling it out teaches us little.
If I understand correctly, assumption Q states that if one observes a particular classical reality, he may take the corresponding quantum state and use it to make predictions (at least when the probability is 1), and these predictions will be satisfied by future observations of any possible observer. In our case, Fbar is using his observation r=tails to predict W’s observation as w=fail.
But there is no sensible reason that this should work! Wigner’s friend experiment presents us with a clear dichotomy. Either:
A. Observations made, at least by humans, represent the unequivocal reality, with all other possibilities reduced to “might have been”. This is held by the Copenhagen Interpretation, among others. Wigner pointed out that this implies QM will fail if used by an observer outside the lab, and Deutsch spelled out the failure in detail.
So if A is true, assumption Q is “reasonable”, but known to be false. The other possibility is:
B. Even after an observation has been made, the other possibilities continue to be “elements of reality” in some form. This may be as “parallel worlds” as in MWI, as “parts of the wavefunction that are empty of Bohmian particles” in dBB, or others. If this is the case, then assumption Q is really quite unreasonable. Why would it make sense for Fbar to ignore the r=heads “branch” when it is understood to be “real”?
So assumption Q is not something anyone holds, and could not have been.
Now it may be asked: according to option B, which is quite popular, how can we ever use QM at all? There are always likely to be countless “branches” different from anything we can know about, so how do we ever make predictions?
The most common answer to that is decoherence: the assumption that states that are “classically different” will remain too far separated in Hilbert space for any interference to ever be observed. This indeed justifies QM predictions, but of course it does not apply to scenarios where observers are measured in a Hadamard basis.
And somewhat more generally, what is required for using QM is something like “Q-no-Hadamard”: Predictions made from an observation are valid within the “world” that “remembers” that observation, but not for a world where the observation has been “Hadamarded away”.

135. David Byrden Says:

Thank you for your lucid and helpful explanation of the Renner-Frauchiger puzzle.
It’s immediately obvious that the essence of their puzzle is contained in the 1992 Hardy paradox.

And, by thinking about that in a geometric way (as vectors in 4 dimensional Hilbert space) I was able to see what is really going on.

It’s just like Asher Peres said; unperformed measurements have no results. In the {|+〉, |-〉} basis a qubit CANNOT be read as 1 or 0. The apparent “paradox” results from illegally combining facts from different measurement bases.

136. Ian Says:

Hi Scott,

I was wondering if you had any final thoughts on Renato’s comments here? It seems the two camps (there is a paradox / there is no paradox) are still solidly separated…

137. Scott Says:

Ian #136: Yes, my final thought is still that there’s no paradox, 🙂 for the reasons David Byrden #135 says. I can’t completely localize what I consider the illegal step within the exact language that Frauchiger and Renner use, and I think it would be very worthwhile for someone to take the effort to write a response paper spelling it all out (which I expect will happen). But I can certainly state in broad terms what I consider the issue to be (and did).

138. David Byrden Says:

Let me spell out what I visualised.

The system has 4 physical states (combinations of qubit values), so there are 4 basis vectors spanning its Hilbert space.

The initial system state is a vector equidistant from three of those basis vectors, and orthogonal to the fourth, which is |11〉. As you wrote, “|ψ〉 has no |11〉 component”

But Alice and Bob intend to measure in a different basis, namely { |+〉,|-〉} for each qubit.

In that basis, the state vector is quite close to |++〉 and equidistant from the other three vectors, which include |––〉. Don’t try to visualise this unless you are four-dimensional!

Now, the trouble starts. As you wrote:
“… conditioned on Alice’s qubit being in the state |0〉… ”

Alice’s qubit will read |0〉 only if you measure it. This collapses the state vector into the plane spanned by |00〉 and |01〉.
(Don’t forget to renormalise.)

In its new position, the state vector is orthogonal to |−−〉. So of course you cannot get there any more.

So, in conclusion, when you say “suppose Alice’s qubit were zero” you are really saying “suppose the system were in a different state”.

But Alice’s qubit is NOT zero. Bob’s qubit is NOT zero. The state vector is pointing out between their ones and zeros, to a spot from which it can collapse into |−−〉.
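Here’s a minimal numerical sketch of the above (my own check; the basis ordering |00〉, |01〉, |10〉, |11〉 is an assumption of the sketch, as is using bare qubits rather than whole labs):

```python
import numpy as np

# Basis ordering: |00>, |01>, |10>, |11>
psi = np.array([1.0, 1.0, 1.0, 0.0]) / np.sqrt(3)   # Hardy state: no |11> component

minus = np.array([1.0, -1.0]) / np.sqrt(2)          # single-qubit |->
mm = np.kron(minus, minus)                          # the |--> direction

# From the actual state, the |--> outcome is reachable:
print(abs(mm @ psi) ** 2)                           # probability 1/12

# Condition on Alice's qubit reading |0>: collapse onto span{|00>, |01>}
collapsed = psi.copy()
collapsed[2:] = 0.0
collapsed /= np.linalg.norm(collapsed)              # don't forget to renormalise

print(abs(mm @ collapsed) ** 2)                     # 0: the collapsed state is orthogonal to |-->
```

So “suppose Alice’s qubit were zero” really does move the state to one from which |−−〉 can no longer be reached.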

139. sf Says:

Scott #137
It would only make sense to “localize … the illegal step” if you are sure that this is being done in a formally consistent system. But, granting that the precise hypotheses of their article do formally lead to a contradiction, this wouldn’t be the way to resolve the Frauchiger-Renner ‘paradox’; their system would be inconsistent. So which CONSISTENT logical system should one be working in?

The issue is more likely whether the inconsistency can be ‘localized’ to a smaller set of hypotheses, including some implicit hypotheses. (You suggest some hidden hypothesis in the post; I also made some attempt at this in #75, and was then convinced their system is inconsistent.) There is no objective way to say which hypothesis of an inconsistent system doesn’t belong, unless one passes to a larger context, i.e. working outside the formal system in question.

This is also why I suggested Feynman diagrams in #119 as a potentially preferable context. (The divergences of Feynman integrals would have to be avoided, but that isn’t so relevant here.)

140. David Byrden Says:

Feynman diagrams? There’s no need for anything like that. This is a simple, basic, comprehensible “spot the mistake” puzzle.

The mistake is: Renner and Frauchiger put their system into a state, then they collect some statements which would each be true in *other* states. Those statements are not true in the *actual* system state.

Finally they combine the statements, as if they were all true at once, which they are not. It’s like saying “I have just enough money for one beer. I have just enough money for a sandwich. So tonight, I will dine on a beer and sandwich”.

141. David Byrden Says:

DarMM #50 :

> “If \bar{F} gets tails, then he knows … the L lab will evolve into the fail state.”

That’s true.
But, in more detail; he knows that F will evolve into a superposition of z=+1/2 and z=-1/2.
But, the laboratory L contains a quantum device that reads the qubit and passes the results to W without causing decoherence.
The human being within that lab will read the device and decohere, but no trace of him will emerge from the perfectly sealed lab.
So, the two superposed versions of data emerging from lab L will combine, and W will measure “fail”.

> “If agent F measures z=+1/2, F can conclude that \bar{F} knows r=tails.”

That is true.

> “F himself would think that since he sees z=+1/2 he and his lab are not in a determined state in the fail,okay basis.”

F would be right about that.

> “From that he would think W could get either fail or okay.”

No!
F knows that he’s in a superposition. There are two of him, seeing different data.
And the lab equipment creates constructive interference between his data and his doppelganger’s data.
They will both contribute to W’s measurement, resulting in “fail”.

———–

Here’s another example of this phenomenon. Imagine a Young’s slit experiment. You are speaking to the photon:

“Hey, you just went through the Left slit, didn’t you?”
“Sure did!”
“Well, there’s a version of you who went to the Right.”
“I know, he voted for Trump. We’re not on speaking terms.”
“I have good news. You’re about to meet up and work together!”
“What? Why?”

———–

So, in conclusion; the agents in the labs should remember that they themselves are in superpositions, because those superpositions will come into play when the external agents measure everything.

142. sf Says:

David Byrden #140
Basically I agree with you, and the idea is to start with the approach you use. But to respond to the challenge that Renato and Scott were discussing, you don’t get the option to finish things up that way. You have to either isolate the mistake within the formalism as Renner and Frauchiger framed it (with agents ‘knowing’ and inferring things in their ‘partial states’, which aren’t quite quantum or classical states), or you have to say specifically where their setup just doesn’t make sense. Overall no one claims it makes sense, since it gives a paradox; the issue is whether this comes from the global setup or from some more localizable conflict, i.e. whether there’s anything interesting and deep that’s new about the paradox, or just the usual incompatibility of quantum and classical physics as theories of everything, presented in a slightly different way. I completely agree that you can be happy to stop where you do, but this doesn’t satisfy Renato.

I only meant to use a very rudimentary form of Feynman diagrams; basically a branching Many-worlds interpretation, since one wouldn’t really want to translate things to set theory, but one needs some consistent framework to work in.

143. David Byrden Says:

Ah, now I see what you intended with the Feynman diagram.
Yes, that’s a good idea. So I tried it.
There are 6 branches.
Everything is consistent until the final step of the experiment, when Agent W makes his measurement in a different basis to the one that everybody’s been using.

144. DarMM Says:

Hi David,

Thanks for the response. I see what you mean, but by considering himself in superposition F reaches the conclusion that W can never observe “ok”, which seems to be in contradiction to predictions based purely on the quantum state where the chance should be 1/12 (which is the result observed in the Hardy paradox case as Scott mentioned).

It would seem to me that the agent is simply wrong not to take his own measurement and collapse into account (from his perspective; I’m not saying this implies MWI is wrong), and shouldn’t reason based on viewing himself in superposition.

In fact he seems to be composing facts from a collapse standpoint and a no-collapse standpoint.

Collapse to reason that \bar{F} got tails.
Then no collapse to reason that he should adopt \bar{F}’s view of him.

It’s only with these composed that he obtains W = fail always, in contradiction with the 1/12 prediction from the actual quantum state.

145. DarMM Says:

David,

Sorry scratch the first part of that. Concluding W cannot get “ok” is fine for the reasons Scott mentioned in the main post. At that point W would get fail. It’s not until \bar{W}’s measurement that this conclusion is invalidated.

However I’d like to see an interpretation-neutral take on F’s two different conclusions. Yours requires something like Many-Worlds.

It seems F could either consider him and \bar{F} to be in the state:
|tails,fail>
or the state:
|tails,1>

and both give different predictions for W.

Now \bar{F} has P(W = fail) = 1, certain. So it seems F would need to adopt the superposition view of himself for superobservers who can make measurements on his lab at the atomic scale.

Many-Worlds sees this as because there are two F observers. However not all interpretations view superposition like this.

146. David Byrden Says:

sf #142

So I should specify exactly where the paper contains errors. All right, here’s the first error as an example:

Refer to Table 3. Its first entry reads :

/F observes r = tails at time n:01

Now, this agent is in superposition because she could have observed either “heads” or “tails”. In the “many worlds” interpretation, there are two of her in separate worlds. This entry describes only the “tails” copy of agent /F.

She knows this too. But the paper claims that she will use Quantum Mechanics when thinking about things.

Her deduction is:

“I am certain that W will observe w = fail at time n:31.”

From her point of view, this seems a foregone conclusion. She will send a qubit to agent F, in an equal superposition of “up” and “down”. That will put agent F into a superposition. Then agent W will measure agent F, and one of his measurement vectors will coincide exactly with the state vector of agent F. The result of “fail” seems inevitable.

But she’s doing it wrongly.

She’s ignoring the other copy of herself, in the parallel “heads” world. That other copy of agent /F will also pass a qubit to agent F. The two qubits may come from separate worlds, but they are coherent and they will combine (like the two paths in a Young’s Slit experiment). Agent F will receive a qubit containing information from BOTH the “heads” and “tails” worlds. The proportions will NOT be fifty-fifty. Agent /F made a wrong calculation.

She did not use Quantum Mechanics properly, as claimed.

That’s it, in a nutshell. I could prove it with algebra, I could draw it in 4 dimensional Hilbert space, but I think you have the idea now.
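To make this concrete, here is a toy check (my own simplification: the coin register stands in for lab /L and the transferred qubit for lab L, which is far coarser than the paper’s full setup):

```python
import numpy as np

s2, s3 = np.sqrt(2), np.sqrt(3)

# Coin register {|h>, |t>} tensor qubit register {|down>, |up>}
# Ordering: |h,down>, |h,up>, |t,down>, |t,up>
# Heads sends |down>; tails sends (|down> + |up>)/sqrt(2)
psi = np.array([1.0, 0.0, 1.0, 1.0]) / s3

ok = np.array([1.0, -1.0]) / s2          # W's "ok" direction on the qubit side

# Agent /F reasons inside her "tails" branch only:
tails_branch = np.array([1.0, 1.0]) / s2
print(abs(ok @ tails_branch) ** 2)       # 0: from inside the branch, w = fail looks certain

# But the state W actually measures carries both branches:
amp_ok = psi.reshape(2, 2) @ ok          # project the qubit register onto |ok>
print(np.sum(abs(amp_ok) ** 2))          # 1/6: "ok" is not impossible after all
```

The heads branch contributes an |ok〉 component that the tails-branch reasoning throws away; that is the wrong calculation in a formula.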

147. DarMM Says:

I’m a dumbass; the three-observer case I’m asking about is nothing more than the usual oddness of running interference experiments on other observers that one encounters in the usual expanded discussions of Wigner’s friend.

148. David Byrden Says:

So, I’m putting my refutation of this paper online at;

http://byrden.com/quantum/consistency.html

If there is interest, I will flesh it out. (For example, you might wonder what happens if the labs are unsealed.)

David

149. sf Says:

David Byrden #146

Thanks. What you say looks right to me, but I’m hoping to have a strong coffee and look at the F-R (Frauchiger-Renner) paper a bit harder soon and possibly play devil’s advocate a bit, just to see if there are still issues of interpretation to be settled. Also, in #75 above, I had some vague hunch related to your point. I’m a bit worried that there are ‘straw men’ littering the terrain; maybe we’re taking the paper to be claiming more than it really does, or maybe F-R is even attacking viewpoints that nobody really holds on unlimited use of QM.

In the meantime I noticed that
https://fqxi.org
features a link to a more recent Renner project
https://fqxi.org/community/articles/display/231
Dissolving Quantum Paradoxes – The impossibility of building a perfect clock could help explain away microscale weirdness.

The project coauthor, Lídia del Rio, also nicely presents F-R in a video:

https://www.perimeterinstitute.ca/videos/journal-club-frauchiger-renner-no-go-theorem-single-world-interpretations-quantum-theory

I had wondered a bit whether there is any issue with granting the agents in F-R access to synchronized clocks. But probably this is just a technical point.

A similar issue is that agents seem to have a way of calibrating their Hilbert space bases; eg the same spin up/down test is shared by the independent labs L, /L. But a priori there shouldn’t be any relation of these bases, especially because F-R assert that labs L, /L are in pure states to start. If they calibrate, then they are entangled. This could be resolved by building in the basis at the outset, but that may have other problems, depending on which interpretation one works in.

150. David Byrden Says:

> “agents seem to have a way of calibrating their Hilbert space bases; eg the same spin up/down test is shared by the independent labs L, /L.”

That’s true, but I thought it would be trivial to do that?

e.g. if the polarisation of a photon is the shared qubit, then it’s only necessary to have the measuring devices set up with their measurement axes parallel. No special entanglement is needed. I’m sure I’ve seen that done in various quantum experiments.

Am I missing something there?

151. sf Says:

>I’m sure I’ve seen that done in various quantum experiments.

But the classical measuring devices there are set up using CLASSICAL INTERACTIONS to align them, with measurement axes parallel. Here we have 2 labs that are strongly isolated from each other, as quantum systems. So this ‘classical interaction’ is not available.

In fact there could be pretty arbitrary unitary operators on the whole quantum computer/lab interfering with alignment after some time passes, insofar as quantum mechanics makes it hard to rule out some initial drift/jitter. There might be a pretty deep problem with getting 2 QM-isolated labs to interact in any meaningful way. I wonder if this has been discussed anywhere?

152. David Byrden Says:

Ah, yes, I see. An interesting point. So, we align the measuring devices, then seal them inside “labs”. Will they be aware if they accidentally rotate?

Well, under SR, your orientation in space is an absolute reference (but not your “speed” or position). So, surely you’d notice if you were rotating? Even when isolated? Because there’s spacetime in the box with you.
Or, when we build a perfectly isolating “lab”, would it cut you off from the absolute angles of spacetime?

Which prompts a more fundamental question:
This experiment, like Schrodinger’s Cat, requires the “box” or “lab” to perfectly block any information coming out from the inside.
But, wouldn’t both experiments work if information could enter from the outside?

153. sf Says:

David Byrden #152
There’s still a lot to try and clarify here. What seems OK so far is that calibration between labs would involve entanglement, but also that in an orthodox Copenhagen context, a physicist would prepare the 2 (or more) labs so that axes would be aligned for some finite time experiment. Then he/she can apply Born’s rule using that alignment. But I’m not sure the lab agents F,/F, can ‘know’ this; they don’t benefit from the Copenhagen version of Born’s rule, not having prepared this state themselves.

Your SR point is right (and analogously in Newtonian/Galilean contexts) but there’s a difference between ‘orientation in space is an absolute reference’ and giving access to measuring such absolute orientation to agents in a lab. This is analogous to the issue of time; QM assumes that time-like slices are well-defined, but this doesn’t mean that entities/agents in an experiment can read off this time from some absolute clocks. There’s even an uncertainty principle for time that obstructs this (and more subtle issues of interpretation involved there).

I don’t know yet if F-R mention any of this, still have to get back to the article.

The lack of absolute reference for “speed” or position may be enough to rule out access to measuring absolute orientation for lab agents; the (macro but quantum) lab apparatus can tilt by the uncertainty principle for the position of ‘one end’. This may involve one lab implicitly measuring the other’s position though.

>wouldn’t both experiments work if information could enter from the outside?
If it’s coming from one lab to the other, which is all that matters here, then there’s still decoherence or entanglement I guess.

154. Benjamin Says:

Renato Renner #125

I don’t think my ideas are new to this thread, but I frame my objections as an error in your Table 3, rather than “identifying and rejecting an additional unstated postulate” as Scott claims he is doing.

F’^{n:02} should really read “I am certain that W will observe w=fail after n:11 if I still know the S measurement in the |x> basis.”

F measures in the |z> basis. This alone does not matter to F’, since F’ does not have access to the F measurement. So at this point (t >= n:11) the F’ statement above still holds: a W measurement will yield w=fail. This is not inconsistent with F at this point either.

If W’ makes a measurement before W, and F measured |+z>, then W’ must find w’=ok’ (assuming consistency (C)). This is again consistent with both F’ and F. However, now F’ cannot know that S is in the |+x> state. Although not a direct measurement on the transferred bit, the W’ measurement fixes S=|+z>. So F’ can no longer be confident that W will observe w=fail — that conclusion was drawn from the knowledge that S=|+x>.

This is equivalent to Scott’s objection on the grounds of a Hadamard application to the brain, but I think it is helpful to think about what that actually means in terms of the thought process of F’. Before the W’ measurement (which F’ must be aware of), F’ knows S=|+x> because he made it that way. But F’ also knows the Stern-Gerlach experiment, and knows that if S is measured in the |z> basis, the S=|+x> knowledge is lost.

Now in this thought experiment we already know that F measures S=|+z>, but F’ does not know that so still has confidence that S=|+x>. The important step for F’ is the W’ measurement of w’=ok’. As you state in equation (6) of your paper, the w’=ok’ measurement implies S=|+z>. This is not inconsistent with anything any of our observers have seen so far, but it does mean that F’ loses confidence about S=|+x>. This is not inconsistent with (Q) or (S), but is inconsistent with the idea that labs L and L’ are independent, and I think that is where some of the confusion lies. The S bit, and {F’}’s knowledge of it, are entangled with the L’ lab since F’ prepared the bit and sent it to F. That step alone doesn’t cause concern — it’s the same as Schrodinger preparing his poison-release contraption before he puts it in the cat box. But when W’ measures w’=ok’, he is implicitly making a measurement on that bit in the L lab, even if he doesn’t actually “open the box”.

I’ll end with a final sanity check to show that F also has a consistent view with the other observers. F, being isolated from F’ and W’, is unaware of the W’ measurement so we are tempted to still see an inconsistency from statement F^{n:14}. But that statement must also be altered in the vein of F’^{n:02}, as it was deduced from an application of (C) to F’. So it is not accurate for F to think “I am certain that W will observe w = fail at time n:31.” Rather, F can only say “If W observes w=ok, then F’ must not know the state of S in the |x> basis, and none of us could have known for certain what W would observe.” A simple calculation shows that the probability of this from the F frame yields the expected 1/12.
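A sketch of that sanity check, under the same toy compression of the two labs to a coin register (L’) and a spin register (L) that ignores everything else inside the labs (my assumption, not the paper’s full Hilbert space):

```python
import numpy as np

s2, s3 = np.sqrt(2), np.sqrt(3)

# Lab L' (coin) tensor lab L (spin): |h,down>, |h,up>, |t,down>, |t,up>
psi = np.array([1.0, 0.0, 1.0, 1.0]) / s3

okbar = np.array([1.0, -1.0]) / s2   # W''s "ok'" direction on lab L'
ok    = np.array([1.0, -1.0]) / s2   # W's  "ok"  direction on lab L

# Probability that W' observes ok' AND W observes ok:
p_both = abs(np.kron(okbar, ok) @ psi) ** 2
print(p_both)                        # 1/12, the outcome the agents' chained reasoning ruled out
```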

155. tez Says:

@ David Byrden #146 – note the original title of the paper:

https://arxiv.org/abs/1604.07422v1

156. David Byrden Says:

I’m analysing this Gedankenexperiment and I have a question. I think I know the answer but I’d like someone who has qualifications in QM to tell me what it is.

Agent /F goes into superposition when she becomes entangled with the randomness generator.

She prepares a qubit which is sent to the other lab.

This qubit is not in her own state, but it’s in a state related by a unitary. We can implement that with a machine on the route connecting her lab to the other lab. In that case, agent /F sends out the qubit in her own quantum state.

So, agent /F is left sitting in her lab, waiting to be measured, in her own quantum state. Meanwhile the qubit is on its way to the other lab, and it’s in the same quantum state.

I think that runs foul of the No Cloning Theorem.

I think that the only way around this is for agent /F to read the randomness generator and deliberately set up the qubit in the appropriate state. I think it’s impossible for the randomness generator’s state to “pass through” undisturbed while the agent reads it.

Why does this matter? Because I want to confirm that reading the state of /F does not collapse the state of F. I want to be clear that they are two distinct states.

157. sf Says:

I’m not at all expert on this, but I think you’re right that there’s an interesting issue there. It would be interesting if you can give your answer in the meantime.

>”I want to be clear that they are two distinct states”
If I understand, you mean that it remains a superposition of two distinct branches or components?

My guess is that when you try to quantize the labs, there has to be some allowance for error to creep in. It’s crucial to have some quantitative bounds on this error, which may be a problem for F-R.

See
https://en.wikipedia.org/wiki/No-cloning_theorem#Imperfect_cloning
Buzek and Hillery showed that a universal cloning machine can make a clone of an unknown state with the surprisingly high fidelity of 5/6.

Also, even to quantize a dynamical system given by polynomials of low degree is problematic, as I recall. The naive approach of using an atomic description/reduction of the labs is not valid because it doesn’t provide an exact description of a Turing machine (or human, or whatever the lab is); the latter is an abstraction insofar as it’s error-free etc., and its states are equivalence classes of physical events that aren’t quite physically defined.

There’s some interesting discussion of classical no-cloning in: