Here’s a really simple example, involving two agents who have a common prior, but nevertheless agree to disagree. The reason they can agree to disagree is that they don’t commonly know that they have a common prior. Examples like this one make me wonder whether the hypotheses of the agreement theorem even apply to ideally rational agents — since a rational agent could fail to know that another person is rational.

There are four worlds — Best, Good, Medium and Bad — and two agents — Alice and Bob. The “Perfect” prior over these worlds is 1/3, 1/6, 1/6, 1/3. The “Imperfect” prior is 1/6, 1/3, 1/3, 1/6. (The first has the 1/6ths on the “inside” worlds Good and Medium, the other has them on the “outside” worlds Best and Bad.)

The agents have different priors at different worlds. If the world is Best or Good, Alice has the Perfect prior. If the world is Medium or Bad, she has the Imperfect prior. Bob always has the Perfect prior. So if the world is Best or Good, then the agents have a common prior.

As usual, the agents also know different things at different worlds. If the world is Best or Good, Alice knows that it’s Best or Good. If the world is Medium or Bad, Alice knows that it’s Medium or Bad. Bob never learns anything.

At every state — even the ones where they have a common prior — the agents commonly know that Alice’s posterior in the event {Good, Bad} is 1/3. Meanwhile they commonly know that Bob’s posterior in the event {Good, Bad} is 1/2. So they agree to disagree.
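The arithmetic behind those posteriors can be checked directly. Here is a small sketch (the `posterior` helper is my own illustration, not something from the comment; the world names and priors are as stated above):

```python
from fractions import Fraction as F

# Worlds: Best, Good, Medium, Bad
PERFECT   = {"Best": F(1, 3), "Good": F(1, 6), "Medium": F(1, 6), "Bad": F(1, 3)}
IMPERFECT = {"Best": F(1, 6), "Good": F(1, 3), "Medium": F(1, 3), "Bad": F(1, 6)}

def posterior(prior, information, event):
    """Condition `prior` on the set `information`, then measure `event`."""
    total = sum(prior[w] for w in information)
    return sum(prior[w] for w in information if w in event) / total

event = {"Good", "Bad"}

# Alice: Perfect prior and info {Best, Good} at Best/Good;
#        Imperfect prior and info {Medium, Bad} at Medium/Bad.
alice_high = posterior(PERFECT,   {"Best", "Good"},  event)
alice_low  = posterior(IMPERFECT, {"Medium", "Bad"}, event)

# Bob: Perfect prior, never learns anything (info = all four worlds).
bob = posterior(PERFECT, set(PERFECT), event)

print(alice_high, alice_low, bob)  # 1/3 1/3 1/2
```

Alice's posterior in {Good, Bad} is 1/3 at every world, Bob's is 1/2 at every world, so the disagreement is indeed common knowledge.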

At states Best and Good, the agents in fact have a common prior but they don’t know that they do. We can cook up models where they do know, and know that they know…that they have a common prior, for any finite number of iterations of “know that they know”, but still agree to disagree. In fact we can also give examples where they commonly know that they have the same prior, but still agree to disagree because they don’t commonly know which exact distribution is the prior they share.

The hypotheses of the Agreement theorem can also be weakened in several other realistic ways, and under almost any such weakening the theorem ceases to hold! If you’d like a catalog of such counterexamples, you can check out the paper mentioned in comment 88.

Harvey

]]>Somewhere along the line, years after I was in college, I “internalized” the point that local realism (for instance, a box either has X in it or does not have X in it, be it a marble or a cat or a spin state) works at human scales and is deemed to be part of our “rationality.” But it manifestly DOES NOT WORK at smaller scales (nor at any scales, actually, but at large scales or long times, decoherence dominates).

And when one realizes that it is this QM-style failure of local realism that makes for stable atoms, to name but one of a hundred such examples, one realizes that QM is really more “rational” than the classical world is.

(Another way to put this is that there really are only two choices for the “norm” of reality: the L1 (“Manhattan geometry”) norm or the L2 (“shortcut through the diagonal”) norm. There are reasons why other norms can’t work; Scott’s book alludes to them, and real mathematicians can no doubt elaborate. Both Lucien Hardy and Scott have made this point poignantly (and pointedly), and maybe it’s buried somewhere in John von Neumann’s papers, but this seems to be key. So L1 is classical physics, L2 is quantum physics. There are no other “physical realities.” A really, really bright mathematician/physicist could perhaps have “predicted” a lot of quantum physics 200 years ago, especially with just the results of Young’s diffraction experiments. But the best we had (Newton, Lagrange, Laplace, Maxwell, Einstein, etc.) did not.)
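The L1-versus-L2 contrast can be made concrete with a toy calculation. This is a sketch of the standard argument (my illustration, not the commenter's): under an L1 norm, the “amplitudes” are probabilities and can only add, while under an L2 norm they can be negative and cancel, which is exactly interference.

```python
import math

# Classical (L1): probabilities are nonnegative and sum to 1.
classical = [0.5, 0.5]
assert abs(sum(classical) - 1) < 1e-12

# Quantum (L2): amplitudes may be negative (or complex);
# their squared magnitudes sum to 1.
amps = [1 / math.sqrt(2), -1 / math.sqrt(2)]
assert abs(sum(a * a for a in amps) - 1) < 1e-12

# Interference: two amplitude paths can cancel completely,
# something no L1 (probability) theory allows.
path1, path2 = 1 / math.sqrt(2), -1 / math.sqrt(2)
print(abs(path1 + path2) ** 2)  # → 0.0: destructive interference
```

This cancellation is the kind of thing Young’s two-slit results were already exhibiting.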

The revolution of QM made this much clearer.

(And, in my own opinion, the “second revolution” of Bell, quantum computing, and entanglement, which was largely a realization of implications already laid out by around 1930, has led to all this cool new stuff. Some say it was all already known back in the 1930s, but I think it took a while for people to think through the implications. Even probably-wrong ideas like the AMPS “firewall” theory have served to trigger a huge flurry of papers and theories; oftentimes a “wrong” theory stimulates fresh analysis. Einstein, a critic of QM, was a major contributor just through his ER and EPR papers, two separate analyses.)

Lenny Susskind had a hilarious line at his Stanford lecture several weeks ago. About ER and EPR, “they didn’t call him Einstein for nothing.” The whole lecture hall erupted with laughter.

]]>Great point. A world based on Rationality alone might be a scary place.

Aren’t emotions like sentimentality, attachment etc. at odds with pure rational decision making?

– Assume that Fermat’s Last Theorem is consistent

– Assume that integers a, b, c violate Fermat’s Last Theorem

– Then, since the violation can be checked by finite arithmetic, Fermat’s Last Theorem is inconsistent, a contradiction

Therefore, proving that Fermat’s Last Theorem is consistent would immediately prove Fermat’s Last Theorem.

In the same way, many mathematical statements would be proven immediately as soon as a consistency proof was given; i.e., an independence proof is impossible for these statements, and that’s why there aren’t independence proofs of them. Perhaps many of the unproven statements are actually independent, but we can’t prove that.
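The key step in the argument above is that any claimed counterexample to Fermat's Last Theorem is finitely checkable by pure arithmetic. A minimal sketch (the function name is mine, for illustration):

```python
def violates_flt(a, b, c, n):
    """Return True iff (a, b, c, n) is a counterexample to Fermat's Last Theorem."""
    return n > 2 and a > 0 and b > 0 and c > 0 and a**n + b**n == c**n

# Any claimed counterexample can be verified (or refuted) mechanically:
print(violates_flt(3, 4, 5, 2))  # False: n = 2 is not covered by the theorem
print(violates_flt(3, 4, 5, 3))  # False: 27 + 64 = 91, not 125
```

So if a counterexample existed, its verification would be a proof of the negation; this is what makes the statement’s truth equivalent to its consistency.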

Also, I personally find it striking that the Continuum Hypothesis is independent. If that is not disconcerting, I don’t know what would be.

]]>Actually it just means we need irrationality to make the world work.

]]>I mean, sure, lots of rejection happens in the routing process of scientific publishing, but this sort of work seems very high impact, so long as it is legit.

I mean, it doesn’t even have to be right; it just has to meet the bar of legitimate scientific discourse to get published, when the potential impact is so revolutionary.

]]>Exciting indeed! You lucky man. It pains me to think that I will never properly understand this stuff (I am 62 and not a physicist), but even the simplified versions convey a strong feeling of something both beautiful and fundamental. I’ll keep watching those guys on YouTube…

]]>About hoping that Scott turns his guns towards physics rather than complexity theory: Lenny Susskind said at his recent Stanford lecture that his own main focus in the big program is more complexity theory than some of the other areas.

(I was able to catch lectures by Polchinski, Preskill, Van Raamsdonk, and Susskind at Stanford in the last couple of years. What a great place and time I find myself in.)

A lot of the stuff about tensor networks and “cuts” in sketches of Escher-like diagrams is beyond me, but there seems to be something potentially there that is not just woo-woo.

Exciting times. Complexity theory seems unrelated, except as Scott has hinted at in his book, and utterly impractical (10^89 years, come on!), but then entropy looked similarly difficult to actually consider a century or so ago.
