I’m not totally sure it’s impossible to do better than peers communicating what they know. The first step would be trying to find a point of disagreement even after the shared probability has been agreed upon, maybe by discussing hypothetical scenarios?

Regarding your first point: yes, it’s true that if Bob wants to know all the evidence on which Alice based her opinion, then Alice needs to *communicate* all that evidence to Bob, and vice versa! I don’t see any conceivable way around that. The surprising part—the part that goes against some people’s intuitions—is that if Alice and Bob just want to agree on an answer (and *not* on the underlying evidence), then vastly less communication suffices. I hope I was clear enough in the paper about the distinction.
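One way to see "agreeing without sharing the evidence" in action is the classic Geanakoplos–Polemarchakis announcement dynamics (a simpler cousin of the paper's protocol, not the protocol itself): the parties just take turns stating their current posteriors, each announcement publicly rules out some states, and the posteriors converge—no raw evidence ever changes hands. Here is a toy Python sketch with a uniform prior over a tiny state space; the particular partitions and event are invented for illustration:

```python
from fractions import Fraction

def cell_of(partition, w):
    # the block of `partition` containing state w
    return next(c for c in partition if w in c)

def posterior(event, info):
    # Pr[event | info] under a uniform prior on the states
    return Fraction(len(event & info), len(info))

def agreement_protocol(states, event, part_a, part_b, omega, max_rounds=50):
    """Alice and Bob alternately announce their posterior for `event`;
    each announcement publicly rules out the states in which the speaker
    would have announced a different number.  Stops when two consecutive
    announcements match, i.e. the parties agree."""
    public = set(states)          # states consistent with all announcements so far
    parts = [part_a, part_b]
    history = []
    for r in range(max_rounds):
        speaker = parts[r % 2]
        q = posterior(event, cell_of(speaker, omega) & public)
        history.append(q)
        # everyone learns: "the speaker's posterior was q"
        public = {w for w in public
                  if posterior(event, cell_of(speaker, w) & public) == q}
        if len(history) >= 2 and history[-1] == history[-2]:
            break
    return history

# True state ω = 2; Alice's partition {1,2}|{3,4}, Bob's {1,2,3}|{4}; event E = {1,4}.
hist = agreement_protocol({1, 2, 3, 4}, {1, 4},
                          [{1, 2}, {3, 4}], [{1, 2, 3}, {4}], omega=2)
print(hist)  # the run ends with two equal announcements: agreement
```

Note that the only things transmitted are the probability announcements themselves; neither party ever sends its partition (its "evidence").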

Another way in which your paper deviates from reality: when Alice communicates a value to Bob, in the real world it might be very computationally expensive for Bob to figure out how Alice could possibly have come to that state, and that isn’t factored in as something that can happen. As a general rule, this is what has happened whenever anybody responds to a non-misheard statement with ‘What?’

Your question is a good one, and the answer is that O(1/ε²) bits still suffice for that task, independent of the size of the knowledge bases. I even show that somewhere in my paper, as a lemma along the way to proving convergence between Alice and Bob.
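For intuition about where a 1/ε² might come from (this is just the familiar Monte Carlo/Chernoff sampling rate, not the paper's actual argument): driving the error of an empirical estimate down to ε requires on the order of 1/ε² samples, because the error shrinks like 1/√n. A quick self-contained illustration:

```python
import random
import statistics

def mean_error(p, n, trials=2000, seed=1):
    # average |empirical frequency - p| over many independent repetitions
    rng = random.Random(seed)
    return statistics.mean(
        abs(sum(rng.random() < p for _ in range(n)) / n - p)
        for _ in range(trials))

e_small = mean_error(0.3, 100)
e_large = mean_error(0.3, 400)
# Quadrupling the sample size roughly halves the error (1/sqrt(n) scaling),
# so reaching error epsilon costs on the order of 1/epsilon^2 samples.
```

Crucially, the cost here depends only on the target accuracy, not on how much underlying data generated the distribution being sampled—which is the analogue of the bound being independent of the knowledge-base size.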

What I really want to know is: what is the communication complexity of us coming within epsilon of finding out what an agent who knew our combined sets of information would think? Does it still depend only on epsilon, or does it also depend on the size of our knowledge bases?

Even without that, Bayesian updating will be *very* slow if every piece of evidence invokes its own Bayesian process to determine if the origin of this evidence is credible, if the authors were biased etc.

In other words, Bayesian inference will not work well in a quasi-infinite-dimensional problem space, which includes almost everything, from weather stations and tree rings to stolen emails and Al Gore’s business deals etc.

“Is the oldest child a girl?”

then the two parties agree before they even start: namely, they both agree that

Pr[oldest child is a girl] = 1/2.

(What we *mean* by agreement, in this setting, is just that they assign the same or nearly the same probabilities to the event in question.)
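Concretely, “agreeing before they even start” just means both parties run the same computation over the shared uniform prior. A brute-force count (a minimal sketch, assuming two children with independent, equally likely sexes):

```python
from itertools import product

# Uniform prior over the sexes of two children, listed oldest-first.
families = list(product("GB", repeat=2))  # ('G','G'), ('G','B'), ('B','G'), ('B','B')

# Fraction of equally likely worlds where the oldest child is a girl.
p = sum(1 for f in families if f[0] == "G") / len(families)
print(p)  # 0.5
```

Whatever private knowledge each party has, as long as it is irrelevant to birth order, both run this same count and announce 1/2.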

If you asked a different question, then the parties’ different knowledge might lead them to assign different probabilities. But then, if you try out an example, you’ll find that the parties *do* necessarily learn new things just by telling each other their probabilities. That’s basically the content of Aumann’s Theorem, whose original proof I encourage you to look at (forgetting my complexity-theoretic version for the time being!). There are few results with a larger ratio of

Difficulty of intuitively accepting / Difficulty of proving.
