Video Title: An Evening With The Philosophical Muser #9: Scott Aaronson And I Discuss Aumann's Agreement Theorem Video Author: The Philosophical Muser (Always Seeking The Truth) James: Good evening, everybody, and welcome to another edition of "An Evening With The Philosophical Muser." I'm really delighted to be joined today by Scott Aaronson, a theoretical computer scientist and Professor of Computer Science at the University of Texas at Austin. Welcome, Scott. Scott: Thanks, it's great to be here. James: Yeah, I'm really stoked to have you with us. It's very timely as well, because you know those Facebook memories that pop up from the past showing you old posts? Well, one of them popped up this week and it was a post I'd written in about 2013, I think it was. And it was the people that I'd really like to have a conversation with who are currently alive today. I'd made a list of people and it had E.O. Wilson on it, Martin Scorsese, Paul McCartney, Deirdre McCloskey the economist, David Friedman, Patti Smith, Joni Mitchell, and people like that. And yourself, Scott Aaronson. You are... Scott: I'm truly honored to be on that list. To be on such a list… I don't deserve it, and I'm happy to be alive, uh, still, 10 years after you wrote that. James: Yeah, absolutely. It's a distinguished list and I'm very, very glad you're on it. I think the only person who hasn't survived the list is Cormac McCarthy, who died quite recently. Scott: E.O. Wilson? James: Oh, did he pass away? Oh gosh, have I missed him? Scott: Yeah, a couple of years ago. James: Bless. Okay, well he won't be coming on the show then. They're really missed. Okay, so a lot of people won't know exactly why it's so exciting for me to have you on, but many, many will, and it's about a theorem called Aumann's Agreement Theorem, which I've written about before and blogged about; and it's in two of my books. And I know that you, Scott, were into mathematics before you got formally into computer science, and I guess that's probably where you came to learn about the theorem, isn't it? And stumbled upon it. So I'm really familiar with it and I've spoken about it a lot in my work, but I'd love you to explain it in your words, to give the viewers an understanding of what it is, and hear it from your perspective, and why it's such an intriguing theorem. Scott: Well, Aumann's theorem was proved in 1976 in a now-famous paper called "Agreeing to Disagree." And, you know, I think it was about a three or four-page paper, and the statement and the proof of this theorem within it was about one paragraph. And you know, that is maybe the most recognizable thing for which Aumann later won the Nobel Prize in Economics. So, it was surprising to people. Basically, what it says is that under certain reasonable-sounding assumptions, two rational people should never be able to agree to disagree. Okay, so if they have a common, what we call common, knowledge of each other's opinion, which means they know each other's opinion, they know that they know it, they know that they know that they know it, and so forth, right? By the way, one of the things that Aumann's paper is famous for is formalizing the very concept of common knowledge. He explains why that is different from merely knowing each other's opinion. They have to not only know, but know that they know, and know that they know that they know, and so on, right? If they're in that state, we call it common knowledge. And the theorem says that under those conditions, their opinions must be equal.
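For viewers who want to see it written down, here is a compact paraphrase of what Scott has just described. The notation is standard but it is a paraphrase, not a quotation from Aumann's paper:

```latex
% Aumann (1976), paraphrased.  P is the common prior, A is the event in
% question, and I_A, I_B are the pieces of information Alice and Bob have
% each acquired since the "original condition."
\[
  q_A = P(A \mid I_A), \qquad q_B = P(A \mid I_B)
\]
% If it is common knowledge between Alice and Bob that Alice's posterior
% is q_A and Bob's is q_B, the theorem concludes that
\[
  q_A = q_B .
\]
```

The single prior P appearing in both conditional probabilities is the "common prior" assumption that the conversation turns to next.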
They must have the same opinion. Now, there is a key assumption that underlies this, which is called the Assumption of a Common Prior. The setting that we imagine is that we have two people. In computer science, we like to call them Alice and Bob. There is a probability distribution over all the possible states of the world. To the extent that Alice and Bob might disagree with each other, it is only because they have different information. In a complete vacuum, if we imagined Alice and Bob as souls who haven't been born yet, then in that original condition, they would assign the same probability to any possible state of affairs. For example, Rishi Sunak winning the election, or, uh, it raining tomorrow, or any event at all, okay? But then they might disagree with each other because they have been born and lived different lives and have accumulated different experiences, right? Which have given them different knowledge. So, maybe, Alice has checked the weather report and Bob has not, or Bob knows people in politics and Alice doesn't. And so, because they have different information, if you ask them to estimate something like, 'With what probability will it rain tomorrow?' or 'With what probability will this candidate win the election?' then they might give you different answers. Alice is using the information that she has, and Bob is using the information that he has. But what Aumann's Theorem says is that if their opinions ever become common knowledge - so Alice knows Bob's estimate and Bob knows that Alice knows his estimate, and Alice knows that Bob knows that Alice knows, and so on, and likewise for Alice's estimate - Bob knows it, Alice knows that Bob knows it, and so on - then their estimates must actually be the same. That sort of restores the situation to this hypothetical original condition before they ever accumulated different knowledge and different experiences. James: Yeah, absolutely. So, for anyone who doesn't know what common priors are: basically, what we're talking about are Bayesian priors, aren't we? So, in probability and statistics, Bayes' theorem describes the probability of an event based on prior knowledge of conditions that might be related to the event. So you know, if the risk of X is known to increase with, say, Y, then rather than taking statistics from the set as a whole, Bayesian probability looks to assess X more accurately relative to changes in Y. As you said with the election example. I remember when I first stumbled upon this theorem, it was almost like a light bulb moment for me. It's like when someone articulates something that you've thought about and worked on for a long time. Bringing it together so elegantly and succinctly was really liberating for me. What it did was it gave me a renewed sense of hope. If people can agree, in theory, in a relatively short execution time, provided they are both seeking the truth and there aren't too many impinging factors, it can help further and advance the conversation. In terms of how we can make progress as people, it was great to see that theorem formalized. You've proven a part of it in your paper, 'The Complexity of Agreement', in 2005. That's more about the execution time, right? And you're elaborating on it? Scott: Yes, I can tell you a story. Back in 2003 or so, I was reading things online and came across the webpage of an economist at George Mason University, named Robin Hanson. He had an enormous number of heterodox opinions, even crazy-sounding opinions, then and now.
Maybe some listeners have encountered him, but he had a paper with another economist named Tyler Cowen called "Are Our Disagreements Honest?" or "Are Our Disagreements Rational?". It explained Aumann's Theorem. That was the first time that I had ever encountered it. My first reaction on seeing it was that I must have misunderstood. This can't possibly be right, that rational people should never agree to disagree about anything. So then, my immediate reaction was to start looking for loopholes in it. Indeed, many people have done that over the past half-century. One loophole might be this assumption of a common prior. We're assuming not only that Alice and Bob are Bayesians, as you said, which seems pretty reasonable. Being Bayesian could even be taken as one definition of rationality. It just says that you have probabilities for different possible events, and that you are updating those probabilities, you know, as you gain new knowledge, in the way that the math says you should. Right? That, you know, that's really, uh, all that being a Bayesian means. Okay, but, uh, then, you know, the set-up of Aumann's Theorem somehow goes further than that in saying that: first of all, Alice and Bob, you know, have probabilities for everything. They're never unwilling to give probabilities. They always have a probability, you know. You ask them, "Is there life on other planets?" "Is there a God?" They have a probability for it; everything is a question of probabilities for them. And crucially, until they gain knowledge, their probabilities are the same. And that, that is sort of the metaphysically weird part here, right? And I thought, "Why should we assume that their probabilities - their prior probabilities, as they're called - are the same?" But in their paper, Robin Hanson and Tyler Cowen had an argument for that which just completely rocked my world. And it still does, I think. Their argument was the following: if you were Alice, then it's not inevitable that, as you're thinking about it, you would have been Alice. Maybe you would have been born as Bob instead. So, the very fact that you are you and not someone else is just another piece of information. That's, you know, another piece of evidence that you've gotten, which you should then, uh, conditionalize on, you know, in the Bayesian way. James: Yeah, I think you said, didn't you, in your paper though, if Alice and Bob exchanged everything they knew then, uh... Scott: That's right, yeah. So I'm coming to that, yes. But you know, so um, what they said is, you know, the very fact that you are the person you are and have the DNA that you have, and so forth, these could all just be treated as knowledge that you've obtained and that you've conditionalized on, right? And somehow, you know, the sort of indexical facts, you know, the fact that you are you, should not change what your opinions are compared to if you were someone else, except insofar as they have given you different information, right? Except insofar as they have given you maybe privileged information that other people don't have, that you can then use as evidence. Okay, but then, Aumann's theorem comes into play and says that if your opinions are common knowledge, then the fact that you have different evidence doesn't matter anymore. Then your opinions should really be the same. Okay, so then the next thought that I had was, well, maybe this principle is true, but clearly it doesn't describe real-life disagreements.
In real life, when you see people disagree with each other, they don't just exchange a few messages and then gain common knowledge of each other's opinions, then come to an agreement. We almost never see that happen. More often, we see two people get even more polarized as a result of disagreeing with each other. So, I thought the mismatch between this theorem and reality might have something to do with complexity. Perhaps it relates to the length of a conversation required for people to actually attain common knowledge of each other's opinions and come to agreement. As you said, if two people exchanged all the knowledge that they had, uploaded their entire brain's worth of information into each other, they would effectively become the same person. At that point, it is obvious they'll have the same opinion because there is nothing anymore to break the symmetry and give them a different opinion. However, in real life, that would take an entire lifetime. If I were to have all the knowledge that you have in order to have the same opinion as you, I would basically have to have lived your life, right? And you would have to have lived my life in order to share my opinions. So, I thought that maybe, that's the loophole. Okay, so I thought maybe I can prove a theorem, you know, showing that the length of the conversation has to be very large. Okay, so now we come to Aumann's Theorem. It didn't talk directly about a conversation between Alice and Bob, it just sort of assumed that Alice and Bob are in this condition of common knowledge, however they got into it, and then it says that therefore their opinions must be the same. But then later authors, like Geanakoplos and Polemarchakis, in the 1980s, thought about how Alice and Bob would actually come to that condition of common knowledge. So, they imagined a conversation that would go as follows: Alice would basically just tell Bob, "I think that there's a 70% chance of rain tomorrow." And then Bob would say, "Oh well, that's interesting. Now that you say that, the fact that you've said that causes me to update my opinion. And now I think that there's a 60% chance of rain tomorrow." And Alice says, "Well, now that you've said that, I actually think there's a 65% chance of rain tomorrow." And then eventually, Bob says, "Oh, well now I also think it's a 65% chance." Right, and then they agree, okay. So, basically, what Geanakoplos and Polemarchakis noted was that if there's only a finite number of possible states of the world, say a finite amount of knowledge that Alice and Bob could have, then that conversation will terminate after a finite amount of time. It will end with Alice and Bob having the same opinion and with common knowledge that they have the same opinion, as Aumann's theorem would have predicted. The fascinating thing about this conversation is that, at no point, are they ever mentioning evidence. At no point are they ever giving reasons why they think it will or will not rain. All they're doing is stating their opinions, and their opinions given each other's opinions. Yet, that is enough for them to reach that state of common knowledge. Coming back to Robin Hanson, he wrote articles in the early 2000s where he said there is something else that's very strange about that conversation. If we imagine a conversation that you had with someone who disagreed with you, maybe what happens is that you start out in two different places and then you get even more polarized as you argue.
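To make the kind of conversation Scott is describing concrete, here is a minimal toy simulation of a Geanakoplos-Polemarchakis style dialogue. The twelve-state world, the "rain" event, and the two information partitions below are invented for illustration; this is a sketch of the idea, not code from any of the papers discussed.

```python
from fractions import Fraction

# Illustrative assumptions: a 12-state world, a uniform common prior, and
# made-up information partitions.  The partitions are common knowledge;
# only the realized state is private.
STATES = set(range(12))
RAIN = {0, 1, 2, 5, 7, 8}                      # states in which it rains tomorrow
ALICE = [{0, 1, 2, 3}, {4, 5, 6, 7}, {8, 9, 10, 11}]
BOB   = [{0, 4, 8}, {1, 5, 9}, {2, 6, 10}, {3, 7, 11}]

def cell(partition, state):
    """The block of the partition containing the given state."""
    return next(block for block in partition if state in block)

def posterior(info):
    """P(rain | info) under the uniform common prior."""
    return Fraction(len(RAIN & info), len(info))

def dialogue(true_state, max_rounds=10):
    public = set(STATES)                       # what is common knowledge so far
    for _ in range(max_rounds):
        changed = False
        for name, partition in (("Alice", ALICE), ("Bob", BOB)):
            q = posterior(cell(partition, true_state) & public)
            print(f"{name} announces P(rain) = {q}")
            # Everyone now rules out states in which the speaker would have
            # announced anything other than q.
            new_public = {s for s in public
                          if posterior(cell(partition, s) & public) == q}
            changed = changed or (new_public != public)
            public = new_public
        if not changed:                        # the announcements carried no new
            break                              # information: common knowledge, so
                                               # by Aumann the opinions now agree

dialogue(true_state=5)
```

Run as written, Alice opens at 1/2, Bob replies with 1, and by the second round both are announcing 1 and the announcements carry no new information, so the dialogue stops in agreement. Notice that, exactly as Scott says, neither of them ever states a reason; they only state probabilities.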
But in the best case, you'd say you start out kind of far away and then maybe you compromise a little bit. You might say, "You say 20, I say 80. Well, maybe you have a point. Maybe I'll lower my estimate to 70, right? And maybe you'll raise yours to 30. And maybe in that way, we'll sort of gradually inch closer to agreement, okay?" And what Robin pointed out is that an Aumannian conversation - you know, this kind where we're Bayesians with common priors and we're just stating our opinions to each other - doesn't look like that at all. Okay, in fact, our opinions follow a random walk. Which means that I should never be able to predict the direction in which I will disagree with you in the future, or predict in which direction my opinion will move. So like, if I'm at 20 and you're at 80, let's say, right? I should not be able to predict that as a result of talking to you, I will move closer to you, I will go up to 30. It should be just as likely from my standpoint that I'll go down, that talking to you will cause me to go down to 10 or whatever. And the reason for that is, if I knew in what direction my opinion was going to shift, then I should have shifted it already. Okay, so this is the reason for this random walk property, right? Our opinions, if someone watched us, would just be jumping around, totally erratically. Okay, now there's only one type of conversation that I've had in real life that even approaches that, and that's like when I'm arguing at a blackboard with someone about a math problem, let's say, right? And, you know, we might be arguing. I might think that a conjecture is true and the other person might think it's false. Then, five minutes later, we might be arguing just as vehemently, but now the positions have switched, right? Now I think it's false and the other person thinks it's true. That basically never happens with politics, with religion, with anything like that. But okay, so okay. Now, once we understand that the opinions follow a random walk, then we can take a step further, and this is what I did in my 2005 paper. We can ask, how long would the conversation have to continue before the two people probably, approximately agree with each other? Now, the theorem that I proved in 2005 says that the number of messages we have to exchange before we come to an approximate agreement is completely independent of how much knowledge we have. It doesn't matter how long we've lived for, how many books we've read, how much experience we have. All that matters is: with what probability do we want to agree with each other, as judged by our shared prior, and to within what accuracy do we want to agree with each other? And basically, I proved a bound which says that, if we want to agree with a probability of, let's say, one minus delta, and if we want to agree about some probability, which is a number between zero and one, to within an accuracy of epsilon, then the number of messages that we have to exchange - so the length of our conversation - goes like 1 divided by delta times epsilon squared. That's it. So, for example, if we wanted to agree at least 90 percent of the time, or at least, if we want to go into the conversation expecting that, at least 90 percent of the time, we will agree to within at most 10 percentage points, let's say, then we might have to exchange a certain number of messages. The theorem would then say: at most a thousand.
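In symbols, the bound Scott is describing looks like the following. This is a paraphrase of the flavour of the result in 'The Complexity of Agreement'; the exact constants there differ:

```latex
% Messages needed for Alice and Bob to agree to within epsilon,
% with probability at least 1 - delta as judged by the common prior:
\[
  N_{\mathrm{messages}} \;=\; O\!\left(\frac{1}{\delta\,\varepsilon^{2}}\right)
\]
% Scott's example: delta = 0.1 (agree at least 90% of the time) and
% epsilon = 0.1 (agree to within 10 percentage points) gives
\[
  \frac{1}{0.1 \times 0.1^{2}} \;=\; 1000 .
\]
```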
In practice, it would probably be much, much fewer than that. But it says that, at any rate, it can't be more. So, that showed that the length of the conversation was not the loophole to Aumann's theorem that I had thought it was. It's always good when math doesn't just formalize what you thought was obvious, but actually challenges it and changes your view of something. So, that was what I wrote a paper about. James: Okay, so a couple of questions then. Particularly on the random walk. So, let's imagine a state space, and we have two agents operating on it - Alice and Bob, you said - and let's say it consists of N bits. So the theorem requires a prior probability distribution over the possible states, and you're saying that it requires them to calculate expectations over that distribution and update. Scott: For now, we're assuming that Alice and Bob do not face any limitations of computing power, right? I also proved a theorem that starts to deal with the case where they're limited in their computational abilities. Okay, but in the theorem I told you before, we were only interested in the length of their conversation, right? James: And you think that the execution time is shorter than Aumann predicted. Is that right? Scott: I mean, Aumann just didn't make a prediction about it. It's just not the question that he was focused on, right? James: Okay, so in terms of the random walk. Yes, and you said Robin Hanson came up with the idea that it's random. Scott: I mean, it may have been known to people before him, but he wrote a paper that made it very explicit. James: My contention, having heard that, would be that it's probably not going to follow an exact random walk. There'll probably be some bias to the random walk. For all intents and purposes, it may have lots of random elements, but well, okay... Scott: But the question is, you know, if you know that your future updates to your opinion will be biased in a certain direction, and you're a Bayesian, then the question is always: why haven't you updated your opinion in that direction already? Right, well, one thing that seems to help a lot for people to get intuition about this is to consider the stock market. If it can be reliably predicted that a certain stock will go up, then the market should have already priced that in. The stock should have already gone up. This is why, if you look at the prices of stocks over time, they actually do seem to follow a trajectory that looks a lot like a random walk. Not always; there are all sorts of complicating factors. But if a broker tries to sell you a stock promising that it's going to go up, the obvious retort would be, 'Well then, why are you selling it to me?' So, you could say that there's actually a version of Aumann's Theorem about the prices of commodities, which suggests that no one should ever be engaging in purely speculative trade, right? I mean, if people have different liquidity needs or whatever, then, you know, these are all reasons to trade. However, they should never engage in speculative trade, because the very fact that someone wants to sell you something means that they must know something that you don't. James: That signifies that it's less valuable than they're saying it is. Okay, so if you're both rational... I'll try to illustrate a random walk, just in case anyone doesn't know what that is, and we'll try to use that to work out if we think that it is following a random walk process.
So if we picture a random walk being a drunkard leaving a pub, and he has an equal probability of turning left or right, then what we're saying is, over the course of his journey from the pub, we can't predict where he might be after N steps. That would be a random walk, right? Scott: It's even saying more than that. It's saying, you know, if you want to know the distribution over places that he will be, you can predict that it will follow a bell curve centered on the pub. It will follow what we call a Gaussian distribution. James: Absolutely. So let's say, for example, in the case of this particular drunkard, there are constraints on his walk, but he may not know exactly what those constraints are. There are constraints acting on him, and let's try to analogize that with the psychology of the agents involved. For example, let's say that Alice and Bob are engaged in a discussion about politics. It's true that they don't necessarily know why they believe everything they believe. It's also true, as you say, that had things been different prior to the discussion, there's a good chance that they might well have believed something else anyway. As you say, if they already knew which way they were going to move, why haven't they already changed their minds? I think this goes to the heart of where I was picking up on Aumann's Agreement Theorem in my books. There are a lot of factors driving the disagreements, and the way that, as you say, we often find humans diverging when they should, with more rational persuasion, be converging. Given that there are psychological biases, tribal affiliations, and all sorts of unconscious things acting on it, isn't it more likely that it's following a slightly biased random walk? Scott: Well, in real life, we know that there are countless ways that we deviate from rationality. Now there's almost like an embarrassment of riches in terms of asking, "Where could we be going wrong?", such that we deviate from the prediction of Aumann's Theorem. The question is more: which one is it? Of our many failings, which are the most relevant ones, right, that cause us to deviate from this? I mean now, it's important to say we have these intuitions, for example, that in real life maybe people could come to agreement, could come to understand each other's perspective. But for that to happen, it's very important that they share their experiences, their evidence, and explain where they're each coming from. And then, maybe there's a chance. What's so surprising about Aumann's Theorem and this Geanakoplos-Polemarchakis conversation protocol is that it says none of that is needed at all. All that is needed is for them to reach a state of common knowledge of what each other's opinions are. So, that is putting a point on how different this sort of idealized Bayesian rationality is from whatever it is that we're actually doing. But it's important to say, though, there's another possible loophole that we haven't touched on. Suppose you are perfectly rational, you're a Bayesian and so forth, and you're also perfectly honest, but you have reason to believe that I am either irrational or dishonest. So, I'm not telling you my true opinions, or my true opinions are just completely off the wall, I can't even do the Bayesian calculations correctly, right? Or, I have a completely absurd prior, right? I just have, you know, my prior says that the probability is almost a hundred percent that the world is being run by lizardmen from space. And basically, nothing that I learn can budge me from that fixed opinion.
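Looping back to the random-walk point for a moment: the reason opinions drift unpredictably can be shown with a tiny simulation. The setup below is invented for illustration (a coin whose bias is either 0.3 or 0.7, observed flip by flip); the key property is that a Bayesian's own expected next estimate equals the current estimate, so, like the drunkard, the estimate has no predictable direction of travel.

```python
import random

# Illustrative assumptions: a coin whose bias is either 0.3 or 0.7, equally
# likely under the prior.  The observer tracks P(bias = 0.7) flip by flip.
def run(true_bias, flips=20, seed=1):
    rng = random.Random(seed)
    p = 0.5                                    # prior P(bias = 0.7)
    path = [p]
    for _ in range(flips):
        heads = rng.random() < true_bias
        like_high = 0.7 if heads else 0.3      # P(observation | bias = 0.7)
        like_low = 0.3 if heads else 0.7       # P(observation | bias = 0.3)
        p = like_high * p / (like_high * p + like_low * (1 - p))
        path.append(p)
    return path

def expected_next(p):
    # Average the updated estimate over the observer's OWN predictive
    # distribution for the next flip: it comes back to exactly p (a martingale).
    p_heads = 0.7 * p + 0.3 * (1 - p)
    up = 0.7 * p / p_heads
    down = 0.3 * p / (1 - p_heads)
    return p_heads * up + (1 - p_heads) * down

print(run(true_bias=0.7))                      # the path of the estimate over 20 flips
assert abs(expected_next(0.42) - 0.42) < 1e-12 # no predictable drift
```

The realized path does drift toward the truth as evidence accumulates, but from the reasoner's own standpoint at any given moment, up and down exactly balance, which is Scott's point about why any bias you could foresee should already have been incorporated.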
If I am irrational, or dishonest in any of those ways, well then, you should devalue my opinion accordingly. And even if you just think that there's a chance that I am irrational or dishonest, or if it's not common knowledge that I'm rational and honest, then you might be doing things differently because you suspect that I am irrational. Yeah, well, this is what happens. So, sort of in the background of this utopian picture of people who all agree with each other merely because they all know each other's opinions, is the idea that we have a common prior. And as a special case, it includes common knowledge of each other's honesty and rationality, that we're all executing the protocol correctly. And that is not our world, to put it mildly. You know, maybe only very, very occasionally - as with scientists arguing at a blackboard, and even then not always, right - might we approach that ideal. You can have a discussion, particularly on religious or political subjects, where you don't even get off the starting blocks, really, because the levels of distrust are so prominent that, however much you try to employ rational persuasion, you're probably never even going to get past that. James: But I wanted to talk about something else as well, in relation to that, which is complexity. In my book, I mentioned the tribal affiliations, the confirmation bias, and the fact that we often do not have sufficient information with which two people could come together and converge on an opinion about something. One of the other elements I talked about in that was the level of complexity. I wanted to just ask you about that in relation to your paper. Let me paint a scenario. If you have two mathematicians, for example, at the blackboard - and yes, maths is probably one of the subjects whereby, as long as the agents are honest and looking to converge upon a theorem together, it's one of the least emotive subjects in terms of getting two people to agree. So, say you both want to formulate a theorem mathematically, and let's say it's a relatively simple equation, and let's say the mathematicians are only of college level. We all agree on what the ground rules are, right? I mean, there are a few other domains that are like that; maybe chess or certain sports. You know, it is said that the sports page in the newspaper is the one where you have to be least concerned about being lied to or about bias. There are certain fields where there is an underlying ground truth. We might not know it at any given time, but we all agree that it's there. Once we see it, we all have to agree about it. So let's say that the college-level mathematicians are going to agree on this quite straightforwardly and with a fairly short execution time. Now, let's take something really hot, like a big political dispute. In our country, the UK, it's Labour and the Conservatives. In yours, it's Republicans and Democrats. You could certainly find people who are very hard-wired culturally and socially to those beliefs and are unlikely to change their minds. In terms of the level of complexity, one of the things I said in my book is that it's probably going to be a barrier in some regards, because big subjects like religion, politics, and the socio-cultural elements that are attached to those are going to be a bit of a minefield for the agents.
They'd have to gather everything that's required and put enough computational processing power into their thinking to capture a lot of complex things, particularly around metaphysical elements to do with religion and God and stuff. In your paper, yes, I think you said that one of your conclusions - and you said it was a surprising conclusion - was that complexity wasn't a major barrier to agreement. So, is that something that you could just elaborate on in terms of why you think it isn't? Are we talking about a different sort of complexity here? Scott: Yeah, sure. Let's unpack this, because you've been using the word 'complexity', right, but if by 'complexity' you just mean the amount of knowledge that there is to learn in a given subject - the amount of information, let's say the number of bits that Alice and Bob may have learned, that may have caused them to have a different opinion - or even the computation that they have to do to calculate what their opinion should be given those bits, my results give strong evidence that none of that is the limiting factor that ought to prevent them from coming to an agreement. Right, okay. For example, the length of the conversation that they have to engage in is completely independent of how much knowledge they have - that measure of the complexity of the topic. However, listening to your question, you seem to have been using the word 'complexity' possibly in a different sense, right? Which is, how difficult is it to even cash out our disagreement into something empirical at all? So, the whole framework of Aumann's Theorem and common knowledge and so forth has presumed that the thing we are disagreeing about is some underlying truth about the state of the world. And it's the kind of thing where, as we get more evidence, we update our knowledge of the state of the world. This seems clearly appropriate to questions like, 'Is it going to rain tomorrow?' Right now, we're in a state of partial information about that. We can see the clouds, we can check the humidity measurements, and so forth. Tomorrow, we will have much more information about that question. Likewise with the question of who is going to win the election, or even scientific questions like, 'What is the mass of the Higgs boson?' or 'Is there intelligent life on other planets?' That's something where, in the future, as we explore space more, we will get more information about that. But a lot of the disagreements that people have in real life are really about which tribe is better. Who deserves to win in this legal conflict, in this war, in this political conflict? Who is essentially on the side of good, and who is essentially on the side of bad? The trouble with that kind of question, however, is that no matter what the facts are, you can always reinterpret them in light of your pre-existing beliefs. That sort of tells you who is basically good, despite the occasional mistakes, and who is basically bad, despite occasionally getting some factual question right by accident. Still, that doesn't affect their fundamental badness. It's not even clear how you cast those questions into the Bayesian framework at all. I interact a lot with the rationality community, which revolves around websites like Less Wrong and the writings of this guy, Eliezer Yudkowsky. What they are constantly trying to do is to get people to bet on their beliefs. Instead of just saying something abstract like, 'I believe that climate change is a huge issue', right? Well, okay, you know, what specifically do you believe?
With what probability do you think there will be hurricanes in 10 years causing as much destructive power as we have seen? And, you know, give me your over and under, right? Think about it more actuarially. So, they're constantly doing that. The whole idea is that once you force people to cash out their ideological beliefs into actual predictions for what might happen in the future, then we have some chance of getting people to agree. They might realize things are more uncertain than they thought, or it might force them to actually do their research. When they make a prediction, they might find out their prediction was wrong, and then they have to concede the bet. That might force them to update their belief. Robin Hanson, who I mentioned before as the person who introduced me to Aumann's Theorem through his paper about it, is perhaps best known for inventing the idea of prediction markets. He believes that for any sort of question about the future - like who will win the election, or will this economic policy lead to lower inflation, or will this foreign policy, if pursued, lead to a war between these countries - these questions should all just be traded on a stock market, essentially, right? And then, you know, if there is a stock that pays a dollar if these two countries go to war, and let's say now it's trading at 60 cents, then we can learn from that that the best consensus right now from all the best experts trading on that market is that there is a 60% chance that event will happen. He thinks that prediction markets would be a way to force people to bet on their beliefs. And in that way, the real world could better approach this Bayesian or Aumannian rationality. Now, you know, I should say there are now lots of prediction markets that are actually in use. They tend to get in the news whenever there's an upcoming election. But in the U.S., prediction markets are severely hampered by regulatory barriers. They're considered to be gambling, and they're regulated as if they were online gambling. And some people think, well, if people are allowed to trade stocks, bonds, and futures, right - you know, in companies, which sort of indirectly reflect these questions, our beliefs about what's going to happen in the world - then why shouldn't we also be allowed to trade in futures that are just directly about what is going to happen, right? That might be a way of trying to tackle this complexity problem. When people have these complicated packages of ideological beliefs, trying to break them down into individual sub-questions that can be factually resolved, and which you could then literally bet on or have prediction markets on. James: I'm sorry, because we know from anecdote and from statistical, empirical evidence - don't we? - that some of the big problems in society with wrong decision-making arise when the decisions being made are divorced from the stake of the individual and the context. Absolutely, you see how people behave politically in terms of how they want the economy to be run and all the different incentives behind that. You'll see that people converge upon the smartest economics often when the stakes are most applicable to their own situation. I mean, I don't know if you've heard of this, but there's an economist called John Harsanyi, a Hungarian economist, who came up with the amnesia theory. He won the Nobel Prize with John Nash.
Absolutely, and Rawls's veil of ignorance was based on that, wasn't it? The idea being that, I mean, I think Rawls's theory of justice came to grief a little bit, but Harsanyi was the founder of that idea: that if we had, as you mentioned at the beginning, this sort of pre-birth conception of what might be right or wrong, and you divorced your own self-interest from that, then you might be able to construct a moral framework that would be consensually agreed upon. If you didn't know, if you'd forgotten, where you were in that society. You're absolutely right. When I'm using 'complexity', I'm using it in the sense of whether it's empirically tractable or not. How much information converges on the individual, and how much sifting there is to do in order to get to the heart of the matter. So yeah, I think from that perspective, it's one of the barriers. Another thing I wanted to mention, which you touched upon as well, is that it obviously can't be applied to subjective things or things that are a matter of taste. Aumann's Agreement Theorem is anchored in the idea of truth being something that sits above the thing that's being discussed, isn't it? Scott: Well, I mean, if we're going to be debating, like, 'Is chocolate ice cream better than vanilla?' then at some point we need to define how we will settle that bet. What are the terms? What if, after all knowledge has been exchanged, I just still prefer chocolate and you still prefer vanilla, right? Then, you know, we would just have to say, well, the truth is that we had different preferences. So, part of the presupposition here is that, you know, we are talking about the same world. We are asking about the truth or falsehood of something in a world that we can both observe. And to me, to apply any of this to political disputes, there's an even more basic presupposition: that we care about what the truth is. Like, if someone is saying, "Our tax cut is going to turbocharge the economy," then it has to matter. If you cut taxes, do you actually see the economy improve? If there's no feedback mechanism, if you're just saying that as sort of a slogan and you can just repeat a failed policy any number of times and it doesn't affect the outcome, well then obviously you have no hope of reaching agreement there. James: And also, because I think there are a lot more objective answers than we often care to think. I think if you imagine analyzing the world through an omniscient mind, let's say God gave us access to his mind, whereby when we talk about a truth or an objective fact, that is that which would be true irrespective of human opinion about it. Now, I know, obviously, some of the things in that set of facts will be conditioned by human opinion and, of course, human behavior. But there are a lot of things that I think we classify as subjective or determined by preference. These things are preferential, but they're also based on far more objectivity than I think we often realize. In other words, if we did have access to all the facts, then there would be, for example, a right way to manage an economy. Or there would be a right way to establish the rule of law that's better for the individual in question. There probably would be a correct level of tax for whatever incentive was in place. And then there would be a correct level or, well, it's hard to say.
Scott: I think that some political disagreements really do come down to differing views on trade-offs that different interest groups judge differently. James: Right, yeah, I agree. Scott: Some people believe it's more important to uphold individual responsibility, and others think it's crucial to have a social safety net. And then, you know, that might never be resolved by any amount of evidence. However, there are many other disagreements, as mentioned, that might seem like they're about values, but in reality, they're just different predictions. We all want a society that's prospering, where people live long lives, enjoying good healthcare, and they're happy, and they're not committing suicide, and they're not dying of drug overdoses, and they're not in prison. Like, we all want that outcome, and we're just disagreeing about which policies are going to lead to that outcome. Yeah, and for those questions, it seems like, yes, prediction markets, you know, could help more generally. Forcing people to bet on their beliefs, to sort of state their beliefs as empirical predictions and test them, that could help enormously. James: Yeah, absolutely. And this goes back to what we were saying about having a stake in it because, you know, you notice with, um, say, a lot of young people in our country, and probably yours too, a lot of young people tend to be quite left-wing economically because they don't have much of a stake in the economy at that age. And then when they start to have more responsibility, when they become business owners or when they have a job, when they actually start paying taxes, people tend to move slightly more economically right as they get older. And I think, you know, the other thing that's worth mentioning now in terms of human behavior is, if you took the two sets together, um, the amount of things that humans agree on and the amount of things that they disagree on, I think that's another factor which we have to mention. Because actually, the set of things we agree on is way larger than the set of things we disagree on. Yet it's like the Freudian narcissism of small differences; it's the things that we disagree on that make the headlines. It's the things that we disagree on that make up most of our discussions and conversations right now. Scott: Now, of course, the left-winger might say, "Well, you know, it's not surprising that people come to right-wing opinions if they weren't born in this poor community or this oppressed or marginalized community, right?" And it's sort of not epistemically valid to just form your opinions based on the accident that you weren't born holding the short end of the stick. So yeah, I mean, I think that it's a really hard problem to try to see everyone's perspective. There will be questions that are just trade-offs of values that maybe cannot be resolved. But then we can separate out questions that are just empirical predictions. Like, you predict that communism will lead to a utopian society for everyone? Well okay, that's a prediction that, in some sense, we've tested. James: Absolutely, a social or political question can be tested, and now we have an answer. Scott: That answer was bought at the cost of hundreds of millions of deaths, but now we have it.
James: It shows you how powerful these tribal affiliations are, because, you know, you've probably had this too in America with, I think, Bernie Sanders, but we've had this resurgence of socialism in the UK in the last five years with a guy called Jeremy Corbyn, who you've probably heard of. Scott: Yes, yeah, sure. James: And no one who remembers when socialism last had any influence in this country, which was the 1970s, no one sensible and rational, would want to go back to that. As you say, the world is a natural landscape for testing empirical predictions, right? Scott: These comparisons are hard, because in the US, many people will say, "Well, what about Sweden and Norway? All these countries have socialized health care and they're doing much better than we are. They have much better outcomes, right?" So, I think once you submit yourself to the rigor of empirical evidence, you have to accept that it could go either way. It could go in a direction that you don't want. I'm someone who doesn't know if I should be honored or hurt to be attacked both by left-wing and right-wing people constantly on my blog. But I am trying to get the right answers. I don't always succeed, but I at least acknowledge that there is no ideological camp that has a monopoly on the right answers. I feel like understanding that is a prerequisite to making headway. James: Yes, I think I'm similar; we can discuss this further in a minute. It's quite intriguing, but I believe truth is such an alien concept to very many people. As Plato and Mark Twain articulated, there's a probability of suffering if you don't partake in these advantageous relationships and instead attempt to uncover the truth in a non-partisan way. It makes people feel uncomfortable. Recognizing this is important, as it's apparent that you will never be perceived as a reliable ally for their tribe, mainly because you might be persuaded by an argument made by the opposition. Einstein had a great term to describe the mind's ability to sift ideas and stimulate creativity; he referred to it as "combinatory play". This concept involves opening up one mental channel and exploring another, thus bringing together diverse things to create new ideas and thought processes based on these relationships. He suggested that the results emerging from combinatory play are very difficult to induce deliberately, artificially, or even precipitately. They must be welcomed and arrived at and greeted when they come upon us. And I think, yeah, you know, that's really interesting. Because, one, I have a friend who's a physicist and a mathematician. He and I are very similar, and I think you and I are similar in this way too. You and he would be similar in the way that we're not that tribal, it seems, from what I've learned of you so far. We love the truth, and we value the truth more than we love being part of a particular group. We're more feline than canine in that regard. Scott: I try to be that way, you know. I might fall short of it, right? James: Yeah, and I've noticed with him, he's nothing like me personality-wise. I'm very extroverted, I'm very sociable. He's very introverted, and he hides away in his room and does lots of equations and stuff and doesn't go in for socializing so much. But what I found with him is also one of the reasons why my wife and I have such a great relationship: she's very unpartisan as well. She's really open to the truth. We have a great relationship because of that.
And I find that with this friend of mine, we pretty much agree on everything. Now obviously, sometimes, that's not healthy either. But the only things we have as bones of contention, I find, are matters of degree. It's like, you know, I'm slightly more libertarian and market-friendly than him. But he appreciates the market. He's a highly cautious person, so he has issues with having too much freedom and things like that. Fundamentally, we agree on almost everything because we're both looking for the truth. We have great discussions, and I find that the people I disagree with most are those who I believe are not fundamental truth-seekers. They're living a life of convenience of belief and convenience of viewpoint, which happens to tailor to what they want their life to be like. My relationship with him is a prime example, and if you and I were friends, I'm sure we'd have a similar type of relationship. It simulates, in a sense, the logical calculation steps of the theorem, whereby we are both quite dispassionately interested in the truth. We are a bit like computers in that sense, trying to converge on the truth. Scott: I mean, there clearly are people who care more about the truth than others. Now, the danger, or rather the paradox, emerges as soon as you form a group that claims to be the ones who care about the truth, while those "other people" only care about what is convenient for them to believe: you've reproduced the in-group and out-group dynamics that you were trying to avoid. I think you see this happen repeatedly in history. People will realize that the establishment is systematically wrong about something, or many things, and they step away. They'll say, "We're going to take a stand and create a group that cares about the truth. Build a new club of just the truth-seekers, the people who care about the truth, right?" And then, once they've decided that we have a monopoly on truth-seeking and these other people are not even trying to get it right, they just reproduce the 'us versus them' dynamics. And they're blind to the cases when the other side has actually gotten it right. James: Yeah, you have to be careful of that. Absolutely, yeah, you do. Scott: I mean, this is the central thing you have to be careful of. James: But equally, if you're open to the truth, you love the truth. Because actually, this is the other thing about truth: truth underpins every other quality. Every other quality has propositions related to it whereby the truthful proposition is the most valuable. That includes love, justice - truth is the thing that underpins them all. So actually, this hypothetical group, yes, you don't want to be so hallowed that you end up in a sort of pantheon of truth-seekers where you dismiss the rest as non-truthy. I'm not saying that. But actually, it's a club where, in order to join, you can't mind being shown to be wrong as well. Because the love of truth triumphs, and it trumps the so-called embarrassment and discomfort that some people have. You know, I don't know what you think, but whenever people show me a better way of thinking about the world, or they show me somewhere where I was, uh, not thinking about it as well as I could be, I'm actually exhilarated. Because if I debate with somebody and I come away still believing what I believe, then as far as I'm concerned, I've consolidated my position. That happens a lot with religious discussion, um, where it's a lot more to do with ethics and, um, value judgments.
But equally, if somebody says to me, "Actually James, what you've said, here's a better way of looking at it," I think, "Great, I've got a new one! I've learned something new, I've got a new way of interpreting the world. I can add that to my storehouse of knowledge, and I'm a better person for it." And I think that's a healthy way to be because, actually, we're all subservient to the power and quality of the truth, whether we like to admit it or not. Scott: Yeah, now you know, I agree, and I worry about this enormously in the context of social media, especially sites like Twitter. My experience has consistently been that you can sometimes approach the Aumannian ideal in a relatively small conversation among people who have different perspectives but who basically trust each other to be speaking honestly and are trying to understand each other. You can do that even about the most controversial subjects. For example, on my blog eight years ago, there was a discussion on feminism and dating. Essentially, the question was: 'Who has a harder life, nerdy young men or nerdy young women?' These were absolutely explosive questions, and they got discussed in the comment section of my blog. Men, women, people from all different perspectives were really trying to understand each other. I feel like we were actually making progress. But then, once the conversation got onto Twitter, once it entered the wider social media world, it was met with swift condemnation. Anybody who said something that differed from the accepted narrative was ridiculed, embarrassed, and condemned as much as possible. James: I was going to ask you about this, actually. It feels like you can have reasoned discussions, you can make progress on these really difficult issues when you're among people who are acting in good faith. But social media puts a spanner in the works. Scott: Social media transforms dialogue into performance for an audience of hundreds of thousands of people, which completely changes the dynamics. James: Yeah, and if you're a public figure, then you've got so much at stake, your reputation and your career. I mean, imagine someone like Richard Dawkins. How is that man going to have a wholly balanced view of whether God exists, given how much he has invested in it? But yeah, no, if you are a public figure, like Dawkins is, let's say, it's very hard. Scott: It becomes very hard to even think out loud, right? To sort of think in the Aumannian way, where you're acknowledging when you made a mistake about something. Dawkins does a far better job than most of acknowledging when he makes a mistake about something. Right. But you know, there are tens of thousands of people who are going to leap on anything he's said. So, that is a dynamic that, if you are a truth-seeker, you have to constantly be on guard against. James: Yeah, and I wonder what the effects are, because you and I are probably the right age to remember a pre-internet age, and the emergence of the internet to the point where you can now find out anything you want via a search engine. The young people who have been brought up with that way of life know no different; it's very natural to them. I think I wrote a blog a while back, and it was to do with whether we've reached 'peak stupid', a term I didn't invent. It's always dangerous to make a pronouncement that we've reached peak stupidity. There are always new depths to plumb, I know.
It's a curious thing that in the last 15 years or so, never have we, as a global society, had so much information poured onto us at any one time. Remember, of course, we've evolved minds for the savannah, and we haven't evolved minds over 200,000 years that have equipped us to deal with this kind of information overload. We can't avoid outsourcing our thinking to other agents. At the risk of sounding hubristic, I think it's less of a seduction for people like ourselves, because we know that there's not enough of a payoff to join that kind of mentality. I'm sure we do it too. However, for young people who are fighting their life battles and don't really know what they believe about a lot of things, they're getting drawn into this world where the Overton window has shifted so much. Now, even what seems like a normal question - like "what gender am I?" - is a question that young people, even at primary school age, are now asking themselves. Whereas 25-30 years ago, it was a marginal consideration for a few people. I think the internet, yes. I wrote a blog post called "The Internet Is Making Us Smarter But More Divided". So, it is making us more knowledgeable, but we're not doing such great things with the information we have, because there's so much of it, right? I mean, there's a theory that, you know, there have always been people who believe that, uh, the government is controlled by Satanist child molesters or, you know, whatever ridiculous thing, right? But those people sort of never had a voice before, right? They were always sort of confined to the margins and disenfranchised. Scott: And the internet, you know - like, I am old enough to remember the 90s, when the big promise was that it was going to democratize information and give everyone a voice. And it turns out that it's delivered on that promise, right? But, you know, what many people wanted to say, once they had a voice, was absolutely horrifying, right? And we're now just in the multi-decade process of working through the consequences of that. James: Yeah, absolutely. Um, and one thing you touched on a moment ago, which I wanted to ask you about as well, is you talked about the idea of a symposium of truth-seekers. I remember in the paper, you asked a question: rather than having Alice and Bob, what about if it's Alice, Bob, and another person? So, what does your theorem predict when other agents are involved? Does that increase the probability or decrease it, do you think? Scott: Ah, okay, good. So, now we've gone from the loftiest societal questions back to a straightforward technical question, right? But yes, I did study the generalization of the setting to, let's say, N players - or N agents, rather - rather than just two. And what I proved was that it might take more messages, because now information has to travel from any person to any other person. And now, if I had, let's say, two disconnected subcultures that never talk to each other at all, well then, of course, you might never equilibrate between them. You might never reach agreement between them. So, the key question is, if you look at the graph of who was talking to whom, is that graph connected? Meaning, can information propagate from any person in the group to any other person in the group? As long as there's enough information propagation between any pair of people in the group, then the agreement will happen in a number of steps that scales with the number of people, and also with how certainly and how accurately you need them to agree.
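The connectivity point can be illustrated with a toy model. To be clear, this is not the multi-agent protocol from 'The Complexity of Agreement'; it is a simple gossip-averaging sketch, with made-up names and numbers, showing why a connected "who talks to whom" graph is what lets a group settle on one shared estimate.

```python
# Toy illustration of why the "who talks to whom" graph matters.
# Agents repeatedly average their probability estimates with their neighbours;
# the group converges to one shared number only when the graph is connected.
# (Simple gossip averaging, NOT the agreement protocol from Aaronson's paper.)

def gossip(estimates, edges, rounds=50):
    names = list(estimates)
    neighbours = {n: {n} for n in names}          # everyone "talks to" themselves
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    for _ in range(rounds):
        estimates = {n: sum(estimates[m] for m in neighbours[n]) / len(neighbours[n])
                     for n in names}
    return estimates

start = {"Alice": 0.8, "Bob": 0.2, "Carol": 0.5, "Dave": 0.9}

# Connected chain: everyone ends up holding (roughly) the same estimate.
print(gossip(start, [("Alice", "Bob"), ("Bob", "Carol"), ("Carol", "Dave")]))

# Two subcultures that never talk: Alice and Bob settle on one number,
# Carol and Dave on another, and the two camps never equilibrate.
print(gossip(start, [("Alice", "Bob"), ("Carol", "Dave")]))
```

Longer chains of communication also take more rounds to settle, which echoes Scott's point that the time to agree now depends on how far a message has to propagate, not just on the accuracy and certainty you demand.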
James: Do you have any predictions on the time differential, then? Do the execution steps increase or decrease with N? I know there's more information. Scott: I mean, this is again in the setting where we're imagining that all of the people have common priors. Yes, and common knowledge of each other's honesty and rationality. And, once you've granted those assumptions, right, then it's really just the question of whether they're all speaking to each other in a graph that is sufficiently well-connected. And then the time that they'll need to reach agreement depends both on how accurately and how certainly you need them to agree - that's that one over delta times epsilon squared thing that I mentioned before - but now also on the length of the chains of communication, like how long it takes a message to propagate from any given person to any other. James: So, if we imagine, say, a 'scene of the crime' scenario, where you have an amount of evidence and two detectives on the scene. Let's imagine two scenarios: scenario A, where a third detective joins the investigation and doesn't have any more information, but contributes more in terms of reasoning and analysis. And then a second scenario, where the third detective comes into the investigation but has new information to impart. Then, I know they're not logic machines like computers, but what are the likely differences in execution time, do you think, in those two scenarios? I know there's a lot of contingent factors but... Scott: That's a very interesting question that I didn't prove a theorem about. I would want to think about it more. Okay, but one thing that I'll say right now is that the setup we're assuming here doesn't really account for the contributions made by someone who just has more powers of analysis of already known facts. The crazy thing about Bayesians is, in the basic framework of Bayesian reasoning, there are just the facts you know and the facts you don't know. And there's the probabilities over the possible states of the world. As you gain more facts, that causes you to update your probabilities. But it's sort of not part of this framework that you have to think more in order to get more out of the facts you already know. You're just assumed to already know all of the logical consequences of what you already believe. Now, of course, as applied to real life, that's completely ludicrous. If we all knew the logical consequences of everything that was known to us, then we would all be perfect chess players. We could all break any encryption system. And there would be no need for mathematicians. We would just instantly see the truth of Fermat's Last Theorem. We wouldn't have to work hard for any of this. And so, there's actually a lot of research over the past couple of decades trying to extend the Bayesian framework to, you know, what you could call boundedly rational agents. Yes, agents who don't necessarily know all the logical consequences of their beliefs and who have to invest a lot of time and thought to figure them out. But then it becomes much more complicated, right? That's a very difficult undertaking. So in your setting, where you have two detectives and then a third detective comes along: if I modeled that in this Bayesian setup, I would just say each one has some set of beliefs. If the third one has no relevant information about the crime, they still have some hunches, they still have some beliefs about it.
That's still going to affect the other two detectives when they have this conversation. But I can imagine that, depending on the details, it could go either way. James: Yeah, and I think that's why we have to go back to the point from earlier, which is that getting people to agree isn't the fundamental thing here; what matters is getting them to converge on the truth. I mean, two left-wing extremists might agree on communism, but it doesn't mean that it's a healthy agreement. And you know, we should probably close soon because... Scott: I don't know. We just have to get people to agree on the truth? Well, people, I guess, have been saying that since Plato, right? But then there's sort of a secret presupposition that we just have to get everyone to agree with me, to agree with the truth that I know, right? I would rather say we have to get everyone to agree on a truth-seeking epistemology. James: Absolutely. And that's a great way for me to ask my final question, because the context of my book is a series of letters that an older person has written. You know that whole thing about 'what I wish I could say to my younger self'? Scott: Yes, yeah, absolutely. James: The context of one of the references to Aumann's agreement theorem in this book is an older person writing to the younger person saying, 'You know what a brilliant theorem this is.' It's used as a vehicle for encouragement. When I discovered it in 2012, what excited me about it wasn't just that it's a beautiful and elegant theorem. I was excited about sharing it with people because I loved its beautiful simplicity. I also love that it's algorithmic, and I love its ability to influence and encourage people to, as you say, join the party and see the adventure and the benefits of truth-seeking. Because the thing about truth-seeking, which I often say to encourage people, is that you are the only version of your particular self that's ever existed. You are a unique person, and the only way to fulfill your potential, and for that self, that's you, to make the biggest impact you can possibly make, is by seeking the truth. The greatest benefits lie in learning about your skills and talents, and figuring out how you can leave an impression and live a full life. This is best achieved by seeking the truth. By doing so, you discover things about yourself that align with the blessings you can bestow on others. So my final question for you is, having known it for so long, and given how much pleasure it clearly gives us, and thank you so much for your own contributions to the theorem, how can we use it as a force for good in the world? Is it just a case of telling more people about it, getting them excited about it and about the love of truth-seeking too? Scott: I mean, to ask a theorem to fix what is broken in the world is a very high burden to place on any theorem. But absolutely, I would say it's good for people to realize that they are not in sole possession of the truth. They can learn from others, they might be mistaken, they should update their beliefs, they should try to predict what will happen. If they have a theory, it should pay rent: it should help them be less surprised by what's happening in the world. And if they're constantly surprised, then maybe the problem is with their theoretical framework and they should seek a new and better one. The way to do that is by discussion and criticism. It's not name-calling, it's not ganging up, it's not sneering.
It's trying to respond to the best arguments that the other side made, not the worst arguments. Now, you don't need to know Aumann's theorem in order to be on board with any of that. And there are even people who do know Aumann's theorem and who are not on board with this, who just fail at it for very common human reasons. But I think that what the theorem does is paint a picture of what ideal reasoners would look like. We may not reach that ideal, but we can at least try to approach it. We can ask: are our institutions or our social media platforms set up in such a way as to push us closer to that ideal or further away from it? And if they're pushing us further away from it, then maybe we need to change them. James: Yeah, absolutely, and that's an excellent point as well. Because the point of having a journey and making progress is that you have standards. If we aim high with those standards, we're much more likely to get to a higher place ourselves. The standard mustn't be set too low for the task at hand. You mentioned the concept of looking at the other side. There's a famous term, straw-manning. I love the opposite of that, steel-manning. The economist Bryan Caplan came up with the ideological Turing test, which is basically a way to understand the opposition's view and argue from their perspective. This can actually strengthen your position and help you understand what you're arguing against. If you give the opposing viewpoint the benefit of the doubt and the best chance in your own reasoning, you can't lose. Why would you want to do anything other than that? Scott: One of the things I've enjoyed most at scientific conferences is debates. Let's say it's string theory pro and con, or the interpretation of quantum mechanics: are there multiple universes or just one? But they sometimes have people debate the opposite of the position that they're known for in their books and blog posts. Maybe you have a coin flip to decide who has to argue which position, right? James: Yeah, I like that. Scott: I think you wouldn't want that all the time, but at least sometimes you ought to have that: steel-manning the opposing side. It's a very important epistemic exercise. James: Yeah, absolutely. I think I read an example where, if you like, there were four people on each side. Let's say they're Democrats and Republicans, and each has to come forward and argue for the opposition. You have to guess which one out of the four arguing for the Democrats was actually the Republican. And I think that's right. Scott: I mean, you don't want the debate club mentality where people just say, 'All that matters is my arguing skill, and I can argue for either side equally because it doesn't matter, right? There's no truth; there's just better or worse debaters.' Like, no, that's not the point at all. There is a truth that you're trying to reach, but in order to reach that truth, it's an excellent exercise to be able to steel-man the opposing case. James: Absolutely, and that underpins the beauty of Aumann's theorem: that there is a truth to converge on. And yeah, I think that's a great place to end. So, hey Scott, thank you so much for joining me. I've really enjoyed this discussion, and I hope the viewers will too. I'm sure they'll find lots of interesting things in it to ponder. So yeah, I really appreciate you being my guest on the show. Scott: Thank you for having me. James: Thank you so much. Okay, all right. Um, thank you. That was Scott Aaronson.
I just want to say that below in the comments is the link to my blog. Please do have a look; you'll find many different arguments and, hopefully, interesting topics. So, thank you for tuning in, and good night.