PHYS771 Lecture 21: Ask Me Anything

Today, we're going to follow the great tradition pioneered by Feynman, in which the last class should be one where you can ask the teacher anything. Feynman's rule was that you could ask about anything except politics, religion, or the final exam. Here, we have no final exam, you can ask about politics, and I've been telling you my religion all semester. So anything goes! Ask about something from the course you'd like to know more about, or anything else. Who would like to start?

**Q:** Do you often think about using computer science to limit or give us a hint about physical theories? Do you think that we'll be able to discover physical theories which give more powerful models than quantum computation?

**Scott:** Is BQP the end of the road, or is there more to be found? That's a fantastic question, and I wish more people would think about it. I'm being a bit of a politician here and not answering directly, because obviously the answer is "I don't know." I guess the whole idea with science is that if we don't know the answer, we don't just pull one out of our butt or something. We try to base our answers on something. So, everything we know is consistent with the idea that quantum computing *is* the end of the road. Greg Kuperberg had an analogy I really liked. He said that there are people who keep saying that we've gone from classical to quantum mechanics, so who knows what other surprises are in store? But maybe that's like first assuming the Earth is flat, and then, on discovering that it's round, saying who knows, maybe it has the topology of a Klein bottle. There's a surprise in a given direction, but once you've assimilated it, there may not be any further surprise in that same direction.

The Earth is still as round as it was for Eratosthenes. We talked before in this class about the strange property of quantum mechanics that it seems like a very brittle theory. Even general relativity, you could imagine putting in torsion or other ways of playing around with it. But quantum mechanics is very hard to fool around with without making it inconsistent. Of course, that doesn't prove that there's nothing beyond it. To people in the 1700s, it probably looked like you couldn't twiddle around much with Euclidean geometry without making it inconsistent. But on the other hand, the mere fact that something is conceivable doesn't imply that we ought to spend time on it. So, are there actual ideas about what could be beyond quantum mechanics?

Well, there are these quantum gravity proposals where it looks like you don't even have unitarity–people can't even get the probabilities to sum to 1. The positive spin on that would be "woohoo! We found something beyond quantum mechanics!" The negative spin would be that these theories (as they currently stand) are just nonsense, and when quantum gravity people finally figure out what they're doing, they'll have recovered unitarity. And then there are phenomena that seem to change our understanding of quantum mechanics a little bit. One of these is the black hole information loss problem.

The trouble is once you're in the black hole, you're not even near the event horizon. You're headed straight for the singularity. On the other hand, if the black hole is going to be leaking out information, then it seems like the information should somehow be *on* the event horizon or very close to it. This is especially so since we know that the *amount* of information in the black hole is proportional to the surface area. But from your perspective, you're just somewhere in the interior. So it seems like the information has to be in two places at once.

Anyway, one proposal that people like Gerard 't Hooft and Lenny Susskind have come up with is that, yes, the information *does* get "duplicated." On its face, that would seem to violate quantum mechanics–specifically, the No-Cloning Theorem. But on the other hand, how would you ever *see* both copies of the information? If you're inside the black hole, then you're never going to see the outside copy. You can imagine that if you're really desperate to find out whether the No-Cloning Theorem is violated–so desperate you'd sacrifice your life to find out–you could first measure the outside copy, then jump into the black hole to look for the inside copy. But here's the funny thing: people actually calculated what would happen if you tried to do this, and they found that you'd have to wait a very long time for the information to come out as Hawking radiation, and by the time one copy comes out via Hawking radiation, the other copy is already at the singularity. It's like there's some kind of censorship that acts to keep you from seeing both copies at once. So from any one observer's perspective, it's as if unitarity is maintained. So it's funny that there are these little things that seem like they might cause a conflict with quantum mechanics or lead to a more powerful model of computation, but when you really examine them, it no longer seems like they do.
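Incidentally, the linearity argument behind the No-Cloning Theorem is simple enough to check numerically. Here's a minimal sketch (my own illustration, not part of the lecture) in Python with NumPy:

```python
import numpy as np

# If a single unitary U cloned every state (U|psi>|0> = |psi>|psi>),
# it would in particular have to be linear. Check that linearity and
# cloning already conflict on |0>, |1>, and |+> = (|0>+|1>)/sqrt(2):
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)

def clone_target(psi):
    return np.kron(psi, psi)   # the output a cloner is supposed to produce

# A linear map that clones |0> and |1> must send |+>|0> to
# (|00> + |11>)/sqrt(2) -- an entangled state, not the required |+>|+>:
linear_output = (clone_target(ket0) + clone_target(ket1)) / np.sqrt(2)
required_output = clone_target(plus)

print(np.allclose(linear_output, required_output))  # False: no such U exists
```

The point is that unitarity isn't even needed for the contradiction: linearity alone rules out a universal cloner.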

So that's a physics way to get at the question, but there's also a computer-science answer. We can ask, what kinds of complexity classes can we define or imagine above BQP? Well, firstly, if we had a model that could solve NP-complete problems in polynomial time, then from my perspective, that's where we enter the realm of dreams and fantasy.

I mean, at that point, maybe we should just quit doing math and science! If you want to prove the Riemann Hypothesis, just turn the crank, and there's your proof–no thinking involved. Of course, whether NP-complete problems can be efficiently solved is an empirical question–if you can do it, you can do it.

But suppose we want to believe that there's something more powerful than quantum computing, but that still can't solve NP-complete problems in polynomial time. Then, how much "room" is there for such a model? We do have some problems that seem to be easier than NP-complete, but that are still too hard to solve efficiently with a quantum computer. Two examples are graph isomorphism and approximate shortest vector. Very "close" to NP-complete, but probably not quite there, seem to be the problems of inverting one-way functions and distinguishing random from pseudorandom functions.

Years ago I came up with one example of a computational model (discussed earlier in the course), where you get to see the entire history of a hidden variable during the course of a quantum computation. I gave evidence that in this model, you do get more than with ordinary quantum computing–for example, you get graph isomorphism and approximate shortest vector–but still you don't get the NP-complete problems. On the other hand, my model was admittedly rather artificial. So maybe there *is* one more dramatic step before you get to NP-complete–I'm not sure.

**Q:** How can you say "one step"? You can theoretically always contrive a problem between any other two problems.

**Scott:** Of course, but here's the point: no one was interested in quantum computing when Bernstein and Vazirani discovered you could solve the Recursive Fourier Sampling problem. People only became interested when it was found that you could solve problems that were *previously* considered to be important, like factoring. So if we judge our hypothetical new model by the same standard, and ask what problems it can solve that we already think are important, there are arguably not that many of them between factoring and NP-complete. So again, there could be some new model that gets you slightly beyond BQP–maybe it lets you solve Graph Isomorphism, or the Hidden Subgroup Problem for a few more non-Abelian groups–but at least in our current picture, there's only a limited amount of "room" between BQP and the NP-complete problems.

**Q:** BQP hasn't been shown to be in NP, right?

**Scott:** Right. That's also an excellent question. In fact, we don't even know if BQP is in the polynomial-time hierarchy; this has been a central open question in quantum computing theory for more than a decade. Of course, we can't hope to prove unconditionally that BQP is not in PH (since that would imply P≠PSPACE among other things). But we could at least hope to construct an oracle relative to which BQP isn't in the polynomial hierarchy. That's the least we could hope for, but years have gone by and we still can't even do that.

A number of us have worked on this, and it seems extremely hard. In fact, we can't even prove there's an oracle relative to which BQP is not in AM (Arthur-Merlin). AM is just like a probabilistic version of NP. So AM is where the progress stopped. We've got an oracle relative to which BQP is not in MA (Merlin-Arthur), but no such oracle for AM. I strongly believe that there exists such an oracle, but current techniques just don't seem strong enough to prove it. Indeed, I believe that there's an oracle relative to which BQP isn't in the polynomial hierarchy.

**Q:** But what about in the non-oracle world?

**Scott:** That's a whole other can of worms. If we ask whether BQP is in the polynomial hierarchy in the real world, then I'm no longer willing to say. It would not surprise me enormously if, in the real world, BQP were in AM. We don't have any concrete example of a non-promise decision problem that we have good reason to believe is in BQP but not in AM. I mean, we've got these things like Recursive Fourier Sampling, but not only are they oracle problems, it's not clear how you'd ever get the oracle out of the picture.

Note that we believe that AM = NP, because of derandomization. So if BQP were in AM, then BQP would be in NP ∩ coNP. That would say that whenever a quantum computer produces an output, there'd be a short classical proof that it produced that output. That would really be astounding, but I don't think we can exclude it based on any of the evidence we have.

**Q:** Where would you ever get an oracle?

**Scott:** You just define it. Let *A* be an oracle...

**Q:** That's a bit of an issue.

**Scott:** It is, it is. It's strange to me that only computer scientists get this kind of flak for using the techniques they have to answer questions. Physicists say they're going to do some calculation in the perturbative regime: "Oh! Of course, what else would you do? These are deep and difficult problems." Of course you're going to do what works. Computer scientists say that we can't yet prove P≠NP, but we'll study it in the relativized world: "That's cheating!" It just seems obvious that you start with the kind of results you can prove and work from there. One objection that could be made against oracle results in the past is that some of them were trivial–some essentially just amounted to restatements of the question. But these days, we've got some very nontrivial oracle separations.

I mean, I can tell you in very concrete terms what an oracle result's good for. About every month or so, I see another paper on the arXiv solving NP-complete problems in polynomial time on a quantum computer–this must be the easiest problem in the world. Often these papers are very long and complicated. But if you know about oracle results, you don't have to read the papers. That's a very useful application: you can say, if this proof works, then it also works relative to oracles, but that can't be the case, because we know of an oracle where it's false. Of course, that probably won't convince the author, but it will at least convince you.

As another example, I gave this oracle relative to which SZK (Statistical Zero-Knowledge) is not in BQP. In other words, finding collisions is hard for a quantum computer. Sure enough, as the years go by, I see these papers that talk about how to find collisions with a constant number of queries on a quantum computer, and without reading the paper, I can say no, this has to fail, because it's not doing anything non-relativizing. So, oracles are there to tell you what approaches not to try. They direct you towards the non-relativizing techniques that we know we're eventually going to need.

**Q:** What complexity class are you?

**Scott:** I'm not even all of P. I'm not even LOGSPACE! Especially if I haven't had much sleep.

**Q:** What's the complexity class for creativity?

**Scott:** That's an excellent question. I was thinking about it just this morning. Someone asked me if humans have an oracle in their head for NP. Well, maybe Gauss or Wiles did. But for most of us, finding proofs is a very hit-or-miss business. If you change your perspective, it seems pathetic that after three billion years of natural selection, and after all this time building up civilizations, fighting wars and everything else, we can solve a *few* instances of SAT–but switch to Riemann Hypothesis or Goldbach Conjecture instances, and suddenly we can't solve those.

When it comes to proving theorems, you're dealing with a very special case of an NP-complete problem. You aren't just taking some arbitrary formula of size polynomial in *n*; you're taking some fixed question of fixed size and asking, does this have a proof of size *n*? So you're uniformly generating these instances for whatever length of proof you're looking for. But even for this sort of problem, the evidence is *not* good that we have some general algorithm for solving it. A few people decided to forsake their social lives and spend their whole lives in this monastic existence, thinking about math problems. Finally, they've managed to succeed on a few problems, sometimes even winning Fields Medals for that. But there's still this huge universe of problems that everyone knows about and no one can solve. So I would say that before reaching for Penrose-style speculations about human mathematical creativity transcending computation, we should first make sure the data actually supports the hypothesis that humans are good at finding proofs. I'm not convinced that it does.

Now, it's clear that in certain cases, we are very good at finding patterns or taking a problem that looks to be hard and decomposing it into easier subproblems. In many cases, we're much, much better at that than any computer. We can ask why that is. That's a very big question, but I think part of the answer is that we've got a billion-year head start: natural selection has given us a very good toolbox of heuristics for solving certain kinds of search problems. Not all of them, and not all the time, but in some cases we can do really well. Like I said, I believe that NP-complete problems are not efficiently solvable in the physical universe, so I believe there can never be a machine that can just prove any theorem efficiently–but there could certainly be machines that take advantage of the same kind of creative insight that human mathematicians have. They don't have to beat God; they just have to beat Andrew Wiles. That could be an easier problem, but it takes us outside the scope of complexity theory and into AI.

**Q:** So even if there's no way to solve NP-complete problems in polynomial time, human mathematicians could still be rendered obsolete?

**Scott:** Sure. And after the computers take over from us, maybe they'll worry that *they'll* be out of a job once some NP oracle comes along.

**Q:** When we discussed P_{CTC} (the class of problems efficiently solvable using a closed timelike curve), shouldn't that have been BPP_{CTC}, since P doesn't have access to any randomness, whereas with closed timelike curves you have to have a distribution?

**Scott:** That's a tricky question–even with a fixed-point distribution, we can still require the CTC computer to produce a deterministic *output* (so that in essence, randomness is only used to avoid the Grandfather Paradox and not for any other purpose). On the other hand, if you relax that requirement and let the answer have some probability of error, it turns out that you get the same complexity class. That is, one can show that P_{CTC} = BPP_{CTC} = PSPACE.
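As a toy illustration of the fixed-point idea (my own sketch, not part of the original discussion), here's the classical Grandfather Paradox resolved Deutsch-style in Python: the bit on the closed timelike curve gets flipped, and the only distribution consistent with that evolution is the uniform one.

```python
import numpy as np

# Deutsch-style fixed point for the classical Grandfather Paradox: the bit
# on the closed timelike curve gets flipped (you go back and flip it iff
# it was set), so its distribution must satisfy NOT @ p = p.
NOT = np.array([[0.0, 1.0],
                [1.0, 0.0]])

# The fixed point is the eigenvector of NOT with eigenvalue 1:
vals, vecs = np.linalg.eig(NOT)
p = vecs[:, np.isclose(vals, 1.0)].flatten().real
p /= p.sum()

print(p)  # [0.5 0.5] -- nature resolves the paradox with a fair coin
```

Randomness enters only to make a self-consistent history exist, which is exactly the sense in which P_{CTC} can still demand a deterministic output.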

**Q:** Bell inequalities seem to be an important tool in studying the limitations of quantum mechanics. We know what happens if we have completely non-local boxes, but what happens (say, to computational complexity) if we allow correlations just above what, say, quantum entanglement gives?

**Scott:** That's a good question, and there are people at Perimeter and elsewhere who have been thinking about it. The fundamental problem is that you can imagine Tsirelson's bound is violated–that is, you can imagine that there are these non-local correlations stronger than anything allowed by quantum mechanics–but saying that still doesn't give us a *model of computation*. I mean, what are the allowed operations? What is the model you're assuming that lets you break Tsirelson's bound?

Admittedly, you *can* just assume there are these non-local boxes and then determine the consequences of that for *communication* complexity. For example, Brassard et al. (building on an earlier result of Wim van Dam) showed that if you have a good enough non-local box (if the error is small enough), then it makes communication complexity trivial (i.e., all communication problems can be solved with just a single bit).

**Q:** I didn't know about the error part. I just heard that result for a perfect non-local box.

**Scott:** It was extended to the case where the box has 7% error or something. So there's still some gap. Quantum mechanics lets you win the so-called *CHSH game* about 85% of the time. If you had a magical box that let you win the CHSH game 93% of the time, then communication complexity would become trivial. There's still that gap between 85% and 93%, which it would be nice to close. As I said, if you want to ask about computational complexity, then it becomes more complicated. A non-local box does not a model of computation make.
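To make the classical-versus-quantum gap concrete, here's a short Python check (my own illustration): brute-force the best classical CHSH strategy, and compare it with the quantum value cos²(π/8) ≈ 0.854, achievable by measuring entangled qubits at suitably chosen angles.

```python
import itertools, math

# CHSH game: a referee sends random bits x to Alice and y to Bob; they
# answer a and b without communicating, and win iff a XOR b == x AND y.
# A classical deterministic strategy is just a pair of functions
# (answer on input 0, answer on input 1) for each player:
strategies = list(itertools.product([0, 1], repeat=2))
best = max(
    sum((A[x] ^ B[y]) == (x & y) for x in (0, 1) for y in (0, 1))
    for A in strategies for B in strategies
) / 4

print(best)                      # 0.75: no classical strategy beats 75%
print(math.cos(math.pi / 8)**2)  # ~0.8536: the quantum (Tsirelson) value
```

Shared randomness doesn't help, since it's just a mixture of deterministic strategies, so 75% really is the classical ceiling the lecture is rounding from.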

**Q:** I'm thinking of what a quantum circuit would look like if you allowed some non-locality. You just define some unitary version of the non-local box and query it in superposition whenever you want to, and that's the only way to join two separated parties.

**Scott:** You mean you don't have multi-qubit gates anymore?

**Q:** You do, but any multi-qubit gates cannot span parties anymore.

**Scott:** OK, but usually when we define a model of computation, we don't even have a concept of parties. There's one party.

**Q:** Not necessarily. Think of multi-party interactive proofs.

**Scott:** All right, well now we have a different question. You've got a multi-prover interactive proof, and you've got a non-local box between the provers. What's the power of that? That's a great question, which I'd be happy to talk about offline. Getting back to the original question, of course I'd be extremely interested if you could come up with a model of computation that looked like BQP, but incorporated non-local boxes in an interesting way. I haven't seen that yet.

**Q:** I think you could have just one very big, complicated non-local box that just gives the right answer.

**Scott:** That reminds me of something: we've got this class BQP/qpoly (BQP with polynomial-sized advice states that only depend on the length of the input), and several people have asked me what happens if you extend that so that you have a polynomial-sized *advice unitary operation*. Well, how powerful is that? Whatever Boolean function *f* you want to compute, just design a unitary operation that computes that function and you're done! So that class can solve any problem whatsoever.

**Q:** Do you see there being a bit more clearing up of the complexity classes? We just keep getting more and more.

**Scott:** To me, that's like asking a chemist if she sees a clearing up of the Periodic Table. Is Nitrogen going to collapse with Helium? In our case, it's a little bit better than for the chemist, since we can expect a collapse of *some* classes. For example, we hope and expect that P, RP, ZPP and BPP are going to collapse. We hope and expect that NP, AM and MA are going to collapse. IP and PSPACE already collapsed. So yeah, there are collapses, but we also know that there are other pairs of classes that can't collapse. We know, for example, that P is different from EXP, which immediately tells you that either P has to be different from PSPACE or PSPACE has to be different from EXP, or both. So not everything can collapse. That shouldn't really be surprising.

Now, maybe complexity theory took a wrong turn when it gave everything this string of random-looking capital letters as its name–I appreciate how they can look to people like codenames or inside jokes. But really, we're just talking about different notions of computation. Time, space, randomness, quantumness, having a prover around. There are as many complexity classes as there are different notions of computation. So, the richness of the complexity zoo just seems like an inevitable reflection of the richness of the computational world.

**Q:** Do you think that BPP will collapse with P?

**Scott:** Oh, yeah. Absolutely. We have not just one but several reasonable-looking circuit lower bound conjectures where we know that if they're true, then P = BPP. I mean, there were people who realized even in the 1980s that P should equal BPP. Even then, Yao pointed out that if you had good enough cryptographic pseudorandom number generators, then you could use them to derandomize any probabilistic algorithm, hence P = BPP. Now, what people have managed to do in the last ten years is to get the same conclusion with weaker and weaker assumptions.
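The seed-enumeration idea behind Yao's observation can be sketched in toy form (my own example, not from the lecture): take a concrete randomized algorithm–here Freivalds' check for matrix multiplication–and note that if you could afford to enumerate every random seed, the algorithm would become deterministic. A real derandomization replaces brute-force enumeration with a pseudorandom generator whose O(log n)-bit seeds *can* all be tried in polynomial time.

```python
import numpy as np
from itertools import product

# Freivalds' randomized check of a matrix product over GF(2): to test
# whether A*B == C, pick a random 0/1 vector r and compare A(Br) with Cr.
# If C is wrong, a single random r catches it with probability >= 1/2.
def freivalds(A, B, C, r):
    return np.array_equal(A @ (B @ r) % 2, C @ r % 2)

# Toy "derandomization": enumerate the whole seed space. (Exponential here;
# a real derandomization would use a pseudorandom generator with short,
# enumerable seeds that still fools the test.)
def deterministic_check(A, B, C):
    n = A.shape[0]
    return all(freivalds(A, B, C, np.array(r)) for r in product([0, 1], repeat=n))

A = np.array([[1, 0], [1, 1]])
B = np.array([[1, 1], [0, 1]])
C = A @ B % 2
C_bad = C.copy(); C_bad[1, 1] ^= 1

print(deterministic_check(A, B, C))      # True: the product is correct
print(deterministic_check(A, B, C_bad))  # False: some vector r exposes the error
```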

Besides that, there's also an "empirical" case, in that two of the most spectacular results in complexity theory in the last decade were the AKS primality test, showing that primality testing is in P, and Reingold's result that searching an undirected graph is in deterministic logspace. So this program of taking specific randomized algorithms and derandomizing them has had considerable success. It sort of increases one's confidence that if we were smart enough or knew enough, then this would probably work for other BPP problems as well. You can also look at a specific case, like derandomizing polynomial identity testing, and maybe this is a good example to illustrate the point.

The question is, if you've got some polynomial like x^{2}-y^{2}-(x+y)(x-y), *is it identically zero?* In this case, the answer is yes. But you could have some very complicated polynomial identity involving variables raised to very high powers, and then it's not obvious how you would check it efficiently even with a computer. If you tried to expand everything out, you'd get an exponential number of terms.

Now, we do know of a fast randomized algorithm for this problem: namely, just plug in some random values (over some suitably large finite field) and see whether the identity holds or not. The question is whether this algorithm can be *derandomized*. That is, is there an efficient deterministic algorithm to check whether a polynomial is identically zero? If you bang your head against this problem, you quickly get into some very deep questions in algebraic geometry. For example, can you come up with some small list of numbers, such that given any polynomial *p*(*x*) described by a small arithmetic formula, all you have to do is plug in the numbers in that list, and if *p*(*x*)=0 for every *x* in the list, then it's zero everywhere? That seems like it *should* be true, because all you should have to do is pick some "generic" set of numbers to test, which is much larger than the size of the formula for *p*. For example, if you find that *p*(1)=0, *p*(2)=0, ..., *p*(*k*)=0, then either *p* must be zero, or else it must be evenly divisible by the polynomial (*x*-1)...(*x*-*k*). But is there *any* nonzero multiple of (*x*-1)...(*x*-*k*) that can be represented by an arithmetic formula of size much smaller than *k*? That's really the crucial question. If you can prove that no such polynomial exists, then you'll have a way to derandomize polynomial identity testing (a major step toward proving P=BPP).
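The randomized algorithm here is the Schwartz-Zippel test. A minimal Python sketch (my own illustration, with the field size and trial count chosen arbitrarily):

```python
import random

# Schwartz-Zippel randomized identity test: a nonzero polynomial of total
# degree d vanishes at a uniformly random point of a size-P field with
# probability at most d/P, so a few random trials make a false
# "identically zero" verdict overwhelmingly unlikely.
P = 1_000_003  # a prime much larger than the degrees we'll test

def is_probably_zero(f, num_vars, trials=20):
    for _ in range(trials):
        point = [random.randrange(P) for _ in range(num_vars)]
        if f(*point) % P != 0:
            return False   # a witness point: definitely not identically zero
    return True            # zero at every sample: identically zero w.h.p.

# The identity from the text, x^2 - y^2 - (x+y)(x-y), really is zero:
print(is_probably_zero(lambda x, y: x*x - y*y - (x + y)*(x - y), 2))  # True
# A non-identity, x^2 - y^2 - (x+y)^2, gets caught almost surely:
print(is_probably_zero(lambda x, y: x*x - y*y - (x + y)**2, 2))       # False
```

Note the one-sided error: a "not zero" answer is always correct; only the "zero" answer carries a (tiny) chance of being wrong.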

**Q:** What do you think the chances are that three Indian mathematicians will come up with an elementary proof?

**Scott:** I think it's gonna take at least four Indian mathematicians! We know today that if you prove good enough circuit lower bounds, then you can prove P=BPP. But Impagliazzo and Kabanets also proved a result in the other direction: if you want to derandomize, you're going to *have* to prove circuit lower bounds. To me that gives some explanation as to why people haven't succeeded yet in proving that P=BPP. It's all because we don't know how to prove circuit lower bounds. The two problems are almost–though not quite–the same.

**Q:** Does P = BPP imply that NP = MA?

**Scott:** Almost. If you derandomize PromiseBPP, then you derandomize MA. No one has any idea of how to derandomize BPP that wouldn't also derandomize PromiseBPP.

Any other questions? Maybe a less technical one?

**Q:** How would you answer an intelligent design advocate? Without getting shot?

**Scott:** You know, I'm genuinely not sure. It's one of those cases where there might be anthropic selection going on: if someone could be persuaded by evidence on this question, then wouldn't he or she already have been? I think we have to concede that there are people for whom the most important thing about a belief isn't whether it's true, but rather some other property of the belief, such as its role in a community. So they're playing a different game, where beliefs are judged by a different standard. It's like you're a basketball player on a football field. How are you supposed to win?

**Q:** Is complexity theory relevant to the evolution versus intelligent design controversy?

**Scott:** To the extent that you need complexity theory, it's all sort of trivial complexity theory. For example, just because we believe that NP-complete problems are exponentially hard in the worst case doesn't mean we believe that every particular instance (say, evolving a working brain or a retina) has to be hard.

**Q:** When Steven Weinberg came to talk at the Perimeter Institute, the question was asked "where does God fit into all of this"? His answer was to just dismiss religion as an artifact of our evolution that now has no value, and that we'd eventually grow out of it.

**Scott:** Was that a question?

**Q:** Do you agree with him?

**Scott:** So I think that there are several questions here.

**Q:** You're being a politician.

**Scott:** Look, this is a hot topic right now, and I've read books like Richard Dawkins's.

**Q:** Was it a good book?

**Scott:** Yes. Dawkins is always amusing, and he's at his absolute best when he's ripping into bad arguments like a Rottweiler. Anyway, one way to think about it is that the world would clearly be a better place if there were no wars, or for that matter if there were no lawyers and no one sued anyone else. And there are those who want to turn that idea into an actual political program. I'm not talking about people who oppose specific wars like the one in Iraq for specific reasons, but absolute pacifists. And the obvious problem with their position is a game-theoretic one. Yes, the world would be a better place with no armies, but the other guys have an army.

It's clear that religion fills some sort of role for people; otherwise it wouldn't have been so ubiquitous for thousands of years or resisted very significant efforts to stamp it out. For example, maybe people who believe God is on their side are braver in battle. Or maybe religion is one of the factors (besides more obvious factors) that induces men and women to get married and have lots of babies, and is therefore adaptive just from a Darwinian point of view. Years ago I was struck by an irony: in contemporary America, you've got these stereotypical coastal elites who believe in Darwinism and often live by themselves well into their thirties or forties, and then you've got these stereotypical heartland folks who reject Darwinism but marry young and have 7 kids, 49 grandkids and 343 great-grandkids. So then it's not really a contest between Darwinists and anti-Darwinists; it's just a contest between the Darwinian theorists and the Darwinian practitioners!

If this speculation is at least partly correct–if religion survives because it helps induce people to win wars, have more babies, etc.–then the question arises, how are you ever going to counter it unless you've got a competing religion?

**Q:** Yeah, I'm sure that's what people are thinking about when they decide whether or not to believe in a religion.

**Scott:** I'm not saying it's conscious, or that people are thinking it through in these terms. Surely a few are, but the point is they don't *have* to in order for it to describe their behavior.

**Q:** We can have lots of kids without accepting a religion if we want to.

**Scott:** Sure, we *can*, but *do* we on average? I don't know the numbers offhand, but I believe there are studies backing up the claim that religious people have more children on average.

Now, there's another key factor, which is that sometimes *irrationality can be supremely rational*, because it's the only way of proving to someone else that you're committed to something. Like if someone shows up at your doorstep and asks for $100, you're much more likely to give it to him if his eyes are bloodshot and he looks really irrational–you don't know what he's going to do! The only way this is actually effective is if the show of irrationality is *convincing*. The person can't just feign it, or you'll see through it. He has to be really, really irrational and show that he's ready to get revenge on you no matter what. If you believe that the person's going to defend his honor to the death, you're probably not going to mess with him.

So the theory is that religion is a way of committing yourself. Someone might *say* that he believes in a certain moral code, but others might figure talk is cheap and not trust him. On the other hand, if he has a really long beard and prays every day and really seems to believe that he'll have an eternity in hellfire if he breaks the code, then he's making this very expensive commitment to his belief. It becomes much more plausible that he means it. So on this theory, religion functions as a way of publicly advertising a commitment to a certain set of rules. Of course, the rules might be good or they might be terrible. Nevertheless, this sort of public commitment to obeying a set of rules, backed up with supernatural rewards and punishments, seems like an important element of how societies organized themselves for thousands of years. It's why rulers trusted their subjects not to rebel, men trusted their wives to stay faithful, wives trusted their husbands not to abandon them, etc. etc.

So, I feel like these are the sorts of game-theoretic forces that Dawkins and Hitchens and the other anti-religion crusaders are up against, and that they maybe don't sufficiently acknowledge in their writing. What makes it easier for them, of course, is that their opponents can't just come out and say, "yes, *of course* it's all a load of hooey, but here are the important social functions it serves!" Instead, religious apologists often resort to arguments that are easily demolished (at least since the days of Hume and Darwin)–since their *real* case, though considerably stronger, is one that's so hard for them to make openly!

In summary, maybe it's true that humans (if we survive long enough) will eventually outgrow religion, now that we have something better to fill religion's explanatory role. But before that will happen, I think that *at the least* we'll need to better understand the social functions that religion played for most of history and still plays in most of the world, and maybe come up with alternative social mechanisms to solve the same sorts of problems.

**Q:** I was just wondering if there's another case where irrationality might be preferred over rationality.

**Scott:** Where to begin?

**Q:** Especially if you have incomplete information. Like if you have a politician who's committed and won't change his ideals later on, you can feel more assured that he'll do what he said he would.

**Scott:** Because he has conviction. He believes in what he says. To most voters, that matters much more than the actual content of the beliefs. Bush gives a *very* good impression of belief.

**Q:** I'm not sure that's the best for the interests of the United States.

**Scott:** Right, that's the question! How do you defeat people who have mastered the mechanisms of rational irrationality? By saying "no, look here, you've got your facts wrong"? Which game are you going to play? Or take another example: a singles bar. The ones who succeed are the ones best able to convince themselves (at least temporarily) of certain falsehoods: "I'm the hottest guy/girl here." This is a very clear case where irrationality seems to be rational in some sense.

**Q:** The standard example is if you're playing Chicken with someone, it's advantageous to you if you break your steering wheel so it can't turn.

**Scott:** Exactly.

**Q:** Why is computer science not a branch of physics departments?

**Scott:** The answer to that isn't philosophical, it's historical. Computer scientists back in the day were either mathematicians or electrical engineers. People who would have been computer scientists when there wasn't such a department went into either math or EE. Physics had its plate full with other things, and to get into physics you had to learn this enormous amount of other stuff which maybe wasn't directly relevant if you just wanted to hack around and write programs, or if you wanted to think theoretically about computation. Paul Graham has said that computer science is not so much a unified discipline as a collection of people thrown together by accident of history, like Yugoslavia. You've got the "mathematicians," the "hackers," and the "experimentalists," and we just throw them all together in the same department and hope they sometimes talk to each other. But I do think (and this is a cliched thing to say) that the boundaries between CS, math, physics, and so on are going to look less and less relevant, more and more like a formality. It's clear that there's a terrain, but it's not clear where to draw the boundaries.