Archive for the ‘Metaphysical Spouting’ Category

Quantum Computing Since Democritus Lecture 21: Ask Me Anything

Monday, September 1st, 2008

In the final Democritus installment, I entertain students’ questions about everything from derandomization to the “complexity class for creativity” to the future of religion.  (In this edited version, I omitted questions that seemed too technical, which surprisingly was almost half of them.)  Thanks to all the readers who’ve stuck with me to this point, to the students for a fantastic semester (if they still remember it) as well as their scribing help, to Chris Granade for further scribing, and to Waterloo’s Institute for Quantum Computing for letting me get away with this.  I hope you’ve enjoyed it, and only wish I’d kept my end of the bargain by getting these notes done a year earlier.

A question for the floor: some publishers have expressed interest in adapting the Democritus material into book form.  Would any of you actually shell out money for that?

Quantum Computing Since Democritus Lecture 18: Free Will

Monday, July 28th, 2008

If you don’t like this latest lecture, please don’t blame me: I had no choice! (Yeah, yeah, I know. You presumably have no choice in criticizing it either. But it can’t hurt to ask!)

Those of you who’ve been reading this blog since the dark days of 2005 might recognize some of the content from this post about Newcomb’s Paradox, and this one about the Free Will Theorem.

Quantum Computing Since Democritus Lecture 17: Fun With The Anthropic Principle

Thursday, July 24th, 2008

Here it is. There was already a big anthropic debate in the Lecture 16 comments — spurred by a “homework exercise” at the end of that lecture — so I feel absolutely certain that there’s nothing more to argue about. On the off chance I’m wrong, though, you’re welcome to restart the debate; maybe you’ll even tempt me to join in eventually.

The past couple weeks, I was at Foo Camp in Sebastopol, CA, where I had the opportunity to meet some wealthy venture capitalists, and tell them all about quantum computing and why not to invest in it hoping for any short-term payoff other than interesting science. Then I went to Reed College in Portland, OR, to teach a weeklong course on “The Complexity of Boolean Functions” at MathCamp’2008. MathCamp is (as the name might suggest) a math camp for high school students. I myself attended it way back in 1996, where some guy named Karp gave a talk about P and NP that may have changed the course of my life.

Alas, neither camp is the reason I haven’t posted anything for two weeks; for that I can only blame my inherent procrastination and laziness, as well as my steadily-increasing, eminently-justified fear of saying something stupid or needlessly offensive (i.e., the same fear that leads wiser colleagues not to start blogs in the first place).

The array size of the universe

Friday, May 23rd, 2008

I’ve been increasingly tempted to make this blog into a forum solely for responding to the posts at Overcoming Bias. (Possible new name: “Wallowing in Bias.”)

Two days ago, Robin Hanson pointed to a fascinating paper by Bousso, Harnik, Kribs, and Perez, on predicting the cosmological constant from an “entropic” version of the anthropic principle. Say what you like about whether anthropicology is science or not, for me there’s something delightfully non-intimidating about any physics paper with “anthropic” in the abstract. Sure, you know it’s going to have metric tensors, etc. (after all, it’s a physics paper) — but you also know that in the end, it’s going to turn on some core set of assumptions about the number of sentient observers, the prior probability of the universe being one way rather than another, etc., which will be comprehensible (if not necessarily plausible) to anyone familiar with Bayes’ Theorem and how to think formally.

So in this post, I’m going to try to extract an “anthropic core” of Bousso et al.’s argument — one that doesn’t depend on detailed calculations of entropy production (or anything else) — trusting my expert readers to correct me where I’m mistaken. In defense of this task, I can hardly do better than to quote the authors themselves. In explaining why they make what will seem to many like a decidedly dubious assumption — namely, that the “number of observations” in a given universe should be proportional to the increase in non-gravitational entropy, which is dominated (or so the authors calculate) by starlight hitting dust — they write:

We could have … continued to estimate the number of observers by more explicit anthropic criteria. This would not have changed our final result significantly. But why make a strong assumption if a more conservative one suffices? [p. 14]

In this post I’ll freely make strong assumptions, since my goal is to understand and explain the argument rather than to defend it.

The basic question the authors want to answer is this: why does our causally-connected patch of the universe have the size it does? Or more accurately: taking everything else we know about physics and cosmology as given, why shouldn’t we be surprised that it has the size it does?

From the standpoint of post-1998 cosmology, this is more-or-less equivalent to asking why the cosmological constant Λ ~ 10^-122 should have the value it has. For the radius of our causal patch scales like

1/√Λ ~ 10^61 Planck lengths ~ 10^10 light-years,

while (if you believe the holographic principle) its maximum information content scales like 1/Λ ~ 10^122 qubits. To put it differently, there might be stars and galaxies and computers that are more than ~10^10 light-years away from us, and they might require more than ~10^122 qubits to describe. But if so, they’re receding from us so quickly that we’ll never be able to observe or interact with them.
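As a quick sanity check on those conversions (a back-of-the-envelope sketch; the Planck-length and light-year values are just the standard ones, and factors of a few don't matter here):

    # Check that 10^61 Planck lengths is indeed ~10^10 light-years,
    # and that the corresponding holographic bound is ~10^122 qubits.
    PLANCK_LENGTH_M = 1.616e-35     # meters
    LIGHT_YEAR_M = 9.46e15          # meters

    radius_planck = 1e61                            # ~ 1/sqrt(Lambda)
    radius_ly = radius_planck * PLANCK_LENGTH_M / LIGHT_YEAR_M
    print(f"radius ~ {radius_ly:.1e} light-years")     # ~1.7e10
    print(f"info   ~ {radius_planck**2:.0e} qubits")   # ~1/Lambda ~ 1e122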

Of course, to ask why Λ has the value it does is really to ask two questions:

1. Why isn’t Λ smaller than it is, or even zero? (In this post, I’ll ignore the possibility of its being negative.)
2. Why isn’t Λ bigger than it is?

Presumably, any story that answers both questions simultaneously will have to bring in some actual facts about the universe. Let’s face it: 10^-122 is just not the sort of answer you expect to get from armchair philosophizing (not that it wouldn’t be great if you did). It’s a number.

As a first remark, it’s easy to understand why Λ isn’t much bigger than it is. If it were really big, then matter in the early universe would’ve flown apart so quickly that stars and galaxies wouldn’t have formed, and hence we wouldn’t be here to blog about it. But this upper bound is far from tight. Bousso et al. write that, based on current estimates, Λ could be about 2000 times bigger than it is without preventing galaxy formation.

As for why Λ isn’t smaller, there’s a “naturalness” argument due originally (I think) to Weinberg, before the astronomers even discovered that Λ>0. One can think of Λ as the energy of empty space; as such, it’s a sum of positive and negative contributions from all possible “scalar fields” (or whatever else) that contribute to that energy. That all of these admittedly-unknown contributions would happen to cancel out exactly, yielding Λ=0, seems fantastically “unnatural” if you choose to think of the contributions as more-or-less random. (Attempts to calculate the likely values of Λ, with no “anthropic correction,” notoriously give values that are off by 120 orders of magnitude!) From this perspective, the smaller you want Λ to be, the higher the price you have to pay in the unlikelihood of your hypothesis.
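To put a rough number on that “unnaturalness” (a toy Monte Carlo of my own, not anything from Weinberg or from the paper): if Λ is a sum of many independent order-1 contributions, then the probability that the sum lands within ε of zero scales linearly with ε, so demanding |Λ| < 10^-122 carries a prior penalty of roughly 10^-122. The linear scaling is easy to confirm for less extreme values of ε:

    import random

    # Toy model: Lambda is the sum of 50 independent contributions,
    # each uniform in [-1, 1].  Estimate Pr[|Lambda| < eps] for a few eps.
    N, TRIALS = 50, 100_000
    sums = [sum(random.uniform(-1, 1) for _ in range(N)) for _ in range(TRIALS)]

    for eps in (1.0, 0.1, 0.01):
        frac = sum(abs(s) < eps for s in sums) / TRIALS
        print(f"eps = {eps:>4}: Pr[|Lambda| < eps] ~ {frac:.4f}")
    # The probability shrinks roughly in proportion to eps, so a cancellation
    # all the way down to 10^-122 carries a prior penalty of ~10^-122.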

Based on the above reasoning, Weinberg predicted that Λ would have close to the largest possible value it could have, consistent with the formation of galaxies. As mentioned before, this gives a prediction that’s too big by a factor of 2000 — a vast improvement over the other approaches, which gave predictions that were off by factors of 10^120 or infinity!

Still, can’t we do better? One obvious approach to pushing Λ down would be to extend the relatively-uncontroversial argument explaining why Λ can’t be enormous. After all, the tinier we make Λ, the bigger the universe (or at least our causal patch of it) will be. And hence, one might argue, the more observers there will be, hence the more likely we’ll be to exist in the first place! This form of anthropicizing — that we’re twice as likely to exist in a universe with twice as many observers — is what philosopher Nick Bostrom calls the Self-Indication Assumption.

However, two problems with this idea are evident. First, why should it be our causal patch of the universe that matters, rather than the universe as a whole? For anthropic purposes, who cares if the various civilizations that arise in some universe are in causal contact with each other or not, provided they exist? Bousso et al.’s response is basically just to stress that, from what we know about quantum gravity (in particular, black-hole complementarity), it probably doesn’t even make sense to assign a Hilbert space to the entire universe, as opposed to some causal patch of it. Their “Causal-Patch Self-Indication Assumption” still strikes me as profoundly questionable — but let’s be good sports, assume it, and see what the consequences are.

If we do this, we immediately encounter a second problem with the anthropic argument for a low value of Λ: namely, it seems to work too well! On its face, the Self-Indication Assumption wants the number of observers in our causal patch to be infinite, hence the patch itself to be infinite in size, hence Λ=0, in direct conflict with observation.

But wait: what exactly is our prior over the possible values of Λ? Well, it appears Landscapeologists typically just assume a uniform prior over Λ within some range. (Can someone enlighten me on the reasons for this, if there are any? E.g., is it just that the middle part of a Gaussian is roughly uniform?) In that case, the probability that Λ is between ε and 2ε will be of order ε — and such an event, we might guess, would lead to a universe of “size” 1/ε, with order 1/ε observers. In other words, it seems like the tiny prior probability of a small cosmological constant should precisely cancel out the huge number of observers that such a constant leads to — Λ(1/Λ)=1 — leaving us with no prediction whatsoever about the value of Λ. (When I tried to think about this issue years ago, that’s about as far as I got.)

So to summarize: Bousso et al. need to explain to us on the one hand why Λ isn’t 2000 times bigger than it is, and on the other hand why it’s not arbitrarily smaller or 0. Alright, so are you ready for the argument?

The key, which maybe isn’t so surprising in retrospect, turns out to be other stuff that’s known about physics and astronomy (independent of Λ), together with the assumption that that other stuff stays the same (i.e., that all we’re varying is Λ). Sure, say Bousso et al.: in principle a universe with positive cosmological constant Λ could contain up to ~1/Λ bits of information, which corresponds — or so a computer scientist might estimate! — to ~1/Λ observers, like maybe ~1/√Λ observers in each of ~1/√Λ time periods. (The 1/√Λ comes from the Schwarzschild bound on the amount of matter and energy within a given radius, which is linear in the radius and therefore scales like 1/√Λ.)

But in reality, that 1/Λ upper bound on the number of observers won’t be anywhere close to saturated. In reality, what will happen is that after a billion or so years stars will begin to form, radiating light and quickly increasing the universe’s entropy, and then after a couple tens of billions more years, those stars will fizzle out and the universe will return to darkness. And this means that, even though you pay a Λ price in prior probability for a universe with 1/Λ information content, as Λ goes to zero what you get for your money is not ~1/√Λ observers in each of ~1/√Λ time periods (hence ~1/Λ observers in total), but rather just ~1/√Λ observers over a length of time independent of Λ (hence ~1/√Λ observers in total). In other words, you get diminishing returns for postulating a bigger and bigger causal patch, once your causal patch exceeds a few tens of billions of light-years in radius.

So that’s one direction. In the other direction, why shouldn’t we expect Λ to be 2000 times bigger than it is (i.e. the radius of our causal patch to be ~45 times smaller)? Well, Λ could be that big, say the authors, but in that case the galaxies would fly apart from each other before starlight really started to heat things up. So once again you lose out: during the very period when the stars are shining the brightest, entropy production is at its peak, civilizations are presumably arising and killing each other off, etc., the number of galaxies per causal patch is minuscule, and that more than cancels out the larger prior probability that comes with a larger value of Λ.

Putting it all together, then, what you get is a posterior distribution for Λ that’s peaked right around 10^-122 or so, corresponding to a causal patch a couple tens of billions of light-years across. This, of course, is exactly what’s observed. You also get the prediction that we should be living in the era when Λ is “just taking over” from gravity, which again is borne out by observation. According to another paper, which I haven’t yet read, several other predictions of cosmological parameters come out right as well.
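Here’s a deliberately crude numerical cartoon of that argument (everything below is my own made-up weighting, not the paper’s; Bousso et al. weight by entropy production, and I’ve just baked the two effects above into an ad hoc function). Work in units where the observed Λ is 1, i.e. where vacuum energy starts dominating right around the epoch of peak starlight:

    import math

    def observers(lam):
        # Small Lambda: only ~1/sqrt(lam) observers (not ~1/lam), since the
        # stars burn out after a Lambda-independent time.  Large Lambda: the
        # causal patch empties of galaxies before starlight peaks -- modeled
        # here by an ad hoc exponential cutoff around lam ~ 1.
        return math.exp(-lam) / math.sqrt(lam)

    # Uniform prior on Lambda.  Ask which order of magnitude of Lambda carries
    # the most posterior mass: the grid is log-spaced, so each point gets a
    # d(Lambda) ~ Lambda * d(log Lambda) measure factor.
    grid = [10 ** (k / 50.0) for k in range(-300, 200)]      # 1e-6 .. ~1e4
    mass = [(lam, lam * observers(lam)) for lam in grid]
    peak = max(mass, key=lambda p: p[1])[0]
    print(f"posterior mass peaks near Lambda ~ {peak:.2f} (observed value: 1)")
    # ~0.5 in these units: a finite value of order the observed one, rather
    # than 0 (which the naive Self-Indication argument wanted) or the
    # galaxy-formation bound of ~2000 (which Weinberg's argument gave).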

On the other hand, it seems to me that there are still few enough data points that physicists’ ability to cook up some anthropic explanation to fit them all isn’t sufficiently surprising to compel belief. (In learning theory terms, the measurable cosmological parameters still seem shattered by the concept class of possible anthropic stories.) For those of us who, unlike Eliezer Yudkowsky, still hew to the plodding, non-Bayesian, laughably human norms of traditional science, it seems like what’s needed is a successful prediction of a not-yet-observed cosmological parameter.

Until then, I’m happy to adopt a bullet-dodging attitude toward this and all other proposed anthropic explanations. I assent to none, but wish to understand them all — the more so if they have a novel conceptual twist that I personally failed to think of.

Floating in Platonic heaven

Thursday, May 15th, 2008

In the comments section of my last post, Jack in Danville writes:

I may have misunderstood [an offhand comment about the "irrelevance" of the Continuum Hypothesis] … Intuitively I’ve thought the Continuum Hypothesis describes an aspect of the real world.

I know we’ve touched on similar topics before, but something tells me many of you are hungerin’ for a metamathematical foodfight, and Jack’s perplexity seemed as good a pretext as any for starting a new thread.

So, Jack: this is a Deep Question, but let me try to summarize my view in a few paragraphs.

It’s easy to imagine a “physical process” whose outcome could depend on whether Goldbach’s Conjecture is true or false. (For example, a computer program that tests even numbers successively and halts if it finds one that’s not a sum of two primes.) Likewise for P versus NP, the Riemann Hypothesis, and even considerably more abstract questions.
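Here’s a minimal sketch of the Goldbach example (it halts, and returns a counterexample, exactly if the conjecture is false):

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def goldbach_search():
        """Halts (returning a counterexample) iff Goldbach's Conjecture is false."""
        n = 4
        while True:
            if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
                return n    # an even number >= 4 that isn't a sum of two primes
            n += 2

    # Whether goldbach_search() ever halts is precisely the open question.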

But can you imagine a “physical process” whose outcome could depend on whether there’s a set larger than the set of integers but smaller than the set of real numbers? If so, what would it look like?

I submit that the key distinction is between

  1. questions that are ultimately about Turing machines and finite sets of integers (even if they’re not phrased that way), and
  2. questions that aren’t.

We need to assume that we have a “direct intuition” about integers and finite processes, which precedes formal reasoning — since without such an intuition, we couldn’t even do formal reasoning in the first place. By contrast, for me the great lesson of Gödel and Cohen’s independence results is that we don’t have a similar intuition about transfinite sets, even if we sometimes fool ourselves into thinking we do. Sure, we might say we’re talking about arbitrary subsets of real numbers, but on closer inspection, it turns out we’re just talking about consequences of the ZFC axioms, and those axioms will happily admit models with intermediate cardinalities and other models without them, the same way the axioms of group theory admit both abelian and non-abelian groups. (Incidentally, Gödel’s models of ZFC+CH and Cohen’s models of ZFC+not(CH) both involve only countably many elements, which makes the notion that they’re telling us about some external reality even harder to understand.)

Of course, everything I’ve said is consistent with the possibility that there’s a “truth” about CH floating in Platonic heaven, or even that a plausible axiom system other than ZFC could prove or disprove CH (which was Gödel’s hope). But the “truth” of CH is not going to have consequences for human beings or the physical universe independent of its provability, in the same way that the truth of P=NP could conceivably have consequences for us even if we weren’t able to prove or disprove it.

For mathematicians, this distinction between “CH-like questions” and “Goldbach/Riemann/Pvs.NP-like questions” is a cringingly obvious one, probably even too obvious to point out. But I’ve seen so many people argue about Platonism versus formalism as if this distinction didn’t exist — as if one can’t be a Platonist about integers but a formalist about transfinite sets — that I think it’s worth hammering home.

To summarize, Kronecker had it backwards. Man and Woman deal with the integers; all else is the province of God.

The bullet-swallowers

Tuesday, May 13th, 2008

Question for the day: what do libertarianism and the Many-Worlds Interpretation of quantum mechanics have in common? Interest in the two worldviews seems to be positively correlated: think of quantum computing pioneer David Deutsch, or several prominent posters over at Overcoming Bias, or … oh, alright, my sample size is admittedly pretty small.

Some connections are obvious: libertarianism and MWI are both grand philosophical theories that start from premises that almost all educated people accept (quantum mechanics in the one case, Econ 101 in the other), and claim to reach conclusions that most educated people reject, or are at least puzzled by (the existence of parallel universes / the desirability of eliminating fire departments). Both theories seem to have a strong following with nerds who read science fiction and post to Internet discussion groups, but a relatively poorer following with both John Q. Public and Alistair K. Intellectual. (Needless to say, these stereotypes tell us almost nothing about the theories’ validity.)

My own hypothesis has to do with bullet-dodgers versus bullet-swallowers. A bullet-dodger is a person who says things like:

Sure, obviously if you pursued that particular line of reasoning to an extreme, then you’d get such-and-such an absurd-seeming conclusion. But that very fact suggests that other forces might come into play that we don’t understand yet or haven’t accounted for. So let’s just make a mental note of it and move on.

Faced with exactly the same situation, a bullet-swallower will exclaim:

The entire world should follow the line of reasoning to precisely this extreme, and this is the conclusion, and if a ‘consensus of educated opinion’ finds it disagreeable or absurd, then so much the worse for educated opinion! Those who accept this are intellectual heroes; those who don’t are cowards.

In a lifetime of websurfing, I don’t think I’ve ever read an argument by a libertarian or a Many-Worlds proponent that didn’t sound like the latter.

We know plenty of historical examples where the bullet-swallowers were gloriously right: Moore’s Law, Darwinism, the abolition of slavery, women’s rights. On the other hand, at various points within the last 150 years, extremely smart people also reasoned themselves to the inescapable conclusions that aether had to exist for light to be a wave in, that capitalism was reaching its final crisis, that only a world government could prevent imminent nuclear war, and that space colonies would surely exist by 2000. In those cases, even if you couldn’t spot any flaws in the arguments, you still would’ve been wise to doubt their conclusions. (Or are you sure you would have spotted the flaws where Maxwell and Kelvin, Russell and Einstein did not?)

Here’s a favorite analogy. The world is a real-valued function that’s almost completely unknown to us, and that we only observe in the vicinity of a single point x0. To our surprise, we find that, within that tiny vicinity, we can approximate the function extremely well by a Taylor series.

“Aha!” exclaim the bullet-swallowers. “So then the function must be the infinite series, neither more nor less.”

“Not so fast,” reply the bullet-dodgers. “All we know is that we can approximate the function in a small open interval around x0. Who knows what unsuspected phenomena might be lurking beyond it?”

“Intellectual cowardice!” the first group snorts. “You’re just like the Jesuit schoolmen, who dismissed the Copernican system as a mere calculational device! Why can’t you accept what our best theory is clearly telling us?”

So who’s right: the bullet-swallowing libertarian Many-Worlders, or the bullet-dodging intellectual kibitzers? Well, that depends on whether the function is sin(x) or log(x).
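To see the analogy in actual numbers (a small sketch of my own; the expansion point x0 = 1 and the truncation at 30 terms are arbitrary): the Taylor series of sin(x) around x0 = 1 converges everywhere, while the series of log(x) around x0 = 1 converges only on (0, 2], so extrapolating it far beyond the neighborhood we’ve observed fails spectacularly.

    import math

    X0, TERMS = 1.0, 30

    def taylor_sin(x):
        # Derivatives of sin at X0 cycle through sin, cos, -sin, -cos.
        d = [math.sin(X0), math.cos(X0), -math.sin(X0), -math.cos(X0)]
        return sum(d[k % 4] * (x - X0) ** k / math.factorial(k) for k in range(TERMS))

    def taylor_log(x):
        # log(x) = log(X0) + sum_{k>=1} (-1)^(k+1) ((x-X0)/X0)^k / k
        return math.log(X0) + sum((-1) ** (k + 1) * ((x - X0) / X0) ** k / k
                                  for k in range(1, TERMS))

    for x in (1.5, 3.0, 6.0):
        print(f"x = {x}: sin error {abs(taylor_sin(x) - math.sin(x)):.1e}, "
              f"log error {abs(taylor_log(x) - math.log(x)):.1e}")
    # Near x0 both fits are excellent; far from x0, sin's stays excellent
    # while log's error explodes, even though both looked equally good locally.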

Volume 4 is already written (in our hearts)

Thursday, January 10th, 2008

Today is the 70th birthday of Donald E. Knuth: Priest of Programming, Titan of Typesetting, Monarch of MMIX, intellectual heir to Turing and von Neumann, greatest living computer scientist by almost-universal assent … alright, you get the idea.

That being the case, Jeff Shallit proposed to various CS bloggers that we should all band together and present the master with a birthday surprise: one post each about how his work has inspired us. The posts are now in! Readers who don’t know about Knuth’s work (are there any?) should start with this post from Luca. Then see this from David Eppstein, this from Doron Zeilberger, this from Jeff, this from Bill Gasarch, and this from Suresh.

Knuth’s impact on my own work and thinking, while vast, has not been directly through research: his main influence on my BibTeX file is that if not for him, I wouldn’t have a BibTeX file. (One reason is that I’m one of the people Doron Zeilberger attacks for ignoring constant factors, and supporting what he calls “the ruling paradigm in computational complexity theory, with its POL vs. EXP dichotomy.”) So I decided to leave Knuth’s scientific oeuvre to others, and to concentrate in this post on his contributions to two other fields: mathematical exposition and computational theology.

Knuth’s creation of the TeX typesetting system — his original motivation being to perfect the layout of his own Art of Computer Programming books — was remarkable in two ways. First, because scientific typesetting is of so little interest to industry, it’s not clear if something like TeX would ever have been invented if not for one man and his borderline-neurotic perfectionism. Second, TeX is one of the only instances I can think of when a complicated software problem was solved so well that it never had to be solved again (nor will it for many decades, one hazards to guess). At least in math, computer science, and physics, the adoption of TeX has been so universal that failure to use it is now a reliable crackpot indicator.

From Wikipedia:

Since version 3, TeX has used an idiosyncratic version numbering system, where updates have been indicated by adding an extra digit at the end of the decimal, so that the version number asymptotically approaches π. This is a reflection of the fact that TeX is now very stable, and only minor updates are anticipated. The current version of TeX is 3.141592; it was last updated in December 2002 … Even though Donald Knuth himself has suggested a few areas in which TeX could have been improved, he indicated that he firmly believes that having an unchanged system that will produce the same output now and in the future is more important than introducing new features. For this reason, he has stated that the “absolutely final change (to be made after my death)” will be to change the version number to π, at which point all remaining bugs will become features.

But Knuth’s interest in scientific exposition goes far beyond typesetting. His 1974 Surreal Numbers: How Two Ex-Students Turned on to Pure Mathematics and Found Total Happiness, which he wrote in one week, was weirdness at the highest possible level: the Beatles’ White Album of math. It’s said to represent the only occasion in history when a new mathematical theory (Conway’s theory of surreal numbers) was introduced in the form of a novel. (Though admittedly, with the exception of one sex scene, this is a “novel” whose plot development mostly takes the form of lemmas.)

Those seeking to improve their own writing should consult Mathematical Writing (available for free on the web), the lecture notes from a course at Stanford taught by Knuth, Tracy Larrabee, and Paul Roberts. Like a lot of Knuth’s work, Mathematical Writing has the refreshing feel of an open-ended conversation: we get to see Knuth interact with students, other teachers, and visiting luminaries like Mary-Claire van Leunen, Paul Halmos, Jeff Ullman, and Leslie Lamport.

Since I’ve blogged before about the battle over academic publishing, I also wanted to mention Knuth’s remarkable and characteristically methodical 2003 letter to the editorial board of the Journal of Algorithms. Knuth asks in a postscript that his letter not be distributed widely — but not surprisingly, it already has been.

In the rest of this post, I’d like to talk about Things A Computer Scientist Rarely Talks About, the only book of Knuth’s for which I collected one of his coveted $2.56 prizes for spotting an error. (Nothing important, just a typo.)

Things is based on a series of lectures on computer science and religion that Knuth gave in 1997 at MIT. (At the risk of oversimplifying: Knuth practices Christianity, but in a strange form less interested in guns and gays than in some business about “universal compassion.”) Perhaps like most readers, when I bought Things I expected yet another essay on “non-overlapping magisteria,” a famous scientist’s apologia justifying his belief in the Virgin Birth and the Resurrection. But Knuth likes to surprise, and what he delivers instead is mostly a meditation on the typography of Bible verses [sic]. More precisely, Things is a “metabook”: a book about the lessons Knuth learned while writing and typesetting an earlier book, one I haven’t yet read, that analyzed verse 3:16 of every book of the Bible.

But this being a lecture series, Knuth also fields questions from the audience about everything from sin and redemption to mathematical Platonism. He has a habit of parrying all the really difficult questions with humor; indeed, he does this so often one comes to suspect humor is his answer. As far as I could tell, there’s only one passage in the entire book where Knuth directly addresses what atheists are probably waiting for him to address. From one of the question periods:

Q: How did you become so interested in God and religion in the first place?

A: It was because of the family I was born into. If I had been born in other circumstances, my religious life would no doubt have been quite different. (p. 155)

And then on to the next question.

To me, what’s remarkable about this response is that Knuth without any hesitation concedes what skeptics from Xenophanes to Richard Dawkins have held up as the central embarrassment of religion. This, of course, is the near-perfect correlation between the content of religious belief and the upbringing of the believer. How, Dawkins is fond of asking, could there possibly be such a thing as a Christian or Hindu or Jewish child? How could a four-year-old already know what he or she thinks about profound questions of cosmogony, history, and ethics — unless, of course, the child were brainwashed by parents or teachers?

My Bayesian friends, like Robin Hanson, carry this argument a step further. For them, the very fact that Knuth knows his beliefs would be different were he born to different parents must, assuming he’s rational, force him to change his beliefs. For how can he believe something with any conviction, if he knows his belief was largely determined by a logically-irrelevant coin toss?

And yet, openly defying the armies of Bayes arrayed against him, here we have Knuth saying, in effect: yes, I know that if I were some other person my beliefs would be different; but I’m not that other person, I’m Knuth.

So, readers: is Knuth’s response a cop-out, the understandable yet ultimately-indefensible defense of an otherwise-great scientist who never managed to free himself from certain childhood myths? Or is it a profound acknowledgment that none of us ever escape the circumstances of our birth, that we might as well own up to it, that tolerance ought not to require a shared prior, that the pursuit of science and other universal values can coexist with the personal and incommunicable?

Taking a cue from Knuth himself, I’m going to dodge this question. Instead, I decided to end this post by quoting some of my favorite passages from Chapter 6 of Things A Computer Scientist Rarely Talks About.

On computer science and God: “When I talk about computer science as a possible basis for insights about God, of course I’m not thinking about God as a super-smart intellect surrounded by large clusters of ultrafast Linux workstations and great search engines. That’s the user’s point of view.” (p. 168)

“I think it’s fair to say that many of today’s large computer programs rank among the most complex intellectual achievements of all time. They’re absolutely trivial by comparison with any of the works of God, but still they’re somehow closer to those works than anything else we know.” (p. 169)

On infinity: “Infinity is a red herring. I would be perfectly happy to give up immortality if I could only live Super K years before dying ['Super K' being defined similarly to an Ackermann number]. In fact, Super K nanoseconds would be enough.” (p. 172)

On the other hand: “I once thought, if I ever had to preach a sermon in church, I would try to explain Cantor’s theorem to my non-mathematical friends so that they could understand something about the infinite.” (p. 172)

On God and computational complexity: “I think it’s fair to say that God may well be bound by the laws of computational complexity … But I don’t recommend that theologians undertake a deep study of computational complexity (unless, of course, they really enjoy it). ” (p. 174)

On quantum mechanics: “Several years ago, I chanced to open Paul Dirac’s famous book on the subject and I was surprised to find out that Dirac was not only an extremely good writer but also that his book was not totally impossible to understand. The biggest surprise, however — actually a shock — was to learn that the things he talks about in that book were completely different from anything I had ever read in Scientific American or in any other popular account of the subject. Apparently when physicists talk to physicists, they talk about linear transformations of generalized Hilbert spaces over the complex numbers; observable quantities are eigenvalues and eigenfunctions of Hermitian linear operators. But when physicists talk to the general public they don’t dare mention such esoteric things, so they speak instead about particles and spins and such, which are much less than half the story. No wonder I could never really understand the popular articles.” (p. 181)

“The extra detail that gets suppressed when quantum mechanics gets popularized amounts to the fact that, according to quantum mechanics, the universe actually consists of much more data than could ever be observed.” (p. 182)

On free will and the problem of evil: “I can design a program that never crashes if I don’t give the user any options. And if I allow the user to choose from only a small number of options, limited to things that appear on a menu, I can be sure that nothing anomalous will happen, because each option can be foreseen in advance and its effects can be checked. But if I give the user the ability to write programs that will combine with my own program, all hell might break loose. (In this sense the users of Emacs have much more free will than the users of Microsoft Word.) … I suppose we could even regard Figure 5 [a binary tree representing someone's choices] as the Tree of the Knowledge of Good and Evil.” (p. 189-190)

Does it come with a 14-Gyr warranty?

Sunday, August 19th, 2007

As many of you probably saw, John Tierney of the New York Times thinks there’s a ~50% chance we’re living in a computer simulation, having been persuaded by Nick Bostrom’s infamous simulation argument.

(This argument, incidentally, is something that occurred to me as a teenager, and I’m guessing to many others of nerdly leanings as well. I didn’t consider it a profound metaphysical discovery, just a sign I needed to get out more.)

Peter Woit feels strongly that debates about whether the universe is a computer are not science and therefore have no place in the Times science section. Robin Hanson retorts that “rather than complain that something is not ‘science,’ or not ‘philosophy,’ it is much better to just say more specifically what it is that you don’t like about it.” Peter Shor points out that if we’re living in a simulation, then the incompatibility of quantum mechanics with general relativity might simply be a bug, in which case the universe will crash when the first black hole evaporates.

As for me, I tend to side with Woody Allen: yes, the universe might be a simulation, but where else can you get a decent steak?

The last word, however, goes to Bender Bending Rodriguez of Futurama.

Bender: “If that stuff wasn’t real, how can I be sure anything is real? Is it not possible, nay, probable that my whole life is just a product of my or someone else’s imagination?”

Clerk: “No, get out. Next!”

(Click here for the audio clip.)

Religion’s rules of inference

Saturday, May 12th, 2007

Besides defending quantum computing day and night, having drinks with Cosmic Variance‘s Sean Carroll, and being taken out to dinner at lots of restaurants with tablecloths, the other highlight of my job interview tour was meeting a friendly, interesting, articulate divinity student on the flight from San Francisco to Philadelphia, who tried to save my soul from damnation.

Here’s how it happened: the student (call him Kurt) was reading a Christian theological tract, while I, sitting next to him, was reading Russell on Religion. (This is true.) I sheepishly covered the spine of my book, trying to delay the inevitable conversation — but it finally happened, when Kurt asked me how I was liking ole’ Bert. I said I was liking him just fine, thank you very much.

Kurt then made some comment about the inadequacy of a materialistic worldview, and how, without God as the basis of morality, the whole planet would degenerate into what we saw at Virginia Tech. I replied that the prevention of suffering seemed like a pretty good basis for morality to me.

“Oh!” said Kurt. “So then suffering is bad. How do you know it’s bad?”

“How do you know it’s bad?”

“Because I believe the word of God.”

“So if God said that suffering was good, that would make it good?”

I can’t remember Kurt’s response, but I’m sure it was eloquent and well-practiced — nothing I said really tripped him up, nor did I expect it to. Wanting to change the subject, I asked him about his family, his studies, his job, what he’d been doing in the vipers’ den of San Francisco, etc. I told him a little about quantum computing and my job search. I mused that, different though we were, we both valued something in life more than money, and that alone probably set us apart from most people on the plane. Kurt said it was fitting that I’d gone to grad school at Berkeley. I replied that, as a mere Democrat, I was one of the most conservative people there.

Finally I blurted out the question I really wanted to ask. In his gentle, compassionate, way, Kurt made it clear to me that yes, I was going to roast in hell, and yes, I’d still roast in hell even if I returned to the religion of my ancestors (that, of course, being at best a beta version of the true religion). In response, I told Kurt that when I read Dante’s Inferno in freshman English, I decided that the place in the afterlife I really wanted to go was the topmost layer of hell: the place where Dante put the “righteous unbaptized” such as Euclid, Plato, and Aristotle. There, these pre-Christian luminaries could carry on an eternal intellectual conversation — cut off from God’s love to be sure, but also safe from the flames and pitchforks. How could angels and harps possibly compete with infinite tenure at Righteous Unbaptized University? If God wanted to lure me away from that, He’d probably have to throw in the Islamic martyr package.

San Francisco to Philadelphia is a five-hour flight, and the conversation ranged over everything you might expect: the age of the earth (Kurt was undecided but leaning toward 6,000 years), whether the universe needs a reason for its existence external to itself, etc. With every issue, I resolved not to use the strongest arguments at my disposal, since I was more interested in understanding my adversary’s reasoning process — and ideally, in getting him to notice inconsistencies within his own frame of reference. Alas, in that I was to be mostly disappointed.

Here’s an example. I got Kurt to admit that certain Bible passages — in particular, the ones about whipping your slaves — reflected a faulty, limited understanding of God’s will, and could only be understood in the historical context in which they were written. I then asked him how he knew that other passages — for example, the ones condemning homosexuality — didn’t also reflect a limited understanding of God’s will. He replied that, in the case of homosexuality, he didn’t need the Bible to tell him it was immoral: he knew it was immoral because it contradicted human beings’ biological nature, gay couples being unable to procreate. I then asked whether he thought that infertile straight couples should similarly be banned from getting married. Of course not, he replied, since marriage is about more than procreation — it’s also about love, bonding, and so on. I then pointed out that gay and lesbian couples also experience love and bonding. Kurt agreed that this was true, but then said the reason homosexuality was wrong went back to the Bible.

What fascinated me was that, with every single issue we discussed, we went around in a similar circle — and Kurt didn’t seem to see any problem with this, just so long as the number of 2SAT clauses that he had to resolve to get a contradiction was large enough.

In the study of rationality, there’s a well-known party game: everyone throws a number from 0 to 100 into a hat, and the player whose number is closest to two-thirds of the average of all the numbers wins. It’s easy to see that the only Nash equilibrium of this game — that is, the only possible outcome if everyone is rational, knows that everyone is rational, knows everyone knows everyone is rational, etc. — is for everyone to throw in 0. Why? For simplicity, consider the case of two people: one can show that I should throw in 1/2 of what I think your number will be, which is 1/2 of what you think my number will be, and so on ad infinitum until we reason ourselves down to 0.

On the other hand, how should you play if you actually want to win this game? The answer, apparently, is that you should throw in about 20. Most people, when faced with a long chain of logical inferences, will follow the chain for one or two steps and then stop. And, here as elsewhere in life, “being rational” is just a question of adjusting yourself to everyone else’s irrationalities. “Two-thirds of 50 is 33, and two-thirds of that is 22, and … OK, good enough for me!”
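Spelled out as a loop (trivially, and just to make the “one or two steps and then stop” point concrete):

    # Iterated reasoning in the "two-thirds of the average" game: start at 50
    # and repeatedly best-respond to everyone else's previous guess.  Fully
    # rational players iterate forever and reach the Nash equilibrium of 0;
    # real players mostly stop after a couple of rounds.
    guess = 50.0
    for step in range(6):
        print(f"after {step} rounds of second-guessing: {guess:.1f}")
        guess *= 2.0 / 3.0
    # 50.0, 33.3, 22.2, 14.8, 9.9, 6.6, ... -> 0.  Stopping at two rounds
    # gives ~22, close to the ~20 that apparently wins in practice.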

I’ve heard it said that the creationists are actually perfectly rational Bayesians; they just have prior probabilities that the scientifically-minded see as perverse. Inspired by conversations with Kurt and others, I hereby wish to propose a different theory of fundamentalist psychology. My theory is this: fundamentalists use a system of logical inference wherein you only have to apply the inference rules two or three times before you stop. (The exact number of inferences can vary, depending on how much you like the conclusion.) Furthermore, this system of “bounded inference” is actually the natural one from an evolutionary standpoint. It’s we — the scientists, mathematicians, and other nerdly folk — who insist on a bizarre, unnatural system of inference, one where you have to keep turning the modus ponens crank whether you like where it’s taking you or not.
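Here’s a toy version of “bounded inference” (entirely my own cartoon of the idea, not a claim about anyone’s actual psychology): forward-chain by modus ponens, but give up after k applications. With the rules A→B, B→C, C→not(A) and the premise A, the contradiction only surfaces if you’re willing to turn the crank at least three times.

    def bounded_forward_chain(facts, rules, max_steps):
        """Apply modus ponens at most max_steps times; report any contradiction."""
        facts = set(facts)
        for _ in range(max_steps):
            new = {concl for premise, concl in rules if premise in facts} - facts
            if not new:
                break
            facts |= new
            for f in facts:
                if "not " + f in facts:
                    return f"contradiction: {f} and not {f}"
        return "no contradiction found (within the step bound)"

    rules = [("A", "B"), ("B", "C"), ("C", "not A")]
    for k in (1, 2, 3):
        print(f"willing to apply modus ponens {k} time(s):",
              bounded_forward_chain({"A"}, rules, max_steps=k))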

Kurt, who looked only slightly older than I am, is already married with two kids, and presumably more on the way. In strict Darwinian terms, he’s clearly been more successful than I’ve been. Are those of us who can live with A→B or B→C or C→not(A) but not all of them at once simply evolutionary oddities, like people who have twelve fingers or can’t stand sunlight?

Quantum Computing Since Democritus Lecture 11: Decoherence and Hidden Variables

Tuesday, April 3rd, 2007

After a week of brainbreaking labor, here it is at last: My Grand Statement on the Interpretation of Quantum Mechanics.

Granted, I don’t completely solve the mysteries of quantum mechanics in this lecture. I didn’t see any need to — since to judge from the quant-ph arXiv, those mysteries are solved at least twenty times a week. Instead I merely elucidate the mysteries, by examining two very different kinds of stories that people tell themselves to feel better about quantum mechanics: decoherence and hidden variables.

“But along the way,” you’re wondering, “will Scott also touch on the arrow of time, the Second Law of Thermodynamics, Bell’s Inequality, the Kochen-Specker Theorem, the preferred-basis problem, discrete vs. continuous Hilbert spaces, and even the Max-Flow/Min-Cut Theorem?” Man oh man, is someone in for a treat.

I assume that, like Lecture 9, this will be one of the most loved and hated lectures of the course. So bring it on, commenters. You think I can’t handle you?

Update (4/5): Peter Shor just posted a delightful comment that I thought I’d share here, in the hope of provoking more discussion.

Interpretations of quantum mechanics, unlike Gods, are not jealous, and thus it is safe to believe in more than one at the same time. So if the many-worlds interpretation makes it easier to think about the research you’re doing in April, and the Copenhagen interpretation makes it easier to think about the research you’re doing in June, the Copenhagen interpretation is not going to smite you for praying to the many-worlds interpretation. At least I hope it won’t, because otherwise I’m in big trouble.