The Blog of Scott Aaronson

If you take just one piece of information from this blog: Quantum computers would not solve hard search problems instantaneously by simply trying all the possible solutions at once.

Years ago, Sasha Razborov taught me one of my all-time favorite jokes.

In the 1960s, a man starts handing out leaflets in Moscow’s Red Square. Needless to say, he’s immediately apprehended by the KGB. On examining the leaflets, however, the KGB agents discover that they’re just blank pieces of paper. “What is the meaning of this?” the agents demand.

“What could I write?” exclaims the man. “It’s so obvious!”

The lesson I’ve learned this week is that the man was wrong. In politics, nothing is ever too obvious.

I promised to blog more about research, and I will. Unfortunately, in the one week between my world tour and the start of the fall semester, I’ve been spending less time on quantum complexity research than on sleeping on a new mattress that I bought. This has provided ample time to ponder the following question, which I’ve decided to add to the Shtetl-Optimized Physics for Doofuses series:

Why is a soft bed more comfortable than a hard one?

At first glance, this question seems too doofusy even for a series such as this, which makes its target audience clear. The trouble is that, while perfectly reasonable-sounding answers immediately suggest themselves, several of those answers can be shown to be wrong.

Let’s start with the most common answer: a soft bed is more comfortable than a hard bed because it molds to your shape. The inadequacy of this answer can be seen by the following thought experiment: lie on a soft bed, and let it mold to your body. Then imagine that the bed retains exactly the same molded shape, but is replaced by ceramic. No longer so comfortable!

Ah, you reply, but that’s because a ceramic bed doesn’t change its shape as you shift positions throughout the night. But this reply is still inadequate—since even if you’re lying as still as possible, it still seems clear that a soft bed is more comfortable than a hard one.

So it seems any answer needs to start from the observation that, even when you’re lying still, you’re not really lying still: you’re breathing in and out, there are tiny vibrations, etc. The real point of a soft bed is to create a gentler potential well, which absorbs the shocks that would otherwise be caused by those sorts of small movements.

(I was tempted to say the point is to damp the movements, but that can’t be right: trampolines are designed for minimal damping, yet sleeping on a trampoline could actually be pretty comfortable. So the essential thing a bed needs to do is simply to make way in response to small movements and vibrations. How hard the bed tries to spring back to its original shape is a secondary question—the answer to which presumably influences, for example, whether you prefer an innerspring or a memory-foam mattress.)

So then why aren’t beds even softer than they are? Well, the limit of infinite softness would be a bed that immediately collapsed to nothing when you lay on it, dropping you to the floor. But even before that limit, a bed that was too soft would give you too much freedom to shift into awkward positions and thereby cause yourself back problems. This suggests an answer to a question raised by a colleague: is the purpose of a bed to approximate, as well as possible on the earth’s surface, the experience of sleeping in zero gravity? Unless I’m mistaken, the answer is no. Sleeping in space would be like sleeping on a bed that was too soft, with the same potential for back problems and so forth.

Given that lying in bed is normally the least active thing we do, I find it ironic that the only reasons we lie in bed in the first place (as opposed to, say, on steel beams) are dynamical: they involve the way the bed responds to continual vibrations and movements.

I’ll be grateful if knowledgeable physicists, physiologists, or sleepers can correct any errors in the above account. Meantime, the next time your spouse, partner, roommate, parent, etc. accuses you of lounging in bed all afternoon like a comatose dog, you can reply that nothing could be further from the truth: rather, inspired by a post on Shtetl-Optimized, you’re struggling to reconcile your modern understanding of the physics and biology of lying in bed with the prescientific, phenomenal experience of lying in bed, and thereby make yourself into a more enlightened human being.

In a post a year and a half ago, I offered a prize of $200 for proving something called the Generalized Linial-Nisan Conjecture, which basically said that almost k-wise independent distributions fool AC^{0} circuits. (Go over to that post if you want to know what that means and why I cared about it.)
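For readers who want a concrete picture of what “k-wise independence” means, here is the textbook toy example for k = 2 — a standard construction, not anything from the paper: draw two uniform bits and append their XOR. Every pair of coordinates looks perfectly uniform, yet the full distribution is supported on only half the strings.

```python
from collections import Counter
from itertools import product

# Classic toy example of k-wise independence (k = 2, n = 3):
# draw (a, b) uniformly from {0,1}^2 and output (a, b, a XOR b).
# Every pair of coordinates is uniformly distributed (pairwise
# independence), yet the distribution is supported on only 4 of
# the 8 strings in {0,1}^3, so it is far from uniform overall.
support = [(a, b, a ^ b) for a, b in product([0, 1], repeat=2)]

def pair_counts(i, j):
    """Joint distribution of coordinates i and j over the support."""
    return Counter((s[i], s[j]) for s in support)
```

Checking `pair_counts` for each of the three coordinate pairs shows every pair value occurring equally often, even though the joint distribution assigns zero probability to any string of odd parity.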

Well, I’m pleased to report that that’s a particular $200 I’ll never have to pay. I just uploaded a new preprint to ECCC, entitled A Counterexample to the Generalized Linial-Nisan Conjecture. (That’s the great thing about research: no matter what happens, you get a paper out of it.)

A couple friends commented that it was wise to name the ill-fated conjecture after other people rather than myself. (Then again, who the hell names a conjecture after themselves?)

If you don’t feel like downloading the ECCC preprint, but do feel like scrolling down, here’s the abstract (with a few links inserted):

In earlier work, we gave an oracle separating the relational versions of BQP and the polynomial hierarchy, and showed that an oracle separating the decision versions would follow from what we called the Generalized Linial-Nisan (GLN) Conjecture: that “almost k-wise independent” distributions are indistinguishable from the uniform distribution by constant-depth circuits. The original Linial-Nisan Conjecture was recently proved by Braverman; we offered a $200 prize for the generalized version. In this paper, we save ourselves $200 by showing that the GLN Conjecture is false, at least for circuits of depth 3 and higher.
As a byproduct, our counterexample also implies that Π_{2}^{p}⊄P^{NP} relative to a random oracle with probability 1. It has been conjectured since the 1980s that PH is infinite relative to a random oracle, but the best previous result was NP≠coNP relative to a random oracle.
Finally, our counterexample implies that the famous results of Linial, Mansour, and Nisan, on the structure of AC^{0} functions, cannot be improved in several interesting respects.

To dispel any confusion, the $200 prize still stands for the original problem that the GLN Conjecture was meant to solve: namely, giving an oracle relative to which BQP is not in PH. As I say in the paper, I remain optimistic about the prospects for solving that problem by a different approach, such as an elegant one recently proposed by Bill Fefferman and Chris Umans. Also, it’s still possible that the GLN Conjecture is true for depth-two AC^{0} circuits (i.e., DNF formulas). If so, that would imply the existence of an oracle relative to which BQP is not in AM—already a 17-year-old open problem—and net a respectable $100.

The Onion has a new piece—United Airlines Exploring Viability of Stacking Them Like Cordwood—that, as usual, is grossly unrealistic. If my own experience is any guide, the real United would never waste money on a grated floor for waste disposal, or people to shovel peanuts into a trough.

But The Onion’s exploration of the geometry of passenger-packing does raise some genuinely interesting questions. For years, I’ve had this idea to start an airline where, instead of seats, passengers would get personal cubbyholes that were stacked on top of each other like bunk beds. (I’d make sure the marketing materials didn’t describe them as “coffin-shaped,” though that’s what they would be.)

You could sleep in your cubbyhole—much more easily than in a seat, of course—but you could also read, watch a movie, work on your laptop, or eat (all activities that I don’t mind doing while lying down, and the first two of which I prefer to do lying down).

Besides passenger comfort, my arrangement would have at least two advantages over the standard one:

First, depending on the exact size of the cubbyholes, you could very likely fit more passengers this way, thereby lowering ticket costs.

Second, assuming the cubbyholes were ventilated, you could put little doors on them, thereby giving passengers far more privacy than in a conventional airline. No more being immiserated by screaming babies or inane conversations, or the B.O. of the person next to you, or reading lights while you’re trying to sleep. And, as many of you will have noticed, BQP Aarlines could provide amorous couples with a far more comfortable alternative than the bathroom.

So, readers: do you know if any airline has tried something like this? If not, why not? Are there strong arguments against it that I haven’t thought of, besides the obvious cultural/psychological ones? Should I keep my day job?

Recently someone asked me how to give funnier talks. My first response was to recoil at such an insolent question: doesn’t everyone know that at the core of my shtick lies a unique and ineffable je ne sais quoi that can’t be packaged, bottled, or resold? But the truth was not that I couldn’t give advice; it’s that I didn’t want to. For if everyone knew how easy it was to keep an audience at least half-awake, how would people like me maintain their edge? By proving better theorems? Having something new and relevant to say? These questions answer themselves.

But because I love you, my readers, so deeply, and because I feel guilty about abandoning you for so long, I shall now publicly deconstruct the main ingredients of seminar humor, insofar as I’ve been able to find them. (A few ingredients are specific to theoretical computer science, but most are more general.)

Make fun of people in the audience. (Of course, you have to do it in such a way that they’re flattered you’re ripping them and not someone else.)

Ridicule bogus claims related to your topic, particularly claims that received wide currency in the popular press. (To be honest, I do this not so much because it gets laughs—though it does—but as a small service to humanity. If I can make one budding crackpot think twice before hitting “Submit” on a disproof of Bell’s Theorem, I will not have lived in vain. Of course, the ridicule should always focus more on ideas than people; and even then, a few in the audience will frown on it, considering it unscientific or unprofessional. Forty or fifty crackpots ago, I agreed with them. It’s only experience that hardened me into a vigilante.)

Incorporate the audience’s shared experiences into your talk (without making a big deal of it, as if it’s the most natural thing in the world). For example, when it comes time to trot out an Alice/Bob scenario, have yours wryly comment on a previous talk, an excursion everyone went on, a current event (like an election) that everyone actually cares about more than the talk…

Self-deprecate. (“My first conjecture was falsified. The following conjecture hasn’t yet been falsified, and is obviously true…”)

Say things that recognize and comment on how neurotic the thought-process of theoretical computer scientists really is, by taking that thought-process to extremes. (“That’s off by a factor of 10^{10^{120}}, which is only O(1) and is therefore irrelevant.” “For years, people tried unsuccessfully to prove this sort of impossibility result was impossible. Our result shows the impossibility of their goal.”)

If your field is interdisciplinary, the humor potential is almost limitless. Are you a physicist? Ridicule the computer scientists. A computer scientist? Ridicule the mathematicians. A mathematician? Ridicule the economists. Chances are, enough differences in notation, terminology, assumptions, and underlying goals will arise in the talk to give you a never-ending supply of material. “Disciplinary humor” is a more refined, intellectual variant of ethnic humor, and is effective for the same reasons.

Explain your results in an unusually vivid or graphic way. (“If, at the moment of your death, your whole life flashed before you in an instant, and if while you were alive you’d performed suitable quantum computations on your own brain, then you could solve Graph Isomorphism in polynomial time.”) This type of humor is my absolute favorite: on a plot with laughter volume on one axis and scientific content on the other, it’s way out on the upper-right-hand corner.

If you’re using PowerPoint, take full advantage of its comic potential: wild animations, text that pops up on the screen to question or even flat-out contradict what you’re saying, a punchline at the bottom of the slide that only gets revealed when you press a key, etc. I love doing this because I have as much time as I need to “precompute” jokes (though I’ll then often elaborate on them extemporaneously).

Banter with the crowd: if someone makes a crack at your expense, always respond, and even escalate the interaction into a “staged fight” (the rest of the audience will love it). If someone catches you in a mistake, or you don’t know the answer to a question, make a self-deprecating joke that acknowledges the situation even as it wins you sympathy points.

Have high energy! Loud, lots of moving around, emotion in your voice … like you can’t wait to invite everyone along to the most exciting journey in the history of the universe. Not only is that good practice in general (at the least, it keeps the audience from falling asleep), it also creates a general atmosphere in which it’s okay to laugh at jokes.

Pause a few beats before the punchline. (You can get better at this by watching professional comics.)

Experiment! If a particular joke bombs, drop it from your rotation; if it brings the house down, recycle it in future talks. Of course, you should drop a joke once it reaches its saturation point, where much of the audience has already heard it in previous talks. On the other hand, if this particular audience hasn’t yet heard the joke, disregard your own internal sense of its being “tired”: it could go over just as well as the first time, or better.

Steal ideas shamelessly from other speakers. (I mean their humor techniques, not their results.) Just as importantly, study the lame jokes other speakers use, so as to avoid them. (For example, I estimate that 94% of quantum computing talks include a heavy-handed comment about someone or something being “in superposition”; this has not yet gotten a laugh. Or the talks repeat stories about Feynman, Bohr, etc. that everyone in the audience has already heard a thousand times.)

Tailor your jokes to the audience’s background. For instance, I have some jokes that work great in the US, but sink in other countries. Or work on physicists but not computer scientists, or vice versa.

Make jokes about the country you’re visiting. Of course, this is subject to common sense: I’ve been known to resort to “zed” / “aboot” jokes in Canada, scone / royalty / powdered wig jokes in England, and neutrality / yodeling jokes in Switzerland, but I usually don’t make the first joke that pops into my head when visiting Germany or Austria.

Take risks! Here’s an Umeshism: if some of your jokes don’t flop, then you’re not being bold enough. Do things that people can’t believe anyone would actually do in a talk. Most people seem to operate under the assumption that when they’re giving a talk, they have to be less funny than in regular conversation, when the truth is the opposite. If something comes into your head that’s funny to you, and it passes the most flimsy and cursory of offensiveness checks … out with it, and worry later about the consequences!

Three final remarks.

First, reading over the list, I can’t help but feel sheepish about how much one can do with such a crude and obvious bag of tricks.

Second, I only wish I applied this crude bag more consistently! Particularly when I have a new result and I’m excited about the proof, I all too often ignore my own advice and lapse into boringness. But at least I notice I’m doing it, get annoyed at myself, and resolve to be crasser, less mature, and less professional the next time around.

Third, you might feel that adding shtick to your talks makes you “shallow,” that all that should matter is the content of your results. In the relatively rare case where you’re addressing experts in your own sub-sub-subfield, that’s probably true: you can drop the funny business and get straight to the point. In all other cases, I’m almost certain the audience will understand your results better if you incorporate some shtick than if you don’t. But hey—it’s up to you whether you want to address an ideal Platonic audience (“more lemmas! no irrelevant distractions! yes! harder! faster!”) or the actual flesh-and-blood hairless apes who are dozing off in the seminar room while you speak.

Dear Scott, Please keep the focus of your blog. You have lately been losing science to your blog and started blogging about various loosely related things. One of the ways I subscribed to your blog was because your articles were very computation-oriented. Now you no longer keep the theme. And as you might have heard, shifting topics in your blog will lose your readers.

So today I noticed something bizarre. A celebrated result in cryptography, due to Goldreich, Goldwasser, and Micali, states that any pseudorandom generator gives rise to a pseudorandom function family. See Luca’s notes or the original GGM paper for more.

Now I’d always assumed, without thinking about it, that the GGM result “obviously” carries over to the quantum case—so that any pseudorandom generator secure against quantum attack would give rise to a pseudorandom function family secure against quantum attack. But now that I’m writing a paper that actually relies on this “fact,” I realized I have no idea why it’s true.

Look: in the GGM argument, you start with a pseudorandom generator G:{0,1}^{n}→{0,1}^{2n}, and you apply it recursively to produce a family of functions f_{s}:{0,1}^{n}→{0,1}^{n}, where s is the seed. You then consider a hypothetical polynomial-time algorithm A that distinguished f_{s} from a truly random function. You show how you could use A to create a polynomial-time algorithm that distinguished the output of G from a truly random 2n-bit string—thereby contradicting the starting assumption that G was pseudorandom.
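As a concrete picture of the construction, here is a minimal Python sketch of the GGM tree, with SHA-256 standing in for the length-doubling PRG G purely for illustration (no security claim is intended): each input bit selects the left or right half of G applied to the current node.

```python
import hashlib

N = 16  # seed/block length in bytes (toy parameter)

def prg(seed: bytes) -> bytes:
    """Toy length-doubling 'PRG' G: {0,1}^n -> {0,1}^2n.
    SHA-256 is a stand-in here, chosen only so the sketch runs;
    a real instantiation would use an actual PRG."""
    left = hashlib.sha256(b"L" + seed).digest()[:N]
    right = hashlib.sha256(b"R" + seed).digest()[:N]
    return left + right

def ggm_prf(seed: bytes, x_bits: str) -> bytes:
    """GGM construction: walk down a binary tree, applying G once
    per input bit. Bit '0' keeps the left half of G(state),
    bit '1' keeps the right half; the leaf reached is f_s(x)."""
    state = seed
    for bit in x_bits:
        block = prg(state)
        state = block[:N] if bit == "0" else block[N:]
    return state
```

Note that evaluating f_s on an n-bit input costs only n applications of G, even though the tree implicitly defines 2^n leaves — which is exactly why the hybrid argument below only ever has to touch the polynomially many root-to-leaf paths that A actually queries.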

The trouble is, the argument relies crucially on the fact that A examines only a polynomial number of outputs of f_{s}—intuitively so that you can run a hybrid argument, changing the outputs that A actually examines one by one into truly random strings. But if A is a quantum algorithm, then (duh) it can examine all 2^{n} outputs of f_{s} in superposition! So any argument that depends on “watching A to see which inputs it queries” is toast.

But maybe we can recover the same conclusion in a fancier way? For at least seven years, I’ve been going around conjecturing the following:

Conjecture (∗): Let Q be a quantum algorithm that makes T queries to a Boolean input X∈{0,1}^{N}. Then for all ε,δ>0, there exists a deterministic classical algorithm that makes poly(T,1/ε,log(1/δ)) queries to X, and that approximates Q’s acceptance probability to within ε on a 1-δ fraction of inputs.

My motivation for Conjecture (∗) had nothing to do with cryptography. I was interested in whether we could rule out the possibility that P=BQP relative to a random oracle with probability 1. If Conjecture (∗) holds—and if the classical algorithm is anything like I think it is—then we can’t rule it out, at least not without proving P≠PSPACE or an even stronger separation in the unrelativized world.

It now occurs to me that, if we knew how to prove Conjecture (∗), then maybe we could push through a quantum GGM argument using similar ideas—that is, by identifying a tiny subset of inputs to f_{s} that the quantum algorithm’s acceptance probability “really” depends on. Alas, I have good reason to believe that Conjecture (∗) is hard.

So the task remains: prove a quantum GGM theorem. Or maybe I’m missing something completely obvious?

PS. The promised report on the QIS conference in Virginia is coming tomorrow. Take that, future self!

Update (5/3): An anonymous commenter points out that we can use a simpler hybrid argument of Razborov and Rudich—which doesn’t break down in the quantum case—to show that if there exists a PRG that’s secure against 2^{n^Ω(1)}-time quantum adversaries, then there also exists a PRF with polynomial seed length that’s secure against exponential-time quantum adversaries. That somehow hadn’t occurred to me, and it’s good enough for my purposes. (Masked cryptographer: emerge ye from the shadows, and claim thy rightful honour in my Acknowledgments!) On the other hand, the extremely interesting question still stands of whether one can prove a “strong,” GGM-style reduction: from PRGs secure against f(n)-time quantum adversaries to PRFs with linear seed length secure against f(n)^{Ω(1)}-time quantum adversaries, for any superpolynomial f.

The following question came up in conversation with Michael Vassar and some other nerds in New York City yesterday (before I went with relatives to see Gimpel Tam, an extraordinarily dark and depressing musical performed entirely in Yiddish). Look, I know a massive black hole would swallow the earth extremely quickly, and I also know that a microscopic black hole would quickly evaporate as Hawking radiation. So suppose we chose one of intermediate size so as to maximize the earth’s survival time—how long a time could we achieve? (Does the answer depend on the viscosity of the magma or whatever else is in the earth’s core?) Sure, I could try to calculate an answer myself, but why bother when so many physicists read this blog? Pencils out!

Today is the 70th birthday of Donald E. Knuth: Priest of Programming, Titan of Typesetting, Monarch of MMIX, intellectual heir to Turing and von Neumann, greatest living computer scientist by almost-universal assent … alright, you get the idea.

That being the case, Jeff Shallit proposed to various CS bloggers that we should all band together and present the master with a birthday surprise: one post each about how his work has inspired us. The posts are now in! Readers who don’t know about Knuth’s work (are there any?) should start with this post from Luca. Then see this from David Eppstein, this from Doron Zeilberger, this from Jeff, this from Bill Gasarch, and this from Suresh.

Knuth’s impact on my own work and thinking, while vast, has not been directly through research: his main influence on my BibTeX file is that if not for him, I wouldn’t have a BibTeX file. (One reason is that I’m one of the people Doron Zeilberger attacks for ignoring constant factors, and supporting what he calls “the ruling paradigm in computational complexity theory, with its POL vs. EXP dichotomy.”) So I decided to leave Knuth’s scientific oeuvre to others, and to concentrate in this post on his contributions to two other fields: mathematical exposition and computational theology.

Knuth’s creation of the TeX typesetting system — his original motivation being to perfect the layout of his own Art of Computer Programming books — was remarkable in two ways. First, because scientific typesetting is of so little interest to industry, it’s not clear if something like TeX would ever have been invented if not for one man and his borderline-neurotic perfectionism. Second, TeX is one of the only instances I can think of when a complicated software problem was solved so well that it never had to be solved again (nor will it for many decades, one hazards to guess). At least in math, computer science, and physics, the adoption of TeX has been so universal that failure to use it is now a reliable crackpot indicator.

From Wikipedia:

Since version 3, TeX has used an idiosyncratic version numbering system, where updates have been indicated by adding an extra digit at the end of the decimal, so that the version number asymptotically approaches π. This is a reflection of the fact that TeX is now very stable, and only minor updates are anticipated. The current version of TeX is 3.141592; it was last updated in December 2002 … Even though Donald Knuth himself has suggested a few areas in which TeX could have been improved, he indicated that he firmly believes that having an unchanged system that will produce the same output now and in the future is more important than introducing new features. For this reason, he has stated that the “absolutely final change (to be made after my death)” will be to change the version number to π, at which point all remaining bugs will become features.

But Knuth’s interest in scientific exposition goes far beyond typesetting. His 1974 Surreal Numbers: How Two Ex-Students Turned on to Pure Mathematics and Found Total Happiness, which he wrote in one week, was weirdness at the highest possible level: the Beatles’ White Album of math. It’s said to represent the only occasion in history when a new mathematical theory (Conway’s theory of surreal numbers) was introduced in the form of a novel. (Though admittedly, with the exception of one sex scene, this is a “novel” whose plot development mostly takes the form of lemmas.)

Those seeking to improve their own writing should consult Mathematical Writing (available for free on the web), the lecture notes from a course at Stanford taught by Knuth, Tracy Larrabee, and Paul Roberts. Like a lot of Knuth’s work, Mathematical Writing has the refreshing feel of an open-ended conversation: we get to see Knuth interact with students, other teachers, and visiting luminaries like Mary-Claire van Leunen, Paul Halmos, Jeff Ullman, and Leslie Lamport.

Since I’ve blogged before about the battle over academic publishing, I also wanted to mention Knuth’s remarkable and characteristically methodical 2003 letter to the editorial board of the Journal of Algorithms. Knuth asks in a postscript that his letter not be distributed widely — but not surprisingly, it already has been.

In the rest of this post, I’d like to talk about Things A Computer Scientist Rarely Talks About, the only book of Knuth’s for which I collected one of his coveted $2.56 prizes for spotting an error. (Nothing important, just a typo.)

Things is based on a series of lectures on computer science and religion that Knuth gave in 1997 at MIT. (At the risk of oversimplifying: Knuth practices Christianity, but in a strange form less interested in guns and gays than in some business about “universal compassion.”) Perhaps like most readers, when I bought Things I expected yet another essay on “non-overlapping magisteria,” a famous scientist’s apologia justifying his belief in the Virgin Birth and the Resurrection. But Knuth likes to surprise, and what he delivers instead is mostly a meditation on the typography of Bible verses. More precisely, Things is a “metabook”: a book about the lessons Knuth learned while writing and typesetting an earlier book, one I haven’t yet read, that analyzed verse 3:16 of every book of the Bible.

But this being a lecture series, Knuth also fields questions from the audience about everything from sin and redemption to mathematical Platonism. He has a habit of parrying all the really difficult questions with humor; indeed, he does this so often one comes to suspect humor is his answer. As far as I could tell, there’s only one passage in the entire book where Knuth directly addresses what atheists are probably waiting for him to address. From one of the question periods:

Q: How did you become so interested in God and religion in the first place?

A: It was because of the family I was born into. If I had been born in other circumstances, my religious life would no doubt have been quite different. (p. 155)

And then on to the next question.

To me, what’s remarkable about this response is that Knuth without any hesitation concedes what skeptics from Xenophanes to Richard Dawkins have held up as the central embarrassment of religion. This, of course, is the near-perfect correlation between the content of religious belief and the upbringing of the believer. How, Dawkins is fond of asking, could there possibly be such a thing as a Christian or Hindu or Jewish child? How could a four-year-old already know what he or she thinks about profound questions of cosmogony, history, and ethics — unless, of course, the child were brainwashed by parents or teachers?

My Bayesian friends, like Robin Hanson, carry this argument a step further. For them, the very fact that Knuth knows his beliefs would be different were he born to different parents must, assuming he’s rational, force him to change his beliefs. For how can he believe something with any conviction, if he knows his belief was largely determined by a logically-irrelevant coin toss?

And yet, openly defying the armies of Bayes arrayed against him, here we have Knuth saying, in effect: yes, I know that if I were some other person my beliefs would be different; but I’m not that other person, I’m Knuth.

So, readers: is Knuth’s response a cop-out, the understandable yet ultimately-indefensible defense of an otherwise-great scientist who never managed to free himself from certain childhood myths? Or is it a profound acknowledgment that none of us ever escape the circumstances of our birth, that we might as well own up to it, that tolerance ought not to require a shared prior, that the pursuit of science and other universal values can coexist with the personal and incommunicable?

Taking a cue from Knuth himself, I’m going to dodge this question. Instead, I decided to end this post by quoting some of my favorite passages from Chapter 6 of Things A Computer Scientist Rarely Talks About.

On computer science and God: “When I talk about computer science as a possible basis for insights about God, of course I’m not thinking about God as a super-smart intellect surrounded by large clusters of ultrafast Linux workstations and great search engines. That’s the user’s point of view.” (p. 168)

“I think it’s fair to say that many of today’s large computer programs rank among the most complex intellectual achievements of all time. They’re absolutely trivial by comparison with any of the works of God, but still they’re somehow closer to those works than anything else we know.” (p. 169)

On infinity: “Infinity is a red herring. I would be perfectly happy to give up immortality if I could only live Super K years before dying [‘Super K’ being defined similarly to an Ackermann number]. In fact, Super K nanoseconds would be enough.” (p. 172)

On the other hand: “I once thought, if I ever had to preach a sermon in church, I would try to explain Cantor’s theorem to my non-mathematical friends so that they could understand something about the infinite.” (p. 172)

On God and computational complexity: “I think it’s fair to say that God may well be bound by the laws of computational complexity … But I don’t recommend that theologians undertake a deep study of computational complexity (unless, of course, they really enjoy it). ” (p. 174)

On quantum mechanics: “Several years ago, I chanced to open Paul Dirac’s famous book on the subject and I was surprised to find out that Dirac was not only an extremely good writer but also that his book was not totally impossible to understand. The biggest surprise, however — actually a shock — was to learn that the things he talks about in that book were completely different from anything I had ever read in Scientific American or in any other popular account of the subject. Apparently when physicists talk to physicists, they talk about linear transformations of generalized Hilbert spaces over the complex numbers; observable quantities are eigenvalues and eigenfunctions of Hermitian linear operators. But when physicists talk to the general public they don’t dare mention such esoteric things, so they speak instead about particles and spins and such, which are much less than half the story. No wonder I could never really understand the popular articles.” (p. 181)

“The extra detail that gets suppressed when quantum mechanics gets popularized amounts to the fact that, according to quantum mechanics, the universe actually consists of much more data than could ever be observed.” (p. 182)

On free will and the problem of evil: “I can design a program that never crashes if I don’t give the user any options. And if I allow the user to choose from only a small number of options, limited to things that appear on a menu, I can be sure that nothing anomalous will happen, because each option can be foreseen in advance and its effects can be checked. But if I give the user the ability to write programs that will combine with my own program, all hell might break loose. (In this sense the users of Emacs have much more free will than the users of Microsoft Word.) … I suppose we could even regard Figure 5 [a binary tree representing someone’s choices] as the Tree of the Knowledge of Good and Evil.” (p. 189-190)

No particular news to report — it’s about the same as it was 400 years ago, I guess. I just wanted to liveblog from the Taj Mahal, is all. (Jonathan Walgate is the one who suggested it.) Now I’ll go back to looking at it.