The Boson Apocalypse

December 21st, 2012

If the world ends today, at least it won’t do so without three identical photons having been used to sample from a probability distribution defined in terms of the permanents of 3×3 matrices, thereby demonstrating the Aaronson-Arkhipov BosonSampling protocol.  And the results were obtained by no fewer than four independent experimental groups, some of whom have now published in Science.  One of the groups is based in Brisbane, Australia, one in Oxford, one in Vienna, and one in Rome; they coordinated to release their results the same day.  That’s right, the number of papers (4) that these groups managed to choreograph to appear simultaneously actually exceeds the number of photons that they so managed (3).  The Brisbane group was even generous enough to ask me to coauthor: I haven’t been within 10,000 miles of their lab, but I did try to make myself useful to them as a “complexity theory consultant.”

Here are links to the four experimental BosonSampling papers released in the past week:

For those who want to know the theoretical background to this work:

For those just tuning in, here are some popular-level articles about BosonSampling:

I’ll be happy to answer further questions in the comments; for now, here’s a brief FAQ:

Q: Why do you need photons in particular for these experiments?

A: What we need is identical bosons, whose transition amplitudes are given by the permanents of matrices.  If it were practical to do this experiment with Higgs bosons, they would work too!  But photons are more readily available.
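
For concreteness, here’s a toy version of that calculation (a sketch only: the 3×3 unitary below is a random stand-in, not the interferometer from any of the experiments).  With three single photons entering three modes of a linear-optical network, the amplitude to find one photon in each of three chosen output modes is the permanent of the corresponding 3×3 submatrix, and the probability is its squared magnitude.

```python
import itertools
import numpy as np

def permanent(A):
    """Brute-force permanent: sum over all n! permutations (fine for tiny matrices)."""
    n = A.shape[0]
    return sum(np.prod([A[i, sigma[i]] for i in range(n)])
               for sigma in itertools.permutations(range(n)))

# A random 3x3 unitary, standing in for the submatrix of the interferometer
# connecting the 3 input modes to 3 chosen output modes.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(X)

amp = permanent(U)                 # transition amplitude for that output pattern
print("amplitude  :", amp)
print("probability:", abs(amp) ** 2)
```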

Q: But a BosonSampling device isn’t really a “computer,” is it?

A: It depends what you mean by “computer”!  If you mean a physical system that you load input into, let evolve according to the laws of physics, then measure to get an answer to a well-defined mathematical problem, then sure, it’s a computer!   The only question is whether it’s a useful computer.  We don’t believe it can be used as a universal quantum computer—or even, for that matter, as a universal classical computer.  More than that, Alex and I weren’t able to show that solving the BosonSampling problem has any practical use for anything.  However, we did manage to amass evidence that, despite being useless, the BosonSampling problem is also hard (at least for a classical computer).  And for us, the hardness of classical simulation was the entire point.

Q: So, these experiments reported in Science this week  have done something that no classical computer could feasibly simulate?

A: No, a classical computer can handle the simulation of 3 photons without much—or really, any—difficulty.  This is only a first step: before this, the analogous experiment (called the Hong-Ou-Mandel dip) had only ever been performed with 2 photons, for which there’s not even any difference in complexity between the permanent and the determinant (i.e., between bosons and fermions).  However, if you could scale this experiment up to about 30 photons, then it’s likely that the experiment would be solving the BosonSampling problem faster than any existing classical computer (though the latter could eventually solve the problem as well).  And if you could scale it up to 100 photons, then you might never even know if your experiment was working correctly, because a classical computer would need such an astronomical amount of time to check the results.
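
To see where those numbers come from, here’s a rough sketch of Ryser’s inclusion-exclusion formula, the standard fast way to compute an n×n permanent, which takes on the order of 2^n times a small polynomial in n arithmetic operations.  (The counts printed at the end are back-of-envelope figures for a single permanent; classically simulating the sampling experiment requires many such evaluations, so they’re only a lower bound on the trouble.)

```python
import itertools
import numpy as np

def permanent_ryser(A):
    """Ryser's formula: Per(A) = (-1)^n * sum over nonempty column subsets S of
    (-1)^|S| * prod_i (sum of row i restricted to S).  Roughly n^2 * 2^n work here."""
    n = A.shape[0]
    total = 0
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            row_sums = A[:, list(cols)].sum(axis=1)
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

print(permanent_ryser(np.ones((3, 3))))      # sanity check: 3! = 6 for the all-ones matrix

# Order-of-magnitude operation counts for the ~n * 2^n scaling:
for n in (3, 10, 30, 100):
    print(f"n = {n:3d}: ~{n * 2**n:.2e} operations for one permanent")
```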

Proving Without Explaining, and Verifying Without Understanding

November 17th, 2012

Last Friday, I was at a “Symposium on the Nature of Proof” at UPenn, to give a popular talk about theoretical computer scientists’ expansions of the notion of mathematical proof (to encompass things like probabilistic, interactive, zero-knowledge, and quantum proofs).  This really is some of the easiest, best, and most fun material in all of CS theory to popularize.  Here are iTunes videos of my talk and the three others in the symposium: I’m video #2, logician Solomon Feferman is #3, attorney David Rudovsky is #4, and mathematician Dennis DeTurck is #5.  Also, here are my PowerPoint slides.  Thanks very much to Scott Weinstein at Penn for organizing the symposium.

In other news, the Complexity Zoo went down yet again this week, in a disaster that left vulnerable communities without access to vital resources like nondeterminism and multi-prover interaction.  Luckily, computational power has since been restored: with help from some volunteers, I managed to get the Zoo up and running again on my BlueHost account.  But while the content is there, it looks horrendously ugly; all the formatting seems to be gone.  And the day I agreed to let the Zoo be ported to MediaWiki was the day I lost the ability to fix such problems.  What I really need, going forward, is for someone else simply to take charge of maintaining the Zoo: it’s become painfully apparent both that it needs to be done and that I lack the requisite IT skills.  If you want to take a crack at it, here’s an XML dump of the Zoo from a few months ago (I don’t think it’s really changed since then).  You don’t even need to ask my permission: just get something running, and if it looks good, I’ll anoint you the next Zookeeper and redirect complexityzoo.com to point to your URL.

Update (Nov. 18): The Zoo is back up with the old formatting and graphics!!  Thanks so much to Charles Fu for setting up the new complexity-zoo.net (as well as Ethan, who set up a slower site that tided us over).  I’ve redirected complexityzoo.com to point to complexity-zoo.net, though it might take some time for your browser cache to clear.

The $10 billion voter

November 5th, 2012

Update (Nov. 8): Slate’s pundit scoreboard.


Update (Nov. 6): In crucial election news, a Florida woman wearing an MIT T-shirt was barred from voting, because the election supervisor thought her shirt was advertising Mitt Romney.


At the time of writing, Nate Silver is giving Obama an 86.3% chance.  I accept his estimate, while vividly remembering various admittedly-cruder forecasts from the night of November 7, 2000, which gave Gore an 80% chance.  (Of course, those forecasts need not have been “wrong”; an event with 20% probability really does happen 20% of the time.)  For me, the main uncertainties concern turnout and the effects of various voter-suppression tactics.

In the meantime, I wanted to call the attention of any American citizens reading this blog to the wonderful Election FAQ of Peter Norvig, director of research at Google and a person well-known for being right about pretty much everything.  The following passage in particular is worth quoting.

Is it rational to vote?

Yes. Voting for president is one of the most cost-effective actions any patriotic American can take.

Let me explain what the question means. For your vote to have an effect on the outcome of the election, you would have to live in a decisive state, meaning a state that would give one candidate or the other the required 270th electoral vote. More importantly, your vote would have to break an exact tie in your state (or, more likely, shift the way that the lawyers and judges will sort out how to count and recount the votes). With 100 million voters nationwide, what are the chances of that? If the chance is so small, why bother voting at all?

Historically, most voters either didn’t worry about this problem, or figured they would vote despite the fact that they weren’t likely to change the outcome, or voted because they wanted to register the degree of support for their candidate (even a vote that is not decisive is a vote that helps establish whether or not the winner has a “mandate”). But then the 2000 Florida election changed all that, with its slim 537-vote (0.009%) margin.

What is the probability that there will be a decisive state with a very close vote total, where a single vote could make a difference? Statistician Andrew Gelman of Columbia University says about one in 10 million.

That’s a small chance, but what is the value of getting to break the tie? We can estimate the total monetary value by noting that President George W. Bush presided over a $3 trillion war and at least a $1 trillion economic melt-down. Senator Sheldon Whitehouse (D-RI) estimated the cost of the Bush presidency at $7.7 trillion. Let’s compromise and call it $6 trillion, and assume that the other candidate would have been revenue neutral, so the net difference of the presidential choice is $6 trillion.

The value of not voting is that you save, say, an hour of your time. If you’re an average American wage-earner, that’s about $20. In contrast, the value of voting is the probability that your vote will decide the election (1 in 10 million if you live in a swing state) times the cost difference (potentially $6 trillion). That means the expected value of your vote (in that election) was $600,000. What else have you ever done in your life with an expected value of $600,000 per hour? Not even Warren Buffett makes that much. (One caveat: you need to be certain that your contribution is positive, not negative. If you vote for a candidate who makes things worse, then you have a negative expected value. So do your homework before voting. If you haven’t already done that, then you’ll need to add maybe 100 hours to the cost of voting, and the expected value goes down to $6,000 per hour.)

I’d like to embellish Norvig’s analysis with one further thought experiment.  While I favor a higher figure, for argument’s sake let’s accept Norvig’s estimate that the cost George W. Bush inflicted on the country was something like $6 trillion.  Now, imagine that a delegation of concerned citizens from 2012 were able to go back in time to November 5, 2000, round up 538 lazy Gore supporters in Florida who otherwise would have stayed home, and bribe them to go to the polls.  Set aside the illegality of the time-travelers’ action: they’re already violating the laws of space, time, and causality, which are well-known to be considerably more reliable than Florida state election law!  Set aside all the other interventions that also would’ve swayed the 2000 election outcome, and the 20/20 nature of hindsight, and the insanity of Florida’s recount process.  Instead, let’s simply ask: how much should each of those 538 lazy Floridian Gore supporters have been paid, in order for the delegation from the future to have gotten its money’s worth?

The answer is a mind-boggling ~$10 billion per voter.  Think about that: just for peeling their backsides off the couch, heading to the local library or school gymnasium, and punching a few chads (all the way through, hopefully), each of those 538 voters would have instantly received the sort of wealth normally associated with Saudi princes or founders of Google or Facebook.  And the country and the world would have benefited from that bargain.
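
For anyone who wants to check the arithmetic, here it is in a few lines, using the same round numbers as above (Gelman’s 1-in-10-million chance of casting the decisive vote, a $6 trillion difference between the candidates, and 538 extra Floridians); nothing here is data, just the back-of-envelope multiplication.

```python
p_decisive = 1 / 10_000_000        # chance a single swing-state vote is decisive (Gelman's estimate)
cost_diff  = 6e12                  # assumed dollar difference between the two presidencies

ev_per_vote = p_decisive * cost_diff
print(f"expected value of one swing-state vote: ${ev_per_vote:,.0f}")        # ~$600,000

# The time-travel scenario: 538 extra Gore voters flip a $6 trillion outcome.
per_voter = cost_diff / 538
print(f"value per extra Florida voter: ${per_voter / 1e9:.1f} billion")      # ~$11 billion, i.e. ~$10 billion
```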

No, this isn’t really a decisive argument for anything (I’ll leave it to the commenters to point out the many possible objections).  All it is, is an image worth keeping in mind the next time someone knowingly explains to you why voting is a waste of time.

A causality post, for no particular reason

November 2nd, 2012

The following question emerged from a conversation with the machine learning theorist Pedro Domingos a month ago.

Consider a hypothetical race of intelligent beings, the Armchairians, who never take any actions: never intervene in the world, never do controlled experiments, never try to build anything and see if it works.  The sole goal of the Armchairians is to observe the world around them and, crucially, to make accurate predictions about what’s going to happen next.  Would the Armchairians ever develop the notion of cause and effect?  Or would they be satisfied with the notion of statistical correlation?  Or is the question kind of silly, the answer depending entirely on what we mean by “developing the notion of cause and effect”?  Feel free to opine away in the comments section.

Silver lining

October 29th, 2012

Update (10/31): While I continue to engage in surreal arguments in the comments section—Scott, I’m profoundly disappointed that a scientist like you, who surely knows better, would be so sloppy as to assert without any real proof that just because it has tusks and a trunk, and looks and sounds like an elephant, and is the size of the elephant, that it therefore is an elephant, completely ignoring the blah blah blah blah blah—while I do that, there are a few glimmerings that the rest of the world is finally starting to get it.  A new story from The Onion, which I regard as almost the only real newspaper left:

Nation Suddenly Realizes This Just Going To Be A Thing That Happens From Now On

Update (11/1): OK, and this morning from Nicholas Kristof, who’s long been one of the rare non-Onion practitioners of journalism: Will Climate Get Some Respect Now?


I’m writing from the abstract, hypothetical future that climate-change alarmists talk about—the one where huge tropical storms batter the northeastern US, coastal cities are flooded, hundreds of thousands are evacuated from their homes, etc.  I always imagined that, when this future finally showed up, at least I’d have the satisfaction of seeing the deniers admit they were grievously wrong, and that I and those who think similarly were right.  Which, for an academic, is a satisfaction that has to be balanced carefully against the possible destruction of the world.  I don’t think I had the imagination to foresee that the prophesied future would actually arrive, and that climate change would simultaneously disappear as a political issue—with the forces of know-nothingism bolder than ever, pressing their advantage into questions like whether or not raped women can get pregnant, as the President weakly pleads that he too favors more oil drilling.  I should have known from years of blogging that, if you hope for the consolation of seeing those who are wrong admit to being wrong, you hope for a form of happiness all but unattainable in this world.

Yet, if the transformation of the eastern seaboard into something out of the Jurassic hasn’t brought me that satisfaction, it has brought a different, completely unanticipated benefit.  Trapped in my apartment, with the campus closed and all meetings cancelled, I’ve found, for the first time in months, that I actually have some time to write papers.  (And, well, blog posts.)  Because of this, part of me wishes that the hurricane would continue all week, even a month or two (minus, of course, the power outages, evacuations, and other nasty side effects).  I could learn to like this future.

At this point in the post, I was going to transition cleverly into an almost (but not completely) unrelated question about the nature of causality.  But I now realize that the mention of hurricanes and (especially) climate change will overshadow anything I have to say about more abstract matters.  So I’ll save the causality stuff for tomorrow or Wednesday.  Hopefully the hurricane will still be here, and I’ll have time to write.

Quantum computing in the newz

October 9th, 2012

Update (10/10).  In case anyone is interested, here’s a comment I posted over at Cosmic Variance, responding to a question about the relevance of Haroche and Wineland’s work for the interpretation of quantum mechanics.

The experiments of Haroche and Wineland, phenomenal as they are, have zero implications one way or the other for the MWI/Copenhagen debate (nor, for that matter, for third-party candidates like Bohm 🙂 ). In other words, while doing these experiments is a tremendous challenge requiring lots of new ideas, no sane proponent of any interpretation would have made predictions for their outcomes other than the ones that were observed. To do an experiment about which the proponents of different interpretations might conceivably diverge, it would be necessary to try to demonstrate quantum interference in a much, much larger system — for example, a brain or an artificially-intelligent quantum computer. And even then, the different interpretations arguably don’t make differing predictions about what the published results of such an experiment would be. If they differ at all, it’s in what they claim, or refuse to claim, about the experiences of the subject of the experiment, while the experiment is underway. But if quantum mechanics is right, then the subject would necessarily have forgotten those experiences by the end of the experiment — since otherwise, no interference could be observed!

So, yeah, barring any change to the framework of quantum mechanics itself, it seems likely that people will be arguing about its interpretation forever. Sorry about that. 🙂


Where is he?  So many wild claims being leveled, so many opportunities to set the record straight, and yet he completely fails to respond.  Where’s the passion he showed just four years ago?  Doesn’t he realize that having the facts on his side isn’t enough, has never been enough?  It’s as if his mind is off somewhere else, or as if he’s tired of his role as a public communicator and no longer feels like performing it.  Is his silence part of some devious master plan?  Is he simply suffering from a lack of oxygen in the brain?  What’s going on?

Yeah, yeah, I know.  I should blog more.  I’ll have more coming soon, but for now, two big announcements related to quantum computing.

Today the 2012 Nobel Prize in Physics was awarded jointly to Serge Haroche and David Wineland, “for ground-breaking experimental methods that enable measuring and manipulation of individual quantum systems.”  I’m not very familiar with Haroche’s work, but I’ve known of Wineland for a long time as possibly the top quantum computing experimentalist in the business, setting one record after another in trapped-ion experiments.  In awarding this prize, the Swedes have recognized the phenomenal advances in atomic, molecular, and optical physics that have already happened over the last two decades, largely motivated by the goal of building a scalable quantum computer (along with other, not entirely unrelated goals, like more accurate atomic clocks).  In so doing, they’ve given what’s arguably the first-ever “Nobel Prize for quantum computing research,” without violating their policy to reward only work that’s been directly confirmed by experiment.  Huge congratulations to Haroche and Wineland!!

In other quantum computing developments: yes, I’m aware of the latest news from D-Wave, which includes millions of dollars in new funding from Jeff Bezos (the founder of Amazon.com, recipient of a large fraction of my salary).  Despite having officially retired as Chief D-Wave Skeptic, I posted a comment on Tom Simonite’s article in MIT Technology Review, and also sent the following email to a journalist.

I’m probably not a good person to comment on the “business” aspects of D-Wave.  They’ve been extremely successful raising money in the past, so it’s not surprising to me that they continue to be successful.  For me, three crucial points to keep in mind are:

(1) D-Wave still hasn’t demonstrated 2-qubit entanglement, which I see as one of the non-negotiable “sanity checks” for scalable quantum computing.  In other words: if you’re producing entanglement, then you might or might not be getting quantum speedups, but if you’re not producing entanglement, then our current understanding fails to explain how you could possibly be getting quantum speedups.

(2) Unfortunately, the fact that D-Wave’s machine solves some particular problem in some amount of time, and a specific classical computer running (say) simulated annealing took more time, is not (by itself) good evidence that D-Wave was achieving the speedup because of quantum effects.  Keep in mind that D-Wave has now spent ~$100 million and ~10 years of effort on a highly-optimized, special-purpose computer for solving one specific optimization problem.  So, as I like to put it, quantum effects could be playing the role of “the stone in a stone soup”: attracting interest, investment, talented people, etc. to build a device that performs quite well at its specialized task, but not ultimately because of quantum coherence in that device.

(3) The quantum algorithm on which D-Wave’s business model is based — namely, the quantum adiabatic algorithm — has the property that it “degrades gracefully” to classical simulated annealing when the decoherence rate goes up.  This, fundamentally, is the thing that makes it difficult to know what role, if any, quantum coherence is playing in the performance of their device.  If they were trying to use Shor’s algorithm to factor numbers, the situation would be much more clear-cut: a decoherent version of Shor’s algorithm just gives you random garbage.  But a decoherent version of the adiabatic algorithm still gives you a pretty good (but now essentially “classical”) algorithm, and that’s what makes it hard to understand what’s going on here.

As I’ve said before, I no longer feel like playing an adversarial role.  I really, genuinely hope D-Wave succeeds.  But the burden is on them to demonstrate that their device uses quantum effects to obtain a speedup, and they still haven’t met that burden.  When and if the situation changes, I’ll be happy to say so.  Until then, though, I seem to have the unenviable task of repeating the same observation over and over, for 6+ years, and confirming that, no, the latest sale, VC round, announcement of another “application” (which, once again, might or might not exploit quantum effects), etc., hasn’t changed the truth of that observation.

Best,
Scott
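
For readers who haven’t seen one, here’s what the generic classical baseline in point (2) looks like: plain simulated annealing on a toy random Ising-type cost function.  To be clear, this is purely illustrative; it has nothing to do with D-Wave’s hardware, their actual benchmark instances, or the highly-optimized classical solvers one would really compare against.

```python
import math
import random

random.seed(0)
n = 50
# Random +/-1 couplings on the complete graph: a toy spin-glass cost function.
J = {(i, j): random.choice([-1.0, 1.0]) for i in range(n) for j in range(i + 1, n)}

def energy(spins):
    return sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

def delta_E(spins, i):
    # Energy change from flipping spin i: only the terms touching i change.
    return -2 * spins[i] * sum(
        J[(min(i, j), max(i, j))] * spins[j] for j in range(n) if j != i
    )

spins = [random.choice([-1, 1]) for _ in range(n)]
E = energy(spins)
print("initial energy:", E)

T, cooling = 5.0, 0.999            # starting "temperature" and geometric cooling rate
for step in range(20000):
    i = random.randrange(n)
    dE = delta_E(spins, i)
    # Metropolis rule: always accept downhill moves, sometimes accept uphill ones.
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i] *= -1
        E += dE
    T *= cooling                   # lower T means the walk becomes greedier over time

print("final energy:", E)
```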

Two quick announcements

September 8th, 2012

The Pennsylvania Governor’s School for the Sciences (PGSS) was an incredibly-successful summer program for gifted high school students in my birth-state of Pennsylvania.  PGSS ran from 1982 to 2009 and then was shuttered due to state budget cuts.  A group of alumni is now trying to raise enough private funds to restart the program (they need $100,000).  Please visit their site, watch their video, and make a small (or large) donation if you feel moved to.

In other news, I’ll be speaking at a workshop on Quantum Information Science in Computer and Natural Sciences, organized by Umesh Vazirani and Carl Williams, to be held September 28-29 at the University of Maryland College Park.  This workshop is specifically designed for computer scientists, mathematicians, physicists, and others who haven’t worked in quantum information, but who’d like to know more about current research in the area, and to look for connections between quantum information and their own fields.  Umesh writes:

The initiative comes at a particularly opportune moment for researchers in complexity theory, given the increasing relevance of quantum techniques in complexity theory — the 2-4 norm paper of Barak, et al (SDPs, Lasserre), exponential lower bounds for TSP polytope via quantum communication complexity arguments (de Wolf et al), quantum Hamiltonian complexity as a generalization of  CSPs, lattice-based cryptography whose security is based on quantum arguments, etc.
Hope to see some of you there!

The Toaster-Enhanced Turing Machine

August 30th, 2012

Over at Theoretical Computer Science StackExchange, an entertaining debate has erupted about the meaning and validity of the Church-Turing Thesis.  The prompt for this debate was a question asking for opinions about Peter Wegner and Dina Goldin’s repetitive diatribes claiming to refute “the myth of the Church-Turing Thesis”—on the grounds that, you see, Turing machines can only handle computations with static inputs and outputs, not interactivity, or programs like operating systems that run continuously.  For a demolition of this simple misunderstanding, see Lance Fortnow’s CACM article.  Anyway, I wrote my own parodic response to the question, which generated so many comments that the moderators started shooing people away.  So I decided to repost my answer on my blog.  That way, after you’re done upvoting my answer over at CS Theory StackExchange :-), you can come back here and continue the discussion in the comments section.


Here’s my favorite analogy. Suppose I spent a decade publishing books and papers arguing that, contrary to theoretical computer science’s dogma, the Church-Turing Thesis fails to capture all of computation, because Turing machines can’t toast bread. Therefore, you need my revolutionary new model, the Toaster-Enhanced Turing Machine (TETM), which allows bread as a possible input and includes toasting it as a primitive operation.

You might say: sure, I have a “point”, but it’s a totally uninteresting one. No one ever claimed that a Turing machine could handle every possible interaction with the external world, without first hooking it up to suitable peripherals. If you want a Turing machine to toast bread, you need to connect it to a toaster; then the TM can easily handle the toaster’s internal logic (unless this particular toaster requires solving the halting problem or something like that to determine how brown the bread should be!). In exactly the same way, if you want a TM to handle interactive communication, then you need to hook it up to suitable communication devices, as Neel discussed in his answer. In neither case are we saying anything that wouldn’t have been obvious to Turing himself.

So, I’d say the reason why there’s been no “followup” to Wegner and Goldin’s diatribes is that theoretical computer science has known how to model interactivity whenever needed, and has happily done so, since the very beginning of the field.

Update (8/30): A related point is as follows. Does it ever give the critics pause that, here inside the Elite Church-Turing Ivory Tower (the ECTIT), the major research themes for the past two decades have included interactive proofs, multiparty cryptographic protocols, codes for interactive communication, asynchronous protocols for routing, consensus, rumor-spreading, leader-election, etc., and the price of anarchy in economic networks? If putting Turing’s notion of computation at the center of the field makes it so hard to discuss interaction, how is it that so few of us have noticed?

Another Update: To the people who keep banging the drum about higher-level formalisms being vastly more intuitive than TMs, and no one thinking in terms of TMs as a practical matter, let me ask an extremely simple question. What is it that lets all those high-level languages exist in the first place, that ensures they can always be compiled down to machine code? Could it be … err … THE CHURCH-TURING THESIS, the very same one you’ve been ragging on? To clarify, the Church-Turing Thesis is not the claim that “TURING MACHINEZ RULE!!” Rather, it’s the claim that any reasonable programming language will be equivalent in expressive power to Turing machines — and as a consequence, that you might as well think in terms of the higher-level languages if it’s more convenient to do so. This, of course, was a radical new insight 60-75 years ago.
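
To make one direction of that equivalence concrete, here’s a toy Turing-machine simulator: a dozen lines of a high-level language suffice to run any TM, and the Church-Turing Thesis is the claim that the converse holds too, once the language is compiled down.  (The binary-increment machine below is a made-up example of mine, not anything from the StackExchange thread.)

```python
def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a one-tape TM.  rules maps (state, symbol) -> (state, symbol, move)."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells).strip(blank)

# Binary increment: scan to the right end of the input, then carry 1s leftward.
rules = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt",  "1", "L"),
    ("carry", "_"): ("halt",  "1", "L"),
}

print(run_tm(rules, "1011"))   # 1011 (eleven) + 1 -> 1100 (twelve)
```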

Update (Sept. 6): Check out this awesome comment by Lou Scheffer, describing his own tale of conversion from a Church-Turing skeptic to believer, and making an extremely apt comparison to the experience of conversion to the belief that R, R², and so on all have the same cardinality (an experience I also underwent!).

Why Many-Worlds is not like Copernicanism

August 18th, 2012

[Update (8/26): Inspired by the great responses to my last Physics StackExchange question, I just asked a new one, also about the possibilities for gravitational decoherence, but now focused on Gambini et al.’s “Montevideo interpretation” of quantum mechanics.

Also, on a completely unrelated topic, my friend Jonah Sinick has created a memorial YouTube video for the great mathematician Bill Thurston, who sadly passed away last week.  Maybe I should cave in and set up a Twitter feed for this sort of thing…]

[Update (8/26): I’ve now posted what I see as one of the main physics questions in this discussion on Physics StackExchange: “Reversing gravitational decoherence.”  Check it out, and help answer if you can!]

[Update (8/23): If you like this blog, and haven’t yet read the comments on this post, you should probably do so!  To those who’ve complained about not enough meaty quantum debates on this blog lately, the comment section of this post is my answer.]

[Update: Argh!  For some bizarre reason, comments were turned off for this post.  They’re on now.  Sorry about that.]

I’m in Anaheim, CA for a great conference celebrating the 80th birthday of the physicist Yakir Aharonov.  I’ll be happy to discuss the conference in the comments if people are interested.

In the meantime, though, since my flight here was delayed 4 hours, I decided to (1) pass the time, (2) distract myself from the inanities blaring on CNN at the airport gate, (3) honor Yakir’s half-century of work on the foundations of quantum mechanics, and (4) honor the commenters who wanted me to stop ranting and get back to quantum stuff, by sharing some thoughts about a topic that, unlike gun control or the Olympics, is completely uncontroversial: the Many-Worlds Interpretation of quantum mechanics.

Proponents of MWI, such as David Deutsch, often argue that MWI is a lot like Copernican astronomy: an exhilarating expansion in our picture of the universe, which follows straightforwardly from Occam’s Razor applied to certain observed facts (the motions of the planets in one case, the double-slit experiment in the other).  Yes, many holdouts stubbornly refuse to accept the new picture, but their skepticism says more about sociology than science.  If you want, you can describe all the quantum-mechanical experiments anyone has ever done, or will do for the foreseeable future, by treating “measurement” as an unanalyzed primitive and never invoking parallel universes.  But you can also describe all astronomical observations using a reference frame that places the earth at the center of the universe.  In both cases, say the MWIers, the problem with your choice is its unmotivated perversity: you mangle the theory’s mathematical simplicity, for no better reason than a narrow parochial urge to place yourself and your own experiences at the center of creation.  The observed motions of the planets clearly want a sun-centered model.  In the same way, Schrödinger’s equation clearly wants measurement to be just another special case of unitary evolution—one that happens to cause your own brain and measuring apparatus to get entangled with the system you’re measuring, thereby “splitting” the world into decoherent branches that will never again meet.  History has never been kind to people who put what they want over what the equations want, and it won’t be kind to the MWI-deniers either.

This is an important argument, which demands a response by anyone who isn’t 100% on-board with MWI.  Unlike some people, I happily accept this argument’s framing of the issue: no, MWI is not some crazy speculative idea that runs afoul of Occam’s razor.  On the contrary, MWI really is just the “obvious, straightforward” reading of quantum mechanics itself, if you take quantum mechanics literally as a description of the whole universe, and assume nothing new will ever be discovered that changes the picture.

Nevertheless, I claim that the analogy between MWI and Copernican astronomy fails in two major respects.

The first is simply that the inference, from interference experiments to the reality of many-worlds, strikes me as much more “brittle” than the inference from astronomical observations to the Copernican system, and in particular, too brittle to bear the weight that the MWIers place on it.  Once you know anything about the dynamics of the solar system, it’s hard to imagine what could possibly be discovered in the future, that would ever again make it reasonable to put the earth at the “center.”  By contrast, we do more-or-less know what could be discovered that would make it reasonable to privilege “our” world over the other MWI branches.  Namely, any kind of “dynamical collapse” process, any source of fundamentally-irreversible decoherence between the microscopic realm and that of experience, any physical account of the origin of the Born rule, would do the trick.

Admittedly, like most quantum folks, I used to dismiss the notion of “dynamical collapse” as so contrived and ugly as not to be worth bothering with.  But while I remain unimpressed by the specific models on the table (like the GRW theory), I’m now agnostic about the possibility itself.  Yes, the linearity of quantum mechanics does indeed seem incredibly hard to tinker with.  But as Roger Penrose never tires of pointing out, there’s at least one phenomenon—gravity—that we understand how to combine with quantum-mechanical linearity only in various special cases (like 2+1 dimensions, or supersymmetric anti-de Sitter space), and whose reconciliation with quantum mechanics seems to raise fundamental problems (i.e., what does it even mean to have a superposition over different causal structures, with different Hilbert spaces potentially associated to them?).

To make the discussion more concrete, consider the proposed experiment of Bouwmeester et al., which seeks to test (loosely) whether one can have a coherent superposition over two states of the gravitational field that differ by a single Planck length or more.  This experiment hasn’t been done yet, but some people think it will become feasible within a decade or two.  Most likely it will just confirm quantum mechanics, like every previous attempt to test the theory for the last century.  But it’s not a given that it will; quantum mechanics has really, truly never been tested in this regime.  So suppose the interference pattern isn’t seen.  Then poof!  The whole vast ensemble of parallel universes spoken about by the MWI folks would have disappeared with a single experiment.  In the case of Copernicanism, I can’t think of any analogous hypothetical discovery with even a shred of plausibility: maybe a vector field that pervades the universe but whose unique source was the earth?  So, this is what I mean in saying that the inference from existing QM experiments to parallel worlds seems too “brittle.”

As you might remember, I wagered $100,000 that scalable quantum computing will indeed turn out to be compatible with the laws of physics.  Some people considered that foolhardy, and they might be right—but I think the evidence seems pretty compelling that quantum mechanics can be extrapolated at least that far.  (We can already make condensed-matter states involving entanglement among millions of particles; for that to be possible but not quantum computing would seem to require a nasty conspiracy.)  On the other hand, when it comes to extending quantum-mechanical linearity all the way up to the scale of everyday life, or to the gravitational metric of the entire universe—as is needed for MWI—even my nerve falters.  Maybe quantum mechanics does go that far up; or maybe, as has happened several times in physics when exploring a new scale, we have something profoundly new to learn.  I wouldn’t give much more informative odds than 50/50.

The second way I’d say the MWI/Copernicus analogy breaks down arises from a closer examination of one of the MWIers’ favorite notions: that of “parochial-ness.”  Why, exactly, do people say that putting the earth at the center of creation is “parochial”—given that relativity assures us that we can put it there, if we want, with perfect mathematical consistency?  I think the answer is: because once you understand the Copernican system, it’s obvious that the only thing that could possibly make it natural to place the earth at the center, is the accident of happening to live on the earth.  If you could fly a spaceship far above the plane of the solar system, and watch the tiny earth circling the sun alongside Mercury, Venus, and the sun’s other tiny satellites, the geocentric theory would seem as arbitrary to you as holding Cheez-Its to be the sole aim and purpose of human civilization.  Now, as a practical matter, you’ll probably never fly that spaceship beyond the solar system.  But that’s irrelevant: firstly, because you can very easily imagine flying the spaceship, and secondly, because there’s no in-principle obstacle to your descendants doing it for real.

Now let’s compare to the situation with MWI.  Consider the belief that “our” universe is more real than all the other MWI branches.  If you want to describe that belief as “parochial,” then from which standpoint is it parochial?  The standpoint of some hypothetical godlike being who sees the entire wavefunction of the universe?  The problem is that, unlike with my solar system story, it’s not at all obvious that such an observer can even exist, or that the concept of such an observer makes sense.  You can’t “look in on the multiverse from the outside” in the same way you can look in on the solar system from the outside, without violating the quantum-mechanical linearity on which the multiverse picture depends in the first place.

The closest you could come, probably, is to perform a Wigner’s friend experiment, wherein you’d verify via an interference experiment that some other person was placed into a superposition of two different brain states.  But I’m not willing to say with confidence that the Wigner’s friend experiment can even be done, in principle, on a conscious being: what if irreversible decoherence is somehow a necessary condition for consciousness?  (We know that increase in entropy, of which decoherence is one example, seems intertwined with and possibly responsible for our subjective sense of the passage of time.)  In any case, it seems clear that we can’t talk about Wigner’s-friend-type experiments without also talking, at least implicitly, about consciousness and the mind/body problem, and that that fact ought to make us exceedingly reluctant to declare that the right answer is obvious and that anyone who doesn’t see it is an idiot.  In the case of Copernicanism, the “flying outside the solar system” thought experiment isn’t similarly entangled with any of the mysteries of personal identity.

There’s a reason why Nobel Prizes are regularly awarded for confirmations of effects that were predicted decades earlier by theorists, and that therefore surprised almost no one when they were finally found.  Were we smart enough, it’s possible that we could deduce almost everything interesting about the world a priori.  Alas, history has shown that we’re usually not smart enough: that even in theoretical physics, our tendencies to introduce hidden premises and to handwave across gaps in argument are so overwhelming that we rarely get far without constant sanity checks from nature.

I can’t think of any better summary of the empirical attitude than the famous comment by Donald Knuth: “Beware of bugs in the above code.  I’ve only proved it correct; I haven’t tried it.”  In the same way, I hereby declare myself ready to support MWI, but only with the following disclaimer: “Beware of bugs in my argument for parallel copies of myself.  I’ve only proved that they exist; I haven’t heard a thing from them.”

The right to bear ICBMs

August 7th, 2012

(Note for non-US readers: This will be another one of my America-centric posts.  But don’t worry, it’s probably one you’ll agree with.)

There’s one argument in favor of gun control that’s always seemed to me to trump all others.

In your opinion, should private citizens be allowed to own thermonuclear warheads together with state-of-the-art delivery systems?  Does the Second Amendment give them the right to purchase ICBMs on the open market, maybe after a brief cooling-off period?  No?  Why not?

OK, whatever grounds you just gave, I’d give precisely the same grounds for saying that private citizens shouldn’t be allowed to own assault weapons, and that the Second Amendment shouldn’t be construed as giving them that right.  (Personally, I’d ban all guns except for the bare minimum used for sport-shooting, and even that I’d regulate pretty tightly.)

Now, it might be replied that the above argument can be turned on its head: “Should private citizens be allowed to own pocket knives?  Yes, they should?  OK then, whatever grounds you gave for that, I’d give precisely the same grounds for saying that they should be allowed to own assault weapons.”

But crucially, I claim that’s a losing argument for the gun-rights crowd.  For as soon as we’re anywhere on the slippery slope—that is, as soon as it’s conceded that the question hinges, not on absolute rights, but on actual tradeoffs in actual empirical reality—then the facts make it blindingly obvious that letting possibly-deranged private citizens buy assault weapons is only marginally less crazy than letting them buy ICBMs.

[Related Onion story]