Kai von Fintel
Anne Whiston Spirn
Members of the MIT Open Access Working Group
Update (1/18): Some more information has emerged. First, it’s looking like the prosecution’s strategy was to threaten Aaron with decades of prison time, in order to force him to accept a plea bargain involving at most 6 months. (Carmen Ortiz issued a statement that conveniently skips the first part of the strategy and focuses on the second.) This is standard operating procedure in our wonderful American justice system, due (in part) to the lack of resources to actually bring most cases to trial. The only thing unusual about the practice is the spotlight being shone on it, now that it was done not to some poor unknown schmuck but to a tortured prodigy and nerd hero. Fixing the problem would require far-reaching changes to our justice system.
Second, while I still strongly feel that we should await the results of Hal Abelson’s investigation, I’ve now heard from several sources that there was some sort of high-level decision at MIT—by whom, I have no idea—not to come out in support of Aaron. Crucially, though, I’m unaware of the faculty (or students, for that matter) ever being consulted about this decision, or even knowing that there was anything for MIT to decide. Yesterday, feeling guilty about having done nothing to save Aaron, I found myself wishing that either he or his friends or parents had made an “end run” around the official channels, and informed MIT faculty and students directly of the situation and of MIT’s ability to help. (Or maybe they did, and I simply wasn’t involved?)
Just to make sure I hadn’t missed anything, I searched my inbox for “Swartz”, but all I found relevant to the case were a couple emails from a high-school student shortly after the arrest (for a project he was doing about the case), and then the flurry of emails after Aaron had already committed suicide. By far the most interesting thing that I found was the following:
Aaron Swartz (December 12, 2007): I’m really enjoying the Democritus lecture notes. Any chance we’ll ever see lecture 12?
My response: It’s a-comin’!
As I wrote on this blog at the time of Aaron’s arrest: I would never have advised him to do what he did. Civil disobedience can be an effective tactic, but off-campus access to research papers simply isn’t worth throwing your life away for—especially if your life holds as much spectacular promise as Aaron’s did, judging from everything I’ve read about him. At the same time, I feel certain that the world will eventually catch up to Aaron’s passionate belief that the results of publicly-funded research should be freely available to the public. We can honor Aaron’s memory by supporting the open science movement, and helping the world catch up with him sooner.
If you have opinions about quantum computing, and haven’t yet read through the discussion following my “response to Dyakonov” post, you’re missing out. The comments—by QC researchers (Preskill, Kuperberg, Gottesman, Fitzsimons…), skeptics (Dyakonov, Kalai, …), and interested outsiders alike—are some of the most interesting I’ve seen in this two-decade-old debate.
At the risk of crass immodesty, I just posted a comment whose ending amused me so much, I had to promote it to its own post. My starting point was an idea that several skeptics, including Dyakonov, have articulated in this debate, and which I’ll paraphrase as follows:
Sure, quantum computing might be “possible in principle.” But only in the same sense that teaching a donkey how to read, transmuting large amounts of lead into gold, or doing a classical computation in the center of the Sun are “possible in principle.” In other words, the task is at the same time phenomenally difficult, and fundamentally arbitrary and quixotic even if you did somehow achieve it.
Since I considered this argument an important one, I wrote a response, which stressed how quantum computing is different both because it strives to solve problems that flat-out can’t feasibly be solved any other way if standard complexity conjectures are correct, and because the goal—namely, expanding the human race’s computational powers beyond classical polynomial time—is not at all an arbitrary one. However, I then felt the need to expand on the last point, since it occurred to me that it’s both central to this debate and almost never discussed explicitly.
How do I know that the desire for computational power isn’t just an arbitrary human quirk?
Well, the reason I know is that math isn’t arbitrary, and computation is nothing more or less than the mechanizable part of solving math problems.
Let me put it this way: if we ever make contact with an advanced extraterrestrial civilization, they might have three sexes and five heads. But they, too, will have encountered the problem of factoring integers into primes. Indeed, because they’ll inhabit the same physical universe as we do, they’ll even have encountered the problem of simulating quantum physics. And therefore, putting the two together, they’ll almost certainly have discovered something like Shor’s algorithm — though they’ll call it “Zork’s bloogorithm” or whatever.
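(For the programmers in the audience, here’s a toy illustration of what “something like Shor’s algorithm” means: factoring N reduces to finding the multiplicative order r of a random a mod N. The sketch below—function names are mine, and it’s purely classical—does the order-finding by brute force, which is exponentially slow; the entire point of Shor’s algorithm is that a quantum computer finds r in polynomial time.)

```python
import math
import random

def order(a, n):
    """Multiplicative order of a mod n, found by brute force.
    (This is the step Shor's algorithm performs in quantum polynomial
    time; classically, no fast method is known.)"""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_style_factor(n):
    """Toy classical version of Shor's reduction from factoring to
    order-finding.  Works (with high probability) when n is odd,
    composite, and not a prime power."""
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                     # lucky: a already shares a factor with n
        r = order(a, n)
        if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
            f = math.gcd(pow(a, r // 2, n) - 1, n)
            if 1 < f < n:
                return f

print(shor_style_factor(15))             # prints 3 or 5
```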
A couple weeks ago M. I. Dyakonov, a longtime quantum computing skeptic, published a new paper setting out his arguments (maybe “grievances” is a more accurate word) against quantum computing research. Looking for a way to procrastinate from other work I have to do, I decided to offer some thoughts in response.
To me, perhaps the most striking aspect of Dyakonov’s paper is what it doesn’t claim. Unlike Leonid Levin, Oded Goldreich, and several other quantum computing skeptics I’ve engaged, Dyakonov never seriously entertains the possibility of a general principle that would explain why scalable quantum computing is not possible. (Thus, my $100K prize presumably isn’t relevant to him.) He even ridicules discussion of such a principle (see the end of this post). The unwillingness to say that scalable QC can’t work, or to articulate a reason why, saves Dyakonov from the need to explore what else would need to be true about the physical world if scalable QC were impossible. For example, would there then be an efficient algorithm to simulate arbitrary quantum systems on a classical computer—or at least, all quantum systems that can plausibly arise in Nature? Dyakonov need not, and does not, evince any curiosity about such questions. In his game, it’s only the quantum computing proponents who are on trial; there’s no need for examination of the other side.
That being so, Dyakonov focuses on what he sees as unrealistic assumptions in known versions of the Quantum Fault-Tolerance Theorem, covering well-trodden ground but with some strange twists. He accuses quantum computing researchers of a “widespread belief that the |0〉 and |1〉 states ‘in the computational basis’ are something absolute, akin to the on/off states of an electrical switch, or of a transistor in a digital computer.” He then follows with a somewhat-patronizing discussion of how no continuous quantity can be manipulated perfectly, and how |0〉 and |1〉 are just arbitrary labels whose meanings could change over time due to drift in the preparation and measurement devices. Well, yes, it’s obvious that |0〉 and |1〉 don’t have absolute meanings, but is it not equally obvious that we can give them meanings, through suitable choices of initial states, gates, and measurement settings? And if the meanings of |0〉 and |1〉 drift over time, due to the imprecision of our devices … well, if the amount of drift is upper-bounded by some sufficiently small constant, then we can regard it as simply yet another source of noise, and apply standard fault-tolerance methods to correct it. If the drift is unbounded, then we do need better devices.
(Fault-tolerance mavens: please use the comments for more detailed discussion! To my inexpert eyes, Dyakonov doesn’t seem to engage the generality of the already-known fault-tolerance theorems—a generality traceable to the fact that what powers those results is ultimately just the linearity of quantum mechanics, not some fragile coincidence that one expects to disappear with the slightest change in assumptions. But I’m sure others can say more.)
Anyway, from his discussion of fault-tolerance, Dyakonov concludes only that the possibility of scalable quantum computing in the real world should be considered an open question.
Surprisingly—since many QC skeptics wouldn’t be caught dead making such an argument—Dyakonov next turns around and says that, well, OK, fine, even if scalable QCs can be built, they still won’t be good for much. Shor’s factoring algorithm is irrelevant, since people would simply switch to other public-key cryptosystems that appear secure even against quantum attack. Simulating quantum physics “would be an interesting and useful achievement, but hardly revolutionary, unless we understand this term in some very narrow sense.” And what about Grover’s algorithm? In an endnote, Dyakonov writes:
Quantum algorithms that provide (with an ideal quantum computer!) only polynomial speed-up compared to digital computing, like the Grover algorithm, became obsolete due to the polynomial slow-down imposed by error correction.
The above is flat-out mistaken. The slowdown imposed by quantum error-correction is polylogarithmic, not polynomial, so it doesn’t come close to wiping out the Grover speedup (or the subexponential speedups that might be achievable, e.g., with the adiabatic algorithm, which Dyakonov doesn’t mention).
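To spell out the arithmetic: the known fault-tolerance constructions multiply the cost of each logical operation by a factor polylogarithmic in the size of the computation. So an error-corrected Grover search costs roughly

$$O\!\left(\sqrt{N}\right)\times O\!\left(\mathrm{polylog}\,N\right)=O\!\left(\sqrt{N}\,\mathrm{polylog}\,N\right)\ll O(N).$$

For (say) N = 10^12, √N is a million, so even a fault-tolerance overhead in the hundreds still leaves a speedup of three or four orders of magnitude over classical brute-force search.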
But disregarding the polylog/polynomial confusion (which recurs elsewhere in the article), and other technical issues about fault-tolerance, up to this point many quantum computing researchers could happily agree with Dyakonov—and have said similar things many times themselves. Dyakonov even quotes Dorit Aharonov, one of the discoverers of quantum fault-tolerance, writing, “In a sense, the question of noisy quantum computation is theoretically closed. But a question still ponders our minds: Are the assumptions on the noise correct?”
(And as for QC researchers coming clean about limitations of quantum computers? This is just hearsay, but I’m told there’s a QC researcher who actually chose “Quantum computers are not known to be able to solve NP-complete problems in polynomial time” as the tagline for his blog!)
Dyakonov fumes about how popular articles, funding agency reports, and so forth have overhyped progress in quantum computing, leaving the conditions out of theorems and presenting incremental advances as breakthroughs. Here I sadly agree. As readers of Shtetl-Optimized can hopefully attest, I’ve seen it as my professional duty to spend part of my life battling cringeworthy quantum computing claims. Every week, it feels like I talk to another journalist who tries to get me to say that this or that QC result will lead to huge practical applications in the near future, since that’s what the editor is looking for. And every week I refuse to say it, and try to steer the conversation toward “deeper” scientific questions. Sometimes I succeed and sometimes not, but at least I never hang up the phone feeling dirty.
On the other hand, it would be interesting to know whether, in the history of science, there’s ever been a rapidly-developing field, of interest to large numbers of scientists and laypeople alike, that wasn’t surrounded by noxious clouds of exaggeration, incomprehension, and BS. I can imagine that, when Isaac Newton published his Principia, a Cambridge University publicist was there to explain to reporters that the new work proved that the Moon was basically an apple.
But none of that is where Dyakonov loses me. Here’s where he does: from the statements
A) The feasibility of scalable quantum computing in the physical world remains open, and
B) The applications of quantum computing would probably be real but specialized,
he somehow, unaided by argument, arrives at the conclusion
C) Quantum computing is a failed, pathological research program, which will soon die out and be of interest only to sociologists.
Let me quote from his conclusion at length:
I believe that, in spite of appearances, the quantum computing story is nearing its end, not because somebody proves that it is impossible, but rather because 20 years is a typical lifetime of any big bubble in science, because too many unfounded promises have been made, because people get tired and annoyed by almost daily announcements of new “breakthroughs”, because all the tenure positions in quantum computing are already occupied, and because the proponents are growing older and less zealous, while the younger generation seeks for something new …
In fact, quantum computing is not so much a scientific, as a sociological problem which has expanded out of all proportion due to the US system of funding scientific research (which is now being copied all over the world). While having some positive sides, this system is unstable against spontaneous formation of bubbles and mafia-like structures. It pushes the average researcher to wild exaggerations on the border of fraud and sometimes beyond. Also, it is much easier to understand the workings of the funding system, than the workings of Nature, and these two skills only rarely come together.
The QC story says a lot about human nature, the scientific community, and the society as a whole, so it deserves profound psycho-sociological studies, which should begin right now, while the main actors are still alive and can be questioned.
In case the message isn’t yet clear enough, Dyakonov ends by comparing quantum computing to the legend of Nasreddin, who promised the Sultan that he could teach a donkey how to read.
Had he [Nasreddin] the modern degree of sophistication, he could say, first, that there is no theorem forbidding donkeys to read. And, since this does not contradict any known fundamental principles, the failure to achieve this goal would reveal new laws of Nature. So, it is a win-win strategy: either the donkey learns to read, or new laws will be discovered.
Second, he could say that his research may, with some modifications, be generalized to other animals, like goats and sheep, as well as to insects, like ants, gnats, and flies, and this will have a tremendous potential for improving national security: these beasts could easily cross the enemy lines, read the secret plans, and report them back to us.
Dyakonov chose his example carefully. Turnabout: consider the first person who had the idea of domesticating a wild donkey, teaching the beast to haul people’s stuff on its back. If you’d never seen a domestic animal before, that idea would sound every bit as insane as donkey literacy. And indeed, it probably took hundreds of years of selective breeding before it worked well.
If there’s no general principle saying that X can’t work, the truth might be that X can never work, but for reasons too messy to articulate. Or the truth might be that X can work. How can you ever find out, except by, y’know, science? Try doing X. If you fail, try to figure out why. If you figure it out, share the lessons with others. Look for an easier related problem Y that you can solve. Think about whether X is impossible; if you could show its impossibility, that might advance human knowledge even more than X itself would have. If the methods you invented for X don’t work, see if they work for some other, unrelated problem Z. Congratulations! You’ve just reinvented quantum computing research. Or really, any kind of research.
But there’s something else that bothers me about Dyakonov’s donkey story: its specificity. Why fixate on teaching a donkey, only a donkey, how to read? Earlier in his article, Dyakonov ridicules the diversity of physical systems that have been considered as qubits—electron spin qubits, nuclear spin qubits, Josephson superconducting qubits, cavity photon qubits, etc.—seeing the long list as symptomatic of some deep pathology in the field. Yet he never notices the tension with his donkey story. Isn’t it obvious that, if Nasreddin had been a quantum computing experimentalist, then after failing to get good results with donkeys, he’d simply turn his attention to teaching cows, parrots, snakes, elephants, dolphins, or gorillas how to read? Furthermore, while going through the zoo, Nasreddin might discover that he could teach gorillas how to recognize dozens of pictorial symbols: surely a nice partial result. But maybe he’d have an even better idea: why not build his own reading machine? The machine could use a camera to photograph the pages of a book, and a computer chip to decode the letters. If one wanted, the machine could even be the size and shape of a donkey, and could emit braying sounds. Now, maybe Nasreddin would fail to build this reading machine, but even then, we know today that it would have been a noble failure, like those of Charles Babbage or Ted Nelson. Nasreddin would’ve failed only by being too far ahead of his time.
Sorry for the terrible pun. Today’s post started out as a comment on a review of the movie Lincoln on Sean Carroll’s blog, but it quickly became too long, so I made it into a post on my own blog. Apparently I lack Abe’s gift for concision.
I just saw Lincoln — largely inspired by Sean’s review — and loved it. It struck me as the movie that Lincoln might have wanted to be made about himself: it doesn’t show any of his evolution, but at least it shows the final result of that evolution, and conveys, in a highly efficient manner, the stories, parables, and insight into human nature that he had accumulated by the end of his life.
Interestingly, the Wikipedia page says that Spielberg commissioned, but then ultimately rejected, two earlier scripts that would have covered the whole Civil War period, and (one can assume) Lincoln’s process of evolution. I think that also could have been a great movie, but I can sort-of understand why Spielberg and Tony Kushner made the unusual choice they did: at the level of detail they wanted, it seems like it would be impossible to do justice to Lincoln’s whole life, or even the last five years of it, in anything less than a miniseries.
I agree with the many people who pointed out that the movie could have given more credit to those who were committed antislavery crusaders from the beginning—rather than those like Lincoln, who eventually came around to the positions we now associate with him after a lot of toying with ideas like blacks self-deporting to Liberia. But in a way, the movie didn’t need to dole out such credit: today, we know (for example) that Thaddeus Stevens had history and justice 3000% on his side, so the movie is free to show him as the nutty radical that he seemed to most others at the time. And there’s even a larger point: never the most diligent student of history, I (to take one embarrassing example) had only the vaguest idea who Thaddeus Stevens even was before seeing the movie. Now I’ve spent hours reading about him, as well as about Charles Sumner, and being moved by their stories.
(At least I knew about the great Frederick Douglass, having studied his Narrative in freshman English class. Douglass and I have something in common: just as a single sentence he wrote, “I would unite with anybody to do right and with nobody to do wrong,” will reverberate through the ages, so too, I predict, will a single sentence I wrote: “Australian actresses are plagiarizing my quantum mechanics lecture to sell printers.”)
More broadly, I think it’s easy for history buffs to overestimate how much people already know about this stuff. Indeed, I can easily imagine that millions of Americans who know Lincoln mostly as “the dude on the $5 bill (who freed some slaves, wore a top hat, used the word ‘fourscore,’ and got shot)” will walk out of the cineplex with a new and ~85% accurate appreciation for what Lincoln did to merit all that fuss, and why his choices weren’t obvious to everyone else at the time.
Truthfully, though, nothing made me appreciate the movie more than coming home and reading countless comments on movie review sites denouncing Abraham Lincoln as a bloodthirsty war criminal, and the movie as yet more propaganda by the victors rewriting history. Even on Sean’s blog we find this, by a commenter named Tony:
I’m not one who believes we have to go to war to solve every problem we come across, I can’t believe that Lincoln couldn’t have found a solution to states rights and slavery in a more peaceful course of action. It seems from the American Revolutionary war to the present it has been one war after another … The loss of life of all wars is simply staggering, what a waste of humanity.
Well look, successive presidential administrations did spend decades trying to find a peaceful solution to the “states rights and slavery” issue; the massive failure of their efforts might make one suspect that a peaceful solution didn’t exist. Indeed, even if Lincoln had simply let the South secede, my reading of history is that issues like the return of fugitive slaves, or competition over Western territories, would have eventually led to a war anyway. I’m skeptical that, in the limit t→∞, free and slave civilizations could coexist on the same continent, no matter how you juggled their political organization.
I’ll go further: it even seems possible to me that the Civil War ended too early, with the South not decimated enough. After World War II, Japan and Germany were successfully dissuaded even from “lite” versions of their previous plans, and rebuilt themselves on very different principles. By contrast, as we all know, the American South basically refused for the next century to admit it had lost: it didn’t try to secede again, but it did use every means available to it to reinstate de facto slavery or something as close to that as possible. All the civil-rights ideals of the 1960s had already been clearly articulated in the 1860s, but it took another hundred years for them to get implemented. Even today, with a black President, the intellectual heirs of the Confederacy remain a force to be reckoned with in the US, trying (for example) to depress minority voter turnout through ID laws, gerrymandering, and anything else they think they can possibly get away with. The irony, of course, is that the neo-Confederates now constitute a nontrivial fraction of what they proudly call “the party of Lincoln.” (Look at the map of blue vs. red states, and compare it to the Mason-Dixon line. Even the purple states correspond reasonably well to the vacillating border states of 1861.)
So that’s why it seems important to have a movie every once in a while that shows the moral courage of people like Lincoln and Thaddeus Stevens, and that names and shames the enthusiastic defenders of slavery—because while the abolitionists won the battle, on some fronts we’re still fighting the war.
(Note: The “2!” in the title of this post really does mean “2 factorial,” if you want it to.)
With the end of the semester upon us, it’s time for a once-every-two-year tradition: showcasing student projects from my 6.845 Quantum Complexity Theory graduate course at MIT. For my previous showcase, in 2010, I chose six projects that I thought were especially outstanding. This year, however, there were so many great projects—and so many, in particular, that could actually be useful to people in quantum computing—that I decided simply to open up the showcase to the whole class. I had 17 takers; their project reports and 10-minute presentation slides are below.
Let me mention a few projects that tried to do something new and audacious. Jenny Barry generalizes the notion of Partially Observable Markov Decision Processes (POMDPs) to the quantum case, and uses a recent result of Eisert et al., showing that certain problems in quantum measurement theory are undecidable (like, literally Turing-undecidable), to show that goal state reachability for “QOMDPs” is also Turing-undecidable (despite being decidable for classical POMDPs). Matt Falk suggests a novel quantum algorithm for spatial search on the 2D grid, and gives some numerical evidence that the algorithm finds a marked item in O(√n) time (which, if true, would be the optimal bound, beating the previous runtime of O(√(n log n))). Matt Coudron and Henry Yuen set out to prove that the Vazirani-Vidick protocol for quantum randomness expansion is optimal, and achieve some interesting partial results. Mohammad Bavarian (well, jointly with me) asks whether there’s a fast quantum algorithm for PARITY that gets the right answer on just slightly more than 50% of the inputs—and shows, rather surprisingly, that this question is closely related to some of the hardest open problems about Boolean functions, like sensitivity versus block sensitivity.
This year, though, I also want to call special attention to the survey projects, since some of them resulted in review articles that could be of real use to students and researchers in quantum computing theory. Notably, Adam Bookatz compiled the first list of essentially all known QMA-complete problems, analogous to (but shorter than!) Garey and Johnson’s listing of known NP-complete problems in 1979. Chris Graves surveyed the known quantum fault-tolerance bounds. Finally, three projects took on the task of understanding and explaining some of the most important recent results in quantum complexity theory: Travis Hance on Thomas Vidick and Tsuyoshi Ito’s NEXP in MIP* breakthrough; Emily Stark on Mark Zhandry’s phenomenal results on the security of classical cryptographic constructions against quantum attack; and Max Zimet on Jordan-Lee-Preskill’s major work on simulation of quantum field theories.
(Oops, sorry … did I use words like “important,” “breakthrough,” and “phenomenal” too often in that last sentence, thereby triggering the wrath of the theoretical computer science excitement police? Well then, come over to my apartment and friggin’ arrest me.)
Anyway, thanks so much to all the students for making 6.845 such an awesome class (at least on my end)! Without further ado, here’s the complete project showcase:
If the world ends today, at least it won’t do so without three identical photons having been used to sample from a probability distribution defined in terms of the permanents of 3×3 matrices, thereby demonstrating the Aaronson-Arkhipov BosonSampling protocol. And the results were obtained by no fewer than four independent experimental groups, some of whom have now published in Science. One of the groups is based in Brisbane, Australia, one in Oxford, one in Vienna, and one in Rome; they coordinated to release their results the same day. That’s right, the number of papers (4) that these groups managed to choreograph to appear simultaneously actually exceeds the number of photons (3) that they managed to get to interfere. The Brisbane group was even generous enough to ask me to coauthor: I haven’t been within 10,000 miles of their lab, but I did try to make myself useful to them as a “complexity theory consultant.”
Here are links to the four experimental BosonSampling papers released in the past week:
- Experimental BosonSampling by Broome et al. (Brisbane)
- Experimental Boson Sampling by Tillmann et al. (Vienna)
- Experimental Boson Sampling by Walmsley et al. (Oxford)
- Experimental boson sampling in arbitrary integrated photonic circuits by Crespi et al. (Italy)
For those who want to know the theoretical background to this work:
- My and Alex’s original 100-page BosonSampling paper (to appear soon in the journal Theory of Computing)
- The 10-page STOC’2011 version of our paper
- My PowerPoint slides
- Alex’s slides
- Theoretical Computer Science StackExchange question and answer
- Gil Kalai’s blog post
- Old Shtetl-Optimized post
For those just tuning in, here are some popular-level articles about BosonSampling:
- Larry Hardesty’s MIT News article (from last year)
- University of Queensland press release
- Victorian counting device gets speedy quantum makeover (this week, from New Scientist; the article is not bad except that it ought to credit Alex Arkhipov)
- New Form of Quantum Computation Promises Showdown with Ordinary Computers, by Adrian Cho (from Science)
I’ll be happy to answer further questions in the comments; for now, here’s a brief FAQ:
Q: Why do you need photons in particular for these experiments?
A: What we need is identical bosons, whose transition amplitudes are given by the permanents of matrices. If it were practical to do this experiment with Higgs bosons, they would work too! But photons are more readily available.
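(For concreteness, here’s what that means computationally—a minimal Python sketch of the matrix permanent, which is just the determinant with all the minus signs deleted. The brute-force sum below has n! terms; Ryser’s formula gets this down to O(2^n·n), but that’s still exponential, and that exponentiality is the source of BosonSampling’s classical hardness.)

```python
from itertools import permutations
from math import prod

def permanent(A):
    """Permanent of an n-by-n matrix A (entries may be complex):
    a sum over all n! permutations, with no alternating signs."""
    n = len(A)
    return sum(prod(A[i][sigma[i]] for i in range(n))
               for sigma in permutations(range(n)))

# In BosonSampling, the amplitude of each outcome of an n-photon
# experiment is (up to normalization) the permanent of an n-by-n
# submatrix of the interferometer's unitary matrix.
print(permanent([[1, 2], [3, 4]]))      # 1*4 + 2*3 = 10
```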
Q: But a BosonSampling device isn’t really a “computer,” is it?
A: It depends what you mean by “computer”! If you mean a physical system that you load input into, let evolve according to the laws of physics, then measure to get an answer to a well-defined mathematical problem, then sure, it’s a computer! The only question is whether it’s a useful computer. We don’t believe it can be used as a universal quantum computer—or even, for that matter, as a universal classical computer. More than that, Alex and I weren’t able to show that solving the BosonSampling problem has any practical use for anything. However, we did manage to amass evidence that, despite being useless, the BosonSampling problem is also hard (at least for a classical computer). And for us, the hardness of classical simulation was the entire point.
Q: So, these experiments reported in Science this week have done something that no classical computer could feasibly simulate?
A: No, a classical computer can handle the simulation of 3 photons without much—or really, any—difficulty. This is only a first step: before this, the analogous experiment (called the Hong-Ou-Mandel dip) had only ever been performed with 2 photons, for which there’s not even any difference in complexity between the permanent and the determinant (i.e., between bosons and fermions). However, if you could scale this experiment up to about 30 photons, then it’s likely that the experiment would be solving the BosonSampling problem faster than any existing classical computer (though the latter could eventually solve the problem as well). And if you could scale it up to 100 photons, then you might never even know if your experiment was working correctly, because a classical computer would need such an astronomical amount of time to check the results.
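(A quick back-of-envelope supporting those numbers—counting multiplications for a single permanent evaluation, by the naive n!-term sum and by Ryser’s O(2^n·n) formula, and ignoring constants and the overhead of sampling many outcomes:

```python
from math import factorial

for n in (3, 30, 100):
    naive = factorial(n) * n        # brute-force sum over permutations
    ryser = 2 ** n * n              # Ryser's inclusion-exclusion formula
    print(f"n={n}: naive ~{naive:.1e}, Ryser ~{ryser:.1e}")
```

At n=3, either method is instantaneous; at n=30, Ryser’s ~10^10 operations put a single permanent within a laptop’s reach but a full sampling simulation near the edge of feasibility; at n=100, the ~10^32 operations per permanent are out of reach for any foreseeable classical computer.)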
Last Friday, I was at a “Symposium on the Nature of Proof” at UPenn, to give a popular talk about theoretical computer scientists’ expansions of the notion of mathematical proof (to encompass things like probabilistic, interactive, zero-knowledge, and quantum proofs). This really is some of the easiest, best, and most fun material in all of CS theory to popularize. Here are iTunes videos of my talk and the three others in the symposium: I’m video #2, logician Solomon Feferman is #3, attorney David Rudovsky is #4, and mathematician Dennis DeTurck is #5. Also, here are my PowerPoint slides. Thanks very much to Scott Weinstein at Penn for organizing the symposium.
In other news, the Complexity Zoo went down yet again this week, in a disaster that left vulnerable communities without access to vital resources like nondeterminism and multi-prover interaction. Luckily, computational power has since been restored: with help from some volunteers, I managed to get the Zoo up and running again on my BlueHost account. But while the content is there, it looks horrendously ugly; all the formatting seems to be gone. And the day I agreed to let the Zoo be ported to MediaWiki was the day I lost the ability to fix such problems. What I really need, going forward, is for someone else simply to take charge of maintaining the Zoo: it’s become painfully apparent both that it needs to be done and that I lack the requisite IT skills. If you want to take a crack at it, here’s an XML dump of the Zoo from a few months ago (I don’t think it’s really changed since then). You don’t even need to ask my permission: just get something running, and if it looks good, I’ll anoint you the next Zookeeper and redirect complexityzoo.com to point to your URL.
Update (Nov. 18): The Zoo is back up with the old formatting and graphics!! Thanks so much to Charles Fu for setting up the new complexity-zoo.net (as well as Ethan, who set up a slower site that tided us over). I’ve redirected complexityzoo.com to point to complexity-zoo.net, though it might take some time for your browser cache to clear.
Update (Nov. 8): Slate’s pundit scoreboard.
Update (Nov. 6): In crucial election news, a Florida woman wearing an MIT T-shirt was barred from voting, because the election supervisor thought her shirt was advertising Mitt Romney.
At the time of writing, Nate Silver is giving Obama an 86.3% chance. I accept his estimate, while vividly remembering various admittedly-cruder forecasts the night of November 7, 2000, which gave Gore an 80% chance. (Of course, those forecasts need not have been “wrong”; an event with 20% probability really does happen 20% of the time.) For me, the main uncertainties concern turnout and the effects of various voter-suppression tactics.
In the meantime, I wanted to call the attention of any American citizens reading this blog to the wonderful Election FAQ of Peter Norvig, director of research at Google and a person well-known for being right about pretty much everything. The following passage in particular is worth quoting.
Is it rational to vote?
Yes. Voting for president is one of the most cost-effective actions any patriotic American can take.
Let me explain what the question means. For your vote to have an effect on the outcome of the election, you would have to live in a decisive state, meaning a state that would give one candidate or the other the required 270th electoral vote. More importantly, your vote would have to break an exact tie in your state (or, more likely, shift the way that the lawyers and judges will sort out how to count and recount the votes). With 100 million voters nationwide, what are the chances of that? If the chance is so small, why bother voting at all?
Historically, most voters either didn’t worry about this problem, or figured they would vote despite the fact that they weren’t likely to change the outcome, or vote because they want to register the degree of support for their candidate (even a vote that is not decisive is a vote that helps establish whether or not the winner has a “mandate”). But then the 2000 Florida election changed all that, with its slim 537 vote (0.009%) margin.
What is the probability that there will be a decisive state with a very close vote total, where a single vote could make a difference? Statistician Andrew Gelman of Columbia University says about one in 10 million.
That’s a small chance, but what is the value of getting to break the tie? We can estimate the total monetary value by noting that President George W. Bush presided over a $3 trillion war and at least a $1 trillion economic melt-down. Senator Sheldon Whitehouse (D-RI) estimated the cost of the Bush presidency at $7.7 trillion. Let’s compromise and call it $6 trillion, and assume that the other candidate would have been revenue neutral, so the net difference of the presidential choice is $6 trillion.
The value of not voting is that you save, say, an hour of your time. If you’re an average American wage-earner, that’s about $20. In contrast, the value of voting is the probability that your vote will decide the election (1 in 10 million if you live in a swing state) times the cost difference (potentially $6 trillion). That means the expected value of your vote (in that election) was $600,000. What else have you ever done in your life with an expected value of $600,000 per hour? Not even Warren Buffett makes that much. (One caveat: you need to be certain that your contribution is positive, not negative. If you vote for a candidate who makes things worse, then you have a negative expected value. So do your homework before voting. If you haven’t already done that, then you’ll need to add maybe 100 hours to the cost of voting, and the expected value goes down to $6,000 per hour.)
I’d like to embellish Norvig’s analysis with one further thought experiment. While I favor a higher figure, for argument’s sake let’s accept Norvig’s estimate that the cost George W. Bush inflicted on the country was something like $6 trillion. Now, imagine that a delegation of concerned citizens from 2012 were able to go back in time to November 7, 2000, round up 538 lazy Gore supporters in Florida who otherwise would have stayed home, and bribe them to go to the polls. Set aside the illegality of the time-travelers’ action: they’re already violating the laws of space, time, and causality, which are well-known to be considerably more reliable than Florida state election law! Set aside all the other interventions that also would’ve swayed the 2000 election outcome, and the 20/20 nature of hindsight, and the insanity of Florida’s recount process. Instead, let’s simply ask: how much should each of those 538 lazy Floridian Gore supporters have been paid, in order for the delegation from the future to have gotten its money’s worth?
The answer is a mind-boggling ~$10 billion per voter. Think about that: just for peeling their backsides off the couch, heading to the local library or school gymnasium, and punching a few chads (all the way through, hopefully), each of those 538 voters would have instantly received the sort of wealth normally associated with Saudi princes or founders of Google or Facebook. And the country and the world would have benefited from that bargain.
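(For anyone who wants to check the arithmetic, here it is in a few lines—using Norvig’s assumed figures, which are of course the contestable part:

```python
p_decisive = 1 / 10_000_000   # Gelman's estimate, for a swing-state voter
cost_diff  = 6e12             # assumed dollar difference between the presidencies

print(p_decisive * cost_diff)   # 600000.0 -> Norvig's $600,000 per hour
print(cost_diff / 538 / 1e9)    # ~11.2    -> the "~$10 billion per voter" figure
```

The second print is the time-travel calculation: 538 extra Gore voters flip a 537-vote margin, so the delegation breaks even paying each voter up to $6 trillion divided by 538, i.e. a bit over $11 billion.)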
No, this isn’t really a decisive argument for anything (I’ll leave it to the commenters to point out the many possible objections). All it is, is an image worth keeping in mind the next time someone knowingly explains to you why voting is a waste of time.
The following question emerged from a conversation with the machine learning theorist Pedro Domingos a month ago.
Consider a hypothetical race of intelligent beings, the Armchairians, who never take any actions: never intervene in the world, never do controlled experiments, never try to build anything and see if it works. The sole goal of the Armchairians is to observe the world around them and, crucially, to make accurate predictions about what’s going to happen next. Would the Armchairians ever develop the notion of cause and effect? Or would they be satisfied with the notion of statistical correlation? Or is the question kind of silly, the answer depending entirely on what we mean by “developing the notion of cause and effect”? Feel free to opine away in the comments section.