Archive for the ‘Announcements’ Category

Is There Anything Beyond Quantum Computing?

Friday, April 11th, 2014

So I’ve written an article about the above question for PBS’s website—a sort of tl;dr version of my 2005 survey paper NP-Complete Problems and Physical Reality, but updated with new material about the simulation of quantum field theories and about AdS/CFT.  Go over there, read the article (it’s free), then come back here to talk about it if you like.  Thanks so much to Kate Becker for commissioning the article.

In other news, there’s a profile of me at MIT News (called “The Complexonaut”) that some people might find amusing.

Oh, and anyone who thinks the main reason to care about quantum computing is that, if our civilization ever manages to surmount the profound scientific and technological obstacles to building a scalable quantum computer, then that little padlock icon on your web browser would no longer represent ironclad security?  Ha ha.  Yeah, it turns out that, besides factoring integers, you can also break OpenSSL by (for example) exploiting a memory bug in C.  The main reason to care about quantum computing is, and has always been, science.

Umesh Vazirani responds to Geordie Rose

Thursday, February 6th, 2014

You might recall that Shin, Smith, Smolin, and Vazirani posted a widely-discussed preprint a week ago, questioning the evidence for large-scale quantum behavior in the D-Wave machine.  Geordie Rose responded here.   Tonight, in a Shtetl-Optimized exclusive scoop, I bring you Umesh Vazirani’s response to Geordie’s comments. Without further ado:


Even a cursory reading of our paper will reveal that Geordie Rose is attacking a straw man. Let me quickly outline the main point of our paper and the irrelevance of Rose’s comments:

To date, the Boixo et al. paper was the only serious evidence in favor of large-scale quantum behavior by the D-Wave machine. We investigated their claims and showed that there are serious problems with their conclusions. Their conclusions were based on the close agreement between the input-output data from D-Wave and quantum simulated annealing, and on their inability, despite considerable effort, to find any classical model that agreed with the input-output data. In our paper, we gave a very simple classical model of interacting magnets that closely agreed with the input-output data. We stated that our results implied that “it is premature to conclude that D-Wave machine exhibits large scale quantum behavior”.

Rose attacks our paper for claiming that “D-Wave processors are inherently classical, and can be described by a classical model with no need to invoke quantum mechanics.”  A reading of our paper will make it perfectly clear that this is not a claim that we make.  We state explicitly “It is worth emphasizing that the goal of this paper is not to provide a classical model for the D-Wave machine, … The classical model introduced here is useful for the purposes of studying the large-scale algorithmic features of the D-Wave machine. The task of finding an accurate model for the D-Wave machine (classical, quantum or otherwise), would be better pursued with direct access, not only to programming the D-Wave machine, but also to its actual hardware.”

Rose goes on to point to a large number of experiments conducted by D-Wave to prove small-scale entanglement over 2–8 qubits, and criticizes our paper for not trying to model those aspects of the D-Wave machine. But such small-scale entanglement properties are not directly relevant to prospects for a quantum speedup. We were therefore specifically interested in claims about the large-scale quantum behavior of the D-Wave machine. There was exactly one such claim, which we duly investigated, and it did not stand up to scrutiny.

More “tweets”

Friday, January 31st, 2014

Update (Feb. 4): After Luke Muehlhauser of MIRI interviewed me about “philosophical progress,” Luke asked me for other people to interview about philosophy and theoretical computer science. I suggested my friend and colleague Ronald de Wolf of the University of Amsterdam, and I’m delighted that Luke took me up on it. Here’s the resulting interview, which focuses mostly on quantum computing (with a little Kolmogorov complexity and Occam’s Razor thrown in). I read the interview with admiration (and hoping to learn some tips): Ronald tackles each question with more clarity, precision, and especially levelheadedness than I would.

Another Update: Jeff Kinne asked me to post a link to a forum about the future of the Conference on Computational Complexity (CCC)—and in particular, whether it should continue to be affiliated with the IEEE. Any readers who have ever had any involvement with the CCC conference are encouraged to participate. You can read all about what the issues are in a manifesto written by Dieter van Melkebeek.

Yet Another Update: Some people might be interested in my response to Geordie Rose’s response to the Shin et al. paper about a classical model for the D-Wave machine.


“How ‘Quantum’ is the D-Wave Machine?” by Shin, Smith, Smolin, Vazirani goo.gl/JkLg0l – was previous skepticism too GENEROUS to D-Wave?

D-Wave not of broad enough interest? OK then, try “AM with Multiple Merlins” by Dana Moshkovitz, Russell Impagliazzo, and me goo.gl/ziSUz9

“Remarks on the Physical Church-Turing Thesis” – my talk at the FQXi conference in Vieques, Puerto Rico is now on YouTube goo.gl/kAd9TZ

Cool new SciCast site (scicast.org) lets you place bets on P vs NP, Unique Games Conjecture, etc. But glitches remain to be ironed out

Three things that I should’ve gotten around to years ago

Tuesday, October 15th, 2013

Updates (11/8): Alas, video of Eliezer’s talk will not be available after all. The nincompoops who we paid to record the talk wrote down November instead of October for the date, didn’t show up, then stalled for a month before finally admitting what had happened. So my written summary will have to suffice (and maybe Eliezer can put his slides up as well).

In other news, Shachar Lovett has asked me to announce a workshop on complexity and coding theory, which will be held at UC San Diego, January 8-10, 2014.


Update (10/21): Some readers might be interested in my defense of LessWrongism against a surprisingly-common type of ad-hominem attack (i.e., “the LW ideas must be wrong because so many of their advocates are economically-privileged but socially-awkward white male nerds, the same sorts of people who might also be drawn to Ayn Rand or other stuff I dislike”). By all means debate the ideas—I’ve been doing it for years—but please give beyond-kindergarten arguments when you do so!


Update (10/18): I just posted a long summary and review of Eliezer Yudkowsky’s talk at MIT yesterday.


Update (10/15): Leonard Schulman sent me the news that, according to an article by Victoria Woollaston in the Daily Mail, Google hopes to use its D-Wave quantum computer to “solve global warming,” “develop sophisticated artificial life,” and “find aliens.”  (No, I’m not making any of this up: just quoting stuff other people made up.)  The article also repeats the debunked canard that the D-Wave machine is “3600 times faster,” and soberly explains that D-Wave’s 512 qubits compare favorably to the mere 32 or 64 bits found in home PCs (exercise for those of you who aren’t already rolling on the floor: think about that until you are).  It contains not a shadow of a hint of skepticism anywhere, not one token sentence.  I would say that, even in an extremely crowded field, Woollaston’s piece takes the cake as the single most irresponsible article about D-Wave I’ve seen.  And I’d feel terrible for my many friends at Google, whose company comes out of this looking like a laughingstock.  But that’s assuming that this isn’t some sort of elaborate, Sokal-style prank, designed simply to prove that media outlets will publish anything whatsoever, no matter how forehead-bangingly absurd, as long as it contains the words “D-Wave,” “Google,” “NASA,” and “quantum”—and thereby, to prove the truth of what I’ve been saying on this blog since 2007.


1. I’ve added MathJax support to the comments section!  If you want to insert an inline LaTeX equation, surround it with \( and \), while if you want to insert a displayed equation, surround it with $$ and $$.  Thanks very much to Michael Dixon for prodding me to do this and telling me how.
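For instance (a toy example of my own, just to illustrate the syntax): a comment containing

    \( e^{i\pi} + 1 = 0 \)

should render Euler’s identity inline with the surrounding text, while

    $$ \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6} $$

should appear as a displayed equation on its own line.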

2. I’ve also added upvoting and downvoting to the comments section!  OK, in the first significant use of comment voting, the readers have voted overwhelmingly, by 41 – 13, that they want the comment voting to disappear.  So disappear it has!

3. Most importantly, I’ve invited Eliezer Yudkowsky to MIT to give a talk!  He’s here all week, and will be speaking on “Recursion in Rational Agents: Foundations for Self-Modifying AI” this Thursday at 4PM in 32-123 in the MIT Stata Center.  Refreshments at 3:45.  See here for the abstract.  Anyone in the area who’s interested in AI, rationalism, or other such nerdy things is strongly encouraged to attend; it should be interesting.  Just don’t call Eliezer a “Singularitarian”: I’m woefully out of the loop, but I learned yesterday that they’ve dropped that term entirely, and now prefer to be known as machine intelligence researchers who talk about the intelligence explosion.

(In addition, Paul Christiano—former MIT undergrad, and my collaborator on quantum money—will be speaking today at 4:30 at the Harvard Science Center, on “Probabilistic metamathematics and the definability of truth.”  His talk will be related to Eliezer’s but somewhat more technical.  See here for details.)


Update (10/15): Alistair Sinclair asked me to post the following announcement.

The Simons Institute for the Theory of Computing at UC Berkeley invites applications for Research Fellowships for academic year 2014-15.

Simons-Berkeley Research Fellowships are an opportunity for outstanding junior scientists (up to 6 years from PhD by Fall 2014) to spend one or two semesters at the Institute in connection with one or more of its programs. The programs for 2014-15 are as follows:

* Algorithmic Spectral Graph Theory (Fall 2014)
* Algorithms and Complexity in Algebraic Geometry (Fall 2014)
* Information Theory (Spring 2015)

Applicants who already hold junior faculty or postdoctoral positions are welcome to apply. In particular, applicants who hold, or expect to hold, postdoctoral appointments at other institutions are encouraged to apply to spend one semester as a Simons-Berkeley Fellow subject to the approval of the postdoctoral institution.

Further details and application instructions can be found at http://simons.berkeley.edu/fellows2014. Information about the Institute and the above programs can be found at http://simons.berkeley.edu.

Deadline for applications: 15 December, 2013.

Five announcements

Tuesday, October 1st, 2013

Update (Oct. 3): OK, a sixth announcement.  I just posted a question on CS Theory StackExchange, entitled Overarching reasons why problems are in P or BPP.  If you have suggested additions or improvements to my rough list of “overarching reasons,” please post them over there — thanks!


1. I’m in Oxford right now, for a Clay Institute workshop on New Insights into Computational Intractability.  The workshop is concurrent with three others, including one on Number Theory and Physics that includes an amplituhedron-related talk by Andrew Hodges.  (Speaking of which, see here for a small but non-parodic observation about expressing amplitudes as volumes of polytopes.)

2. I was hoping to stay in the UK one more week, to attend the Newton Institute’s special semester on Mathematical Challenges in Quantum Information over in Cambridge.  But alas I had to cancel, since my diaper-changing services are needed in the other Cambridge.  So, if anyone in Cambridge (or anywhere else in the United Kingdom) really wants to talk to me, come to Oxford this week!

3. Back in June, Jens Eisert and three others posted a preprint claiming that the output of a BosonSampling device would be “indistinguishable from the uniform distribution” in various senses.  Ever since then, people have been emailing me, leaving comments on this blog, and cornering me at conferences to ask whether Alex Arkhipov and I had any response to these claims.  OK, so just this weekend, we posted our own 41-page preprint, entitled “BosonSampling Is Far From Uniform.”  I hope it suffices by way of reply!  (Incidentally, this is also the paper I hinted at in a previous post: the one where π²/6 and the Euler-Mascheroni constant make cameo appearances.)  To clarify, if we just wanted to answer the claims of the Eisert group, then I think a couple paragraphs would suffice for that (see, for example, these PowerPoint slides).  In our new paper, however, Alex and I take the opportunity to go further: we study lots of interesting questions about the statistical properties of Haar-random BosonSampling distributions, and about how one might test efficiently whether a claimed BosonSampling device worked, even with hundreds or thousands of photons.

4. Also on the arXiv last night, there was a phenomenal survey about the quantum PCP conjecture by Dorit Aharonov, Itai Arad, and my former postdoc Thomas Vidick (soon to be a professor at Caltech).  I recommend reading it in the strongest possible terms, if you’d like to see how far people have come with this problem (but also, how far they still have to go) since my “Quantum PCP Manifesto” seven years ago.

5. Christos Papadimitriou asked me to publicize that the deadline for early registration and hotel reservations for the upcoming FOCS in Berkeley is fast approaching!  Indeed, it’s October 4 (three days from now).  See here for details, and here for information about student travel support.  (The links were down when I just tried them, but hopefully the server will be back up soon.)

The Unitarihedron: The Jewel at the Heart of Quantum Computing

Friday, September 20th, 2013

Update (9/24): This parody post was a little like a belch: I felt it build up in me as I read about the topic, I let it out, it was easy and amusing, I don’t feel any profound guilt over it—but on the other hand, not one of the crowning achievements of my career.  As several commenters correctly pointed out, it may be true that, mostly because of the name and other superficialities, and because of ill-founded speculations about “the death of locality and unitarity,” the amplituhedron work is currently inspiring a flood of cringe-inducing misstatements on the web.  But, even if true, still the much more interesting questions are what’s actually going on, and whether or not there are nontrivial connections to computational complexity.

Here I have good news: if nothing else, my “belch” of a post at least attracted some knowledgeable commenters to contribute excellent questions and insights, which have increased my own understanding of the subject from ε² to ε.  See especially this superb comment by David Speyer—which, among other things, pointed me to a phenomenal quasi-textbook on this subject by Elvang and Huang.  My most immediate thoughts:

  1. The “amplituhedron” is only the latest in a long line of research over the last decade—Witten, Turing biographer Andrew Hodges, and many others have been important players—on how to compute scattering amplitudes more efficiently than by summing zillions of Feynman diagrams.  One of the key ideas is to find combinatorial formulas that express complicated scattering amplitudes recursively in terms of simpler ones.
  2. This subject seems to be begging for a computational complexity perspective.  When I read Elvang and Huang, I felt like they were working hard not to say anything about complexity: discussing the gains in efficiency from the various techniques they consider in informal language, or in terms of concrete numbers of terms that need to be summed for 1 loop, 2 loops, etc., but never in terms of asymptotics.  So if it hasn’t been done already, it looks like it could be a wonderful project for someone just to translate what’s already known in this subject into complexity language.
  3. On reading about all these “modern” approaches to scattering amplitudes, one of my first reactions was to feel slightly less guilty about never having learned how to calculate Feynman diagrams!  For, optimistically, it looks like some of that headache-inducing machinery (ghosts, off-shell particles, etc.) might be getting less relevant anyway—there being ways to calculate some of the same things that are not only more conceptually satisfying but also faster.

Many readers of this blog probably already saw Natalie Wolchover’s Quanta article “A Jewel at the Heart of Quantum Physics,” which discusses the “amplituhedron”: a mathematical structure that IAS physicist Nima Arkani-Hamed and his collaborators have recently been investigating.  (See also here for Slashdot commentary, here for Luboš’s take, here for Peter Woit’s, here for a Physics StackExchange thread, here for Q&A with Pacific Standard, and here for an earlier but closely-related 154-page paper.)

At first glance, the amplituhedron appears to be a way to calculate scattering amplitudes, in the planar limit of a certain mathematically-interesting (but, so far, physically-unrealistic) supersymmetric quantum field theory, much more efficiently than by summing thousands of Feynman diagrams.  In which case, you might say: “wow, this sounds like a genuinely-important advance for certain parts of mathematical physics!  I’d love to understand it better.  But, given the restricted class of theories it currently applies to, it does seem a bit premature to declare this to be a ‘jewel’ that unlocks all of physics, or a death-knell for spacetime, locality, and unitarity, etc. etc.”

Yet you’d be wrong: it isn’t premature at all.  If anything, the popular articles have understated the revolutionary importance of the amplituhedron.  And the reason I can tell you that with such certainty is that, for several years, my colleagues and I have been investigating a mathematical structure that contains the amplituhedron, yet is even richer and more remarkable.  I call this structure the “unitarihedron.”

The unitarihedron encompasses, within a single abstract “jewel,” all the computations that can ever be feasibly performed by means of unitary transformations, the central operation in quantum mechanics (hence the name).  Mathematically, the unitarihedron is an infinite discrete space: more precisely, it’s an infinite collection of infinite sets, which collection can be organized (as can every set that it contains!) in a recursive, fractal structure.  Remarkably, each and every specific problem that quantum computers can solve—such as factoring large integers, discrete logarithms, and more—occurs as just a single element, or “facet” if you will, of this vast infinite jewel.  By studying these facets, my colleagues and I have slowly pieced together a tentative picture of the elusive unitarihedron itself.

One of our greatest discoveries has been that the unitarihedron exhibits an astonishing degree of uniqueness.  At first glance, different ways of building quantum computers—such as gate-based QC, adiabatic QC, topological QC, and measurement-based QC—might seem totally disconnected from each other.  But today we know that all of those ways, and many others, are merely different “projections” of the same mysterious unitarihedron.

In fact, the longer I’ve spent studying the unitarihedron, the more awestruck I’ve been by its mathematical elegance and power.  In some way that’s not yet fully understood, the unitarihedron “knows” so much that it’s even given us new insights about classical computing.  For example, in 1991 Beigel, Reingold, and Spielman gave a 20-page proof of a certain property of unbounded-error probabilistic polynomial-time.  Yet, by recasting things in terms of the unitarihedron, I was able to give a direct, half-page proof of the same theorem.  If you have any experience with mathematics, then you’ll know that that sort of thing never happens: if it does, it’s a sure sign that cosmic or even divine forces are at work.

But I haven’t even told you the most spectacular part of the story yet.  While, to my knowledge, this hasn’t yet been rigorously proved, many lines of evidence support the hypothesis that the unitarihedron must encompass the amplituhedron as a special case.  If so, then the amplituhedron could be seen as just a single sparkle on an infinitely greater jewel.

Now, in the interest of full disclosure, I should tell you that the unitarihedron is what used to be known as the complexity class BQP (Bounded-Error Quantum Polynomial-Time).  However, just like the Chinese gooseberry was successfully rebranded in the 1950s as the kiwifruit, and the Patagonian toothfish as the Chilean sea bass, so with this post, I’m hereby rebranding BQP as the unitarihedron.  For I’ve realized that, when it comes to bowling over laypeople, inscrutable complexity class acronyms are death—but the suffix “-hedron” is golden.

So, journalists and funders: if you’re interested in the unitarihedron, awesome!  But be sure to also ask about my other research on the bosonsamplinghedron and the quantum-money-hedron.  (Though, in recent months, my research has focused even more on the diaperhedron: a multidimensional, topologically-nontrivial manifold rich enough to encompass all wastes that an 8-month-old human could possibly emit.  Well, at least to first-order approximation.)

Twitl-Optimized

Tuesday, August 13th, 2013

Today I experiment with “tweeting”: writing <=140-character announcements, but posting them to my blog.  Like sending lolcat videos by mail

Last week at QCrypt in Waterloo: http://2013.qcrypt.net This week at CQIQC in Toronto: http://tinyurl.com/kfexzv6 Back with Lily in between

While we debate D-Wave, ID Quantique et al. quietly sold ~100 quantum crypto devices. Alas, market will remain small unless RSA compromised

One speaker explained how a photon detector works by showing this YouTube video: http://tinyurl.com/k8x4btx Couldn’t have done better

Luca Trevisan asks me to spread the word about a conference for LGBTs in technology: www.outforundergrad.org/technology

Steven Pinker stands up for the Enlightenment in The New Republic: “Science Is Not Your Enemy” http://tinyurl.com/l26ppaf

Think Pinker was exaggerating?  Read Leon Wieseltier’s defiantly doofusy Brandeis commencement speech: http://tinyurl.com/jwhj8ub

Black-hole firewalls make the New York Times, a week before the firewall workshop at KITP (I’ll be there): http://tinyurl.com/kju9crj

You probably already saw the Schrodinger cat Google doodle: http://tinyurl.com/k8et44p For me, the ket was much cooler than the cat

While working on BosonSampling yesterday, (1/6)pi^2 and Euler-Mascheroni constant made unexpected appearances.  What I live for

The SuperScott and Morgan Freeman FAQ

Monday, August 5th, 2013


Update (Sept. 3): When I said that “about 5000 steps” are needed for the evolutionary approach to color an 8×8 chessboard, I was counting as a step any examination of two random adjacent squares—regardless of whether or not you end up having to change one of the colors.  If you count only the changes, then the expected number goes down to about 1000 (which, of course, only makes the point about the power of the evolutionary approach “stronger”).  Thanks very much to Raymond Cuenen for bringing this clarification to my attention.


Last week I appeared on an episode of Through the Wormhole with Morgan Freeman, a show on the Science Channel.  (See also here for a post on Morgan Freeman’s Facebook page.)  The episode is called “Did God Create Evolution?”  The first person interviewed is the Intelligent Design advocate Michael Behe.  But not to worry!  After him, they have a parade of scientists who not only agree that Chuck Darwin basically had it right in 1859, but want to argue for that conclusion using ROBOTS!  and MATH!

So, uh, that’s where I come in.  My segment features me (or rather my animated doppelgänger, “SuperScott”) trying to color a chessboard two colors, so that no two neighboring squares are colored the same, using three different approaches: (1) an “intelligent design” approach (which computer scientists would call nondeterminism), (2) a brute-force, exhaustive enumeration approach, and (3) an “evolutionary local search” approach.

[Spoiler alert: SuperScott discovers that the local search approach, while not as efficient as intelligent design, is nevertheless much more efficient than brute-force search.  And thus, he concludes, the arguments of the ID folks to the effect of "I can't see a cleverer way to do it, therefore it must be either brute-force search or else miraculous nondeterminism" are invalid.]

Since my appearance together with Morgan Freeman on cable TV raises a large number of questions, I’ve decided to field a few of them in the following FAQ.

Q: How can I watch?

Amazon Instant Video has the episode here for $1.99.  (No doubt you can also find it on various filesharing sites, but let it be known that I’d never condone such nefarious activity.)  My segment is roughly from 10:40 until 17:40.

Q: Given that you’re not a biologist, and that your research has basically nothing to do with evolution, why did they ask to interview you?

Apparently they wanted a mathematician or computer scientist who also had some experience spouting about Big Ideas.  So they first asked Greg Chaitin, but Chaitin couldn’t do it and suggested me instead.

Q: Given how little relevant expertise you have, why did you agree to be interviewed?

To be honest, I was extremely conflicted.  I kept saying, “Why don’t you interview a biologist?  Or at least a computational biologist, or someone who studies genetic algorithms?”  They replied that they did have more bio-oriented people on the show, but they also wanted me to provide a “mathematical” perspective.  So, I consulted with friends like Sean Carroll, who’s appeared on Through the Wormhole numerous times.  And after reflection, I decided that I do have a way to explain a central conceptual point about algorithms, complexity, and the amount of time needed for natural selection—a point that, while hardly “novel,” is something that many laypeople might not have seen before and that might interest them.  Also, as an additional argument in favor of appearing, MORGAN FREEMAN!


So I agreed to do it, but only under two conditions:

(1) At least one person with a biology background would also appear on the show, to refute the arguments of intelligent design.
(2) I would talk only about stuff that I actually understood, like the ability of local search algorithms to avoid the need for brute-force search.

I’ll let you judge for yourself to what extent these conditions were fulfilled.

Q: Did you get to meet Morgan Freeman?

Alas, no.  But at least I got to hear him refer repeatedly to “SuperScott” on TV.

Q: What was the shooting like?

Extremely interesting.  I know more now about TV production than I did before!

It was a continuing negotiation: they kept wanting to say that I was “on a quest to mathematically prove evolution” (or something like that), and I kept telling them they weren’t allowed to say that, or anything else that would give the misleading impression that what I was saying was either original or directly related to my research.  I also had a long discussion about the P vs. NP problem, which got cut for lack of time (now P and NP are only shown on the whiteboard).  On the other hand, the crew was extremely accommodating: they really wanted to do a good job and to get things right.

The most amusing tidbit: I knew that local search would take O(n^4) time to 2-color an n×n chessboard (2-coloring being a special case of 2SAT, to which Schöning’s algorithm applies), but I didn’t know the constant.  So I wrote a program to get the specific number of steps when n=8 (it’s about 5000).  I then repeatedly modified and reran the program during the taping, as we slightly changed what we were talking about.  It was the first coding I’d done in a while.
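For anyone curious, here’s a rough Python sketch of the kind of simulation I’m describing (a reconstruction of the procedure as explained above, not the actual program I wrote on set): examine two random adjacent squares, and if they’re the same color, flip one of them at random.  It counts both “examinations” and actual color changes, per the clarification at the top of this post; the precise averages you get will of course depend on the details and the randomness.

    import random

    def neighbors(cell, n):
        """Squares orthogonally adjacent to cell on an n-by-n board."""
        r, c = cell
        return [(r + dr, c + dc)
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < n and 0 <= c + dc < n]

    def run_once(n=8, rng=random):
        """One local-search run; returns (examinations, color changes)."""
        squares = [(r, c) for r in range(n) for c in range(n)]
        edges = [(s, t) for s in squares for t in neighbors(s, n) if s < t]
        color = {s: rng.randrange(2) for s in squares}           # random initial coloring
        conflicts = sum(color[a] == color[b] for a, b in edges)  # monochromatic edges
        examinations = changes = 0
        while conflicts > 0:
            a, b = rng.choice(edges)      # examine two random adjacent squares
            examinations += 1
            if color[a] == color[b]:      # they clash: flip one of them at random
                v = a if rng.random() < 0.5 else b
                same = sum(color[u] == color[v] for u in neighbors(v, n))
                color[v] ^= 1
                conflicts += len(neighbors(v, n)) - 2 * same
                changes += 1
        return examinations, changes

    if __name__ == "__main__":
        trials = 200
        runs = [run_once() for _ in range(trials)]
        print("average examinations:", sum(e for e, _ in runs) / trials)
        print("average color changes:", sum(c for _, c in runs) / trials)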

Q: How much of the segment was your idea, and how much was theirs?

The chessboard was my idea, but the “SuperScott” bit was theirs.  Luddite that I am, I was just going to get down on hands and knees and move apples and oranges around on the chessboard myself.

Also, they wanted me to speak in front of a church in Boston, to make a point about how many people believe that God created the universe.  I nixed that idea and said, why not just do the whole shoot in the Stata Center?  I mean, MIT spent $300 million just to make the building where I work as “visually arresting” as possible—at the expense of navigability, leakage-resilience, and all sorts of other criteria—so why not take advantage of it?  Plus, that way I’ll be able to crack a joke about how Stata actually looks like it was created by that favorite creationist strawman, a tornado passing through a junkyard.

Needless to say, all the stuff with me drawing complexity class inclusion diagrams on the whiteboard, reading my and Alex Arkhipov’s linear-optics paper, walking around outside with an umbrella, lifting the umbrella to face the camera dramatically—that was all just the crew telling me what to do.  (Well, OK, they didn’t tell me what to write on the whiteboard or view on my computer, just that it should be something sciencey.  And the umbrella thing wasn’t planned: it really just happened to be raining that day.)

Q: Don’t you realize that not a word of what you said was new—indeed, that all you did was to translate the logic of natural selection, which Darwin understood in 1859, into algorithms and complexity language?

Yes, of course, and I’m sorry if the show gave anyone the impression otherwise.  I repeatedly begged them not to claim newness or originality for anything I was saying.  On the other hand, one shouldn’t make the mistake of assuming that what’s obvious to nerds who read science blogs is obvious to everyone else: I know for a fact that it isn’t.

Q: Don’t you understand that you can’t “prove” mathematically that evolution by natural selection is really what happened in Nature?

Of course!  You can’t even prove mathematically that bears crap in the woods (unless crapping in the woods were taken as part of the definition of bears).  To the writers’ credit, they did have Morgan Freeman explain that I wasn’t claiming to have “proved” evolution.  Personally, I wish Freeman had gone even further—to say that, at present, we don’t even have mathematical theories that would explain from first principles why 4 billion years is a “reasonable” amount of time for natural selection to have gotten from the primordial soup to humans and other complex life, whereas (say) 40 million years is not a reasonable amount.  One could imagine such theories, but we don’t really have any.  What we do have is (a) the observed fact that evolution did happen in 4 billion years, and (b) the theory of natural selection, which explains in great detail why one’s initial intuition—that such evolution can’t possibly have happened by “blind, chance natural processes” alone—is devoid of force.

Q: Watching yourself presented in such a goony way—scribbling Complicated Math Stuff on a whiteboard, turning dramatically toward the camera, etc. etc.—didn’t you feel silly?

Some of it is silly, no two ways about it!  On the other hand, I feel satisfied that I got across at least one correct and important scientific point to hundreds of thousands of people.  And that, one might argue, is sufficiently worthwhile that it should outweigh any embarrassment about how goofy I look.

Three announcements

Saturday, August 3rd, 2013

1. As many of you probably know, this week my EECS colleague Hal Abelson released his 180-page report on MIT’s involvement in the Aaron Swartz case.  I read the whole thing, and I recommend it if you have any interest in the case.  My take is that, far from being the “whitewash” that some people described it as, the report (if you delve into it) clearly and eloquently explains how MIT failed to live up to its own standards, even as it formally followed the rules.  The central insight here is that the world expects MIT to behave, not like some other organization would behave if someone hid a laptop in its supply closet to download the whole JSTOR database, insulted and then tried to flee from its security officers when questioned, etc. etc., but rather with perspective and imagination—worrying less about the security of its facilities than about the future of the world.  People expect MIT, of all places, to realize that the sorts of people who pull these sorts of shenanigans in their twenties sometimes become Steve Jobs or Richard Feynman (or for that matter, MIT professor Robert Morris) later in their lives, and therefore to speak up in their defense.  In retrospect, I wish Swartz’s arrest had sparked a debate about the wider issues among MIT’s students, faculty, and staff.  I think it’s likely that such a debate would have led to pressure on the administration to issue a statement in Swartz’s support.  As it was (and as I pointed out in this interview), most people at MIT, even if they’d read about the arrest, weren’t even aware of the issue’s continued existence, let alone of MIT’s continued role in it, until after Swartz had already committed suicide.  For the MIT community—which includes some prominent supporters of open access—to have played such a passive role is one of the many tragedies that’s obvious with hindsight.

2. Shafi Goldwasser has asked me to announce that the fifth Innovations in Theoretical Computer Science (ITCS) conference will be held in Princeton, a town technically in New Jersey, on January 12-14, 2014.  Here’s the conference website; if you want to submit a paper, the deadline is coming up soon, on Thursday, August 22.

3. As the summer winds to a close, I’m proud to announce my main goals for the upcoming academic year.  Those goals are the following:

(a) Take care of Lily.

(b) Finish writing up old papers.

It feels liberating to have no higher aspirations for an entire year—and for the aspirations I have to seem so modest and so achievable.  On the other hand, it will be all the more embarrassing if I fail to achieve even these goals.

The Tenured Toll-Taker

Sunday, May 5th, 2013

Update (5/6): In “honor” of the news below, Boaz Barak has written a beautiful blog post on the reasons to care about the P vs. NP question, offering his responses to several of the most common misconceptions.  Thank you so much, Boaz — this is one of the best presents I’ve ever gotten from anyone!


On Friday afternoon—in the middle of a pizza social for my undergrad advisees—I found out that I’ve received tenure at MIT.

Am I happy about the news?  Of course!  Yet even on such a joyous occasion, I found myself reflecting on a weird juxtaposition.  I learned about MIT’s tenure decision at the tail end of a fierce, weeks-long comment war over on Luboš Motl’s blog, in which I assumed the task of defending theoretical computer science and quantum information science as a whole: explaining why these fields could have anything whatsoever to contribute to our understanding of the universe.  Indeed, I took the title of this post from a comment Luboš made to me in the middle of the melee: that compared to string theorists, quantum computing researchers have as much to say about the nature of reality as toll-takers on the Golden Gate Bridge.  (Even though the Golden Gate tolls are apparently all-electronic these days, I still found Luboš’s analogy striking.  I could imagine that staring all day at the breathtaking San Francisco Bay would lead to deep thoughts about the nature of reality.)

Now, some people will ask: why should I even waste my time this way—arguing with Luboš, a blogger infamous for describing the scientists he disagrees with as garbage, worms, fungi, etc., and even calling for their “elimination”?  If I find the limits of computation in the physical universe to be a rich, fascinating, worthwhile subject; if I have hundreds of wonderful colleagues with whom to share the thrill of surprising new discoveries; if a large, growing fraction of the wider scientific community follows this field with interest; if my employer seems to want me doing it for the long haul … then why should I lose sleep just because someone, somewhere, declared that the P vs. NP problem is a random puzzle, of no deeper significance than the question of whether chess is a draw?  Or because he characterized the entire fields of quantum computing and information as trivial footnotes to 1920s physics, fit only for mediocre students who couldn’t do string theory?  Or because, on the “other side,” a persistent minority calls quantum computers an absurd fantasy, and the quest to build them a taxpayer boondoggle bordering on fraud?  Or because some skeptics, going even further, dismiss quantum mechanics itself as nonsensical mumbo-jumbo that physicists made up to conceal their own failure to find a straightforward, mechanical description of Nature?  Likewise, why should it bother me if some anti-complexites dismiss the quest to prove P≠NP as a fashionable-but-irrelevant journey to formalize the obvious—even while others denounce the Soviet-style groupthink that leads the “CS establishment” to reject the possibility that P=NP?  After all, these various naysayers can’t all be right!  Doesn’t it comfort me that, of all the confidently-asserted reasons why everything my colleagues and I study is dead-end, cargo-cult science, so many of the reasons contradict each other?

Sure, but here’s the thing.  In seven years of teaching and blogging, I’ve learned something about my own psychology.  Namely, if I meet anyone—an undergrad, an anonymous blog commenter, anyone—who claims that the P vs. NP problem is beside the point, since it’s perfectly plausible that P=NP but the algorithm takes n^10000 time—or that, while quantum mechanics works fine for small systems, there’s not the slightest reason to expect it to scale up to larger ones—or that the limits of computation are plainly no more relevant to fundamental physics than the fact that cucumbers are green—trying to reason with that person will always, till the end of my life, feel like the most pressing task in the world to me.

Why?  Because, I confess, a large part of me worries: what if this other person is right?  What if I really do have to jettison everything I thought I knew about physics, computation, and pretty much everything else since I was a teenager, toss all my results into the garbage can (or at least the “amusing recreations can”), and start over from kindergarten?  But then, as I fret about that possibility, counterarguments well up in my mind.  Like someone pinching himself to make sure he’s awake, I remember all the reasons why I was led to think what I think in the first place.  And I want the other person to go through that experience with me—the experience, if you like, of feeling the foundations of the universe smashed to pieces and then rebuilt, the infinite hierarchy of complexity classes collapsing and then springing back into place, decades’ worth of books set ablaze and then rewritten on blank pages.  I want to say: at least come stand here with me—in this place that I spent twenty years of late nights, false starts, and discarded preconceptions getting to—and tell me if you still don’t see what I see.

That’s how I am; I doubt I can change it any more than I can change my blood type.  So I feel profoundly grateful to have been born into a world where I can make a comfortable living just by being this strange, thin-skinned creature that I am—a world where there are countless others who do see what I see, indeed see it a thousand times more clearly in many cases, but who still appreciate what little I can do to explore this corner or that, or to describe the view to others.  I’d say I’m grateful to “fate,” but really I’m grateful to my friends and family, my students and teachers, my colleagues at MIT and around the world, and the readers of Shtetl-Optimized—yes, even John Sidles.  “Fate” either doesn’t exist or doesn’t need my gratitude if it does.