Archive for August, 2007

What’s going down in AarTown

Friday, August 31st, 2007

Taking a cue from the Pontiff, I thought I’d provide three quick updates on my personal life (no, not my personal personal life; that’s none of your business).

  1. Last week I bought and moved into a condo in East Cambridge, a 10-minute walk from campus, with lovely views of Boston, the Charles River, and the Red Line T going over the bridge:


    (That’s mom on the sofa.) I can’t stress enough how fundamentally my life has changed now that I’m a homeowner. For example, instead of paying rent each month, I now pay something called a “mortgage,” and instead of going to a landlord, it goes to a bank. Also I get a massive tax break for some reason.
  2. The students showed up this week, and the semester is here. No, I’m not teaching this fall, but there’s still plenty to do, from organizing a theory lunch to deciding what kind of whiteboard should go in my office. (With a border or without? How big a tray for pens? These are serious decisions.) On Wednesday I went to an orientation for new MIT faculty, at which I got to tell President Susan Hockfield about quantum lower bounds, the prospects for practical quantum computers, and how her fine institution rejected me twice. Along with the usual pleasantries, Hockfield said one thing that deeply impressed me: “I know it’s gone out of fashion in many places, but you’re still allowed to use the word ‘truth’ here.”
  3. Besides moving, besides getting oriented, I’ve also been distracted from my blogging career by involvement with some … what’s it called? … actual research. Sorry about that; I assure you it’s just a temporary aberration.

What Google Won’t Find

Friday, August 31st, 2007

While I rummage around the brain for something more controversial to blog (that’s nevertheless not too controversial), here, for your reading pleasure, is a talk I gave a couple weeks ago at Google Cambridge. Hardcore Shtetl-Optimized fans will find little here to surprise them, but for new or occasional readers, this is about the clearest statement I’ve written of my religio-ethico-complexity-theoretic beliefs.

What Google Won’t Find

As I probably mentioned when I spoke at your Mountain View location two years ago, it’s a funny feeling when an entity that knows everything that ever can be known or has been known or will be known invites you to give a talk — what are you supposed to say?

Well, I thought I’d talk about “What Google Won’t Find.” In other words, what have we learned over the last 15 years or so about the ultimate physical limits of search — whether it’s search of a physical database like Google’s, or of the more abstract space of solutions to a combinatorial problem?

On the spectrum of computer science, I’m about as theoretical as you can get. One way to put it is that I got through CS grad school at Berkeley without really learning any programming language other than QBASIC. So it might surprise you that earlier this year, I was spending much of my time talking to business reporters. Why? Because there was this company near Vancouver called D-Wave Systems, which was announcing to the press that it had built the world’s first commercial quantum computer.

Let’s ignore the “commercial” part, because I don’t really understand economics — these days, you can apparently make billions of dollars giving away some service for free! Let’s instead focus on the question: did D-Wave actually build a quantum computer? Well, they apparently built a device with 16 very noisy superconducting quantum bits (or qubits), which they say they’ve used to help solve extremely small Sudoku puzzles.

The trouble is, we’ve known for years that if qubits are sufficiently noisy — if they leak a sufficient amount of information into their environment — then they behave essentially like classical bits. Furthermore, D-Wave has refused to answer extremely basic technical questions about how high their noise rates are and so forth — they care about serving their customers, not answering nosy questions from academics. (Recently D-Wave founder Geordie Rose offered to answer my questions if I was interested in buying one of his machines. I replied that I was interested — my offer was $10 US — and I now await his answers as a prospective customer.)

To make a long story short, it’s consistent with the evidence that what D-Wave actually built would best be described as a 16-bit classical computer. I don’t mean 16 bits in terms of the architecture; I mean sixteen actual bits. And there’s some prior art for that.

But that’s actually not what annoyed me the most about the D-Wave announcement. What annoyed me were all the articles in the popular press — including places as reputable as The Economist — that said, what D-Wave has built is a machine that can try every possible solution in parallel and instantly pick the right one. This is what a quantum computer is; this is how it works.

It’s amazing to me how, as soon as the word “quantum” is mentioned, all the ordinary rules of journalism go out the window. No one thinks to ask: is that really what a quantum computer could do?

It turns out that, even though we don’t yet have scalable quantum computers, we do know something about what they could do if we did have them.

A quantum computer is a device that would exploit the laws of quantum mechanics to solve certain computational problems asymptotically faster than we know how to solve them with any computer today. Quantum mechanics — which has been our basic framework for physics for the last 80 years — is a theory that’s like probability theory, except that instead of real numbers called probabilities, you now have complex numbers called amplitudes. And the interesting thing about these complex numbers is that they can “interfere” with each other: they can cancel each other out.

In particular, to find the probability of something happening, you have to add the amplitudes for all the possible ways it could have happened, and then take the square of the absolute value of the result. And if some of the ways an event could happen have positive amplitude and others have negative amplitude, then the amplitudes can cancel out, so that the event doesn’t happen at all. This is exactly what’s going on in the famous double-slit experiment: at certain spots on a screen, the different paths a photon could’ve taken to get to that spot interfere destructively and cancel each other out, and as a result no photon is seen.
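To make the bookkeeping concrete, here's a minimal numerical sketch (my addition, not part of the original talk): add the two slits' amplitudes first and then square, versus squaring each into a probability first, as classical reasoning would.

```python
import math

# Toy two-path interference calculation. Each slit contributes amplitude
# 1/sqrt(2); the two paths arrive with opposite phase at this spot.
amp_slit1 = 1 / math.sqrt(2)
amp_slit2 = -1 / math.sqrt(2)   # opposite sign: destructive interference

# Quantum rule: sum the amplitudes, THEN square the absolute value.
p_quantum = abs(amp_slit1 + amp_slit2) ** 2    # 0.0 -- no photon seen here

# Classical rule: square each amplitude into a probability, then add.
p_classical = abs(amp_slit1) ** 2 + abs(amp_slit2) ** 2    # 1.0

print(p_quantum, p_classical)
```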

Now, the idea of quantum computing is to set up a massive double-slit experiment with exponentially many paths — and to try to arrange things so that the paths leading to wrong answers interfere destructively and cancel each other out, while the paths leading to right answers interfere constructively and are therefore observed with high probability.

You can see it’s a subtle effect that we’re aiming for. And indeed, it’s only for a few specific problems that people have figured out how to choreograph an interference pattern to solve the problem efficiently — that is, in polynomial time.

One of these problems happens to be that of factoring integers. Thirteen years ago, Peter Shor discovered that a quantum computer could efficiently apply Fourier transforms over exponentially-large abelian groups, and thereby find the periods of exponentially-long periodic sequences, and thereby factor integers, and thereby break the RSA cryptosystem, and thereby snarf people’s credit card numbers. So that’s one application of quantum computers.
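For the curious, here's a toy sketch of the reduction Shor exploits, with the one genuinely quantum step (finding the period of a^x mod N) brute-forced classically. Everything here is my illustrative code, and the brute-force step is exactly the part that takes exponential time without a quantum computer.

```python
import math
import random

def find_period(a, N):
    # Brute-force the period r of f(x) = a^x mod N: the step a quantum
    # computer would do efficiently via the quantum Fourier transform.
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_via_period(N):
    # Shor's classical wrapper: reduce factoring N (odd, composite, not a
    # prime power for this toy) to period-finding.
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g                       # lucky guess: a shares a factor with N
        r = find_period(a, N)
        if r % 2 == 0:
            y = pow(a, r // 2, N)
            if y != N - 1:                 # y*y = 1 (mod N) with y != +-1 ...
                return math.gcd(y - 1, N)  # ... so gcd(y-1, N) splits N

print(factor_via_period(15))               # 3 or 5
```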

On the other hand — and this is the most common misconception about quantum computing I’ve encountered — we do not, repeat do not, know a quantum algorithm to solve NP-complete problems in polynomial time. For “generic” problems of finding a needle in a haystack, most of us believe that quantum computers will give at most a polynomial advantage over classical ones.

At this point I should step back. How many of you have heard of the following question: Does P=NP?

Yeah, this is a problem so profound that it’s appeared on at least two TV shows (The Simpsons and NUMB3RS). It’s also one of the seven (now six) problems for which the Clay Math Institute is offering a million-dollar prize for a solution.

Apparently the mathematicians had to debate whether P vs. NP was “deep” enough to include in their list. Personally, I take it as obvious that it’s the deepest of them all. And the reason is this: if you had a fast algorithm for solving NP-complete problems, then not only could you solve P vs. NP, you could presumably also solve the other six problems. You’d simply program your computer to search through all possible proofs of at most (say) a billion symbols, in some formal system like Zermelo-Fraenkel set theory. If such a proof existed, you’d find it in a reasonable amount of time. (And if the proof had more than a billion symbols, it’s not clear you’d even want to see it!)

This raises an important point: many people — even computer scientists — don’t appreciate just how profound the consequences would be if P=NP. They think it’s about scheduling airline flights better, or packing more boxes in your truck. Of course, it is about those things — but the point is that you can have a set of boxes such that if you could pack them into your truck, then you would also have proved the Riemann Hypothesis!
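To spell out the asymmetry in code, here's a minimal sketch (my toy example) of the truck-packing problem as SUBSET-SUM: finding a packing may require trying up to 2^n subsets, but verifying a proposed one takes a single sum.

```python
from itertools import combinations

def pack_exactly(boxes, capacity):
    # Brute-force search: up to 2^n subsets in the worst case.
    for k in range(len(boxes) + 1):
        for subset in combinations(boxes, k):
            if sum(subset) == capacity:   # verification is one cheap sum
                return subset             # the subset is the easily-checked certificate
    return None

print(pack_exactly([3, 34, 4, 12, 5, 2], 9))   # (4, 5)
```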

Of course, while the proof eludes us, we believe that P≠NP. We believe there’s no algorithm to solve NP-complete problems in deterministic polynomial time. But personally, I would actually make a stronger conjecture:

There is no physical means to solve NP-complete problems in polynomial time — not with classical computers, not with quantum computers, not with anything else.

You could call this the “No SuperSearch Principle.” It says that, if you’re going to find a needle in a haystack, then you’ve got to expend at least some computational effort sifting through the hay.

I see this principle as analogous to the Second Law of Thermodynamics or the impossibility of superluminal signalling. That is, it’s not merely a technological limitation, but a pretty fundamental fact about the laws of physics. Like those other principles, it could always be falsified by experiment, but after a while it seems manifestly more useful to assume it’s true and then see what the consequences are for other things.

OK, so what do we actually know about the ability of quantum computers to solve NP-complete problems efficiently? Well, of course we can’t prove it’s impossible, since we can’t even prove it’s impossible for classical computers — that’s the P vs. NP problem! We might hope to at least prove that quantum computers can’t solve NP-complete problems in polynomial time unless classical computers can also — but even that, alas, seems far beyond our ability to prove.

What we can prove is this: suppose you throw away the structure of an NP-complete problem, and just consider it as an abstract, featureless space of 2^n possible solutions, where the only thing you can do is guess a solution and check whether it’s right or not. In that case it’s obvious that a classical computer will need ~2^n steps to find a solution. But what if you used a quantum computer, which could “guess” all possible solutions in superposition? Well, even then, you’d still need at least ~2^(n/2) steps to find a solution. This is called the BBBV Theorem, and was one of the first things learned about the power of quantum computers.

Intuitively, even though a quantum computer in some sense involves exponentially many paths or “parallel universes,” the single universe that happened on the answer can’t shout above all the other universes: “hey, over here!” It can only gradually make the others aware of its presence.

As it turns out, the 2^(n/2) bound is actually achievable. For in 1996, Lov Grover showed that a quantum computer can search a list of N items using only √N steps. It seems to me that this result should clearly feature in Google’s business plan.
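Grover's algorithm is simple enough to simulate on a classical computer (with exponential overhead, of course). Here's a minimal numpy sketch of mine: an oracle that flips the marked item's amplitude, then "inversion about the mean," repeated about (π/4)√N times.

```python
import numpy as np

def grover_search(n_qubits, marked):
    # Toy statevector simulation: needs exponential memory classically,
    # so small n only.
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))        # uniform superposition over N items
    for _ in range(int(np.pi / 4 * np.sqrt(N))):
        state[marked] *= -1                   # oracle: flip the marked amplitude
        state = 2 * state.mean() - state      # diffusion: inversion about the mean
    return int(np.argmax(state ** 2))         # most probable measurement outcome

print(grover_search(8, marked=137))           # finds 137 in ~12 iterations, not ~256
```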

Of course in real life, NP-complete problems do have structure, and algorithms like local search and backtrack search exploit that structure. Because of this, the BBBV theorem can’t rule out a fast quantum algorithm for NP-complete problems. It merely shows that, if such an algorithm existed, then it couldn’t work the way 99% of everyone who’s ever heard of quantum computing thinks it would!

You might wonder whether there’s any proposal for a quantum algorithm that would exploit the structure of NP-complete problems. As it turns out, there’s one such proposal: the “quantum adiabatic algorithm” of Farhi et al., which can be seen as the quantum version of simulated annealing. Intriguingly, Farhi and his collaborators proved that, on some problem instances where classical simulated annealing would take exponential time, the quantum adiabatic algorithm takes only polynomial time. Alas, we also know of problem instances where the adiabatic algorithm takes exponential time just as simulated annealing does. So while this is still an active research area, right now the adiabatic algorithm does not look like a magic bullet for solving NP-complete problems.
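For reference, here's a bare-bones sketch of classical simulated annealing (my toy code, on an arbitrary test landscape); the adiabatic algorithm is, loosely, its quantum cousin, with tunneling through energy barriers in place of thermal hops over them.

```python
import math
import random

def simulated_annealing(energy, neighbor, x, steps=10000, T0=1.0):
    e = energy(x)
    for t in range(1, steps + 1):
        T = T0 / t                            # cooling schedule
        y = neighbor(x)
        ey = energy(y)
        # Accept improvements always; accept uphill moves with Boltzmann probability.
        if ey <= e or random.random() < math.exp((e - ey) / T):
            x, e = y, ey
    return x, e

bumpy = lambda x: (x - 3) ** 2 + 2 * math.sin(5 * x)   # a bumpy 1-D landscape
step = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(bumpy, step, x=0.0))         # typically ends near the global minimum
```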

If quantum computers can’t solve NP-complete problems in polynomial time, it raises an extremely interesting question: is there any physical means to solve NP-complete problems in polynomial time?

Well, there have been lots of proposals. One of my favorites involves taking two glass plates with pegs between them, and dipping the resulting contraption into a tub of soapy water. The idea is that the soap bubbles that form between the pegs should trace out the minimum Steiner tree — that is, the tree of line segments of minimum total length connecting the pegs, where the segments can meet at points other than the pegs themselves. Now, finding such a tree is known to be an NP-hard optimization problem. So, it looks like Nature is solving NP-hard problems in polynomial time!

You might say there’s an obvious difficulty: the soap bubbles could get trapped in a local optimum that’s different from the global optimum. By analogy, a rock in a mountain crevice could reach a lower state of potential energy by rolling up first and then down … but is rarely observed to do so!

And if you said that, you’d be absolutely right. But that didn’t stop two guys a few years ago from writing a paper in which they claimed, not only that soap bubbles solve NP-complete problems in polynomial time, but that that fact proves P=NP! In debates about this paper on newsgroups, several posters raised the duh-obvious point that soap bubbles can get trapped at local optima. But then another poster opined that that’s just an academic “party line,” and that he’d be willing to bet that no one had actually done an experiment to prove it.

Long story short, I went to the hardware store, bought some glass plates, liquid soap, etc., and found that, while Nature does often find a minimum Steiner tree with 4 or 5 pegs, it tends to get stuck at local optima with larger numbers of pegs. Indeed, often the soap bubbles settle down to a configuration which is not even a tree (i.e. contains “cycles of soap”), and thus provably can’t be optimal.
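The trapping is easy to reproduce in software, too. Here's a toy sketch of mine: pure downhill moves (which is all a relaxing soap film gets) on a double-well landscape, starting in the wrong well.

```python
import random

# Toy double-well landscape (my example): global minimum near x ~ 2.2
# (energy ~ -2.1), local trap near x ~ -1.7 (energy ~ 1.85).
f = lambda x: (x * x - 4) ** 2 / 8 - x

def greedy_descent(x, steps=5000):
    e = f(x)
    for _ in range(steps):
        y = x + random.uniform(-0.3, 0.3)   # small local moves only
        if f(y) < e:                        # downhill only, like a soap film
            x, e = y, f(y)
    return x, e

print(greedy_descent(-2.0))   # settles near (-1.7, 1.85), far from the true optimum
```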

The situation is similar for protein folding. Again, people have said that Nature seems to be solving an NP-hard optimization problem in every cell of your body, by letting the proteins fold into their minimum-energy configurations. But there are two problems with this claim. The first problem is that proteins, just like soap bubbles, sometimes get stuck in suboptimal configurations — indeed, it’s believed that’s exactly what happens with Mad Cow Disease. The second problem is that, to the extent that proteins do usually fold into their optimal configurations, there’s an obvious reason why they would: natural selection! If there were a protein that could only be folded by proving the Riemann Hypothesis, the gene that coded for it would quickly get weeded out of the gene pool.

So: quantum computers, soap bubbles, proteins … if we want to solve NP-complete problems in polynomial time in the physical world, what’s left? Well, we can try going to more exotic physics. For example, since we don’t yet have a quantum theory of gravity, people have felt free to speculate that if we did have one, it would give us an efficient way to solve NP-complete problems. For example, maybe the theory would allow closed timelike curves, which would let us solve NP-complete and even harder problems by (in some sense) sending the answer back in time to before we started.

In my view, though, it’s more likely that a quantum theory of gravity will do the exact opposite: that is, it will limit our computational powers, relative to what they would’ve been in a universe without gravity. To see why, consider one of the oldest “extravagant” computing proposals: the Zeno computer. This is a computer that runs the first step of a program in one second, the second step in half a second, the third step in a quarter second, the fourth step in an eighth second, and so on, so that after two seconds it’s run infinitely many steps. (It reminds me of the old joke about the supercomputer that was so fast, it could do an infinite loop in 2.5 seconds.)

Question from the floor: In what sense is this even a “proposal”?

Answer: Well, it’s a proposal in the sense that people actually write papers about it! (Google “hypercomputation.”) Whether they should be writing those papers is a separate question…

Now, the Zeno computer strikes most computer scientists — me included — as a joke. But why is it a joke? Can we say anything better than that it feels absurd to us?

As it turns out, this question takes us straight into some of the frontier issues in theoretical physics. In particular, one of the few things physicists think they know about quantum gravity — one of the few things both the string theorists and their critics largely agree on — is that, at the so-called “Planck scale” of about 10^-33 centimeters or 10^-43 seconds, our usual notions of space and time are going to break down. As one manifestation of this, if you tried to build a clock that ticked more than about 10^43 times per second, that clock would use so much energy that it would collapse to a black hole. Ditto for a computer that performed more than about 10^43 operations per second, or for a hard disk that stored more than about 10^69 bits per square meter of surface area. (Together with the finiteness of the speed of light and the exponential expansion of the universe, this implies that, contrary to what you might have thought, there is a fundamental physical limit on how much disk space Gmail will ever be able to offer its subscribers…)
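If you want to see where those numbers come from, here's a quick back-of-the-envelope computation (my arithmetic, order-of-magnitude only; the last line uses the holographic bound of about one bit per 4 ln 2 Planck areas):

```python
import math

# Standard constants in SI units.
hbar = 1.054571817e-34   # J*s
G    = 6.67430e-11       # m^3 / (kg s^2)
c    = 2.99792458e8      # m/s

l_p = math.sqrt(hbar * G / c**3)   # Planck length ~ 1.6e-35 m ~ 10^-33 cm
t_p = math.sqrt(hbar * G / c**5)   # Planck time   ~ 5.4e-44 s

print(f"fastest clock: ~{1 / t_p:.1e} ticks per second")                 # ~1.9e43
# Holographic bound: ~1 bit per 4*ln(2) Planck areas of surface.
print(f"densest disk:  ~{1 / (4 * math.log(2) * l_p**2):.1e} bits/m^2")  # ~1.4e69
```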

To summarize: while I believe what I called the “No SuperSearch Principle” — that is, while I believe there are fundamental physical limits to efficient computer search — I hope I’ve convinced you that understanding why these limits exist takes us straight into some of the deepest issues in math and physics. To me that’s so much the better — since it suggests that not only are the limits correct, but (more importantly) they’re also nontrivial.

Thank you.

Does it come with a 14-Gyr warranty?

Sunday, August 19th, 2007

As many of you probably saw, John Tierney of the New York Times thinks there’s a ~50% chance we’re living in a computer simulation, having been persuaded by Nick Bostrom’s infamous simulation argument.

(This argument, incidentally, is something that occurred to me as a teenager, and I’m guessing to many others of nerdly leanings as well. I didn’t consider it a profound metaphysical discovery, just a sign I needed to get out more.)

Peter Woit feels strongly that debates about whether the universe is a computer are not science and therefore have no place in the Times science section. Robin Hanson retorts that “rather than complain that something is not ‘science,’ or not ‘philosophy,’ it is much better to just say more specifically what it is that you don’t like about it.” Peter Shor points out that if we’re living in a simulation, then the incompatibility of quantum mechanics with general relativity might simply be a bug, in which case the universe will crash when the first black hole evaporates.

As for me, I tend to side with Woody Allen: yes, the universe might be a simulation, but where else can you get a decent steak?

The last word, however, goes to Bender Bending Rodriguez of Futurama.

Bender: “If that stuff wasn’t real, how can I be sure anything is real? Is it not possible, nay, probable that my whole life is just a product of my or someone else’s imagination?”

Clerk: “No, get out. Next!”

(Click here for the audio clip.)

Toads, lower bounds vie for control of Australia

Saturday, August 18th, 2007

What better way to procrastinate than to hear an Australian radio show interview me about the quantum query complexity of the collision problem, public-key cryptography, interactive proofs, computational intractability as a law of physics, and my great love for my high school? The first part of the program is about Australia’s population of cane toads (or rather, “tie-oads”). Then at 32:40, they start in with a report on the FQXi conference in Iceland, and interviews with Max Tegmark, Fred Adams, and Simon Saunders. I’m from 39:10 to 46:50.

A few comments/corrections:

  • The interviewer, Pauline Newman, asks me about the practical implications of the collision lower bound, and then cuts to me talking about how quantum computers could break the RSA cryptosystem. Of course, the connection is only an indirect one (the collision lower bound is what gives hope that one could design collision-resistant hash functions that, unlike RSA, are secure even against quantum attacks).
  • I said that, when trying to solve jigsaw puzzles or schedule airline flights, there doesn’t seem to be anything one can do that’s fundamentally better than trying every possibility. I should have added, “in the worst case.”
  • The reason I mentioned how old I was when IP=PSPACE was proved is not that I’m a narcissist (though I am), but because in a section that was cut, Pauline asked me if I proved IP=PSPACE, and I was trying to make it clear that I didn’t. The theorem was proved by Shamir, building on work of Lund, Fortnow, Karloff, and Nisan.
  • Pauline’s assertion that I “took off on a snowmobile without [my] passenger” and “left a distinguished physicist stranded on a glacier” is a gross exaggeration. What happened was, I waited and waited for someone — anyone — to climb onto my snowmobile. When no one did (maybe because everyone was scared by my abysmal driving ability), I figured I should just go.

Anyway, at least the um’s and uh’s seem to have been under control, compared to my interview with Lance two years ago.

My Favorite Growth Rates

Sunday, August 12th, 2007

Update (8/17): Believe it or not, this blog actually led to something (scroll down to comment #52 if the link doesn’t work).

Death to Verizon

Saturday, August 11th, 2007

Needing a token of my years in Waterloo, I figured it was finally time to trade in my Pleistocene Nokia phone for a BlackBerry. So I used some of my startup funds to buy a BlackBerry 8830 World Edition from Verizon. What particularly excited me about this model was that it was advertised as having a built-in GPS receiver — meaning (or so I thought) that I’d be able to pull up Google Maps wherever I was, and never get lost again.

Well, today the phone arrived, and I found out that Verizon has disabled the GPS (see here, here, and here). The reason, apparently, is that at some unknown time in the future, it plans to sell an inferior navigation service for $10/month, and doesn’t want people getting for free what it will later rip them off for.

I’ve been having fun imagining the conversation between Mike Lazaridis (the founder of Research in Motion, the Waterloo-based company that makes BlackBerries) and Verizon:

Lazaridis: It’s an abomination! As long as I draw breath, I’ll never agree to your crippling my invention!

Verizon CEO (breathing heavily): Young Lazaridis, come over to the Dark Side.

Lazaridis (pause): Actually, how much are you offering? I’ve been needing cash, ever since blowing all those millions on the Perimeter Institute and the Institute for Quantum Computing…

Some will say I’m a sucker, buyer beware, etc. The more sympathetic will call me a victim of false advertising — indeed, of the exact sort of corporate behavior that my best friend Alex Halderman and his adviser Ed Felten have battled for years with some spectacular successes.

Recently I attended a talk by the legendary free-software activist Richard Stallman, who thundered like an Old Testament prophet about human beings’ inalienable right to understand, modify, and share the technology they own. At the time I agreed with Stallman intellectually but found him a bit obsessive. Now I have my own dog in this fight.

I’ve always known that American cell phone companies are evil: they have shitty, unreliable networks, enormous advertising budgets, and minuscule R&D budgets. But Verizon has taken things to a level even I wouldn’t have predicted.

We’re not living in anything close to the efficient market dreamed of by my economist friends like Robin Hanson. The invisible hand has palsy and four missing fingers. And the proof is that, when a company like Verizon pulls a Monty Burns, there’s almost no risk it runs — almost nothing it fears. Indeed, about the only risk it does run is that some of its customers might have blogs — and that some of the savvier readers of those blogs might figure out how to hack the crippled phones and share that information with the world…

Intellectual whack-a-mole

Thursday, August 9th, 2007

Several readers have now written to me independently, asking for my reaction to the following paper:

An Optical Solution For The Traveling Salesman Problem
Tobias Haist and Wolfgang Osten

Abstract: We introduce an optical method based on white light interferometry in order to solve the well-known NP-complete traveling salesman problem. To our knowledge it is the first time that a method for the reduction of non-polynomial time to quadratic time has been proposed. We will show that this achievement is limited by the number of available photons for solving the problem. It will turn out that this number of photons is proportional to N^N for a traveling salesman problem with N cities and that for large numbers of cities the method in practice therefore is limited by the signal-to-noise ratio. The proposed method is meant purely as a gedankenexperiment.

Look, this is really not hard. You really don’t need a world CompuCrackpotism expert to tell you what to think of this. If you read carefully, the authors were actually kind enough to explain, right in the abstract, why their proposal doesn’t scale. (This, of course, is entirely to their credit, and puts them above ~98% of their colleagues in the burgeoning intersection of computer science, physics, and non-correctness.)

Hint: If the number of photons scales exponentially with N, and the photons have high enough energies that you can detect them, then the energy also scales exponentially with N. So by the Schwarzschild bound, the volume also scales exponentially with N; therefore, by locality, so does the time.
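To put rough numbers on that chain of implications, here's a sketch with illustrative assumptions of mine (visible-light photons of ~10^-19 joules each, and nothing but the photons' energy counted):

```python
# Order-of-magnitude sketch of the hint above (illustrative numbers, not
# from the paper): photons ~ N^N implies energy, hence radius, hence time.
G = 6.67e-11       # m^3 / (kg s^2)
c = 3.0e8          # m/s
E_PHOTON = 1e-19   # J, roughly one visible-light (i.e. detectable) photon

def min_runtime_seconds(n_cities):
    energy = (n_cities ** n_cities) * E_PHOTON   # total energy scales like N^N
    radius = 2 * G * energy / c**4               # Schwarzschild: any smaller and it collapses
    return radius / c                            # locality: signals must cross the device

for n in (10, 50, 100):
    print(n, f"{min_runtime_seconds(n):.1e} s")
# Vacuous for small N, but N = 50 already demands ~5e13 seconds
# (over a million years), and it only gets worse from there.
```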

German comedy

Sunday, August 5th, 2007

I know I’ve been a derelict blogger since moving to MIT, allowing far, far too many of you to concentrate on work. But today I’m back with some quality procrastination material.

My colleague (and sometime überliberal commenter on this blog) Aram Harrow points me to a safety video for German forklift-truck drivers, which was posted to YouTube with English subtitles. As Aram says, it starts slow but is definitely worth watching to the end.

It’s funny: just this weekend, I was volunteering with the Cornell Alumni Association at the Greater Boston Food Bank. My job was to unload 40-pound boxes of canned goods from a forklift truck and place them on a conveyor belt. (And no, this is not something I’d normally do. Normally I’d offer to write a check to pay for ten people stronger than I am to unload boxes for the needy. Long story short, I was invited to do this by an individual of female persuasion.)

The whole time I was unloading boxes, I too was a bit worried about forklift safety — but, as I now know, not nearly as worried as I should have been.