## Archive for the ‘CS/Physics Deathmatch’ Category

### The Future of Computer Science, and Why Every Other Major Sucks By Comparison

Monday, April 12th, 2010

Does this post finally herald my return to regular blogging after a months-long absence?

I don’t know.  For me, writing a Shtetl-Optimized entry always followed the same process: I’d get an idea and start typing, furiously rocking back and forth in my chair.  Then the voices in my head would pipe up: no, I can’t say that—what will everyone think?—judging from past experience, they’ll probably take offense—I can already see the commenters boiling me alive—maybe if I rephrased it, or, y’know, provided some context—but to explain the real context, I’d need a whole book—and who has the time for that?—better wait till after tenure—meantime, maybe I could blog about something light and uncontroversial instead—but then what’s the point?—we already have one GASARCH—well, I could always put off a decision till later—

Back in the blog’s heyday, I’d win these fights about 40% of the time and the voices would win about 60%.  (In other words: if you’ve ever taken offense at an entry of mine, rest assured that you haven’t even seen the half of my drafts folder.)  But now that I have an actual stake in this shabby world—students to advise and look after, a tenure case to build, conceivably even a family to start—the voices win more like 98% of the time.  And that’s why my blogging fell off.

Occasionally, though, something comes along so uncomplicatedly joyous that I feel no reservations about sharing it with the world.  Such was the case this weekend, when I was somehow called upon to represent MIT’s EECS Department in the annual “Professor Talent Show” at Campus Preview Weekend.  This is an event where six faculty members square off, taking eight minutes each to

(1) explain why their department is the coolest,
(2) crack jokes, and
(3) possibly demonstrate a musical or athletic talent.

Then, using electronic clickers, the several hundred prefrosh in attendance vote for which major carried the day.  Though I had absolutely no talent of any kind to demonstrate, and was up against a banjo-player, violinist, and basketball-spinner among other tough competitors, for some reason EECS won!  You can see my PowerPoint slides here:

The Future of Computer Science, and Why Every Other Major Sucks By Comparison
http://www.scottaaronson.com/talks/futurecs.ppt

(You can read the jokes that go along with each slide in the slide notes at the bottom.)

Update (4/15): I hadn’t realized at all that there’s actually a video of me giving the talk!  (Click on “Part 2.”)

### Science: the toroidal pyramid

Wednesday, January 23rd, 2008

Chad Orzel gripes about this month’s Scientific American special issue on “The Future of Physics” — which is actually extremely good, but which turns out to be exclusively about the future of high-energy particle physics. Not surprisingly, the commenters on Chad’s blog reignite the ancient debate about which science is more fundamental than which other one, and whether all sciences besides particle physics are stamp collecting.

I started writing a comment myself, but then I realized I hadn’t posted anything to my own blog in quite some time, so being nothing if not opportunistic, I decided to put it here instead.

To me, one of the most delicious things about computer science is the way it turns the traditional “pyramid of sciences” on its head. We all know, of course, that math and logic are more fundamental than particle physics (even particle physicists themselves will, if pressed, grudgingly admit as much), and that particle physics is in turn more fundamental than condensed-matter physics, which is more fundamental than chemistry, which is more fundamental than biology, which is more fundamental than psychology, anthropology, and so on, which still are more fundamental than grubby engineering fields like, say, computer science … but then you find out that computer science actually has as strong a claim as math to be the substrate beneath physics, that in a certain sense computer science is math, and that until you understand what kinds of machines the laws of physics do and don’t allow, you haven’t really understood the laws themselves … and the whole hierarchy of fundamental-ness gets twisted into a circle and revealed as the bad nerd joke that it always was.

That was a longer sentence than I intended.

Note (Jan. 25): From now on, all comments asking what I think of the movie “Teeth” will be instantly deleted. I’m sick of the general topic, and regret having ever brought it up. Thank you for your understanding.

### My take on the Koblitz affair

Saturday, September 1st, 2007

Now that Luca, Michael Mitzenmacher, Jonathan Katz, and Oded Goldreich have all weighed in on Neal Koblitz’s critique of modern cryptography in the Notices of the AMS, I can no longer bear to be left out of the action.

My reaction is simple: we computer scientists should feel honored that the mathematicians have finally bestowed on us the level of contempt they once reserved for the physicists.

Update (9/6): If you want to understand what’s actually involved in this controversy, the best starting point I’ve found is this paper by Ivan Damgård.

### Experimental complexity theory

Wednesday, June 27th, 2007

I just came back from the MIT CSAIL (Computer Science and Artificial Intelligence Lab) annual meeting, which was held at a beach resort in Cape Cod. No, it isn’t California, but for at least a few months a year “my” coast can put up a respectable showing too:

Out of all the ideas I heard at the CSAIL meeting, the one that made me proudest to have become a professor was this: computer scientists should make a serious effort to address world hunger, deforestation, climate change, and other global crises, because of the significant opportunities to tap funding resources that are becoming available in these areas. I’m telling you, if a giant asteroid were going to hit the earth in a week, the first question academics would ask would be how to beat out competing proposals for the $50-million “Deflection of Space-Based Objects” initiative at NSF.

The meeting ended with a “Wild & Crazy Ideas Session,” at which I (naturally) spoke. I briefly considered talking about quantum gravity computing, closed timelike curves, or quantum anthropic postselection, but ultimately decided on something a little less mainstream. My topic was “Experimental Computational Complexity Theory,” or “why do theoretical physicists get $8-billion machines for the sole purpose of confirming or refuting their speculative ideas, whereas theoretical computer scientists get diddlysquat?”

More concretely, my proposal is to devote some of the world’s computing power to an all-out attempt to answer questions like the following: does computing the permanent of a 4-by-4 matrix require more arithmetic operations than computing its determinant? You can read my slides here.
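
For concreteness, here is how the two quantities in that question are computed. This is only a minimal illustrative sketch (naive permutation expansion for the permanent; fraction-free elimination for the determinant); the open question concerns the minimum possible number of arithmetic operations, which no particular program settles:

```python
from itertools import permutations

def permanent(M):
    # Naive expansion: the same sum over permutations as the determinant,
    # but without the alternating signs -- roughly n * n! multiplications.
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        term = 1
        for i, j in enumerate(perm):
            term *= M[i][j]
        total += term
    return total

def determinant(M):
    # Bareiss fraction-free Gaussian elimination: O(n^3) exact
    # integer arithmetic operations.
    A = [row[:] for row in M]
    n = len(A)
    sign, prev = 1, 1
    for k in range(n - 1):
        if A[k][k] == 0:
            # Find a nonzero pivot; if the column is all zeros, det = 0.
            for i in range(k + 1, n):
                if A[i][k] != 0:
                    A[k], A[i] = A[i], A[k]
                    sign = -sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
        prev = A[k][k]
    return sign * A[-1][-1]
```

The punchline of the question is visible in the code: the determinant admits a cubic-time algorithm, while no one knows whether the permanent’s factorial-time expansion can be beaten down to anything comparable, even for 4-by-4 matrices.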

### The wisdom of Gian-Carlo Rota (1932-1999)

Monday, April 9th, 2007

From www.rota.org:

Graph theory, like lattice theory, is the whipping boy of mathematicians in need of concealing their feelings of insecurity.

Mathematicians also make terrible salesmen. Physicists can discover the same thing as a mathematician and say ‘We’ve discovered a great new law of nature. Give us a billion dollars.’ And if it doesn’t change the world, then they say, ‘There’s an even deeper thing. Give us another billion dollars.’

When an undergraduate asks me whether he or she should major in mathematics rather than in another field that I will simply call X, my answer is the following: “If you major in mathematics, you can switch to X anytime you want to, but not the other way around.”

Flakiness is nowadays creeping into the sciences like a virus through a computer system, and it may be the greatest present threat to our civilization. Mathematics can save the world from the invasion of the flakes by unmasking them, and by contributing some hard thinking. You and I know that mathematics, by definition, is not and never will be flaky.

Note: Quotation here does not necessarily imply endorsement by Shtetl-Optimized LLC or any of its subsidiary enterprises.

### A complexity theorist’s (non)apology

Wednesday, February 21st, 2007

Several respected physicists wrote to me privately to say how disappointed they were that Umesh and I would fight shoddy journalism by making a shoddy claim of our own: namely, that the inability of quantum computers to solve NP-complete problems efficiently is an established fact. I took a lot of flak in the comments section over the same issue.

Ladies and gentlemen of the jury, I will answer the unjust charges being leveled against me and my advisor.

But first, let’s review the facts. As I’ve said in pretty much every introductory talk I’ve ever given, obviously we can’t yet hope to prove that NP-complete problems are hard for quantum computers, since we haven’t even proved they’re hard for classical computers! (Nor, for that matter, do we have any idea how to prove that if they’re hard for classical computers then they’re also hard for quantum computers.) These are some of the most profound open problems in mathematics. Solving them could easily take decades or centuries.

I dare say that Umesh and I know this as well as anyone on Earth. And that’s why, even while trying in the space of a few sentences to correct a breathtaking misconception about the nature of the physical world that was being endlessly repeated to millions of people, we still took care in what we said.

Here’s Umesh:

Most egregious is your assertion that quantum computers can solve NP-complete problems in “one shot” by exploring exponentially many solutions at once. This mistaken view was put to rest in the infancy of quantum computation over a decade ago … For unstructured search problems like the NP-complete problems this means that there is no exponential speed up but rather at most a quadratic speed up.

In the above passage, Umesh is talking about an epochal theorem that he and others did manage to prove: namely, that quantum computers could not solve NP-complete problems by any “one-shot” method based on exploring exponentially many solutions in parallel. Throw away the structure of an NP-complete problem — consider it just as an abstract space of 2^n solutions — and we know that quantum computers will give you at most a quadratic speedup over classical ones.
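
As an illustration (a minimal classical simulation, not part of the BBBV proof itself), the quadratic speedup can be watched directly. Grover’s unstructured-search algorithm needs only about (π/4)·√N “oracle queries” to find one marked item among N, versus the roughly N/2 a classical search needs — a quadratic, not exponential, gain:

```python
import math

def grover_sim(n_items, marked, iterations):
    # Classical statevector simulation of Grover search: start in the
    # uniform superposition, then repeatedly flip the sign of the marked
    # item's amplitude (one oracle query) and invert all amplitudes
    # about their mean (the diffusion operator).
    amp = [1.0 / math.sqrt(n_items)] * n_items
    for _ in range(iterations):
        amp[marked] *= -1.0                   # oracle query
        mean = sum(amp) / n_items
        amp = [2.0 * mean - a for a in amp]   # diffusion operator
    return amp[marked] ** 2                   # success probability

N = 1024
k = round(math.pi / 4 * math.sqrt(N))  # about 25 queries for N = 1024
p = grover_sim(N, marked=42, iterations=k)
```

With N = 1024, roughly 25 queries drive the success probability near 1, where a classical search would need hundreds — and the BBBV theorem says that, for unstructured search, this square-root saving is the best any quantum algorithm can do.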

In the thirteen years since this “BBBV theorem” was proved, two interesting things happened:

1. Various experts dismissed the theorem as irrelevant, knocking down a straw man, stacking the deck in favor of its conclusion by imposing an utterly unjustified “black-box” assumption, etc.
2. Hundreds of articles appeared, in both the popular press and the arXiv, that directly contradicted the theorem.

It reminds me of how theologians chide Richard Dawkins for refuting only a crude, anthropomorphic, straw-man god instead of a sophisticated Einsteinian one, and then (with an air of self-satisfaction) go off and pray to the crude god.

To be fair, we do have one quantum algorithm for NP-complete problems that falls outside the scope of the BBBV theorem: namely, the adiabatic algorithm of Farhi et al. This algorithm can be seen as a quantum version of simulated annealing. Intriguingly, Farhi, Goldstone, and Gutmann gave examples where simulated annealing gets stuck at local optima, whereas the adiabatic algorithm tunnels through to the global optimum. On the other hand, van Dam, Mosca, and Vazirani gave other examples where the adiabatic algorithm also gets stuck at local optima, taking exponential time to reach a global optimum.
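
For readers unfamiliar with simulated annealing, here is a minimal classical sketch of the “stuck at local optima” phenomenon at issue (illustrative only — this is not the adiabatic algorithm, and the double-well energy landscape is made up for the example):

```python
import math
import random

def energy(x):
    # A made-up double-well landscape on the integers 0..20:
    # local minimum at x=5 (energy 3), global minimum at x=15 (energy 0),
    # separated by a barrier -- the kind of landscape where annealing can
    # get trapped and tunneling might (or might not) help.
    return min((x - 5) ** 2 + 3, (x - 15) ** 2)

def neighbors(x):
    return [max(0, x - 1), min(20, x + 1)]

def anneal(energy, neighbors, x0=0, steps=20000, t0=2.0, seed=0):
    # Metropolis rule: always accept downhill moves; accept uphill moves
    # with probability exp(-dE/T), with temperature T cooled linearly
    # toward zero so the walk eventually freezes into some minimum.
    rng = random.Random(seed)
    x = x0
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9
        y = rng.choice(neighbors(x))
        d = energy(y) - energy(x)
        if d <= 0 or rng.random() < math.exp(-d / t):
            x = y
    return x
```

Depending on the cooling schedule and the barrier height, the walk freezes into either the local or the global minimum — which is exactly why the examples of Farhi et al. and of van Dam, Mosca, and Vazirani, showing the two algorithms escaping or failing to escape such traps, are interesting.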

The upshot is that, if a fast quantum algorithm for NP-complete problems existed, then just like a fast classical algorithm, it would have to be radically different from anything that’s yet been imagined. Because of this — not to mention the civilization-changing consequences that such an algorithm would have — Umesh and I feel strongly that claims to solve NP-complete problems should never be bandied about lightly. As with perpetual-motion machines or antigravity shields, the burden of proof lies entirely with the would-be inventor. “In case of fire, break glass.” “In case of algorithm, break skepticism.”

It might be objected that, while the experts know that this is what Umesh meant, laypeople could easily misinterpret his words — or in other words, that Umesh has pulled a D-Wave of his own. But here’s the crucial difference. Any motivated reader who wanted the real story behind Umesh’s three-sentence caricature could find that story in peer-reviewed articles only a Google search away. But with D-Wave, all they’d have to go on is the PR. Simplifying mathematical subtleties is a right you have to earn, by having the cards in case anyone calls your bluff.

So much for Umesh’s letter. Now let’s look at mine:

Today it is accepted that quantum computers could not solve NP-complete problems in a reasonable amount of time. Indeed, the view of quantum computers as able to “try all possible solutions in parallel,” and then instantly choose the correct one, is fundamentally mistaken.

Notice I didn’t say it was proved that quantum computers can’t solve NP-complete problems in reasonable time: I said it was accepted. This, I felt, was a difference few people would have trouble understanding. As an example, if biologists said it was accepted that the Loch Ness monster doesn’t exist, presumably no one would interpret that as meaning they’d actually proved its nonexistence. Indeed, the interesting difference between the two cases is that someday, it might actually be possible to prove the nonexistence of the fast quantum algorithm.

Or are we complexity theorists being too dogmatic? Should we concede to a certain subset of our physicist friends that, until an actual proof has been discovered, we have no basis even to guess whether P versus NP or NP versus BQP will go one way or the other way? Should we, in other words, hold ourselves to the same lofty standards of uncompromising mathematical rigor that the physicists themselves have always adhered to?

Oh — pardon me. I had momentarily forgotten that we were talking about the headmasters of handwaving, the sultans of sloppiness, the princes of proof-by-example. Indeed, I think it’s fair to say that if physicists had discovered the P versus NP question, they would have immediately declared that P≠NP — and they would have hailed this ‘discovery’ of theirs as another remarkable success for physics as a discipline. And everyone else — from other scientists to programmers to journalists to the general public — would have gone right along with it. The task of proving P≠NP would have been left as a technical detail, to be filled in by the mathematical hairsplitters — just like the task of proving quark confinement, or the ergodicity of particles in a box, or the existence of Yang-Mills theory, or the perturbative finiteness of string theory.

Clearly, the issue here can’t be the intelligence of physicists, some of whom actually seem reasonably smart. The issue, rather, is their different standard — much closer to the standard of everyday life — for saying that they know something is true. My favorite example in this vein comes from Leonid Levin, who tells me he couldn’t convince Richard Feynman that P versus NP was an open problem at all.

I believe Feynman was onto something, in that the only reason P versus NP is called an “open problem” is that we — the theoretical computer scientists and mathematicians — hold ourselves to a different standard of rigor than any other scientists. Were we less cautious, we could easily talk about the hardness of NP-complete problems as one of our great discoveries, a discovery for which working out the mathematical underpinnings admittedly remains as a challenge for future generations.

Ironically, our higher standard of rigor often gets turned against us, when outsiders use it to argue that we’re just guessing, or building castles in the sky, or making conjectures that could all turn out to be wrong. The same charges could obviously be leveled against the central hypotheses of physics or economics or pretty much any other field, but they rarely are — at least not by the same people.

I’m tired of double standards, is all I’m saying.

### Logicians on safari

Thursday, November 2nd, 2006

Sean Carroll, who many of you know from Cosmic Variance, asked the following question in response to my last entry:

I’m happy to admit that I don’t know anything about “one-way functions and interactive proofs.” So, in what sense has theoretical computer science contributed more in the last 30 years to our basic understanding of the universe than particle physics or cosmology? (Despite the fact that I’m a cosmologist, I don’t doubt your statement — I’d just like to be able to explain it in public.)

I posted my response as a comment, but it’s probably better to make it an entry of its own. So:

Hi Sean,

Of course I was joking when I mentioned “objective standards” for ranking scientific fields. Depending on which questions keep you up at night, different parts of “humankind’s basic picture of the universe” will seem larger or smaller. (To say that, of course, is not to suggest any relativism about the picture itself.)

What I can do, though, is to tell you why — by my own subjective standards — the contributions of theoretical computer science over the last 30 years rival those of theoretical physics or any other field I know about. Of course, people will say I only think that because I’m a theoretical computer scientist, but that gets the causal arrow wrong: I became a theoretical computer scientist because, as a teenager, I thought it!

1. We now know that, if an alien with enormous computational powers came to Earth, it could prove to us whether White or Black has the winning strategy in chess. To be convinced of the proof, we would not have to trust the alien or its exotic technology, and we would not have to spend billions of years analyzing one move sequence after another. We’d simply have to engage in a short conversation with the alien about the sums of certain polynomials over finite fields.
2. There’s a finite (and not unimaginably-large) set of boxes, such that if we knew how to pack those boxes into the trunk of your car, then we’d also know a proof of the Riemann Hypothesis. Indeed, every formal proof of the Riemann Hypothesis with at most (say) a million symbols corresponds to some way of packing the boxes into your trunk, and vice versa. Furthermore, a list of the boxes and their dimensions can be feasibly written down.
3. Supposing you do prove the Riemann Hypothesis, it’s possible to convince someone of that fact, without revealing anything other than the fact that you proved it. It’s also possible to write the proof down in such a way that someone else could verify it, with very high confidence, having only seen 10 or 20 bits of the proof.
4. If every second or so your computer’s memory were wiped completely clean, except for the input data; the clock; a static, unchanging program; and a counter that could only be set to 1, 2, 3, 4, or 5, it would still be possible (given enough time) to carry out an arbitrarily long computation — just as if the memory weren’t being wiped clean each second. This is almost certainly not true if the counter could only be set to 1, 2, 3, or 4. The reason 5 is special here is pretty much the same reason it’s special in Galois’ proof of the unsolvability of the quintic equation.
5. It would be great to prove that RSA is unbreakable by classical computers. But every known technique for proving that would, if it worked, simultaneously give an algorithm for breaking RSA! For example, if you proved that RSA with an n-bit key took n^5 steps to break, you would’ve discovered an algorithm for breaking it in 2^(n^(1/5)) steps. If you proved that RSA took 2^(n^(1/3)) steps to break, you would’ve discovered an algorithm for breaking it in n^((log n)^2) steps. As you show the problem to be harder, you simultaneously show it to be easier.

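The width-5 fact in item 4 is Barrington’s theorem, and its engine — that the commutator of two suitably chosen 5-cycles is again a 5-cycle, which fails in the smaller groups available at widths 2, 3, and 4 — can be demonstrated in a few lines. Here is a minimal sketch (the helper names are my own) computing AND by the commutator trick:

```python
# Permutations on {0,1,2,3,4} as tuples: p[i] is the image of i.
ID = (0, 1, 2, 3, 4)

def compose(p, q):
    # Apply q first, then p.
    return tuple(p[q[i]] for i in range(5))

def inverse(p):
    inv = [0] * 5
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

# Two 5-cycles whose commutator A B A^-1 B^-1 is again a 5-cycle.
# This depends on S_5 being non-solvable -- the same property behind
# Galois' proof of the unsolvability of the quintic.
A = (1, 2, 3, 4, 0)   # the cycle (0 1 2 3 4)
B = (2, 0, 4, 1, 3)   # the cycle (0 2 4 3 1)

def encode(bit, cycle):
    # A one-bit "program": a fixed 5-cycle if the bit is 1, else identity.
    return cycle if bit else ID

def and_gate(x, y):
    # Barrington's commutator trick: if either factor is the identity,
    # everything cancels to the identity; only when x = y = 1 do we get
    # the (nontrivial) commutator of A and B.
    px, py = encode(x, A), encode(y, B)
    c = compose(compose(px, py), compose(inverse(px), inverse(py)))
    return 0 if c == ID else 1
```

Iterating this trick through the gates of a circuit is what lets a counter with only five states simulate an arbitrarily deep computation.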
Alright, let me stop before I get carried away. The examples I’ve listed (and hundreds more like them) are not exactly discoveries about physics, but they don’t have the flavor of pure math either. And even if they have some practical implications for computing (which they do), they certainly don’t have the flavor of nitty-gritty software engineering.

So what are they then? Maybe it’s helpful to think of them as “quantitative epistemology”: discoveries about the capacities of finite beings like ourselves to learn mathematical truths. On this view, the theoretical computer scientist is basically a mathematical logician on a safari to the physical world: someone who tries to understand the universe by asking what sorts of mathematical questions can and can’t be answered within it. Not whether the universe is a computer, but what kind of computer it is! Naturally, this approach to understanding the world tends to appeal most to people for whom math (and especially discrete math) is reasonably clear, whereas physics is extremely mysterious.

In my opinion, one of the biggest challenges for our time is to integrate the enormous body of knowledge in theoretical computer science (or quantitative epistemology, or whatever you want to call it) with the rest of what we know about the universe. In the past, the logical safari mostly stayed comfortably within 19th-century physics; now it’s time to venture out into the early 20th century. Indeed, that’s exactly why I chose to work on quantum computing: not because I want to build quantum computers (though I wouldn’t mind that), but because I want to know what a universe that allows quantum computers is like.

Incidentally, it’s also why I try hard to keep up with your field. If I’m not mistaken, less than a decade ago cosmologists made an enormous discovery about the capacity of finite beings to learn mathematical truths: namely, that no computation carried out in the physical world can ever involve more than 1/Λ ~ 10^122 bits.

Best,
Scott

### My daily dose of depression

Wednesday, November 1st, 2006

Yesterday’s Times ran an essay by Steve Lohr, based on speeches about the future of computing given by my former teachers Richard Karp and Jon Kleinberg. Though most of the essay is welcome and unobjectionable, let’s look at the first two paragraphs:

Computer science is not only a comparatively young field, but also one that has had to prove it is really science. Skeptics in academia would often say that after Alan Turing described the concept of the “universal machine” in the late 1930’s — the idea that a computer in theory could be made to do the work of any kind of calculating machine, including the human brain — all that remained to be done was mere engineering.

The more generous perspective today is that decades of stunningly rapid advances in processing speed, storage and networking, along with the development of increasingly clever software, have brought computing into science, business and culture in ways that were barely imagined years ago. The quantitative changes delivered through smart engineering opened the door to qualitative changes.

So, here are the two options on offer from the paper of record: either

1. computer science was finished off by Alan Turing, or
2. “stunningly rapid advances in processing speed, storage and networking” have reopened it just recently.

Even among the commenters on this post by Chad Orzel — which Dave Bacon forwarded to me with the subject line “bait” — awareness of any third possibility seems depressingly rare. Judging from the evidence, it’s not that people have engaged the mysteries of P versus NP, randomness and determinism, one-way functions and interactive proofs, and found them insufficiently deep. Rather, as bizarre as it sounds, it’s that people don’t know these mysteries exist — just as they wouldn’t know about black holes or the Big Bang if no one told them. If you want to understand why our subject — which by any objective standard, has contributed at least as much over the last 30 years as (say) particle physics or cosmology to humankind’s basic picture of the universe — receives a whopping \$5 million a year from the NSF (with even that in constant danger), look no further.

### Mistake of the Week: “X works on paper, but not in the real world”

Thursday, October 26th, 2006

Time again for Shtetl-Optimized’s Mistake of the Week series! This week my inspiration comes from a paper that’s been heating up the quantum blogosphere (the Blochosphere?): Is Fault-Tolerant Quantum Computation Really Possible? by M. I. Dyakonov. I’ll start by quoting my favorite passages:

The enormous literature devoted to this subject (Google gives 29300 hits for “fault-tolerant quantum computation”) is purely mathematical. It is mostly produced by computer scientists with a limited understanding of physics and a somewhat restricted perception of quantum mechanics as nothing more than unitary transformations in Hilbert space plus “entanglement.”

Whenever there is a complicated issue, whether in many-particle physics, climatology, or economics, one can be almost certain that no theorem will be applicable and/or relevant, because the explicit or implicit assumptions, on which it is based, will never hold in reality.

I’ll leave the detailed critique of Dyakonov’s paper to John Preskill, the Pontiff, and other “computer scientists” who understand the fault-tolerance theorem much better than a mere physicist like me. Here I instead want to take issue with an idea that surfaces again and again in Dyakonov’s paper, is almost universally accepted, but is nevertheless false. The idea is this: that it’s possible for a theory to “work on paper but not in the real world.”

The proponents of this idea go wrong, not in thinking that a theory can fail in the real world, but in thinking that if it fails, then the theory can still “work on paper.” If a theory claims to describe a phenomenon but doesn’t, then the theory doesn’t work, period — neither in the real world nor on paper. In my view, the refrain that something “works on paper but not in the real world” serves mainly as an intellectual crutch: a way for the lazy to voice their opinion that something feels wrong to them, without having to explain how or where it’s wrong.

“Ah,” you say, “but theorists often make assumptions that don’t hold in the real world!” Yes, but you’re sidestepping the key question: did the theorists state their assumptions clearly or not? If they didn’t, then the fault lies with them; if they did, then the fault lies with those practitioners who would milk a nonspherical cow like a spherical one.

To kill a theory (in the absence of direct evidence), you need to pinpoint which of its assumptions are unfounded and why. You don’t become more convincing by merely finding more assumptions to criticize; on the contrary, the “hope something sticks” approach usually smacks of desperation:

There’s no proof that the Earth’s temperature is rising, but even if there was, there’s no proof that humans are causing it, but even if there was, there’s no proof that it’s anything to worry about, but even if there was, there’s no proof that we can do anything about it, but even if there was, it’s all just a theory anyway!

As should be clear, “just a theory” is not a criticism: it’s a kvetch.

Marge: I really think this is a bad idea.
Homer: Marge, I agree with you — in theory. In theory, communism works. In theory.

Actually, let’s look at Homer’s example of communism, since nothing could better illustrate my point. When people say that communism works “in theory,” they presumably mean that it works if everyone is altruistic. But regulating selfishness is the whole problem political systems are supposed to solve in the first place! Any political system that defines the problem away doesn’t work on paper, any more than “Call a SAT oracle” works on paper as a way to solve NP-complete problems. Once again, we find the “real world / paper” distinction used as a cover for intellectual laziness.

Let me end this rant by preempting the inevitable cliché that “in theory, there’s no difference between theory and practice; in practice, there is.” Behold my unanswerable retort:

In theory, there’s no difference between theory and practice even in practice.

### Newton vs. Leibniz: the wigs are off

Tuesday, October 17th, 2006

Of course, the greatest scientific flame war of all time was the calculus priority dispute between Isaac Newton and Gottfried Wilhelm Leibniz. This one had everything: intrigue, pettiness, hypocrisy, nationalism, and even hints of the physicist vs. computer scientist split that continues to this day.

In our opening talk at QIPC’2006 in London, Dave Bacon and I decided to relive the hatred — with Dave in a frilly white wig playing the part of Newton, and your humble blogger in a frilly black wig playing the part of Leibniz. We forgot to take photos, but here’s the script, and here are the slides for the … err, “serious” talk that Dave and I gave after dewigging.

Update (thanks to Dave and Viv Kendon):