Archive for the ‘CS/Physics Deathmatch’ Category

The wisdom of Gian-Carlo Rota (1932-1999)

Monday, April 9th, 2007

From www.rota.org:

Graph theory, like lattice theory, is the whipping boy of mathematicians in need of concealing their feelings of insecurity.

Mathematicians also make terrible salesmen. Physicists can discover the same thing as a mathematician and say ‘We’ve discovered a great new law of nature. Give us a billion dollars.’ And if it doesn’t change the world, then they say, ‘There’s an even deeper thing. Give us another billion dollars.’

When an undergraduate asks me whether he or she should major in mathematics rather than in another field that I will simply call X, my answer is the following: “If you major in mathematics, you can switch to X anytime you want to, but not the other way around.”

Flakiness is nowadays creeping into the sciences like a virus through a computer system, and it may be the greatest present threat to our civilization. Mathematics can save the world from the invasion of the flakes by unmasking them, and by contributing some hard thinking. You and I know that mathematics, by definition, is not and never will be flaky.

Note: Quotation here does not necessarily imply endorsement by Shtetl-Optimized LLC or any of its subsidiary enterprises.

A complexity theorist’s (non)apology

Wednesday, February 21st, 2007

Several respected physicists wrote to me privately to say how disappointed they were that Umesh and I would fight shoddy journalism by making a shoddy claim of our own: namely, that the inability of quantum computers to solve NP-complete problems efficiently is an established fact. I took a lot of flak in the comments section over the same issue.

Ladies and gentlemen of the jury, I will answer the unjust charges being leveled against me and my advisor.

But first, let’s review the facts. As I’ve said in pretty much every introductory talk I’ve ever given, obviously we can’t yet hope to prove that NP-complete problems are hard for quantum computers, since we haven’t even proved they’re hard for classical computers! (Nor, for that matter, do we have any idea how to prove that if they’re hard for classical computers then they’re also hard for quantum computers.) These are some of the most profound open problems in mathematics. Solving them could easily take decades or centuries.

I dare say that Umesh and I know this as well as anyone on Earth. And that’s why, even while trying in the space of a few sentences to correct a breathtaking misconception about the nature of the physical world that was being endlessly repeated to millions of people, we still took care in what we said.

Here’s Umesh:

Most egregious is your assertion that quantum computers can solve NP-complete problems in “one shot” by exploring exponentially many solutions at once. This mistaken view was put to rest in the infancy of quantum computation over a decade ago … For unstructured search problems like the NP-complete problems this means that there is no exponential speed up but rather at most a quadratic speed up.

In the above passage, Umesh is talking about an epochal theorem that he and others did manage to prove: namely, that quantum computers could not solve NP-complete problems by any “one-shot” method based on exploring exponentially many solutions in parallel. Throw away the structure of an NP-complete problem — consider it just as an abstract space of 2^n solutions — and we know that quantum computers will give you at most a quadratic speedup over classical ones.
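
To make “at most a quadratic speedup” concrete, here’s a back-of-the-envelope sketch in Python (mine, not from either letter; the two formulas are just the standard query counts for brute-force search and for Grover’s algorithm):

    # Query counts for unstructured search over N = 2**n items:
    # classical brute force needs ~N/2 queries on average, while Grover's
    # algorithm needs ~(pi/4)*sqrt(N) -- a quadratic, not exponential, saving.
    import math

    for n in (20, 40, 60):
        N = 2 ** n
        classical = N / 2
        grover = (math.pi / 4) * math.sqrt(N)
        print(f"n={n}: classical ~{classical:.1e}, quantum ~{grover:.1e} queries")

For n = 60 that’s roughly 6 × 10^17 queries classically versus 8 × 10^8 quantumly: a vast saving, but one that still grows exponentially with n.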

In the thirteen years since this “BBBV theorem” was proved, two interesting things happened:

  1. Various experts dismissed the theorem as irrelevant, knocking down a straw man, stacking the deck in favor of its conclusion by imposing an utterly-unjustified “black-box” assumption, etc.
  2. Hundreds of articles appeared, in both the popular press and the arXiv, that directly contradicted the theorem.

It reminds me of how theologians chide Richard Dawkins for refuting only a crude, anthropomorphic, straw-man god instead of a sophisticated Einsteinian one, and then (with an air of self-satisfaction) go off and pray to the crude god.

To be fair, we do have one quantum algorithm for NP-complete problems that falls outside the scope of the BBBV theorem: namely, the adiabatic algorithm of Farhi et al. This algorithm can be seen as a quantum version of simulated annealing. Intriguingly, Farhi, Goldstone, and Gutmann gave examples where simulated annealing gets stuck at local optima, whereas the adiabatic algorithm tunnels through to the global optimum. On the other hand, van Dam, Mosca, and Vazirani gave other examples where the adiabatic algorithm also gets stuck at local optima, taking exponential time to reach a global optimum.
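
To see what “gets stuck at local optima” means in the simplest possible setting, here’s a toy sketch of my own (it has nothing to do with the actual constructions in those papers): simulated annealing on a one-dimensional double-well landscape, cooled quickly, usually freezes in the shallower well instead of finding the deeper one.

    # Toy landscape: local minimum near x = +1, deeper global minimum near
    # x = -1, separated by a barrier at x = 0 that fast cooling rarely crosses.
    import math, random

    def energy(x):
        return (x * x - 1) ** 2 + 0.3 * x

    def anneal(steps=20000, x=1.0, t0=0.05):
        for i in range(steps):
            temp = t0 * (1 - i / steps) + 1e-9   # fast linear cooling schedule
            x_new = x + random.gauss(0, 0.1)     # small local move
            dE = energy(x_new) - energy(x)
            if dE < 0 or random.random() < math.exp(-dE / temp):
                x = x_new
        return x

    print(round(anneal(), 2))   # typically ~ +0.96: trapped in the shallow well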

The upshot is that, if a fast quantum algorithm for NP-complete problems existed, then just like a fast classical algorithm, it would have to be radically different from anything that’s yet been imagined. Because of this — not to mention the civilization-changing consequences that such an algorithm would have — Umesh and I feel strongly that claims to solve NP-complete problems should never be bandied about lightly. As with perpetual-motion machines or antigravity shields, the burden of proof lies entirely with the would-be inventor. “In case of fire, break glass.” “In case of algorithm, break skepticism.”

It might be objected that, while the experts know that this is what Umesh meant, laypeople could easily misinterpret his words — or in other words, that Umesh has pulled a D-Wave of his own. But here’s the crucial difference. Any motivated reader who wanted the real story behind Umesh’s three-sentence caricature could find that story in peer-reviewed articles only a Google search away. But with D-Wave, all they’d have to go on is the PR. Simplifying mathematical subtleties is a right you have to earn, by having the cards in case anyone calls your bluff.

So much for Umesh’s letter. Now let’s look at mine:

Today it is accepted that quantum computers could not solve NP-complete problems in a reasonable amount of time. Indeed, the view of quantum computers as able to “try all possible solutions in parallel,” and then instantly choose the correct one, is fundamentally mistaken.

Notice I didn’t say it was proved that quantum computers can’t solve NP-complete problems in reasonable time: I said it was accepted. This, I felt, was a difference few people would have trouble understanding. As an example, if biologists said it was accepted that the Loch Ness monster doesn’t exist, presumably no one would interpret that as meaning they’d actually proved its nonexistence. Indeed, the interesting difference between the two cases is that someday, it might actually be possible to prove the nonexistence of the fast quantum algorithm.

Or are we complexity theorists being too dogmatic? Should we concede to a certain subset of our physicist friends that, until an actual proof has been discovered, we have no basis even to guess whether P versus NP or NP versus BQP will go one way or the other way? Should we, in other words, hold ourselves to the same lofty standards of uncompromising mathematical rigor that the physicists themselves have always adhered to?

Oh — pardon me. I had momentarily forgotten that we were talking about the headmasters of handwaving, the sultans of sloppiness, the princes of proof-by-example. Indeed, I think it’s fair to say that if physicists had discovered the P versus NP question, they would have immediately declared that P≠NP — and they would have hailed this ‘discovery’ of theirs as another remarkable success for physics as a discipline. And everyone else — from other scientists to programmers to journalists to the general public — would have gone right along with it. The task of proving P≠NP would have been left as a technical detail, to be filled in by the mathematical hairsplitters — just like the task of proving quark confinement, or the ergodicity of particles in a box, or the existence of Yang-Mills theory, or the perturbative finiteness of string theory.

Clearly, the issue here can’t be the intelligence of physicists, some of whom actually seem reasonably smart. The issue, rather, is their different standard — much closer to the standard of everyday life — for saying that they know something is true. My favorite example in this vein comes from Leonid Levin, who tells me he couldn’t convince Richard Feynman that P versus NP was an open problem at all.

I believe Feynman was onto something, in that the only reason P versus NP is called an “open problem” is that we — the theoretical computer scientists and mathematicians — hold ourselves to a different standard of rigor than any other scientists. Were we less cautious, we could easily talk about the hardness of NP-complete problems as one of our great discoveries, a discovery for which working out the mathematical underpinnings admittedly remains as a challenge for future generations.

Ironically, our higher standard of rigor often gets turned against us, when outsiders use it to argue that we’re just guessing, or building castles in the sky, or making conjectures that could all turn out to be wrong. The same charges could obviously be leveled against the central hypotheses of physics or economics or pretty much any other field, but they rarely are — at least not by the same people.

I’m tired of double standards, is all I’m saying.

Logicians on safari

Thursday, November 2nd, 2006

Sean Carroll, whom many of you know from Cosmic Variance, asked the following question in response to my last entry:

I’m happy to admit that I don’t know anything about “one-way functions and interactive proofs.” So, in what sense has theoretical computer science contributed more in the last 30 years to our basic understanding of the universe than particle physics or cosmology? (Despite the fact that I’m a cosmologist, I don’t doubt your statement — I’d just like to be able to explain it in public.)

I posted my response as a comment, but it’s probably better to make it an entry of its own. So:

Hi Sean,

Thanks for your question!

Of course I was joking when I mentioned “objective standards” for ranking scientific fields. Depending on which questions keep you up at night, different parts of “humankind’s basic picture of the universe” will seem larger or smaller. (To say that, of course, is not to suggest any relativism about the picture itself.)

What I can do, though, is to tell you why — by my own subjective standards — the contributions of theoretical computer science over the last 30 years rival those of theoretical physics or any other field I know about. Of course, people will say I only think that because I’m a theoretical computer scientist, but that gets the causal arrow wrong: I became a theoretical computer scientist because, as a teenager, I thought it!

It’s probably best to start with some examples.

  1. We now know that, if an alien with enormous computational powers came to Earth, it could prove to us whether White or Black has the winning strategy in chess. To be convinced of the proof, we would not have to trust the alien or its exotic technology, and we would not have to spend billions of years analyzing one move sequence after another. We’d simply have to engage in a short conversation with the alien about the sums of certain polynomials over finite fields. (A toy version of that conversation, the “sum-check” protocol, is sketched right after this list.)
  2. There’s a finite (and not unimaginably large) set of boxes, such that if we knew how to pack those boxes into the trunk of your car, then we’d also know a proof of the Riemann Hypothesis. Indeed, every formal proof of the Riemann Hypothesis with at most (say) a million symbols corresponds to some way of packing the boxes into your trunk, and vice versa. Furthermore, a list of the boxes and their dimensions can be feasibly written down.
  3. Supposing you do prove the Riemann Hypothesis, it’s possible to convince someone of that fact, without revealing anything other than the fact that you proved it. It’s also possible to write the proof down in such a way that someone else could verify it, with very high confidence, having only seen 10 or 20 bits of the proof.
  4. If every second or so your computer’s memory were wiped completely clean, except for the input data; the clock; a static, unchanging program; and a counter that could only be set to 1, 2, 3, 4, or 5, it would still be possible (given enough time) to carry out an arbitrarily long computation — just as if the memory weren’t being wiped clean each second. This is almost certainly not true if the counter could only be set to 1, 2, 3, or 4. The reason 5 is special here is pretty much the same reason it’s special in Galois’ proof of the unsolvability of the quintic equation.
  5. It would be great to prove that RSA is unbreakable by classical computers. But every known technique for proving that would, if it worked, simultaneously give an algorithm for breaking RSA! For example, if you proved that RSA with an n-bit key took n^5 steps to break, you would’ve discovered an algorithm for breaking it in 2^(n^(1/5)) steps. If you proved that RSA took 2^(n^(1/3)) steps to break, you would’ve discovered an algorithm for breaking it in n^((log n)^2) steps. As you show the problem to be harder, you simultaneously show it to be easier.
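
Here’s the toy version of example 1’s “short conversation” promised above: the sum-check protocol, in which a verifier checks a claimed sum of a low-degree polynomial over all Boolean inputs while doing exponentially less work than the prover. (This sketch, with its toy parameters, is my own; the real chess result stacks more interactive-proof machinery on top of this idea.)

    # Sum-check: the prover claims a value for the sum of g over {0,1}^N_VARS;
    # the verifier checks it with N_VARS short rounds plus ONE evaluation of g.
    import random

    P = 2**61 - 1      # a large prime; all arithmetic is in the field GF(P)
    N_VARS = 10        # the sum ranges over 2**10 = 1024 Boolean inputs
    DEG = 2            # degree of g in each variable

    def g(xs):
        # an arbitrary low-degree polynomial in N_VARS variables
        return (sum((i + 1) * x * x for i, x in enumerate(xs)) + 7) % P

    def brute_sum(prefix):
        # prover's work: sum g over all Boolean settings of the free variables
        free = N_VARS - len(prefix)
        return sum(g(list(prefix) + [(m >> j) & 1 for j in range(free)])
                   for m in range(2 ** free)) % P

    def interp_at(ys, r):
        # Lagrange-evaluate the degree-DEG polynomial with values ys at 0..DEG
        total = 0
        for i, yi in enumerate(ys):
            num, den = 1, 1
            for j in range(len(ys)):
                if j != i:
                    num = num * (r - j) % P
                    den = den * (i - j) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    claim, rs = brute_sum([]), []          # prover announces the full sum
    for _ in range(N_VARS):
        ys = [brute_sum(rs + [t]) for t in range(DEG + 1)]  # prover's message
        assert (ys[0] + ys[1]) % P == claim                 # verifier's check
        r = random.randrange(P)                             # random challenge
        claim, rs = interp_at(ys, r), rs + [r]
    assert g(rs) == claim                  # final spot-check: one query to g
    print("sum verified:", brute_sum([]))

A cheating prover who lies about the sum has to keep lying about its restrictions, and each random challenge catches a lie with probability about 1 - DEG/P.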

Alright, let me stop before I get carried away. The examples I’ve listed (and hundreds more like them) are not exactly discoveries about physics, but they don’t have the flavor of pure math either. And even if they have some practical implications for computing (which they do), they certainly don’t have the flavor of nitty-gritty software engineering.

So what are they then? Maybe it’s helpful to think of them as “quantitative epistemology”: discoveries about the capacities of finite beings like ourselves to learn mathematical truths. On this view, the theoretical computer scientist is basically a mathematical logician on a safari to the physical world: someone who tries to understand the universe by asking what sorts of mathematical questions can and can’t be answered within it. Not whether the universe is a computer, but what kind of computer it is! Naturally, this approach to understanding the world tends to appeal most to people for whom math (and especially discrete math) is reasonably clear, whereas physics is extremely mysterious.

In my opinion, one of the biggest challenges for our time is to integrate the enormous body of knowledge in theoretical computer science (or quantitative epistemology, or whatever you want to call it) with the rest of what we know about the universe. In the past, the logical safari mostly stayed comfortably within 19th-century physics; now it’s time to venture out into the early 20th century. Indeed, that’s exactly why I chose to work on quantum computing: not because I want to build quantum computers (though I wouldn’t mind that), but because I want to know what a universe that allows quantum computers is like.

Incidentally, it’s also why I try hard to keep up with your field. If I’m not mistaken, less than a decade ago cosmologists made an enormous discovery about the capacity of finite beings to learn mathematical truths: namely, that no computation carried out in the physical world can ever involve more than 1/Λ ~ 10^122 bits.
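
The arithmetic behind that number is just inverting the cosmological constant in Planck units (a rough order-of-magnitude sketch of my own; the exact constant out front depends on conventions):

    # In Planck units the observed cosmological constant is roughly 1e-122;
    # the de Sitter horizon entropy -- and hence the number of bits any
    # computation in our universe could involve -- scales like 1/Lambda.
    LAMBDA = 1e-122                    # approximate value, conventions vary
    print(f"{1 / LAMBDA:.0e} bits")    # -> 1e+122 bits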

Best,
Scott

My daily dose of depression

Wednesday, November 1st, 2006

Yesterday’s Times ran an essay by Steve Lohr, based on speeches about the future of computing given by my former teachers Richard Karp and Jon Kleinberg. Though most of the essay is welcome and unobjectionable, let’s look at the first two paragraphs:

Computer science is not only a comparatively young field, but also one that has had to prove it is really science. Skeptics in academia would often say that after Alan Turing described the concept of the “universal machine” in the late 1930’s — the idea that a computer in theory could be made to do the work of any kind of calculating machine, including the human brain — all that remained to be done was mere engineering.

The more generous perspective today is that decades of stunningly rapid advances in processing speed, storage and networking, along with the development of increasingly clever software, have brought computing into science, business and culture in ways that were barely imagined years ago. The quantitative changes delivered through smart engineering opened the door to qualitative changes.

So, here are the two options on offer from the paper of record: either

  1. computer science was finished off by Alan Turing, or
  2. “stunningly rapid advances in processing speed, storage and networking” have reopened it just recently.

Even among the commenters on this post by Chad Orzel — which Dave Bacon forwarded to me with the subject line “bait” — awareness of any third possibility seems depressingly rare. Judging from the evidence, it’s not that people have engaged the mysteries of P versus NP, randomness and determinism, one-way functions and interactive proofs, and found them insufficiently deep. Rather, as bizarre as it sounds, it’s that people don’t know these mysteries exist — just as they wouldn’t know about black holes or the Big Bang if no one told them. If you want to understand why our subject — which by any objective standard, has contributed at least as much over the last 30 years as (say) particle physics or cosmology to humankind’s basic picture of the universe — receives a whopping $5 million a year from the NSF (with even that in constant danger), look no further.

Mistake of the Week: “X works on paper, but not in the real world”

Thursday, October 26th, 2006

Time again for Shtetl-Optimized’s Mistake of the Week series! This week my inspiration comes from a paper that’s been heating up the quantum blogosphere (the Blochosphere?): Is Fault-Tolerant Quantum Computation Really Possible? by M. I. Dyakonov. I’ll start by quoting my favorite passages:

The enormous literature devoted to this subject (Google gives 29300 hits for “fault-tolerant quantum computation”) is purely mathematical. It is mostly produced by computer scientists with a limited understanding of physics and a somewhat restricted perception of quantum mechanics as nothing more than unitary transformations in Hilbert space plus “entanglement.”

Whenever there is a complicated issue, whether in many-particle physics, climatology, or economics, one can be almost certain that no theorem will be applicable and/or relevant, because the explicit or implicit assumptions, on which it is based, will never hold in reality.

I’ll leave the detailed critique of Dyakonov’s paper to John Preskill, the Pontiff, and other “computer scientists” who understand the fault-tolerance theorem much better than a mere physicist like me. Here I instead want to take issue with an idea that surfaces again and again in Dyakonov’s paper, is almost universally accepted, but is nevertheless false. The idea is this: that it’s possible for a theory to “work on paper but not in the real world.”

The proponents of this idea go wrong, not in thinking that a theory can fail in the real world, but in thinking that if it fails, then the theory can still “work on paper.” If a theory claims to describe a phenomenon but doesn’t, then the theory doesn’t work, period — neither in the real world nor on paper. In my view, the refrain that something “works on paper but not in the real world” serves mainly as an intellectual crutch: a way for the lazy to voice their opinion that something feels wrong to them, without having to explain how or where it’s wrong.

“Ah,” you say, “but theorists often make assumptions that don’t hold in the real world!” Yes, but you’re sidestepping the key question: did the theorists state their assumptions clearly or not? If they didn’t, then the fault lies with them; if they did, then the fault lies with those practitioners who would milk a nonspherical cow like a spherical one.

To kill a theory (in the absence of direct evidence), you need to pinpoint which of its assumptions are unfounded and why. You don’t become more convincing by merely finding more assumptions to criticize; on the contrary, the “hope something sticks” approach usually smacks of desperation:

There’s no proof that the Earth’s temperature is rising, but even if there was, there’s no proof that humans are causing it, but even if there was, there’s no proof that it’s anything to worry about, but even if there was, there’s no proof that we can do anything about it, but even if there was, it’s all just a theory anyway!

As should be clear, “just a theory” is not a criticism: it’s a kvetch.

Marge: I really think this is a bad idea.
Homer: Marge, I agree with you — in theory. In theory, communism works. In theory.

Actually, let’s look at Homer’s example of communism, since nothing could better illustrate my point. When people say that communism works “in theory,” they presumably mean that it works if everyone is altruistic. But regulating selfishness is the whole problem political systems are supposed to solve in the first place! Any political system that defines the problem away doesn’t work on paper, any more than “Call a SAT oracle” works on paper as a way to solve NP-complete problems. Once again, we find the “real world / paper” distinction used as a cover for intellectual laziness.

Let me end this rant by preempting the inevitable cliché that “in theory, there’s no difference between theory and practice; in practice, there is.” Behold my unanswerable retort:

In theory, there’s no difference between theory and practice even in practice.

Newton vs. Leibniz: the wigs are off

Tuesday, October 17th, 2006

Of course, the greatest scientific flame war of all time was the calculus priority dispute between Isaac Newton and Gottfried Wilhelm Leibniz. This one had everything: intrigue, pettiness, hypocrisy, nationalism, and even hints of the physicist vs. computer scientist split that continues to this day.

In our opening talk at QIPC’2006 in London, Dave Bacon and I decided to relive the hatred — with Dave in a frilly white wig playing the part of Newton, and your humble blogger in a frilly black wig playing the part of Leibniz. We forgot to take photos, but here’s the script, and here are the slides for the … err, “serious” talk that Dave and I gave after dewigging.

Update (thanks to Dave and Viv Kendon): [photos]

The physicists and the wagon

Monday, June 19th, 2006

[Here's a little fable that I wrote today, while listening to a talk "showing" that a fault-tolerant quantum computer would need at least 100 physical qubits for every logical qubit. Physicists are welcome to shoot back with counter-fables, as are closet computer scientists like His Holiness.]

Update: The Pontiff has accepted my challenge and posted a counter-fable to his blog. I’ve replied in his comments section with a counter-counter-fable.

One day a group of physicists ran excitedly into the computer science building. “Guess what?” they cried. “You know how you’re always trying to prove lower bounds, but you almost never succeed? Well, today we proved a lower bound!”

“What did you prove?” asked the computer scientists.

“We proved that to pull a wagon through a forest, you need at least five oxen. It’s physically impossible to do it with four oxen or less, regardless of what other resources you have.”

“How did you prove that?”

“Well, we looked up the strength of a typical ox, the weight of a typical wagon, the size of every forest in a 30-mile radius…”

“Yeah, but what if you had an ox the size of a Brontosaurus? Or what if the forest was only two feet across? Or what if the wagon weighed less than a fingernail?”

The physicists snickered. “These are clearly unphysical assumptions. As long as you stay within a realistic region of parameter space, our impossibility proof is airtight.”

“Ah, but how do you know there couldn’t be some completely different method of pulling wagons — maybe even a method that’s not ox-based at all?”

“Look, we physicists are interested in the real world, not complexity-theory la-la land. And at least in the real world, when people want to pull wagons, oxen are what they use.”

The physicists weren’t heard from again until almost a decade later, when they once again barged into the CS building. “Guess what?” they cried. “We just discovered a loophole in the famous Five-Ox Theorem — the one we published years ago in Nature!”

“What’s the loophole?”

“Elephants! If you had an elephant pulling the wagon, you wouldn’t need any oxen at all. With hindsight it’s almost obvious, but what a paradigm shift it took!”

The computer scientists stared blankly.

“You see,” said the physicists. “This is why we never trust so-called impossibility proofs.”

The neologistas

Saturday, May 27th, 2006


Ever since I arrived at fellow blogger Dave Bacon’s house on Tuesday, the Pontiff and I have been tossing around ideas for a joint blog initiative. Finally we hit on something: since we’re both neologistas — people who enjoy spending their free time coining new words — we decided to compile a list of the neologisms we’d most like to see adopted by the general population. Without further ado:

shnood: (roughly) an imposter; a person oblivious to just how trivial or wrong his ideas are.

“Were there any interesting speakers at the conference?”
“No, just a bunch of shnoods.”

“The magazine New Scientist loves to feature shnoods on the cover.”

Note: someone who’s utterly contemptible would not be a shnood, but rather a schmuck.

iriterie: a list or compilation of people named Irit.

See the comments on the last post for an example of an iriterie.

extralusionary intelligence: intelligence in one domain that is misapplied in another.

“Bob’s a brilliant physicist — I bet he’s onto something with his condensed-matter approach to P versus NP.”
“No, he’s just suffering from extralusionary intelligence.”

circumpolitical: So far to one end of the political spectrum that one is actually on the other end.

“Professor Zimmerman mounted a circumpolitical defense of hereditary dictatorship, female genital mutilation, and the dragging of murdered homosexuals through the streets, arguing that we have no right to condemn these indigenous practices of non-Western peoples.”

philosonomicon: A philosophical prolegomenon.

Dave’s PhD thesis begins with a philosonomicon, as does mine.

high-hanging fruit: the opposite of low-hanging fruit.

“Do you ever think about the Nonabelian Hidden Subgroup Problem?”
“No, that’s high-hanging fruit. I like to watch other people jump for it.”

napotonin: any substance that makes you want to nap.

“Ohhhh … must’ve been a lot of napotonin in that calzone … can’t work … unnngghhhh”

nontrivia: the opposite of trivia.

“If you’re so smart, how come you’re no good at Trivial Pursuit?”
“Because I prefer to fill my brain with nontrivia.”

In an effort to speed up the adoption of these words by the Oxford English Dictionary, Dave and I hereby ask that every comment on this post correctly use at least one of them. Also, while you’re welcome to crack the obvious jokes (“Scott is a shnood,” “Dave suffers from extralusionary intelligence,” etc.), be aware that we’ve just preempted them.

My interview with Lance

Thursday, December 15th, 2005

Listen to the latest edition of Lance Fortnow’s ComplexityCast (“Complexity on Your iPod”) on Podcast or MP3.

The topic: “What Physicists Should Know About Computational Complexity”
Length: 22 minutes
Geekiness: High

I’m, uh, sorry about all the, you know, mumbling. Clearly I haven’t yet been media-trained.

It’s science if it bites back

Wednesday, November 9th, 2005

Is math a science? What about computer science? (A commenter on an earlier post repeated the well-known line that “no subject calling itself a science is one.”)

These are, at the same time, boring definitional disputes best left to funding agencies, and profound mysteries worthy of such intellects as Plato, Leibniz, and Gödel. In a recent comment on Peter Woit’s blog, the physicist John Baez — as usual — went straight to the heart of the matter:

“The problem of course is that in the standard modern picture, science is empirical, based on induction, and tends to favor a materialistic ontology, while mathematics is non-empirical, based on deduction, and tends to favor a Platonist/Pythagorean ontology… yet somehow they need each other! So, mathematics is not only the queen and handmaiden of the sciences – it’s the secret mistress as well, a source of romantic fascination but also some embarrassment.”

That 17 is prime strikes us as absolutely certain, yet there’s nothing in the physical world we can point to as the source of that certainty. (Seventeen blocks that can’t be arranged into a rectangle? Give me a break.) In that respect, math seems more like subjective experience than science: you might be wrong about the sky being blue, but you can’t be wrong about your seeing it as blue. Maybe this has something to do with mathematicians’ much-noted mystical tendencies: Pythagoras sacrificing a hundred oxen because the square root of 2 was irrational; Cantor naming infinite cardinalities using the Hebrew letter aleph, which represents the “infinite greatness of God” in Kabbalah; Erdös forswearing earthly pleasures to devote his life to the Book; Gödel updating St. Anselm’s proof of the existence of God; Penrose speculating that quantum gravity gives rise to consciousness. My favorite novel about mathematicians, Rebecca Goldstein’s The Mind-Body Problem, gets much of its mileage from this ancient connection. (For empirical types: according to a 1997 survey by Larson and Witham, ~40% of mathematicians say they believe in God, compared to 20% of physicists and 30% of biologists.)
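
(For anyone who insists on running the seventeen-blocks experiment anyway, here’s a sketch of mine: arranging n blocks into a rectangle with both sides at least 2 is the same thing as finding a nontrivial divisor of n.)

    # n blocks form an a-by-b rectangle with a, b >= 2 exactly when a is a
    # nontrivial divisor of n -- so "no rectangle" just means "n is prime".
    def rectangles(n):
        return [(a, n // a) for a in range(2, n) if n % a == 0]

    print(rectangles(17))   # [] -- the physical world's "evidence" that 17 is prime
    print(rectangles(18))   # [(2, 9), (3, 6), (6, 3), (9, 2)]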

And yet, if mathematicians are mystics during those rare late-night epiphanies when they first apprehend (or believe they’ve apprehended) a timeless thought of God, then they’re scientists through and through when it comes time to LaTeX that thought and post it to the arXiv. What makes me so sure of that? Mostly, that my 10th-grade chemistry teacher claimed the opposite.

To give you some background, this is a teacher whose hatred of curiosity and independent thought was renowned throughout the school district — who’d give her students detentions for showing up fifteen seconds after the bell — who’d flunk me on exams, even when I got the answers right, because I refused to write things like (1 mol)/(1 g) = 1 mol/g. Immediately after enduring her class, I dropped out of high school and went straight to college, picking up a G.E.D. along the way. For I had sworn to myself, while listening to this woman lecture, that the goal of my life was to become her antithesis: the living embodiment of everything she detested. Ten years later, I still haven’t wavered from that goal.

Which brings me to the term project in her class. We were supposed to interview a scientist — any scientist — and then write a detailed report about his or her work. I chose a mathematician at Bell Labs who did operations research. After I’d interviewed the guy and finished my project, the teacher ordered me to redo it from scratch with a different interviewee. Why? Because “mathematicians aren’t real scientists.” (To give some context, the teacher did accept a pharmacist, a physical therapist, and an architect as real scientists.)

Now, is it possible that my views about the epistemological status of mathematics are hopelessly colored by enmity toward my chemistry teacher? Yes, it is. But as far as I can tell, the refusal to count math and CS among the sciences has done some real damage, even outside the intellectual prison known as high school. Let’s consider a few examples:

  • The New York Times hardly ever runs a story about math or CS theory, but it runs the same story about cosmology and string theory every two weeks.
  • We all know the recipe for getting a paper published in Science or Nature: first gather up all your analytical results, and bury them in your yard. Then make some multicolored charts of Experimental Data, which suggest (at a 2σ level) the same conclusions you previously reached via the forbidden method of proving them true.
  • Philosophers like Wittgenstein have gotten away with saying arbitrarily dumb things, like “Mathematical propositions express no thoughts.” As my adviser Umesh Vazirani pointed out to me, the proper response to anyone who says that is: “Indeed, the mathematical propositions that you know express no thoughts.”
  • Many people seem to have the idea that, whereas scientists proceed by proposing theories and then shooting them down, mathematicians somehow proceed in a different, alien way. Which raises the question: what other way is there? Whenever I hear someone claim that “quantum computers are really just analog computers,” or “all cellular automata that aren’t obviously simple are Turing-complete,” I’m reminded that Popper’s notion of falsifiability is just as important in math and CS as in any other science.
  • Saddest of all, many mathematicians and computer scientists seem to reason that, because they can write their results up with something approaching Platonic rigor, it follows that they should. Thus we have the spectacle of math/CS papers that, were they chemistry papers, would read something like this: “First I took the test tube out of the cabinet. Then I rinsed it. Then I filled it with the solution. Then I placed it on the bunsen burner…” For whom are such papers written? The author’s high-school teacher? God? I would think it obvious that the goal of writing a math paper should be to explain your results in just enough detail that your colleagues can “replicate” them — not in their labs or their computers, but in their minds.

The bottom line, of course, is that math and CS are similar to biology and physics in the most important sense: they bite back. Granted, you might be sitting in your armchair when you do them, but at least you’re probably leaning forward in the armchair, scribbling on a piece of paper and willing to be surprised by what you find there.

This seems like an appropriate time to quote the distinguished American philosopher Dave Barry.

Here is a very important piece of advice: be sure to choose a major that does not involve Known Facts and Right Answers. This means you must not major in mathematics, physics, biology, or chemistry, because these subjects involve actual facts. If, for example, you major in mathematics, you’re going to wander into class one day and the professor will say: “Define the cosine integer of the quadrant of a rhomboid binary axis, and extrapolate your result to five significant vertices.” If you don’t come up with exactly the answer the professor has in mind, you fail. The same is true of chemistry: if you write in your exam book that carbon and hydrogen combine to form oak, your professor will flunk you. He wants you to come up with the same answer he and all the other chemists have agreed on. Scientists are extremely snotty about this.

And, since I can’t resist, here’s a classic joke.

The dean summons the physics department chair to his office. “You people are bankrupting us!” he fumes. “Why do you need all this expensive equipment? All the mathematicians ever ask for is pencils, paper, and erasers. And the philosophers are better still: they don’t even ask for erasers!”