## Archive for the ‘Mistake of the Week’ Category

### The Generalized Linial-Nisan Conjecture is false

Sunday, July 11th, 2010

In a post a year and a half ago, I offered a prize of \$200 for proving something called the Generalized Linial-Nisan Conjecture, which basically said that almost k-wise independent distributions fool AC0 circuits.  (Go over to that post if you want to know what that means and why I cared about it.)

Well, I’m pleased to report that that’s a particular \$200 I’ll never have to pay.  I just uploaded a new preprint to ECCC, entitled A Counterexample to the Generalized Linial-Nisan Conjecture.  (That’s the great thing about research: no matter what happens, you get a paper out of it.)

A couple friends commented that it was wise to name the ill-fated conjecture after other people rather than myself.  (Then again, who the hell names a conjecture after themselves?)

If you don’t feel like downloading the ECCC preprint, but do feel like scrolling down, here’s the abstract (with a few links inserted):

In earlier work, we gave an oracle separating the relational versions of BQP and the polynomial hierarchy, and showed that an oracle separating the decision versions would follow from what we called the Generalized Linial-Nisan (GLN) Conjecture: that “almost k-wise independent” distributions are indistinguishable from the uniform distribution by constant-depth circuits. The original Linial-Nisan Conjecture was recently proved by Braverman; we offered a \$200 prize for the generalized version. In this paper, we save ourselves \$200 by showing that the GLN Conjecture is false, at least for circuits of depth 3 and higher.
As a byproduct, our counterexample also implies that Π₂p ⊄ P^NP relative to a random oracle with probability 1. It has been conjectured since the 1980s that PH is infinite relative to a random oracle, but the best previous result was NP≠coNP relative to a random oracle.
Finally, our counterexample implies that the famous results of Linial, Mansour, and Nisan, on the structure of AC0 functions, cannot be improved in several interesting respects.
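Since the abstract compresses the statement, here is a loose paraphrase of what the conjecture asserted (my informal rendering; the parameters and precise quantifiers are the ones in the preprint, stated loosely):

```latex
% Loose paraphrase of the GLN Conjecture; precise quantifiers are in the preprint.
\textbf{GLN Conjecture (informal).} Call a distribution $\mathcal{D}$ over
$\{0,1\}^n$ \emph{$\epsilon$-almost $k$-wise independent} if its marginal
distribution on every subset of $k$ coordinates is $\epsilon$-close to the
corresponding marginal of the uniform distribution $\mathcal{U}_n$. Then for
every constant $d$, every polynomial-size circuit $C$ of depth $d$ satisfies
\[
  \Bigl| \Pr_{x \sim \mathcal{D}}[C(x)=1] \;-\; \Pr_{x \sim \mathcal{U}_n}[C(x)=1] \Bigr| \;=\; o(1)
\]
whenever $\mathcal{D}$ is $1/n^{\Omega(1)}$-almost $\mathrm{polylog}(n)$-wise
independent. The counterexample in the preprint refutes this for depth $d \ge 3$.
```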

To dispel any confusion, the \$200 prize still stands for the original problem that the GLN Conjecture was meant to solve: namely, giving an oracle relative to which BQP is not in PH.  As I say in the paper, I remain optimistic about the prospects for solving that problem by a different approach, such as an elegant one recently proposed by Bill Fefferman and Chris Umans.  Also, it’s still possible that the GLN Conjecture is true for depth-two AC0 circuits (i.e., DNF formulas).  If so, that would imply the existence of an oracle relative to which BQP is not in AM—already a 17-year-old open problem—and net a respectable \$100.

### Malthusianisms

Saturday, August 15th, 2009

Why, in real life, do we ever encounter hard instances of NP-complete problems?  Because if it’s too easy to find a 10,000-mile TSP tour, we ask for a 9,000-mile one.

Why are even some affluent parts of the world running out of fresh water?  Because if they weren’t, they’d keep watering their lawns until they were.

Why don’t we live in the utopia dreamed of by sixties pacifists and their many predecessors?  Because if we did, the first renegade to pick up a rock would become a Genghis Khan.

Why can’t everyone just agree to a family-friendly, 40-hour workweek?  Because then anyone who chose to work a 90-hour week would clean our clocks.

Why do native speakers of the language you’re studying talk too fast for you to understand them?  Because otherwise, they could talk faster and still understand each other.

Why is science hard?   Because so many of the easy problems have been solved already.

Why do the people you want to date seem so cruel, or aloof, or insensitive?  Maybe because, when they aren’t, you conclude you must be out of their league and lose your attraction for them.

Why does it cost so much to buy something to wear to a wedding?  Because if it didn’t, the fashion industry would invent more extravagant ‘requirements’ until it reached the limit of what people could afford.

Why do you cut yourself while shaving?  Because when you don’t, you conclude that you’re not shaving close enough.

These Malthusianisms share the properties that (1) they seem so obvious, once stated, as not to be worth stating, yet (2) whole ideologies, personal philosophies, and lifelong habits have been founded on the refusal to understand them.

Again and again, I’ve undergone the humbling experience of first lamenting how badly something sucks, then only much later having the crucial insight that its not sucking wouldn’t have been a Nash equilibrium.  Clearly, then, I haven’t yet gotten good enough at Malthusianizing my daily life—have you?

One might even go further, and speculate that human beings’ blind spot for this sort of explanation is why it took so long for Malthus himself (and his most famous disciple, Darwin) to come along.

### Mistake of the Week: “But even an X says so!”

Monday, March 17th, 2008

Consider the following less-than-hypothetical scenarios:

• Joseph Weizenbaum (who passed away two weeks ago), the MIT computer scientist who created the ELIZA chatbot in the 1960’s, spent the rest of his career decrying the evils of computer science research, holding (perhaps strangely) both that almost everything that’s done with computers could be done just as well without them, and that computers have made possible terrible things like missile guidance systems that now threaten our civilization.
• Distinguished mathematician Doron Zeilberger argues that mathematicians are wasting their time pursuing chimeras like “beauty” and “elegance,” and that within the near future, mathematics will be entirely the domain of computers.
• Ayaan Hirsi Ali, who was born into a Muslim family in Somalia, and who escaped from an arranged marriage after being forced to undergo FGM, tells Westerners they’re deluding themselves if they think current Islamic practices are compatible with Enlightenment values.
• John Browne, the Chief Executive of BP, tells the world that urgent action is needed on global warming.
• A former atheist stumps for Christianity (or vice versa).

The obvious question in all these cases is: how much extra credence does a person gain by belonging, or having once belonged, to the group he or she is criticizing? From a strict rationalist standpoint, the answer would seem to be zero: surely all that matters is the soundness of the arguments! Who cares if the keynote speaker at the anti-widget rally also happens to be past president of the Widget Club?

I can think of three possible reasons for giving extra credence to attacks from insiders:

1. The insider might simply know more about the realities of the situation than an outsider, or be less able to ignore those realities.
2. One assumes the insider is someone who’s at least grappled with the best arguments from her own side before rejecting them. (In FantasyLand, one could assume that anyone making an argument had first grappled with the best arguments from the opposing side, but FantasyLand≠Earth.)
3. When someone relentlessly attacks a single group of people — seeming to find them behind every perfidy on earth — history says to assume the worst about their motivations, and not to accept the refrain “I’m only criticizing them for their own good!” However, it’s possible that members of the group themselves should merit a pass in this regard. (Though even here there are exceptions: for example, if the person has renounced all ties with the despised group, or, as in the case of Bobby Fischer, refuses to accept the reality of his membership in it.)

On the other hand, I can think of five reasons why not to give extra credence to attacks from insiders:

1. Given any exotic mixture of beliefs and group affiliations, there’s almost certainly someone on earth who fits the description — and is even available for a fee to speak at your next event. If you want an accomplished scientist who sees science as an expensive sham or tool of the military, you can find one. If you want a former Republican hardliner who’s now a Naderite, you can find one. If you want a Jew who renounces Jews or Israel, you can find a stadium of them. So you can’t conclude anything from the mere existence of such people — at most, you can possibly learn something from their number.
2. Any group of people — computer scientists, CEO’s, Israelis, African-Americans — will consist (to put it mildly) of multiple factions, some of whom might seek to gain an advantage over the other factions by blasting their group as a whole before the outside world. So one can’t simply accept someone’s presentation of himself as a lone, courageous whistleblower, without first understanding the internal dynamics of the group he comes from and is criticizing.
3. The very fact that people within a group feel free to criticize it can in some cases speak well about the group’s tolerance for dissent, and thereby undermine some of the critics’ central claims. (Of course, one has to verify that the tolerated dissenters aren’t just a sham maintained by the ruling faction, as in Communist regimes throughout their history.)
4. Some people simply enjoy dissenting from their peers, as a way of proving their independence or of drawing attention to themselves.
5. Just as most people like to toot their own group’s horn, a few are masochistically biased toward the opposite extreme. We can all think of people who, for whatever deep psychological reasons, feel a constant need to repent the sins of themselves or their group, in a manner wildly out of proportion to any actual guilt. Granted, anyone can understand the conflict a physicist might feel over having participated in the Manhattan Project. On the other hand, when the invention you’re renouncing is the ELIZA chatbot, the question arises of whether you’ve earned the right to Faust-like penitence over the unspeakable evil you’ve unleashed.

So what’s my verdict? Belonging to the group you’re criticizing can give you one or two free starting chips at the table of argument, entitling you to a hearing where someone else wouldn’t be so entitled. But once you’ve sat down and entered the game, from then on you have to play by the same rules as everyone else.

### Ten Signs a Claimed Mathematical Breakthrough is Wrong

Saturday, January 5th, 2008

Yesterday several people asked my opinion of a preprint claiming to solve the Graph Isomorphism problem in deterministic polynomial time. I responded:

If I read all such papers, then I wouldn’t have time for anything else. It’s an interesting question how you decide whether a given paper crosses the plausibility threshold or not. For me personally, the AKS “PRIMES in P” paper somehow crossed it whereas this one somehow doesn’t.

Of course, I’d welcome an opinion from anyone who’s actually read the paper.

Three commenters wrote in to say the paper looked good. Then the author found a bug and retracted it.

Update (1/5): Laci Babai writes in to tell me that’s not quite what happened. See here for what did happen, and here for an argument that Friedland’s approach would, if sound, have implied P=NP.

My purpose here is not to heap embarrassment on the author: he’s a serious mathematician who had a well-defined and interesting approach, and who (most importantly) retracted his claim as soon as a bug was discovered. (Would that everyone did the same!) Though the stakes are usually smaller, similar things have happened to most of us, including me.

Instead I want to explore the following metaquestion: suppose someone sends you a complicated solution to a famous decades-old math problem, like P vs. NP. How can you decide, in ten minutes or less, whether the solution is worth reading?

For a blogger like me — whose opinions are both expected immediately and googlable indefinitely — this question actually matters. Err in one direction, and I’ll forever be known as the hidebound reactionary who failed to recognize some 21st-century Ramanujan. Err in the other direction, and I’ll spend my whole life proofreading the work of crackpots.

A few will chime in: “but if everyone wrote out their proofs in computer-checkable form, there’d be no need for this absurd dilemma!” Sure, and if everyone buckled up there’d be fewer serious accidents. Yet here’s the bloodied patient, and here we are in the emergency room.

In deciding whether to spend time on a paper, obviously the identity of the authors plays some role. If Razborov says he proved a superlinear circuit lower bound for SAT, the claim on our attention is different than if Roofus McLoofus says the same thing. But the danger of elitism is obvious here — so in this post, I’ll only be interested in what can be inferred from the text itself.

Inspired by Sean Carroll’s closely-related Alternative-Science Respectability Checklist, without further ado I now offer the Ten Signs a Claimed Mathematical Breakthrough is Wrong.

1. The authors don’t use TeX. This simple test (suggested by Dave Bacon) already catches at least 60% of wrong mathematical breakthroughs. David Deutsch and Lov Grover are among the only known false positives.

2. The authors don’t understand the question. Maybe they mistake NP≠coNP for some claim about psychology or metaphysics. Or maybe they solve the Grover problem in O(1) queries, under some notion of quantum computing lifted from a magazine article. I’ve seen both.

3. The approach seems to yield something much stronger and maybe even false (but the authors never discuss that). They’ve proved 3SAT takes exponential time; their argument would go through just as well for 2SAT.

4. The approach conflicts with a known impossibility result (which the authors never mention). The four months I spent proving the collision lower bound actually saved me some time once or twice, when I was able to reject papers violating the bound without reading them.

5. The authors themselves switch to weasel words by the end. The abstract says “we show the problem is in P,” but the conclusion contains phrases like “seems to work” and “in all cases we have tried.” Personally, I happen to be a big fan of heuristic algorithms, honestly advertised and experimentally analyzed. But when a “proof” has turned into a “plausibility argument” by page 47 — release the hounds!

6. The paper jumps into technicalities without presenting a new idea. If a famous problem could be solved only by manipulating formulas and applying standard reductions, then it’s overwhelmingly likely someone would’ve solved it already. The exceptions to this rule are interesting precisely because they’re rare (and even with the exceptions, a new idea is usually needed to find the right manipulations in the first place).

7. The paper doesn’t build on (or in some cases even refer to) any previous work. Math is cumulative. Even Wiles and Perelman had to stand on the lemma-encrusted shoulders of giants.

8. The paper wastes lots of space on standard material. If you’d really proved P≠NP, then you wouldn’t start your paper by laboriously defining 3SAT, in a manner suggesting your readers might not have heard of it.

9. The paper waxes poetic about “practical consequences,” “deep philosophical implications,” etc. Note that most papers make exactly the opposite mistake: they never get around to explaining why anyone should read them. But when it comes to something like P≠NP, to “motivate” your result is to insult your readers’ intelligence.

10. The techniques just seem too wimpy for the problem at hand. Of all ten tests, this is the slipperiest and hardest to apply — but also the decisive one in many cases. As an analogy, suppose your friend in Boston blindfolded you, drove you around for twenty minutes, then took the blindfold off and claimed you were now in Beijing. Yes, you do see Chinese signs and pagoda roofs, and no, you can’t immediately disprove him — but based on your knowledge of both cars and geography, isn’t it more likely you’re just in Chinatown? I know it’s trite, but this is exactly how I feel when I see (for example) a paper that uses category theory to prove NL≠NP. We start in Boston, we end up in Beijing, and at no point is anything resembling an ocean ever crossed.
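To make sign #3 concrete: the reason a "3SAT lower bound" that applies equally to 2SAT is fatal is that 2SAT is solvable in polynomial time, via the implication-graph method of Aspvall, Plass, and Tarjan. A minimal sketch of that algorithm (my illustration, not from the paper under discussion):

```python
# 2SAT in polynomial time via the implication graph plus strongly-connected
# components (Aspvall-Plass-Tarjan). Any claimed "exponential lower bound"
# argument that goes through unchanged for 2SAT must therefore be flawed.
# Literals: variable i in 1..n is the literal +i; its negation is -i.

def solve_2sat(n, clauses):
    """clauses: list of pairs (a, b) meaning (a OR b).
    Returns a satisfying assignment {var: bool}, or None if unsatisfiable."""
    def node(lit):  # map literal to a node index in 0..2n-1
        return 2 * (abs(lit) - 1) + (1 if lit < 0 else 0)

    graph = [[] for _ in range(2 * n)]
    rgraph = [[] for _ in range(2 * n)]
    for a, b in clauses:
        # (a OR b) is equivalent to (not a -> b) AND (not b -> a)
        for u, v in ((node(-a), node(b)), (node(-b), node(a))):
            graph[u].append(v)
            rgraph[v].append(u)

    # Kosaraju pass 1: iterative DFS on graph, record finish order.
    visited = [False] * (2 * n)
    order = []
    for s in range(2 * n):
        if visited[s]:
            continue
        visited[s] = True
        stack = [(s, iter(graph[s]))]
        while stack:
            u, it = stack[-1]
            advanced = False
            for v in it:
                if not visited[v]:
                    visited[v] = True
                    stack.append((v, iter(graph[v])))
                    advanced = True
                    break
            if not advanced:
                order.append(u)
                stack.pop()

    # Kosaraju pass 2: label SCCs on the reverse graph, in reverse finish order.
    comp = [-1] * (2 * n)
    c = 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            u = stack.pop()
            for v in rgraph[u]:
                if comp[v] == -1:
                    comp[v] = c
                    stack.append(v)
        c += 1

    assignment = {}
    for i in range(1, n + 1):
        pos, neg = node(i), node(-i)
        if comp[pos] == comp[neg]:
            return None  # x and not-x in the same SCC: unsatisfiable
        # Higher component id = later in topological order: set that literal true.
        assignment[i] = comp[pos] > comp[neg]
    return assignment
```

The whole thing runs in linear time in the number of clauses, which is exactly why an "exponential" argument insensitive to clause width cannot be right.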

Obviously, these are just some heuristics I’ve found successful in the past. (The nice thing about math is that sooner or later the truth comes out, and then you know for sure whether your heuristics succeeded.) If a paper fails one or more tests (particularly tests 6-10), that doesn’t necessarily mean it’s wrong; conversely, if it passes all ten that still doesn’t mean it’s right. At some point, there might be nothing left to do except to roll up your sleeves, brew some coffee, and tell your graduate student to read the paper and report back to you.

### Mistake of the Week: Explain Everything (Or Don’t Bother Explaining Anything)

Monday, November 26th, 2007

In today’s post I was going to announce the winners of my Unparadox Contest. But then I noticed the Lake Wobegon unparadox: if the total winnings are zero, then no one’s winnings are below average and in that sense, everyone’s a winner!

So instead of that, I thought I’d contribute to the general shnoodification of humankind, by discussing the same thing every other science blogger’s discussing: Paul Davies’s New York Times op-ed.

Over the years I have often asked my physicist colleagues why the laws of physics are what they are. The answers vary from “that’s not a scientific question” to “nobody knows.” The favorite reply is, “There is no reason they are what they are — they just are.” The idea that the laws exist reasonlessly is deeply anti-rational. After all, the very essence of a scientific explanation of some phenomenon is that the world is ordered logically and that there are reasons things are as they are. If one traces these reasons all the way down to the bedrock of reality — the laws of physics — only to find that reason then deserts us, it makes a mockery of science.

Now, I know Paul Davies: he took me out to a nice dinner in Iceland, and even quoted me next to Ludwig Wittgenstein in the epigraph of one of his papers. And I know for a fact that his views are much more nuanced than you’d think, if the above passage was all you were going on. I can assure you that, if his claim that physics without metaphysics is “a mockery of science” reminds you of those hooded monks from Monty Python and the Holy Grail, pounding their heads with wooden boards in between mystic incantations, then you’ve read his piece too superficially and have failed to grasp its subtler message.

But even so, reading his op-ed made me wonder: when did we, as a civilization, have a similar conversation before? Then I remembered: the early 1600’s!

Galileo: Hey, I’ve discovered that Jupiter has moons! And that objects in free fall follow parabolic trajectories! And that…

Jesuit schoolmen: Ah, foolish one, but you have told us nothing about the underlying causes of motion, or what it is that imbues the lunar bodies with their lunarity. Of what use are your so-called “explanations” if they rest on a foundation that is itself unexplained? One can hardly build a pyramid on sand!

One imagines the schoolmen feeling sorry for the naïve Galileo, with his rampant scientism and countless unexamined presuppositions. In their minds, if Galileo hadn’t explained everything then he hadn’t really explained anything — and hence they themselves (who had explained nothing) were the wiser by far.

Four hundred years after the scientific revolution, most people still think like the Jesuit schoolmen did:

How does a toaster work?

By converting electrical energy into heat.

But what is electricity?

The movement of electrons through a wire.

But what are electrons?

Fundamental particles with spin 1/2, negative charge, mass of 10⁻²⁷ grams…

But why do particles exist? Why does anything exist?

Well, those are excellent and profound questions, and you see…

Aha! Aha! So science doesn’t have all the answers! Ultimately, then, science is just another form of faith!

The schoolman glances at the intermediate steps — how a toaster works, what electricity is, what electrons are — and is not only profoundly unimpressed, but baffled and annoyed that anyone thinks he should be impressed. What are these so-called “answers” but irrelevant distractions from the Answer? What are they but the build-up to the punchline, stepping-stones on the road to the metaphysical abyss?

Science, in the schoolman’s mind, is just a massive con game: an attempt to distract people from the ultimate questions of essence by petty conjuring tricks like curing diseases or discovering the constituents of matter. Even pure math is part of the con: all Wiles did was reduce Fermat’s Last Theorem to some supposedly “self-evident” axioms. But why bother with such a reduction, if you can’t justify the axioms or the laws of logic themselves?

I frequently encounter the schoolmen even in my little corner of the world. People will ask: isn’t computational complexity theory a colossal failure, since all you ever do is prove “this problem is as hard as that other one,” or “this problem is hard relative to an oracle,” and never really prove anything is hard?

Let’s leave aside the factual misunderstandings — we can prove certain problems are hard, etc. etc. — and concentrate on the subtext, which is:

Don’t waste my time with the accumulated insights of the last half-century. If you haven’t solved the P versus NP problem — and you haven’t, right? — then aren’t you, ultimately, just as ignorant about computation as I am?

Of course, “does P=NP?” differs from “where do the laws of physics come from?” in that we know, at least philosophically, what an answer to the former question would look like. And yet, if complexity theorists ever do prove P≠NP, I’m guessing the schoolmen will switch immediately to saying that that was merely a technical result, and that it doesn’t even touch the real question, which is something else entirely.

The schoolmen’s philosophy leads directly to a fatalist methodology. What causes polio? If you say a virus, then you also have to explain what viruses are, and why they exist, and why the universe is such that viruses exist, and even why the universe itself exists. And if you can’t answer all of these questions, then your so-called “knowledge” rests on a foundation of arbitrariness and caprice, and you’re no better off than when you started. So you might as well say that polio is caused by demons.

Yet so long as the schoolmen are careful — and define the “ultimate explanation for X” in such a way that no actual discovery about X will ever count — their position is at least logically consistent. I’ll even confess to a certain sympathy with it. I’ll even speculate that most scientists have a smidgen of schoolman inside.

All I really object to, then, is the notion that tracing every question down to what Davies calls “the bedrock of reality” represents a new, exciting approach to gathering knowledge — one at the cutting edge of physics and cosmology. Say whatever else you want about the schoolman’s way, it’s neither new nor untried. For most of human history, it was the only approach tried.

### Mistake of the Week: Belief is King

Wednesday, February 14th, 2007

A couple days ago the Times ran a much-debated story about Marcus S. Ross, a young-earth creationist who completed a PhD in geosciences at the University of Rhode Island. Apparently his thesis was a perfectly-legitimate study of marine reptiles that (as he writes in the thesis) went extinct 65 million years ago. Ross merely disavows the entire materialistic paradigm of which his thesis is a part.

If you want some long, acrimonious flamewars about whether the guy’s PhD should be revoked, whether oral exams should now include declarations of (non)faith, whether Ross is a walking illustration of Searle’s Chinese Room experiment, etc., try here and here. Alas, most of the commentary strikes me as missing a key point: that to give a degree to a bozo like this, provided he indeed did the work, can only reflect credit on the scientific enterprise. Will Ross now hit the creationist lecture circuit, trumpeting his infidel credentials to the skies? You better believe it. Will he use the legitimacy conferred by his degree to fight against everything the degree stands for? It can’t be doubted.

But here’s the wonderful thing about science: unlike the other side, we don’t need loyalty oaths in order to function. We don’t need to peer into people’s souls to see if they truly believe (A or not(A)), or just assume it for practical purposes. We have enough trouble getting people to understand our ideas — if they also assent to them, that’s just an added bonus.

In his Dialogue Concerning the Two Chief World Systems, Galileo had his Salviati character carefully demolish the arguments for Ptolemaic astronomy — only to concede, in the final pages, that Ptolemaic astronomy must obviously be true anyway, since the church said it was true. Mr. G, of course, was just trying to cover his ass. The point, though, is that his ploy didn’t work: the church understood as well as he did that the evidence mattered more than the conclusions, and therefore wisely arrested him. (I say “wisely” because the church was, of course, entirely correct to worry that a scientific revolution would erode its temporal power.)

To say that science is about backing up your claims with evidence doesn’t go far enough — it would be better to say that the evidence is the claim. So for example, if you happen to prove the Riemann Hypothesis, you’re more than welcome to “believe” the Hypothesis is nevertheless false, just as you’re welcome to write up your proof in encrusted boogers or lecture about it wearing a live gerbil as a hat. Indeed, you could do all these things and still not be the weirdest person to have solved a Clay Millennium Problem. Believing your proof works can certainly encourage other people to read it, but strictly speaking is no more necessary than the little QED box at the end.

The reason I’m harping on this is that, in my experience, laypeople consistently overestimate the role of belief in science. Thus the questions I constantly get asked: do I believe the many-worlds interpretation? Do I believe the anthropic principle? Do I believe string theory? Do I believe useful quantum computers will be built? Never what are the arguments for and against: always what do I believe?

To explain why “belief” questions often leave me cold, I can’t do better than to quote the great Rabbi Sagan.

I’m frequently asked, “Do you believe there’s extraterrestrial intelligence?” I give the standard arguments — there are a lot of places out there, the molecules of life are everywhere, I use the word billions, and so on. Then I say it would be astonishing to me if there weren’t extraterrestrial intelligence, but of course there is as yet no compelling evidence for it.

Often, I’m asked next, “What do you really think?”

I say, “I just told you what I really think.”

“Yes, but what’s your gut feeling?”

But I try not to think with my gut. If I’m serious about understanding the world, thinking with anything besides my brain, as tempting as that might be, is likely to get me into trouble.

In my view, science is fundamentally not about beliefs: it’s about results. Beliefs are relevant mostly as the heuristics that lead to results. So for example, it matters that David Deutsch believes the many-worlds interpretation because that’s what led him to quantum computing. It matters that Ed Witten believes string theory because that’s what led him to … well, all the mindblowing stuff it led him to. My beef with quantum computing skeptics has never been that their beliefs are false; rather, it’s that their beliefs almost never seem to lead them to new results.

I hope nobody reading this will mistake me for a woo-woo, wishy-washy, Kuhn-wielding epistemic terrorist. (Some kind of intellectual terrorist, sure, but not that kind.) Regular readers of this blog will aver that I do have beliefs, and plenty of them. In particular, I don’t merely believe evolution is good science; I also believe it’s true. But as Richard Dawkins has pointed out, the reason evolution is good science is not that it’s true, but rather that it does nontrivial explanatory work. Even supposing creationism were true, it would still be too boring to qualify as science — as even certain creationists hunting for a thesis topic seem to agree.

Or anyway, that’s what I believe.

### Mistake of the Week: “X works on paper, but not in the real world”

Thursday, October 26th, 2006

Time again for Shtetl-Optimized’s Mistake of the Week series! This week my inspiration comes from a paper that’s been heating up the quantum blogosphere (the Blochosphere?): Is Fault-Tolerant Quantum Computation Really Possible? by M. I. Dyakonov. I’ll start by quoting my favorite passages:

The enormous literature devoted to this subject (Google gives 29300 hits for “fault-tolerant quantum computation”) is purely mathematical. It is mostly produced by computer scientists with a limited understanding of physics and a somewhat restricted perception of quantum mechanics as nothing more than unitary transformations in Hilbert space plus “entanglement.”

Whenever there is a complicated issue, whether in many-particle physics, climatology, or economics, one can be almost certain that no theorem will be applicable and/or relevant, because the explicit or implicit assumptions, on which it is based, will never hold in reality.

I’ll leave the detailed critique of Dyakonov’s paper to John Preskill, the Pontiff, and other “computer scientists” who understand the fault-tolerance theorem much better than a mere physicist like me. Here I instead want to take issue with an idea that surfaces again and again in Dyakonov’s paper, is almost universally accepted, but is nevertheless false. The idea is this: that it’s possible for a theory to “work on paper but not in the real world.”

The proponents of this idea go wrong, not in thinking that a theory can fail in the real world, but in thinking that if it fails, then the theory can still “work on paper.” If a theory claims to describe a phenomenon but doesn’t, then the theory doesn’t work, period — neither in the real world nor on paper. In my view, the refrain that something “works on paper but not in the real world” serves mainly as an intellectual crutch: a way for the lazy to voice their opinion that something feels wrong to them, without having to explain how or where it’s wrong.

“Ah,” you say, “but theorists often make assumptions that don’t hold in the real world!” Yes, but you’re sidestepping the key question: did the theorists state their assumptions clearly or not? If they didn’t, then the fault lies with them; if they did, then the fault lies with those practitioners who would milk a nonspherical cow like a spherical one.

To kill a theory (in the absence of direct evidence), you need to pinpoint which of its assumptions are unfounded and why. You don’t become more convincing by merely finding more assumptions to criticize; on the contrary, the “hope something sticks” approach usually smacks of desperation:

There’s no proof that the Earth’s temperature is rising, but even if there was, there’s no proof that humans are causing it, but even if there was, there’s no proof that it’s anything to worry about, but even if there was, there’s no proof that we can do anything about it, but even if there was, it’s all just a theory anyway!

As should be clear, “just a theory” is not a criticism: it’s a kvetch.

Marge: I really think this is a bad idea.
Homer: Marge, I agree with you — in theory. In theory, communism works. In theory.

Actually, let’s look at Homer’s example of communism, since nothing could better illustrate my point. When people say that communism works “in theory,” they presumably mean that it works if everyone is altruistic. But regulating selfishness is the whole problem political systems are supposed to solve in the first place! Any political system that defines the problem away doesn’t work on paper, any more than “Call a SAT oracle” works on paper as a way to solve NP-complete problems. Once again, we find the “real world / paper” distinction used as a cover for intellectual laziness.

Let me end this rant by preempting the inevitable cliché that “in theory, there’s no difference between theory and practice; in practice, there is.” Behold my unanswerable retort:

In theory, there’s no difference between theory and practice even in practice.

### Mistake of the Week: “The Future Is In X”

Sunday, September 24th, 2006

One of the surest signs of the shnood is the portentous repetition of the following two slogans:

Biology will be the physics of the 21st century.

The future of the world is in China and India.

Let me translate for you:

You know the field of Darwin, Pasteur, and Mendel, the field that fills almost every page of Science and Nature, the field that gave rise to modern medicine and transformed the human condition over the last few centuries? Well, don’t count it out entirely! This plucky newcomer among the sciences is due to make its mark. Another thing you shouldn’t count out is the continent of Asia, which is situated next to Europe. Did you know that China, far more than a source of General Tso’s Chicken, has been one of the centers of human civilization for 4,000 years? And did you know that Gandhi and Ramanujan both hailed from a spunky little country called India? It’s true!

Let me offer my own counterslogans:

Biology will be the biology of the 21st century.

The future of China and India is in China and India, respectively.

### Chasmgasm

Friday, August 25th, 2006

The most important research question in astronomy, to judge from the news websites, is neither the nature of dark matter and energy, nor the origin of the Pioneer anomaly or of ultra-high-energy cosmic rays beyond the GZK cutoff, nor the possible existence of Earth-like extrasolar planets. No, the big question is whether Pluto is “really” a planet, and if so, whether Charon and Ceres are “really” planets, and whether something has to be round to be a planet, and if so, how round.

I was going to propose we bring in Wittgenstein to settle this. But I guess the astronomers have already “ruled.”

Richard Dawkins often rails against what he calls the “tyranny of the discontinuous mind.” As far as I know, he’s not complaining about those of us who like our Hilbert spaces finite-dimensional and our quantum gravity theories discrete. Rather, he’s complaining about those who insist on knowing, for every humanoid fossil, whether it’s “really” human or “really” an ape. Ironically, it’s often the same people who then complain about the “embarrassing lack of transitional forms”!

Can anyone suggest a word for a person obsessed with drawing firm but arbitrary lines through a real-valued parameter space? (“Lawyer” is already taken.) I’ve already figured out the word for a debate about such lines, like the one we saw in Prague: chasmgasm.

### Mistake of the Week: The Unknown Unknown

Wednesday, June 21st, 2006

And how is not this the most reprehensible ignorance, to think that one knows what one does not know? But I, O Athenians! in this, perhaps, differ from most men; and if I should say that I am in any thing wiser than another, it would be in this, that not having a competent knowledge of the things in Hades, I also think that I have not such knowledge.

Shtetl-Optimized’s Mistake of the Week series finally resumes today, with what’s arguably the #1 mistake of all time. This one’s been noted by everyone from Defense Secretary Donald Rumsfeld, to some toga-wearing ancient dude, to the authors of the paper Unskilled and Unaware of It: How Difficulties In Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments.

Rather than give examples of this mistake — where would I start? where would I stop? how often have I made it myself? — I figured it’d be easier to give an example where someone didn’t make it. Today I received an email from a graduate student who had proved a quantum oracle separation, and wanted to know whether or not his result was too trivial to publish. I get fan mail, I get hate mail, I get crank mail, I get referee requests, but this is something I almost never see. After telling the student why his result was, indeed, too trivial to publish, I wrote:

There’s no shame in proving things that are already known, or that follow easily from what is known. Everyone does it, the more so when they’re just starting out … The very fact that you cared enough to ask me if your result is trivial bodes well for your proving something nontrivial.