## Ten Signs a Claimed Mathematical Breakthrough is Wrong

Yesterday several people asked my opinion of a preprint claiming to solve the Graph Isomorphism problem in deterministic polynomial time. I responded:

If I read all such papers, then I wouldn’t have time for anything else. It’s an interesting question how you decide whether a given paper crosses the plausibility threshold or not. For me personally, the AKS “PRIMES in P” paper somehow crossed it whereas this one somehow doesn’t.

Of course, I’d welcome an opinion from anyone who’s actually read the paper.

Three commenters wrote in to say the paper looked good. Then the author found a bug and retracted it.

**Update (1/5):** Laci Babai writes in to tell me that’s not quite what happened. See here for what *did* happen, and here for an argument that Friedland’s approach would, if sound, have implied P=NP.

My purpose here is not to heap embarrassment on the author: he’s a serious mathematician who had a well-defined and interesting approach, and who (most importantly) retracted his claim as soon as a bug was discovered. (Would that everyone did the same!) Though the stakes are usually smaller, similar things have happened to most of us, including me.

Instead I want to explore the following metaquestion: *suppose someone sends you a complicated solution to a famous decades-old math problem, like P vs. NP. How can you decide, in ten minutes or less, whether the solution is worth reading?*

For a blogger like me — whose opinions are both expected immediately and googlable indefinitely — this question actually matters. Err in one direction, and I’ll forever be known as the hidebound reactionary who failed to recognize some 21st-century Ramanujan. Err in the other direction, and I’ll spend my whole life proofreading the work of crackpots.

A few will chime in: “but if everyone wrote out their proofs in computer-checkable form, there’d be no need for this absurd dilemma!” Sure, and if everyone buckled up there’d be fewer serious accidents. Yet here’s the bloodied patient, and here we are in the emergency room.

In deciding whether to spend time on a paper, obviously the identity of the authors plays some role. If Razborov says he proved a superlinear circuit lower bound for SAT, the claim on our attention is different than if Roofus McLoofus says the same thing. But the danger of elitism is obvious here — so in this post, I’ll only be interested in what can be inferred from the text itself.

Inspired by Sean Carroll’s closely-related Alternative-Science Respectability Checklist, without further ado I now offer the *Ten Signs a Claimed Mathematical Breakthrough is Wrong*.

**1. The authors don’t use TeX.** This simple test (suggested by Dave Bacon) already catches at least 60% of wrong mathematical breakthroughs. David Deutsch and Lov Grover are among the only known false positives.

**2. The authors don’t understand the question.** Maybe they mistake NP≠coNP for some claim about psychology or metaphysics. Or maybe they solve the Grover problem in O(1) queries, under some notion of quantum computing lifted from a magazine article. I’ve seen both.

**3. The approach seems to yield something much stronger and maybe even false (but the authors never discuss that).** They’ve proved 3SAT takes exponential time; their argument would go through just as well for 2SAT.

**4. The approach conflicts with a known impossibility result (which the authors never mention).** The four months I spent proving the collision lower bound actually saved me some time once or twice, when I was able to reject papers violating the bound without reading them.

**5. The authors themselves switch to weasel words by the end.** The abstract says “we show the problem is in P,” but the conclusion contains phrases like “seems to work” and “in all cases we have tried.” Personally, I happen to be a big fan of heuristic algorithms, honestly advertised and experimentally analyzed. But when a “proof” has turned into a “plausibility argument” by page 47 — *release the hounds!*

**6. The paper jumps into technicalities without presenting a new idea.** If a famous problem could be solved only by manipulating formulas and applying standard reductions, then it’s overwhelmingly likely someone would’ve solved it already. The exceptions to this rule are interesting precisely because they’re rare (and even with the exceptions, a new idea is usually needed to *find* the right manipulations in the first place).

**7. The paper doesn’t build on (or in some cases even refer to) any previous work.** Math is cumulative. Even Wiles and Perelman had to stand on the lemma-encrusted shoulders of giants.

**8. The paper wastes lots of space on standard material.** If you’d really proved P≠NP, then you wouldn’t start your paper by laboriously defining 3SAT, in a manner suggesting your readers might not have heard of it.

**9. The paper waxes poetic about “practical consequences,” “deep philosophical implications,” etc.** Note that most papers make exactly the opposite mistake: they never get around to explaining why anyone should read them. But when it comes to something like P≠NP, to “motivate” your result is to insult your readers’ intelligence.

**10. The techniques just seem too wimpy for the problem at hand.** Of all ten tests, this is the slipperiest and hardest to apply — but also the decisive one in many cases. As an analogy, suppose your friend in Boston blindfolded you, drove you around for twenty minutes, then took the blindfold off and claimed you were now in Beijing. Yes, you do see Chinese signs and pagoda roofs, and no, you can’t immediately disprove him — but based on your knowledge of both cars and geography, isn’t it more likely you’re just in Chinatown? I know it’s trite, but this is exactly how I feel when I see (for example) a paper that uses category theory to prove NL≠NP. We start in Boston, we end up in Beijing, and at no point is anything resembling an ocean ever crossed.

Obviously, these are just some heuristics I’ve found successful in the past. (The nice thing about math is that sooner or later the truth comes out, and then you know for sure whether your heuristics succeeded.) If a paper fails one or more tests (particularly tests 6-10), that doesn’t necessarily mean it’s wrong; conversely, if it passes all ten, that still doesn’t mean it’s right. At some point, there might be nothing left to do except to roll up your sleeves, brew some coffee, and tell your graduate student to read the paper and report back to you.

Comment #1 January 5th, 2008 at 1:14 am

Cliff Taubes also does not use TeX, and has more than his share of breakthroughs (in topology). I’m sure there are more examples.

Comment #2 January 5th, 2008 at 1:19 am

Scott doesn’t want to do it, but I’ll offer my perspective on the recent errant proof that GI is in P. This paper “suffers” from some mix of 6, 7, and 10. While it does mention the previous approaches to GI for special classes of graphs, it does not discuss the barriers that stopped others from proving GI is in P and how it intends to overcome them. The reason (10) applies is that the author is pursuing an approach that countless people must have tried in the 70s.

Comment #3 January 5th, 2008 at 1:33 am

For something with some publicity, just wait a week or two; if it’s lasted that long, it’s more likely to be wrong in a more interesting way.

Comment #4 January 5th, 2008 at 1:53 am

Dylan: Thanks! I inserted the weasel word “among.”

Comment #5 January 5th, 2008 at 2:20 am

Dear Scott,

Would you be so kind to give me your feedback on my latest draft entitled: “A Proof of Goldbach’s conjecture: Practical consequences for quantum teleportation and deep philosophical implications for the square root of 6.”

I will gladly send you a copy. I trust Wordstar format will be sufficient?

Kindest regards,

RM

Comment #6 January 5th, 2008 at 2:39 am

If such a paper doesn’t contain acknowledgments thanking one or two well-chosen experts for looking at a preliminary version of the paper, then I’m skeptical. (At least for pure math papers; YMMV in other fields.)

Comment #7 January 5th, 2008 at 2:57 am

Thanks, Dave! Two other signs I forgot to list earlier:

11. The paper never discusses barriers encountered in previous work and how the new approach overcomes them. (Thanks to Otto for reminding me of this — though the Supreme Court might rule that sign 11 is already in the “penumbra” cast by signs 4, 6, 7, and 10.)

12. The paper fails to distinguish clearly between an algorithm and its analysis, or a claim and its proof.

Comment #8 January 5th, 2008 at 5:40 am

“A few will chime in: ‘but if everyone just wrote out their proofs in computer-checkable form, there’d be no need for this absurd dilemma!’”

Huh. Is this even seriously an option for any modern nontrivial problem?

Comment #9 January 5th, 2008 at 6:31 am

“At some point, there might be nothing left to do except to roll up your sleeves, brew some coffee, and tell your graduate student to read the paper and report back to you.”

:))))))

GADOL!!!

Comment #10 January 5th, 2008 at 9:58 am

@Coin: there is a trend in the programming language community to write papers in such a way that the theorems can be automatically checked by a proof assistant. There’s a tutorial at this year’s Principles of Programming Languages on “How To Write Your Next Paper in Coq”.

http://www.cis.upenn.edu/~plclub/popl08-tutorial/

The theorems for which this tutorial is suited are limited, of course, but it can be done. In general it seems that programming language researchers are more interested in formalization of this type than theorists.

For more general mathematics, the Mizar project has been plugging away at machine-checkable proofs of natural mathematical statements since 1973.

http://mizar.uwb.edu.pl/

Here’s an example of what a Mizar article on Euler’s polyhedron formula looks like, from “The Journal of Formalized Mathematics”:

http://mizar.uwb.edu.pl/fm/fm16-1/polyform.pdf
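(A toy illustration of my own, not drawn from Mizar or the Coq tutorial: in a proof assistant such as Lean 4, even a trivial statement carries a proof term that the kernel checks mechanically, so no human referee is needed for that step.)

```lean
-- Toy example (not from Mizar or Coq): a machine-checked statement in Lean 4.
-- The kernel verifies the proof term `Nat.add_comm a b` against the claim.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```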

Comment #11 January 5th, 2008 at 10:24 am

“How To Write Your Next Paper in Coq”

I find this a remarkably sexist tutorial.

Comment #12 January 5th, 2008 at 10:57 am

In regards to Otto’s comments, “PRIMES is in P” violates 6, 7, and 10 as well.

Comment #13 January 5th, 2008 at 10:59 am

“The authors don’t use TeX. … David Deutsch and Lov Grover are the only known false positives.”

Scott,

Unfamiliar with this — what were the false positives to which you refer?

Comment #14 January 5th, 2008 at 11:47 am

Interesting list; these things are always useful for those of us who are on the outside, or one foot in, one foot out in my case (but five unread and uncited published papers in good Physics journals don’t count so much). Each list contributes a little to the self-criticism cycle.

A few notes for academics:

(1) Look at your peers, and observe that many of them are publishing many articles, sometimes in Phys. Rev. Lett., etc., but not necessarily going anywhere fast. Not admitting failings is not less of a problem in one place than in another.

(2) The person on the outside wants to be on the inside. Perhaps best not to look too content with how warm it is inside and how nice the conferences are, etc. Sackcloth and ashes over your teaching and administrative load is more becoming (I’m married to an academic, so I know it’s true, at least for academics who don’t shamelessly avoid teaching loads and departmental responsibilities because their research is far more important than other academics’ research — see (1)).

(3) It is striking that in response to letters from someone unknown (me), almost all academics don’t reply. There are some, a very few, who send a one-line polite reply saying that they don’t have time — thank you! My praise is unbounded for those of you who send a substantive two-line comment. I’m sorry I send a 60-line reply to your comment, but you don’t have to reply to that; it’s just part of the process of understanding your point.

The above looks a little too trite. If anyone knows of a carefully thought out list of decent behaviors for academics who deal with cranks often, perhaps they could link to it. There are quite a few guides for cranks who have to deal often with academics. Here’s another:

Notes to my peers:

(1) You’re almost all crazy.

(2) Sean is almost right about almost everything.

(3) LaTeX is easy enough, and it’s free. It’s a first step.

(4) 10 years, 20 years, 40 years on a project does not make it right. 10,000 man years on string theory doesn’t make it right (see academics (1)). Your project might be right, but time served says nothing.

(5) In all probability, your project will come to nothing, like mine, maybe. Work on being cool with that. Belief in what you’re doing is not incompatible with being cool.

(6) When you break in, please be kind to me. I’ll try to do the same for you. Expect the one-line polite reply, but I’ll try hard for the two-liner.

(7) Academics are smart! Really they are! Some of them are smart, anyway. I’ve found some who I like, even. Do you want to be in with a bunch of losers?

(8) The acid test is not whether someone will read a paper, not whether they will comment on it or think it’s interesting, but whether they can see a way to use its methods in their own work. A reader who can use a paper deserves respect.

PS. I thought two weeks ago that my J. Math. Phys. article, which has since appeared, might have a nice enough abstract and be a clear enough account that physicists would read and understand the paper, but apparently it isn’t. The title isn’t right, I now think, amongst other things. Being unknown, so that no one reads my papers, no one tells me where they’re bad and how bad they are, and I hardly make progress, sucks — but I’m thankfully past that; I’ve had a few useful responses, clarifying a little what the sticking points are. I hope there’ll be more. After 15 years of hitting my head, slowly and deliberately, against a wall (see peers (4)), I will produce another paper, which perhaps will be clearer.

Comment #15 January 5th, 2008 at 11:52 am

Too many blogs: For Sean, read Scott. Big embarrassed Sorry! Sean Carroll’s list is of course also almost right about almost everything. I wonder how academics understand the crank experience well enough to give advice?

Comment #16 January 5th, 2008 at 1:19 pm

Peter:

I actually spend a lot of my time answering unsolicited email from all sorts of people, but maybe I’m weird that way! Here’s my advice if you want to email an academic and get a substantive response back:

(1) Don’t spam dozens of people.

(2) Make it clear why you’re contacting this person and not someone else (reference something specific in his or her work).

(3) Don’t send a long manifesto or paper to be evaluated. Keep it to a couple paragraphs (giving a URL for more details is OK).

(4) Ask a clear question, rather than trying to start an open-ended conversation.

(5) Don’t send an itemized list of questions.

(6) Don’t forward a complicated exchange with someone else.

(7) Don’t write as if you’ve been corresponding with the person for a long time, if you haven’t.

(8) Don’t ask for something like a fellowship. No one could be in a position to offer you that after reading one email.

(9) Follow ordinary rules of politeness (express gratitude for the person’s time, no insults, etc.).

(Now that I think about it, with the exception of (4) these could all also be rules for asking someone on a date. 🙂 )

“I wonder how academics understand the crank experience well enough to give advice?”

Experience, my friend, experience.

Comment #17 January 5th, 2008 at 1:22 pm

Michael: I just meant that Deutsch and Grover have written breakthrough papers, but inexplicably don’t use TeX.

Comment #18 January 5th, 2008 at 1:35 pm

“In regards to Otto’s comments, ‘PRIMES is in P’ violates 6, 7, and 10 as well.”

Deja vu: I actually disagree with that assessment. When I first read the PRIMES in P paper, I saw a big new idea (derandomizing primality testing by derandomizing a special case of polynomial identity testing), a reason why no one might have thought of that before (people had only started recently to think seriously about derandomizing polynomial identity testing), and the building on of nontrivial work in number theory (like Fouvry’s estimate). And PRIMES had been hanging above P by such a thin thread that I was ready to believe it might even work.

Comment #19 January 5th, 2008 at 1:49 pm

koko: Is “GADOL” some variant of LOL or ROTFL that I’ve never seen? (I know it’s the Hebrew word for big, and that meaning was all I got on Google.)

Comment #20 January 5th, 2008 at 2:06 pm

Do authors usually typeset their own docs, or do they hand that task off to someone else?

Comment #21 January 5th, 2008 at 2:15 pm

Scott: “Gadol” does literally mean big, but it’s used in slang as appreciation for a good joke, or something along those lines.

Comment #22 January 5th, 2008 at 2:15 pm

Job: Before the age of TeX, they used to hand off the typesetting to other people, but today (at least in math, CS, and physics) they almost always do it themselves.

Comment #23 January 5th, 2008 at 2:25 pm

“Deja vu: I actually disagree with that assessment. When I first read the PRIMES in P paper, I saw a big new idea (derandomizing primality testing by derandomizing a special case of polynomial identity testing), a reason why no one might have thought of that before (people had only started recently to think seriously about derandomizing polynomial identity testing), and the building on of nontrivial work in number theory (like Fouvry’s estimate). And PRIMES had been hanging above P by such a thin thread that I was ready to believe it might even work.”

There’s another thing, though, about that paper — the techniques used there are fairly elementary; except for some blackboxed results, it’s all stuff from a first graduate course in algebra. And it’s clear immediately that the authors have a strong grasp of that material and are using it in meaningful ways. That paper singularly convinced me of how useful it can be that a finite multiplicative subgroup of the units of a ring is cyclic. The writing in PRIMES in P is clear, not obtuse at all. I think this is a good example of the positive that goes with the negative: there are things you can do to make yourself seem credible as well, not just crackpot indicators to avoid.

Comment #24 January 5th, 2008 at 3:21 pm

“Now that I think about it, with the exception of (4) these could all also be rules for asking someone on a date. 🙂”

Thanks for the tips, Scott :)

Comment #25 January 5th, 2008 at 3:51 pm

I’m surprised by the fact that no less a researcher than Babai was the first to find the flaw. He says,

“The extraordinary interest compelled me to set everything else aside and study the paper without further delay.”

Did Laszlo fail to apply your heuristics as well as you? You saved yourself a few days of work, while he did not. With any luck, those saved days will be spent composing ever more interesting blog posts about dealing with cranks. 😛

Gus

Comment #26 January 5th, 2008 at 4:09 pm

Is the argument that “GI in P implies P = NP” evidence that GI is at least NP-Complete?

Comment #27 January 5th, 2008 at 4:23 pm

I mean NP-Hard.

Comment #28 January 5th, 2008 at 4:25 pm

Job: No, it’s not that GI in P implies P=NP; it’s just that Friedland’s specific approach would’ve implied that.

It’s clear that GI is in NP, but we have very strong evidence that it’s *not* NP-complete. (E.g., if it’s NP-complete then the polynomial hierarchy collapses.)

Comment #29 January 5th, 2008 at 4:40 pm

So the general consensus is that GI is probably in P? I wasn’t aware of that.

Comment #30 January 5th, 2008 at 4:45 pm

Job, there’s not really a consensus either way. It might be in P or it might be NP-intermediate. One reason for suspecting it’s in P is that in practice, it seems really difficult to find non-isomorphic graphs that are hard to distinguish. But maybe we just don’t know where to look for them.

(There’s also the question of whether GI is in BQP. Again there’s no consensus.)
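(To make the “easy to distinguish in practice” point concrete, here is a minimal Python sketch of my own, not from the thread: the sorted degree sequence is a cheap invariant that never separates isomorphic graphs, yet already separates many non-isomorphic ones.)

```python
# Toy sketch (mine, not from the thread): the sorted degree sequence as a
# cheap graph-isomorphism invariant.

def degree_sequence(n, edges):
    """Sorted degree sequence of a graph on vertices 0..n-1: O(n + m)."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sorted(deg)

def relabel(edges, perm):
    """Apply a vertex relabeling, producing an isomorphic copy."""
    return [(perm[u], perm[v]) for u, v in edges]

path = [(0, 1), (1, 2), (2, 3)]  # path on 4 vertices
star = [(0, 1), (0, 2), (0, 3)]  # star on 4 vertices

# Relabeling never changes the invariant...
assert degree_sequence(4, path) == degree_sequence(4, relabel(path, [2, 0, 3, 1]))
# ...but it already separates these two non-isomorphic graphs.
assert degree_sequence(4, path) != degree_sequence(4, star)
```

The hard instances for GI are exactly the ones where invariants like this (and their stronger refinements) collide on non-isomorphic graphs; the point above is that such pairs seem difficult to find in practice.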

Comment #31 January 5th, 2008 at 5:04 pm

So GI’s like factorization, except it’s not in coNP.

Comment #32 January 5th, 2008 at 5:17 pm

It is not known whether GI is in coNP.

Comment #33 January 5th, 2008 at 5:19 pm

Well, GI is in coAM, meaning that under a plausible derandomization hypothesis, it *is* in coNP.

Compared to factoring, I would say that GI is

(1) much easier to solve in practice (random graphs are easy to distinguish, whereas random n-bit integers are hard to factor), but

(2) much harder to base cryptosystems on or invent quantum algorithms for. (One reason is that, while factoring is naturally associated with the cyclic group, GI is naturally associated with the much more complicated symmetric group.)

Comment #34 January 5th, 2008 at 6:30 pm

Rule #10 is related to the “law of conservation of difficulty” that Terry Tao jokingly enunciates here. If one is proving a genuinely non-trivial result then the proof cannot work by simply translating to a different setting and applying standard approaches there. The non-trivial work must be done somewhere.

Typically, I find P!=NP papers easy to review. After all the bizarre notation and irrelevant steps are set out, there always comes the point of “there is an exponential number of solutions, and it takes at least constant time to rule out each solution,” or something similar that either obviously relativizes or obviously proves exponential lower bounds for 2SAT (typically, both).

(Actually, I remember the case of one such P!=NP proof that also proves 2SAT requires exponential time. The author replied to the review: “no, no, I *proved in a lemma* that it takes at least constant time to rule out each possible solution (from a certain exponential-size set) of 3SAT, but the same lemma is false for 2SAT because there is a polynomial-time algorithm!”)

P=NP papers, however, can be tricky. Thankfully, I am never asked to review them.

Comment #35 January 6th, 2008 at 2:01 am

Am I correct in recalling that subgraph isomorphism, however, is NP-complete? Or is that just something I imagined one day?

Comment #36 January 6th, 2008 at 3:10 am

Hey Scott, if I wrote a paper which claimed to show a superlinear lower bound on an artificial problem in P using a technique which avoided naturalization limits by using a super-ackermannian function, and thus required ZFC and couldn’t be expressed in PA, would that set off your crank-o-meter?

(I have no such result, but hey, I can dream. Actually, I think my next step towards the likely futile task of showing a crack in lower bound limits is to understand relativization, which I’ve never quite grokked, and then algebrization. I’ve given up on showing P != NP, but I still have a dream of finding the Big Idea which allows progress to get started, and am increasingly convinced that such an idea is both there and HSSU once found.)

Comment #37 January 6th, 2008 at 3:15 am

Anonymous: That’s correct; subgraph isomorphism is NP-complete. It’s an interesting fact that, if you break the “algebraic structure” of GI in almost any way, you immediately get an NP-complete problem.

Comment #38 January 6th, 2008 at 3:47 am

Bram, my crank-o-meter doesn’t even give a reading until I’ve seen the actual paper for 30 seconds or so (or found out that there isn’t one).

For what it’s worth, though, I’d be astonished if any interesting circuit lower bound were provable in ZFC but not PA. Almost everything we care about in discrete math and CS can be proved not only in PA, but in very weak fragments of it.

Also, someone correct me if I’m wrong, but I think the situation is this: we know that any lower bound formalizable in certain weak fragments of PA (like S_2^2) naturalizes. But we don’t know the other direction, and indeed it’s far from obvious.

Assuming that’s correct, to escape the natural proofs barrier *it’s not enough for your proof to require ZFC*. For even then, there might still be a polynomial-time “hard-vs.-easy” distinguishing algorithm extractable from your proof. (Though maybe that algorithm would require ZFC for its analysis.)

Comment #39 January 6th, 2008 at 4:24 am

To be clear, I think requiring ZFC would be a side-effect, rather than a central component, of such an approach. I’m working by analogy with some Ramsey theory results, which basically wind up requiring ZFC by going completely nuts with fast-growing functions (some of them are for theorems which can even be expressed in PA), and to escape naturalization you apparently have to go kind of nuts with the fast-growing functions, so it wouldn’t surprise me in the least if meaty lower bound results were similar, because those have the same ‘feel’.

By the way, is there a decent introductory explanation of relativization somewhere? It wasn’t covered in any of the textbooks I’ve read, and web searching doesn’t turn up anything particularly introductory.

My own experiences with being an ‘outsider’ and occasionally doing some academic-worthy work are a bit interesting. Oded Regev actually contacted me about a cryptosystem I made up which he thought was noteworthy (not necessarily ‘good’, just ‘noteworthy’, but I am proud to say still unbroken) and I pointed out some things about it to him which he hadn’t thought of because he doesn’t study protocol implementations much (those things had already been thought of by other people, but the pointing out I consider a worthwhile effect by itself).

More typically when I do worthwhile theoretical work it turns out to be a rediscovery of something unknown at the time I dropped out of college but figured out in the interim. My ‘unrelated xors’ conjecture was at least original and interesting to the author of this blog though.

Of course I’m not exactly an ‘outsider’ in that there’s a whole field of study related to my work, and were I in academia I’d probably be drowning in requests to review things. This would probably result in a very high reject rate but a big improvement to the quality of work in the field. It almost makes me tempted to volunteer to review papers. Maybe someday in my Copious Free Time.

Comment #40 January 6th, 2008 at 5:03 am

Bram, I don’t know where you got the idea that escaping naturalization requires “going nuts with the fast-growing functions,” but it’s wrong. Circuit lower bounds like PP⊄SIZE(n^k) and MA_EXP⊄P/poly don’t naturalize, but also don’t involve any functions bigger than exponential.

Googling “introduction to relativization” turned up this from Madhu Sudan. Also try this from Goldreich and this from Arora and Barak.

Comment #41 January 6th, 2008 at 5:17 am

One aspect of “PRIMES is in P” that made it very likely to be worth reading is that its senior author, Manindra Agrawal, had just a couple of years previously (with Somenath Biswas) produced a very simple and clever algorithm that gave an alternate proof that PRIMES is in coRP; the PRIMES is in P paper could easily be seen to be using a related approach to this problem.

Comment #42 January 6th, 2008 at 6:34 am

Silly me, I searched for ‘relativization’ but not ‘introduction to relativization’. Thanks Scott for the links, I *think* I understand the one from Goldreich. Is it fair to say that relativization basically means that if a technique would show P != NP relative to any oracle then it’s necessarily busted, because P = NP relative to an oracle which is PSPACE complete? I feel a little not-dumb for not knowing that because on a mailing list devoted to cryptography some discussion of the possible scariness of quantum algorithms came up, and I hypothesized that for any oracle there’s always an encryption algorithm it enables which it can’t be used to break, and not one person said ‘Duh Bram, if the oracle is PSPACE complete that’s totally not true, haven’t you ever read about relativization?’

And er, yeah, by ‘requires’ I meant ‘it sure seems to me like if you’re going to bust naturalization for the sorts of problems I would like to, you would have to…’ So yeah, I misspoke fairly badly. This thought and general approach are pure speculation.

Comment #43 January 6th, 2008 at 7:49 am

A question for any GI experts out there: are there any well-known barriers to proving GI in P? Also, does anyone have any comments on the approach of Douglas and Wang that gives a classical algorithm for GI based on quantum walk ideas?

Comment #44 January 6th, 2008 at 9:54 am

Ashley: They have no proofs in their paper and only conjecture that their method is good based on some experiments. Similar and better experiments have been done long before them, and those authors could also find examples which were not separated by these spectrum-based methods. See e.g.

http://arxiv.org/abs/math/0507251

http://arxiv.org/abs/quant-ph/0505026

http://www.cse.psu.edu/~furer/Papers/cfi.ps

So while there is nothing outright wrong in their paper there is very little that makes one think that there is anything very interesting there either.

Comment #45 January 6th, 2008 at 10:35 am

“Almost everything we care about in discrete math and CS can be proved not only in PA, but in very weak fragments of it.”

True. The Hydra game, in which one cuts branches of a tree which are then replaced by the adversary with more heads, is one of the few discrete-math/CS-ish examples for which it is known that the proof cannot be done within PA alone.

http://www.cs.fudan.edu.cn/theory/~rudolf/Paper/hydra_fun07.pdf

Comment #46 January 6th, 2008 at 2:30 pm

This thread is enjoyable both for content and for humor.

Suppose you are a young researcher who wants to avoid being tripped up by the “Ten Signs”. Then your biggest return on investment — as all posters have agreed — is to learn to write in TeX.

No one has yet discussed “How far can TeX-embracing be pushed?”. Broadly speaking, there are four levels of TeX-based research: (1) write your prose and equations in LaTeX, (2) create your graphics in LaTeX-based packages such as “picture” and “pgf”, (3) craft your presentation using the LaTeX-based “Beamer” package, and (4) write your software in a Knuth-style literate programming environment such as “cweb” or “nuweb”.

AFAICT, pretty much the only researcher to operate consistently at TeX-Level 4 is Donald Knuth himself … which is a pretty strong recommendation! In particular, Knuth’s famous essay Computer Programming as an Art could, with minimal changes, be retitled “Complexity Theory as an Art” or even “<Any research discipline> as an Art.”

As for TeX-Level 3, many well-respected researchers operate there. As a friend of mine says “Pretty much all the hep-cats at Microsoft Research use ‘Beamer.'” 🙂

The point being, that the embrace of TeX both reflects and helps to reinforce habits of thought that (seemingly) are conducive to good research and clear exposition.

So Dave Bacon’s suggestion (1) is IMHO very good advice.
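(In fairness to anyone stuck at level 0, a complete TeX-Level-1 document really is this short; a minimal sketch, with a made-up one-line claim as the body:)

```latex
\documentclass{article}
\begin{document}
We claim that if $\mathrm{3SAT} \in \mathrm{P}$, then $\mathrm{P} = \mathrm{NP}$.
\end{document}
```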

Comment #47 January 6th, 2008 at 2:40 pm

“AFAICT”

I’m not familiar with the AFAICT acronym.

Comment #48 January 6th, 2008 at 2:48 pm

Anonymous #23:

The units of a ring do not necessarily form a cyclic group, although those of a field do. Z/8 is a counterexample.

Comment #49 January 6th, 2008 at 2:59 pm

“Is it fair to say that relativization basically means that if a technique would show P != NP relative to any oracle then it’s necessarily busted, because P = NP relative to an oracle which is PSPACE complete?”

Yes, that’s one thing it means. (Note that a technique that would show P *equals* NP relative to any oracle is also busted.)

But besides P vs. NP, you can also study the relativized versions of pretty much *any* question in complexity theory (is NP in BQP? is BQP in PH? is PH infinite? does NP have linear-size circuits? do one-way functions imply one-way permutations? etc.) — and when you do, things get much more interesting.

Comment #50 January 6th, 2008 at 5:10 pm

“AFAICT” is “as far as I can tell”.

Comment #51 January 6th, 2008 at 5:57 pm

Is GI implicitly equivalent to the problem of hashing a given graph such that isomorphic graphs hash to the same value, and different graphs hash to different values?

Comment #52 January 6th, 2008 at 6:41 pm

Interesting question! Such a hash function would certainly suffice for solving GI, but it’s not at all clear that it’s necessary.

Comment #53 January 6th, 2008 at 8:12 pm

This simple test (suggested by Dave Bacon) already catches at least 60% of wrong mathematical breakthroughs.

Why can “not using TeX to write up the manuscript” serve as a test to catch the wrong mathematical breakthroughs?

Comment #54 January 6th, 2008 at 9:05 pm

NER: All I claim is that empirically, it’s a very successful heuristic.

As for why this is so, one might speculate as follows: those who can’t be bothered to learn the tools, notation, conventions, etc. of whatever subject they’re writing about, are highly correlated (though not identical) with those who can’t be bothered to learn the subject itself.

Comment #55 January 7th, 2008 at 2:32 am

An isomorphic hash seems like a more adequate way to represent objects because it is perspectiveless (the hash of the graph of a pen is always the same no matter how I look at it). It’s like a unique absolute value, intrinsically associated with a given object.

Comment #56 January 7th, 2008 at 6:03 am

I think Job’s “isomorphic hash” is a very nice concept. To the lurking GI-experts here: Does this concept have a name in the literature? Did anyone study Job’s conjecture? (That if GI is in P then there is an isomorphic hash in FP.)

By the way, there are well-studied analogs in computer vision, see Zernike moments. There we deal with pictures instead of graphs, and the isomorphism is rotation/translation. And there is noise, of course.

Comment #57 January 7th, 2008 at 9:36 am

I prefer to think of this concept as canonicalization rather than hashing. You are seeking to define a representation of graphs in “canonical form”, in such a way that isomorphic graphs will naturally have the same canonical form.

By way of a simple analogy, consider testing whether two strings are anagrams. You could try to identify pointwise differences (a letter one string has that the other doesn’t), but an easy algorithm is to sort both strings and compare them. In other words, sorting provides the canonical form.

I prefer this terminology because hashing to me evokes thoughts of compressed random strings of bits. There’s no reason why the “hash” in this case needs to be more than a reordering of the graph’s contents that respects the ambient notion of isomorphism. Or, if you prefer, it’s a way of identifying the lexicographically least of all representations of the graph.
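
Both points (sorting as the canonical form for anagrams, and the lexicographically least relabeling as a canonical form for graphs) can be sketched in Python. This is my own illustration; the exponential-time `canonical_graph` is only there to make the definition concrete, not a practical algorithm.

```python
from itertools import permutations

def canonical_string(s):
    """Anagram testing: sorting the letters gives the canonical form."""
    return "".join(sorted(s))

def canonical_graph(n, edges):
    """Lexicographically least sorted edge list over all n! relabelings.
    Exponential time: a definition made executable, not an algorithm."""
    return min(
        tuple(sorted(tuple(sorted((p[u], p[v]))) for u, v in edges))
        for p in permutations(range(n))
    )

# Two strings are anagrams iff their canonical forms coincide:
assert canonical_string("listen") == canonical_string("silent")
# Two isomorphic labelings of a 3-vertex path get the same canonical form,
# while the triangle gets a different one:
assert canonical_graph(3, [(0, 1), (1, 2)]) == canonical_graph(3, [(0, 2), (2, 1)])
assert canonical_graph(3, [(0, 1), (1, 2)]) != canonical_graph(3, [(0, 1), (1, 2), (0, 2)])
```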

Comment #58 January 7th, 2008 at 12:18 pm

Job’s hashing looks like the same thing as a “canonical labelling”. Labellings of this type can be produced by Brendan McKay’s program Nauty, which seems to be the most efficient GI program today.

http://cs.anu.edu.au/~bdm/nauty/

Comment #59 January 7th, 2008 at 2:00 pm

I googled for canonical labeling. It is a very relevant but slightly different concept.

I’ll call Job’s concept a graph id. Canonical labeling (canonical form, really) is a special kind of graph id: an isomorphic copy of the graph.

Now assume that GI is in P (or add it as an oracle). In a good world, this implies that we can effectively find a canonical form. In a bad world, this does not even imply that we can effectively find graph ids. In an ugly world, it implies that there are graph ids in FP, but none of these are of the canonical form kind.

Using the (flawed) analogy from computer vision: Zernike moments are NOT a canonical rotation of the picture.

In five minutes, I could not rule out any of these three worlds. But I believe that someone even slightly familiar with the topic could immediately rule out at least one. It is an extremely well-studied question. But the most relevant papers are all behind publisher firewalls.

Comment #60 January 7th, 2008 at 3:36 pm

But representing graphs in some canonical/lexicographically least form seems maybe too strong to be equivalent to Job’s “graph hash,” right? Certainly if you can convert a graph to a canonical form in FP, you could hash that “canonical form”; but I don’t see any obvious reason why an isomorphism-preserving hash would necessarily correspond to an easily computable canonical form. I have the suspicion that Daniel’s “ugly worlds” actually exist, although actually finding one would probably take a lot of work.

(Incidentally, Klas: Having written a fair bit about graph labelings, I’ve learned to stick to the one-l spelling. “Labellings” is, I’m pretty sure, totally legit, but Google and word processors will yell at you if you use it. So I just use one.)

Comment #61 January 7th, 2008 at 4:34 pm

If there exists a hash function h such that h(G) = h(H) if and only if G and H are isomorphic then I would expect it to be easy to find a canonical form too. One reason for this is that if we have such a hash then we can easily find an isomorphism between H and G by just adding a suitable gadget-graph to a vertex of each graph and checking if they still have the same hash.

But I haven’t thought that much about this.

(harrison: I just picked that spelling here because it was the one this page’s spell checker liked. In general I believe that the single L spelling is the US main version and the double L the UK one. That’s what I’ve been told at least. )
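
Klas’s gadget trick can be made concrete in Python (my own sketch, not from the thread): a brute-force canonical form stands in for the hypothetical hash h, and the gadget is a pendant path longer than either graph, so that, plausibly, any isomorphism must map gadget to gadget and hence attachment vertex to attachment vertex.

```python
from itertools import permutations

def ghash(n, edges):
    """Brute-force complete invariant: lexicographically least sorted edge
    list over all relabelings (exponential; stands in for the hash h)."""
    return min(
        tuple(sorted(tuple(sorted((p[u], p[v]))) for u, v in edges))
        for p in permutations(range(n))
    )

def attach_path(n, edges, v, length):
    """Attach a pendant path of `length` new vertices to vertex v."""
    new = list(edges) + [(v, n)] + [(n + k, n + k + 1) for k in range(length - 1)]
    return n + length, new

def maps_to(nG, eG, i, nH, eH, j):
    """Attach identical gadgets at i and j; equal hashes indicate that some
    isomorphism sends i to j (the long pendant path marks the vertex)."""
    L = max(nG, nH) + 1
    return ghash(*attach_path(nG, eG, i, L)) == ghash(*attach_path(nH, eH, j, L))

# The two paths 0-1-2 and 0-2-1 are isomorphic; the middle vertex of one
# can map to the middle vertex of the other, but not to an endpoint:
assert maps_to(3, [(0, 1), (1, 2)], 1, 3, [(0, 2), (2, 1)], 2)
assert not maps_to(3, [(0, 1), (1, 2)], 1, 3, [(0, 2), (2, 1)], 0)
```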

Comment #62 January 7th, 2008 at 8:51 pm

Here is one you forgot:

13. You claim to have proved the Riemann Hypothesis.

While it admittedly doesn’t catch every bogus proof, it does a pretty good job.

Comment #63 January 7th, 2008 at 9:55 pm

Relating to the “GI hash”/Canonical-forms comments since Job’s #51, here’s a problem maybe someone can solve:

Let A stand for adjacency matrices of undirected n-vertex graphs, N for n-choose-2, and h for functions that “unroll” the upper triangle of A into an N-bit string. Given any family H of functions h, one for each n, define the problem “Can(H)”:

Instance: A matrix A and a string y ∈ {0,1}^N.

Question: Is there a matrix A’ such that A and A’ define isomorphic graphs and h(A’) >= y?

Here “>=” can mean standard string lex order or some other total order. Then Can(H) is always GI-hard; indeed you can get a canonical matrix by binary search. Assuming H isn’t perverse, Can(H) is in NP. Concrete examples:

1. When h(A) unrolls the upper triangle taking the longest diagonal first, then Can(H) is NP-complete, as taking y = 1^(n-1)0…0 reduces HAMILTON PATH to it.

2. When h(A) unrolls going down-by-columns, then a similar y reduces CLIQUE to Can(H).

But

3. When h(A) unrolls going across-by-rows (the n-1 entries in row 1, then n-2 from row 2 etc.), what is the complexity? That’s the problem.
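
For concreteness, the three unrollings can be written out in Python (my own sketch; `unroll_rows`, `unroll_cols`, and `unroll_diag` are hypothetical names for encodings 3, 2, and 1, applied to a 0/1 symmetric adjacency matrix):

```python
def unroll_rows(A):
    """Encoding 3: across-by-rows (the n-1 entries of row 1, then n-2 of row 2, ...)."""
    n = len(A)
    return [A[i][j] for i in range(n) for j in range(i + 1, n)]

def unroll_cols(A):
    """Encoding 2: down-by-columns."""
    n = len(A)
    return [A[i][j] for j in range(n) for i in range(j)]

def unroll_diag(A):
    """Encoding 1: longest diagonal first (the superdiagonal, then the next, ...).
    With y = 1^(n-1)0...0, demanding h(A') >= y forces the superdiagonal of A'
    to be all ones, i.e. (0,1), (1,2), ..., (n-2,n-1) are all edges of some
    relabeling: a Hamiltonian path. Hence the HAMILTON PATH reduction."""
    n = len(A)
    return [A[i][i + d] for d in range(1, n) for i in range(n - d)]

# A single edge {0,3} on 4 vertices lands in a different position under each encoding:
A = [[0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0], [1, 0, 0, 0]]
assert unroll_rows(A) == [0, 0, 1, 0, 0, 0]
assert unroll_cols(A) == [0, 0, 0, 1, 0, 0]
assert unroll_diag(A) == [0, 0, 0, 0, 0, 1]
```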

Comment #64 January 7th, 2008 at 10:41 pm

The problem of telling if an adjacency matrix is the lexicographically least (of all permutations) is an NP-complete problem.

How do I know that? Arghhh! I can’t remember and google doesn’t remember either. Toran, Koebler, and what’s-his-face in the Graph Isomorphism Problem book, probably proved in the seventies or early eighties by Babai.

But here’s a question that I haven’t seen solved anywhere… is GI actually known to be P-hard? Garey and Johnson -assume- it’s ‘between’ P and NP. But I’ve never seen a justification. (I’ve never seen a justification that PRIME (though in P) is P-hard either).

Comment #65 January 8th, 2008 at 2:06 am

Mitch: Maybe it’s just my lack of imagination, but it’s very hard for me to imagine evaluating Boolean circuits using PRIME. I just googled for graph.isomorphism p.hard, and the current best result is the highly nontrivial DET-hardness by Torán:

http://theorie.informatik.uni-ulm.de/Personen/toran/papers/hard.pdf

Quite probably, GI is not P-hard but still outside P.

Comment #66 January 8th, 2008 at 3:38 pm

3. When h(A) unrolls going across-by-rows (the n-1 entries in row 1, then n-2 from row 2 etc.), what is the complexity? That’s the problem.

I’d think it’s NP-complete (and under some total ordering, of course, it is; just rearrange the bits to reduce to examples (1) or (2). But that’s not really fair). Like Scott said, GI hangs by a thin thread from being NP-complete, in that almost any variant that takes away its algebraic structure becomes NP-complete.

That said, though, I’m not sure about the status of the “random oracle” version of this (where the oracles are the h(A)s). On the one hand, my intuition tells me that a random unrolling function won’t mess with the algebraic structure, so it’s likely to be precisely as hard as (suitably relativized?) GI; on the other, though, an unstructured oracle intuitively leads to an unstructured search problem, which is typically NP-complete. So my intuition tells me that the polynomial hierarchy collapses, but then tells me that it doesn’t, and thus promptly disappears in a puff of logic. Blahh. Any insights from anyone with…insight?

Comment #67 January 8th, 2008 at 11:01 pm

I know this is unrelated, but it’s urgent. Biting vaginas the movie!!!

http://www.youtube.com/watch?v=2K0OS4gCpos

Comment #68 January 9th, 2008 at 2:50 am

For a canonical representation of a graph we could just pick the smallest binary representation (i.e. 7, 25, 42, 52 correspond to graphs of a single triangle, using KWRegan’s encoding #3 – 7 therefore being the canonical representation of all triangles, for example).

What is the number of unique graphs of n nodes? There are 2^[n(n-1)/2] possible graphs, and n! permutations for each. Without looking at duplicates (which is beyond me), the number of unique graphs of n nodes is “around” 2^[n(n-1)/2]/n! (?).

So for obtaining a canonical representation of a graph (given a solution to GI) we might have to search & compare with all the unique graphs (not in P), unless we can do a binary search (which I guess is why KWRegan defined his question using h(A’) >= y).

But then Mitch says this has been shown to be NP-Complete. Is that right?
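
For tiny n the count of unique (unlabeled) graphs can be checked by brute force (my own sketch; note that the 2^[n(n-1)/2]/n! estimate ignores graphs with nontrivial automorphisms and is only accurate asymptotically):

```python
from itertools import combinations, permutations

def count_unlabeled_graphs(n):
    """Count non-isomorphic simple graphs on n vertices: canonicalize every
    labeled graph and count distinct canonical forms (brute force)."""
    pairs = list(combinations(range(n), 2))
    seen = set()
    for mask in range(1 << len(pairs)):
        edges = [pairs[k] for k in range(len(pairs)) if (mask >> k) & 1]
        canon = min(
            tuple(sorted(tuple(sorted((p[u], p[v]))) for u, v in edges))
            for p in permutations(range(n))
        )
        seen.add(canon)
    return len(seen)

# n = 1, 2, 3, 4 give 1, 2, 4, 11 unlabeled graphs, while the estimate
# 2^(n(n-1)/2)/n! gives about 1, 1, 1.3, 2.7; it only kicks in asymptotically.
```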

Comment #69 January 9th, 2008 at 3:58 am

Biting vaginas the movie!!!

Anonymous, you are about the 500,000th person to have emailed me or posted a comment about that. Please, stop.

Comment #70 January 9th, 2008 at 9:15 am

Hi everyone,

I was interested by this posting in a particular question which is the Hidden Subgroup problem.

Please can anyone tell me if the Hidden Subgroup problem over nonabelian groups is NP-complete, or is it in some different complexity class?

Thanks in advance, Zelah

Comment #71 January 9th, 2008 at 12:34 pm

Zelah: First of all, HSP is not a computational problem in the usual sense, because it involves an oracle (the hiding function). But one can still ask whether or not it’s NP-complete given some concrete instantiation of the hiding function. And the answer is no — not unless the polynomial hierarchy collapses! (Indeed, for the same reasons the polynomial hierarchy would collapse if GI were NP-complete.)

As for where HSP lives in ComplexityLand: GI reduces to HSP, and HSP is contained in the class SZK (Statistical Zero-Knowledge). But it’s probably strictly between them (i.e., neither GI-complete nor SZK-complete).

Of course, here I’m talking about HSP over an arbitrary group. When we consider HSP over particular groups, the situation can be different.

Comment #72 January 9th, 2008 at 9:33 pm

I asked Laci Babai about the canonical form question. He wrote that

1. Job’s “graph hash” is called a “complete invariant” in the literature.

2. complete invariants in P imply canonical forms in P, so my “ugly world” is ruled out. The elementary two-page proof is in:

http://research.microsoft.com/~gurevich/Opera/131.pdf

3. GI in P is not known to imply complete invariants in P.

Comment #73 January 10th, 2008 at 11:51 pm

Which problem do you think resembles GI the most: Primality Testing or Factoring? Why?

Comment #74 January 11th, 2008 at 5:48 pm

Friedland has posted a third version, which better incorporates the feedback of Babai and other researchers, at

http://arxiv.org/abs/0801.0398

Comment #75 January 13th, 2008 at 10:42 pm

Isn’t it narrow-minded and constrictive to say that any proposal that is not presented in TeX is not worthy of consideration? If new ideas were judged by their adherence to conventional or restricted form, then men such as Grassmann, Frege, and Feynman would have been ignored, to our detriment. A creative idea can be as well presented in Baskerville or Times New Roman as in TeX. It is the content of the idea that is important, not the form. To judge every thought by whether it is presented in TeX is stifling.

Comment #76 January 22nd, 2008 at 11:26 pm

Senge Tame!

1. This affair explains why mathematicians need to learn some basics of complexity: proving that something is “hard” gives a semi-rigorous, conditional proof of the impossibility of “simple” descriptions of the thing. In other words, it provides implicit counterexamples. And this strategy is very kosher when undecidability is used.

2. The “mathematical” situation here is very common: given a set X with “easy” membership for its convex hull CO(X), is membership for the convex hull CO(X \otimes X) “easy”? Quite often the answer is negative. The good example for this “quantum” blog is the separability/entanglement problem. I wonder if there exists some general statement of the sort?

Comment #77 February 9th, 2008 at 4:17 am

I left academe 13 years ago and have been doing research quietly and steadily while earning a living by other means. Most of my work involves the development of algorithms for challenging problems, especially problems in graph theory. My working set of problems includes Graph Isomorphism, Subgraph Isomorphism, Maximal Cliques, Maximum Cliques, k-Clique Existence, and Factoring.

I’m partially shifting into a writing mode so that I can get a lot of work out of my head, software, and notes and into the world. In particular, I have several new algorithms for Graph Isomorphism, Maximal Cliques, Maximum Cliques, and k-Clique Existence that are ready to share.

I have published none of my work beyond my dissertation, so we’re talking about 20 years of research that I need to organize and write up. I believe that some of my work will be of interest. I have no interest in publishing in journals. I plan on posting papers on ArXiv or the ACM equivalent.

I haven’t been planning on making source code available – a lot of work would be required to prepare it and maintain it that I think could be better spent. I haven’t ruled this out, though.

Call me unconventional or attribute it to a personality quirk. I’m hoping that my work will be taken seriously but I’m not overly concerned that it might set off an occasional alarm on first exposure.

I hadn’t thought seriously about learning TeX or how to use TeX-related tools until reading this blog entry. I’ve been planning on posting in PDF to an archive. I have very little need for the typesetting capabilities that TeX offers. I’m inclined to skipping TeX and using a conventional word-processor to generate PDF.

Comments and suggestions on my situation and plans are most welcome.

Comment #78 February 9th, 2008 at 6:54 am

Re provability in PA: the graph minor theorem demonstrably is not provable in PA, though it’s provable in second-order arithmetic (probably in some weak fragment). It certainly sounds plausible to me that P vs NP is independent of PA but can be solved in PA2. Set theory (ZFC) is something else again, and it would be awful if solving P vs NP needed it.

Craig, it wouldn’t occur to me to use a word processor to format a math paper, because of how messy it would be to enter the equations. With TeX you type them pretty much the way you would read them out loud.

Comment #79 February 9th, 2008 at 2:18 pm

asdf – To date, I’m using remarkably few equations. If I were using more, TeX-ware would be of greater value to me, but as it is, it seems to be one more thing to install and learn. Doing so just to placate people who think everyone should do things the same way rubs me the wrong way.

Apart from facilitating the typesetting of equations, are there compelling reasons for someone to use TeX who will not be submitting papers to journals?

Comment #80 February 9th, 2008 at 9:59 pm

What can I say; if a paper is full of spelling errors, that doesn’t necessarily mean the mathematical content is wrong, but it’s not a good sign. TeX is part of mathematical culture and using anything else might be similar to using an unconventional notation for a familiar math concept. Again, not conclusive, but not a good sign. Nothing short of actually reading a paper (or at least reading enough to find a definite error) is enough to form a conclusive judgement of its content. But lots of things, like the presence or absence of spelling errors and weird notation and typography, can raise or lower expectations at the outset. That I think is what Scott was trying to get at.

Comment #81 February 10th, 2008 at 4:30 am

Isn’t it narrow-minded and constrictive to say that any proposal that is not presented in TeX is not worthy of consideration? … To judge every thought by whether it is presented in TeX is stifling.

Nobody’s saying the use of TeX is a fully reliable test for whether a paper is good, but here’s an analogy:

Suppose someone handwrites a paper in crayon. Logically, this tells us nothing about the content of the paper, but in practice, it tells us a lot about the author. Very few people who write in crayon have anything interesting or worthwhile to say. It’s not a logical guarantee, but if you see a research paper written in crayon, it’s perfectly reasonable not to take it seriously (since it almost certainly isn’t serious).

Not using TeX in a math/CS paper is not nearly as extreme as writing in crayon, but it is similar in spirit. There are a handful of older or eccentric researchers who have never learned TeX. Other than that, anybody in math or theoretical CS who doesn’t use TeX looks like a rube. This may not be fair, but it’s a true statement about appearances within the community.