## Eight Signs A Claimed P≠NP Proof Is Wrong

As of this writing, Vinay Deolalikar still hasn’t retracted his P≠NP claim, but a clear consensus has emerged that the proof, as it stands, is fatally flawed. The first reason is that we’re not going to separate k-SAT from much easier problems purely by looking at the structure of the solution space: see for example this comment by Ryan Williams. The second reason is that, independently of the solution-space issue, Neil Immerman has identified critical flaws in the finite model theory part of the paper.

The researchers who actually studied Deolalikar’s paper, thought hard about it, and figured out in a matter of days why it doesn’t work deserve our undying gratitude (they certainly have mine!). At the same time, if someday we have a P≠NP claim at this level several times per year—which I see as entirely possible—then it’s clear that the sort of heroic effort we saw over the last week isn’t going to scale. And thus, several commenters wanted to know how someone as lazy as I am could nevertheless be so confident in predicting what would happen:

Would you enlighten us as to what was the PROCESS behind your quick and correct judgment? … Your quick nondeterministic hunch about the wrongness of all the 100 pages was quickly verified as correct. But how did you do it, confidently risking your reputation like a smooth poker player?

Scott Aaronsohn [sic], like Nassim Nicholas Taleb, you predicted the crash before it happened! You knew some fundamental weaknesses intuitively that the other Myron-Scholes Nobel prize winning economists (computer scientists) fell for!

While it pains me to say so, these commenters give me way too much credit. The truth, as far as I can tell, is that many (most?) complexity theorists reached exactly the same conclusion as I did and just as quickly; it’s just that most (with some notable exceptions) were too cautious to say so in public. Among those who *did* comment publicly, the tendency was to bend over backwards to give Deolalikar the benefit of the doubt—an approach that I considered taking as well, until I imagined some well-meaning economist or physicist reading my generous words and coming away with the impression that P≠NP must be either licked or else hanging by a thread, and at any rate couldn’t have been *nearly* as hard as all those computer scientists made it out to be.

So, in the future, how can *you* decide whether a claimed P≠NP proof is worth reading? I’ll now let you in on my magic secrets (which turn out not to be magic or secret at all).

The thing *not* to do is to worry about the author’s credentials or background. I say that not only for ethical reasons, but also because there are too many cases in the history of mathematics where doing so led to catastrophic mistakes. Fortunately, there’s something else you can do that’s *almost* as lazy: scan the manuscript, keeping a mental checklist for the eight warning signs below.

1. The author can’t immediately explain why the proof fails for 2SAT, XOR-SAT, or other slight variants of NP-complete problems that are known to be in P. Historically, this has probably been the single most important “sanity check” for claimed proofs of P≠NP: in fact, I’m pretty sure that every attempt I’ve ever seen has been refuted by it.
2. The proof doesn’t “know about” all known techniques for polynomial-time algorithms, including dynamic programming, linear and semidefinite programming, and holographic algorithms. This is related to sign 1, but is much more stringent. Mulmuley’s GCT program is the only approach to P vs. NP I’ve seen that even has serious aspirations to “know about” lots of nontrivial techniques for solving problems in P (at the least, matching and linear programming). For me, that’s probably the single strongest argument in GCT’s favor.
3. The paper doesn’t prove any weaker results along the way: for example, P≠PSPACE, NEXP⊄P/poly, NP⊄TC⁰, permanent not equivalent to determinant by linear projections, SAT requires superlinear time … P vs. NP is a staggeringly hard problem, which one should think of as being *dozens* of steps beyond anything that we know how to prove today. So then the question arises: forget steps 30 and 40, what about steps 1, 2, and 3?
4. Related to the previous sign, the proof doesn’t encompass the *known* lower bound results as special cases. For example: where, inside this proof, are the known lower bounds against constant-depth circuits? Where’s Razborov’s lower bound against monotone circuits? Where’s Raz’s lower bound against multilinear formulas? All these things (at least the uniform versions of them) are implied by P≠NP, so any proof of P≠NP should imply them as well. Can we see more-or-less explicitly why it does so?
5. The paper lacks the traditional lemma-theorem-proof structure. This sign was pointed out (in the context of Deolalikar’s paper) by Impagliazzo. Say what you like about the lemma-theorem-proof structure, there are excellent reasons why it’s used: among them that, exactly like modular programming, it enormously speeds up the process of finding bugs.
6. The paper lacks a coherent overview, clearly explaining how and why it overcomes the barriers that foiled previous attempts. Unlike most P≠NP papers, Deolalikar’s *does* have an informal overview (and he recently released a separate synopsis). But reading the overview felt like reading Joseph Conrad’s *Heart of Darkness*: I’d reread the same paragraph over and over, because the words would evaporate before they could stick to my brain. Of course, maybe that just means I was too dense to understand the argument, but the fact that I couldn’t form a mental image of how the proof was supposed to work wasn’t a promising sign.
7. The proof hinges on subtle issues in descriptive complexity. Before you reach for your axes: descriptive complexity is a beautiful part of TCS, full of juicy results and open problems, and I hope that someday it might even prove useful for attacking the great separation questions. Experience has shown, however, that descriptive complexity is also a powerful tool for fooling yourself into thinking you’ve proven things that you haven’t. The reason for this seems to be that subtle differences in encoding schemes (for example, whether you do or don’t have an order relation) can correspond to *huge* differences in computational complexity. As soon as I saw how heavily Deolalikar’s proof relied on descriptive complexity, I guessed that he had probably made a mistake in applying the results from that field that characterize complexity classes like P in terms of first-order logic. I’m almost embarrassed to relate this guess, given how little actual understanding went into it. Intellectual honesty does, however, compel me to point out that it was correct.
8. Already in the first draft, the author waxes philosophical about the meaning of his accomplishment, profusely thanks those who made it possible, etc. He says things like, “confirmations have already started arriving.” To me, this sort of overconfidence suggests a would-be P≠NP prover who hasn’t yet grasped the sheer number of mangled skeletons and severed heads that line his path.
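Sign 1 is easy to operationalize. As a concrete illustration (my own sketch in Python, not anything from the post), here is the classical Aspvall-Plass-Tarjan reason 2SAT lives in P: build the implication graph of the formula and check whether any variable shares a strongly connected component with its negation. Any claimed lower-bound technique for k-SAT had better visibly fail against this algorithm.

```python
# Hedged sketch (not from the post): why 2SAT is in P.
# Encode each clause (a OR b) as implications (NOT a -> b) and
# (NOT b -> a); the formula is satisfiable iff no variable x shares a
# strongly connected component with NOT x (Aspvall-Plass-Tarjan, 1979).

def solve_2sat(n, clauses):
    """n variables 1..n; a clause (a, b) means (a OR b), where the
    negative literal -v stands for NOT x_v. Returns a satisfying
    assignment {var: bool}, or None if unsatisfiable."""
    def idx(lit):                     # map a literal to a graph node
        return 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)

    N = 2 * n
    g = [[] for _ in range(N)]        # implication graph
    gr = [[] for _ in range(N)]       # reversed graph
    for a, b in clauses:
        g[idx(-a)].append(idx(b)); gr[idx(b)].append(idx(-a))
        g[idx(-b)].append(idx(a)); gr[idx(a)].append(idx(-b))

    # Kosaraju's SCC algorithm (recursive DFS; fine for small instances).
    seen, order = [False] * N, []
    def dfs1(u):
        seen[u] = True
        for v in g[u]:
            if not seen[v]:
                dfs1(v)
        order.append(u)
    for u in range(N):
        if not seen[u]:
            dfs1(u)

    comp = [-1] * N
    def dfs2(u, c):
        comp[u] = c
        for v in gr[u]:
            if comp[v] == -1:
                dfs2(v, c)
    c = 0
    for u in reversed(order):
        if comp[u] == -1:
            dfs2(u, c)
            c += 1

    assign = {}
    for v in range(1, n + 1):
        if comp[idx(v)] == comp[idx(-v)]:
            return None               # x and NOT x equivalent: unsatisfiable
        assign[v] = comp[idx(v)] > comp[idx(-v)]  # take the downstream SCC
    return assign
```

The whole thing runs in linear time in the size of the formula, which is exactly the kind of structure a proof technique must “know about” before it can hope to separate 3SAT from P.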

You might wonder: if I had all these more-or-less articulable reasons for doubting Deolalikar’s proof, then why didn’t I just *state* my reasons in the first place, instead of placing a $200,000 wager?

Well, I probably *should* have stated the reasons. I apologize for that.

The best one can say about the lazy alternative I chose is that it led to a somewhat-interesting socio-mathematical experiment. By putting my life savings on the line, could I give the world a dramatic demonstration of just how high the stakes are with P vs. NP—that when computer scientists say this problem won’t be solved without a titanic advance in human knowledge, without overcoming obstacles like the ones mentioned in points 1-4 above, *they’re not kidding?* After such a demonstration, would more people get it? Would they refrain from leaping out of their chairs at the next P vs. NP announcement? Like Richard Dawkins staring unflinchingly at a steel pendulum swinging toward his face (which he knows has enough potential energy to *almost* hit him but not quite), would they remember that the only miracle in life is that there *are* no miracles, neither in mathematics nor in anything else?

I don’t know how well the experiment succeeded.

**Update (8/14):** Somehow I completely forgot, over the course of the last three posts, to link to the PowerPoint talk Has There Been Progress on the P vs. NP Question?, which has lots of relevant material about why it’s so hard to prove P≠NP and how to evaluate proposed attempts. Thanks to several commenters for correcting my oversight; I’ll try to give the author of those slides proper credit in the future!

Comment #1 August 13th, 2010 at 9:26 pm

Scott, sorry for off-topic, do you already know if there will be another GCT tutorial during FOCS 2010? I’ve tried to find the conference program but it looks like it’s not ready yet.

Comment #2 August 13th, 2010 at 9:46 pm

Speaking of attempts to prove P != NP and GCT…

Scott, you went to the GCT workshop at Princeton but never blogged about it. What do you think about GCT?

Comment #3 August 13th, 2010 at 9:47 pm

Scott,

Is it possible that P=NP is undecidable? Or that it may be possible to prove that we can’t prove it one way or the other?

Comment #4 August 13th, 2010 at 9:52 pm

John T, you can take a look at Scott’s paper, “Is P vs. NP formally independent?” where he argues that it’s either equal or inequal, not formally independent of the axioms of mathematics.

But (as he writes there) it’s certainly possible that regardless of its truth or falsity, the proposition is formally impossible to prove.

Comment #5 August 13th, 2010 at 10:46 pm

Ok, this is an off topic comment. But, you couldn’t understand Conrad’s “Heart of Darkness”!

Eeck!

” The conquest of the earth, which mostly means the taking it away from those who have a different complexion or slightly flatter noses than ourselves, is not a pretty thing when you look into it too much.

What redeems it is the idea only.

An idea at the back of it; not a sentimental pretense but an idea; and an unselfish belief in the idea–something you can set up, and bow down before, and offer a sacrifice to. . . .”

Doesn’t that pretty much describe the attempts to gain mathematical glory?

Comment #6 August 13th, 2010 at 10:49 pm

@john t and Mysterio #3,4

An interesting extension: is it always possible to gain some information regarding the validity of a statement? Or do there exist statements that can’t be proven nor disproven in the accepted system (i.e., are formally independent), and you can’t prove that you can’t prove it (or you can’t prove its independence, or its independence is independent), and you can’t prove that you can’t prove that you can’t prove it… etc.

A problem like this, even to a few levels let alone infinite (I’d imagine infinite ordinals might come into it), would be a real bastard… let’s just hope P vs. NP isn’t such.

Comment #7 August 13th, 2010 at 10:57 pm

@michael webster #5

You read it clean through without facing this?

“I’d reread the same paragraph over and over, because the words would evaporate before they could stick to my brain.”

QFT x 100000000

If I had a penny for every time I suffered from this… Heart of Darkness is a perfect example. I envy people that can just “read” dense texts like that and get everything right away. A lot of it comes down to interest and other factors, but some of it is just intrinsic to the style. More often than I’d like, I end up reading the same thing over and over again in classic literature, slowly whittling my reading speed down to a page every couple of minutes.

It feels like a disease.

Comment #8 August 13th, 2010 at 11:14 pm

Enjoy your vacation, Scott. Your bet was the first unambiguous indication to me that I had no reason to get excited until the experts had digested the claimed proof, and I thank you for it. Also, I was amused. I appreciated your desire to be generous to Vinay while honestly reporting the results of the smell test.

Comment #9 August 14th, 2010 at 12:10 am

The Richard Dawkins reference is surprisingly compelling. I love it.

Comment #10 August 14th, 2010 at 12:10 am

How about #9: The proof apparently involved physics… which should hardly be relevant to a purely logical problem? Granted, I’m hardly an expert, so I might be wrong. The physics references definitely rang alarm bells though (as, to a lesser extent, did the statistics references).

Comment #11 August 14th, 2010 at 12:22 am

I think a proof might have to contain an elementary, yet mind-blowing concept, like discovering that you can walk along a fourth dimension (wouldn’t that be cool if you were walking down the street and then suddenly, accidentally made a turn in a fourth dimension, where things look familiar but not quite).

Comment #12 August 14th, 2010 at 12:27 am

I believe Richard Feynman is in the story about the pendulum, not Richard Dawkins.

Comment #13 August 14th, 2010 at 12:29 am

Scott:

“So, in the future, how can you decide whether a claimed P≠NP proof is worth reading? …”

You do believe P≠NP is true?

Comment #14 August 14th, 2010 at 12:35 am

The question presents the possibility that, upon being solved, a de facto world-view is confirmed and ratified. You are right to be suspicious of an easy solution. The philosophical razor’s edge that separates confirmation and denial of this theorem is a strident and imperious bifurcation in the nature of Truth itself.

Comment #15 August 14th, 2010 at 12:52 am

How to prove that P≠NP:

Step One. Thoroughly study Scott’s talk, Has There Been Progress on the P vs. NP Question? Make sure you understand what he’s talking about at every step. Study all the references just as thoroughly.

Step Two. Go back and do Step One properly.

Step Three is revealed only to those who perform steps one and two.

Comment #16 August 14th, 2010 at 1:01 am

Scott, I may have missed something prior to this paper about how “experience has shown” the dangers of using descriptive complexity to fool oneself.. or did I simply forget?

I agree that differences in computational power are subtler in descriptive complexity since the characterizations are not robust to the kind of reductions that we use all the time without thinking about them. My favorite example is the beautiful but difficult Ajtai-Fagin paper proving that directed connectivity is not in monadic-NP (existential second-order unary quantifiers) whereas undirected connectivity easily is in monadic-NP. However, as Ajtai-Fagin also show, CONSTANT-DEGREE directed reachability IS in monadic-NP despite the fact that it is equivalent to the general directed connectivity problem under constant-depth uniform reductions.

Comment #17 August 14th, 2010 at 1:29 am

Stas #1:

“Scott, sorry for off-topic, do you already know if there will be another GCT tutorial during FOCS 2010?”

Yes!

Comment #18 August 14th, 2010 at 1:30 am

CS nobody (#13):

“You do believe P≠NP is true?”

Yes!

Comment #19 August 14th, 2010 at 1:33 am

Anon #2:

“Scott, you went to the GCT workshop at Princeton but never blogged about it. What do you think about GCT?”

It’s a-comin’!

Comment #20 August 14th, 2010 at 1:34 am

Seeing as how Scott’s rule #1 of invalidating a P != NP proof (using 2SAT) is pretty effective, I’m thinking that one additional way to prove that P = NP would be by showing that for every possible proof M that P != NP, we can always build a variant of 2SAT, 2SATv, such that M places 2SATv outside of P.

Suppose that someone claims that this is the case – how would you disprove it? Are we able to prove that known barriers can be overcome, or is it something obvious that we can just take for granted?

Also, this is a similar approach to proving that P != NP by showing that for every polynomial time algorithm M for an NP-Complete problem, we can always build an input such that M reports the incorrect answer.

It’s interesting that, on the P != NP side above, there is the useful upfront requirement that the solution algorithm run in polynomial time, whereas on the P = NP side the solution proof doesn’t have to conform to a given size, it can be anything. So on the P != NP side we have more to work with.

Comment #21 August 14th, 2010 at 1:45 am

john t #3:

“Is it possible that P=NP is undecidable? Or that it may be possible to prove that we can’t prove it one way or the other?”

(Relatively) Short Answer: yes, it’s certainly a logical possibility that P=NP could be independent of axiom systems such as ZFC. I don’t think it’s at all likely: the history of mathematics gives us much more reason to believe that we simply haven’t been smart enough to find the proof. Of course, if P=NP *was* independent, the question would arise of whether that fact *itself* was provable in ZFC, or whether it was also independent, and so on ad infinitum.

It’s worth stressing that, even if P=NP was independent, there’s either a polynomial-time algorithm for 3SAT or there isn’t! In other words, unlike (say) the Continuum Hypothesis, P=NP is clearly either true or false; independence would just mean that our standard axiom systems were too weak to decide which. (For more about the distinction between “P-vs.-NP-like questions” and “CH-like questions,” see my recent Math Overflow post.)

Long Answer: See the survey article Is P vs. NP Formally Independent? that I wrote 7 years ago!

Comment #22 August 14th, 2010 at 1:52 am

Paul Beame #16:

“Scott, I may have missed something prior to this paper about how ‘experience has shown’ the dangers of using descriptive complexity to fool oneself… or did I simply forget?”

Back when I was an undergrad (in the late 90s), there was a serious-looking attempt to prove NL≠NP using descriptive complexity and category theory, which had me hitting the library to study those topics. The attempt ultimately failed for reasons closely related to Deolalikar’s: namely, confusion about whether the first-order theory did or didn’t have an order relation. (I can’t remember the name of the author, but we could probably dig it up.)

I *seem* to remember at least one other attempt based on descriptive complexity, but it’s possible that I’m fooling myself and multiplying a single example.

Comment #23 August 14th, 2010 at 1:54 am

Amol #12:

“I believe Richard Feynman is in the story about the pendulum, not Richard Dawkins.”

Yes, apparently Feynman was doing it long before Dawkins, and probably others were doing it long before him.

Can anyone find a video of Feynman doing the pendulum demonstration on the web?

Comment #24 August 14th, 2010 at 2:02 am

Xamuel #10:

“How about #9: The proof apparently involved physics… which should hardly be relevant to a purely logical problem?”

Sorry, I can’t get behind that test! Yes, certainly P vs. NP is a math problem, whose answer is independent of the laws of physics. On the other hand, there’s a very well-established and beautiful connection between random k-SAT and statistical physics. This connection led the statistical physicists to make a large number of conjectures about the behavior of random k-SAT—many of which were later rigorously confirmed by mathematicians and computer scientists, and none of which (as far as I know) were disconfirmed.

On the other hand, a legitimate reason for skepticism of Deolalikar’s attempt is that proving the hardness of random k-SAT seems like a much harder problem than “merely” proving P≠NP! Walk before you run, and all that (or in this case, achieve interstellar travel before intergalactic travel…).

Comment #25 August 14th, 2010 at 2:20 am

Xamuel:

Physics, specifically Statistical Physics, has studied 3SAT, which they call something like “mean field diluted spin glasses”.

But to me also the author’s obvious enthusiasm for the physics community rang alarm bells. The problem with physicists in this situation is that culturally they love empirical results to such an extent that they become less rigorous. This allows physicists to succeed with the major engineering efforts necessary to discover new particles or build a nuclear bomb, but it can’t help prove P != NP.

I’m being too harsh — statistical physicists’ insights have led to cleaner proofs of mathematical results, such as some of the work with Alternating Sign Matrices. But with 3SAT I haven’t seen anything sufficiently rigorous.

Comment #26 August 14th, 2010 at 2:41 am

http://www.topnews.in/p-vs-np-mathematical-problem-successfully-solved-indian-brain-2268953

P vs NP mathematical problem is successfully solved by an Indian brain

Even though Vinay Deolalikar has solved this P vs NP mystery but many of the scientists have not accepted his way of solving the theory, they want him to solve it practically in the right way.

Few of them also claim that if Vinay can prove its claim right then they promise to pay $ 2 million as a prize.

Comment #27 August 14th, 2010 at 2:52 am

I suppose it’s just hypothetical now, but I was wondering, if it were to have happened that Deolalikar was offered the Clay Prize, could he refuse Clay’s $1million, and yet claim and receive Scott’s $200k?

Comment #28 August 14th, 2010 at 2:59 am

Does the checklist apply for vague ideas of how one could try to attack the problem as well?

Vinay Deolalikar’s previous work included the topic of P vs NP for infinite Turing machines. I stumbled upon this paper before.

As I asked a too-specific (and probably too “cranky”) question about it in this blog before, I’d like to ask more generally: would it be hopeless, regarding the checklist, to try to find connections between the P vs NP problem and the infinite version of it, which is already proven?

Comment #29 August 14th, 2010 at 3:22 am

Scott,

I feel that the statement “experience has shown that descriptive complexity is a powerful way of fooling yourself into thinking you’ve proven things that you haven’t” is unfair to the area. It all depends on who uses it. The subtleties you refer to are just trivial matters to the experts. Maybe you want to ask Neil about this. The fact that there have been dozens of bogus proofs using descriptive complexity does not justify the claim any further (as you seem to imply in your reply to Paul). Think of the hundreds of bogus papers claiming hypercomputation, or P = NP or all sorts of other things through the most esoteric of methods often including quantum computation.

Comment #30 August 14th, 2010 at 3:43 am

Albert: I actually agree with you that my statement about descriptive complexity was unfair—in the same way that profiling “people who look like a wild-eyed Middle-Eastern terrorist” at the airport is unfair (that criterion probably “works” much better than chance, but it also picks up at least one distinguished theoretical computer scientist… ). On the other hand, if you’re at all Bayesian, and you see descriptive complexity in a P≠NP proof in addition to other signs that the author might be fooling himself, the odds don’t look great.

Let me stress again that, in pointing out the potential for abuse, I mean not the slightest disrespect to descriptive complexity itself. I had a wonderful conversation with Neil I. a month ago, which renewed my determination (fueled, also, by Ben Rossman’s recent breakthrough) to learn more about the area and maybe even work on it myself someday.

Comment #31 August 14th, 2010 at 3:48 am

Cranky McCrank:

“Would it be hopeless, regarding the checklist, to try to find connections between the P vs NP problem and the infinite version of it, which is already proven?”

Alas, I’ve never studied the “infinite version”: would anyone who has be kind enough to summarize it for us here?

Comment #32 August 14th, 2010 at 4:00 am

The “infinite version” presumably refers to the P!=NP theorem for infinite time Turing machines: http://wwwmath.uni-muenster.de/logik/Personen/rds/pnenp.pdf

Infinite time TM’s are a really cute (but probably useless) construction in mathematical logic. They are like classical TM’s except if you run a non-halting program, instead of running forever they enter a special state corresponding to a limit ordinal, after which they can see the result of the infinite computation. They can decide any proposition in the arithmetic hierarchy. A basic article about them is here: http://arxiv.org/pdf/math.LO/0212047.gz

There are a bunch more related papers at http://jdh.hamkins.org/Publications

Comment #33 August 14th, 2010 at 4:02 am

I can give the following links, but I have to add that I’m of course just speculating a bit without much knowledge of the subject. I thought this might be a fun idea for thinking about the P vs NP problem.

Joel D. Hamkins gives an introduction to infinite time Turing machines in this link:

http://arxiv.org/PS_cache/math/pdf/0212/0212047v1.pdf

The P vs NP problem is defined here:

http://arxiv.org/PS_cache/math/pdf/0212/0212046v1.pdf

Of course I have to admit this is not a short summary.

Comment #34 August 14th, 2010 at 4:14 am

I should have said, P!=NP for ITTM’s was proven in 2002 by Ralf Schindler. Deolalikar (with Schindler and J.D.Hamkins) proved that P != NP-intersect-Co-NP for ITTM’s: http://arxiv.org/pdf/math.LO/0307388.gz

I see the P!=NP link in my previous post is timing out (it worked a few days ago). The journal reference is Ralf Schindler, P != NP for infinite time Turing machines, Monatshefte für Mathematik 139 (4) (2003), pp.335-340.

ITTM’s are fun to read and think about, but at least for me I don’t see any way to get classical separation results from them.

FWIW I also saw the invocation of physics in Deolalikar’s paper as a red flag (because physics arguments are often unrigorous) but then I saw Deolalikar had also worked on mathematical logic and presumably knew what a proof was. So I thought maybe the physics and logic cancelled each other out ;-).

Comment #35 August 14th, 2010 at 4:28 am

This is just wild speculation, but my hope would be that if (and that’s a big if) the following definition of an infinite P vs NP problem could be proven from the already existing “infinite P vs NP problem” of J.D. Hamkins, one could do interesting things with it.

Take a deterministic 3-band Turing machine, but allow the input word w on the first band to be infinitely long. The Turing machine is allowed to perform countably infinitely many steps; if it writes “1” infinitely many times on the third band, it accepts w.

An example would be an infinitely long string of natural numbers. The question of whether this string contains the number “2” infinitely many times could easily be solved by such a Turing machine.

A nondeterministic version of such a machine would correspond to the class of infinite NP. The question would be: is the nondeterministic version more powerful?

Of course this is just speculation on my side; maybe this definition doesn’t even make sense. But maybe one could show that from Hamkins’ definition of an infinite P not equal NP, P not equal NP follows for my definition as well.

If so, the idea would be to invent infinite SAT formulas with finitely long clauses, which are satisfiable if and only if the variables can be chosen in a way that each clause gets true.

……..

the rest is “explained” here: http://rjlipton.wordpress.com/2010/07/01/can-amateurs-solve-pnp/#comment-4234

I just thought this might be a nice little idea, basically resting on the idea that, for example, n^3 and 23^n are much easier to distinguish if you choose n to be the ordinality of the natural numbers.

Comment #36 August 14th, 2010 at 4:48 am

Scott, there is a syntactic characterization of P by Cook and Bellantoni that I’ve always been surprised doesn’t get mentioned more often in P vs NP discussions. Do you know about it? Would it count as “descriptive complexity”? The P vs NP problem could then be turned into software implementation:

1) concoct a “programming language” based on the Cook-Bellantoni scheme. This would be a Haskell-like functional programming language allowing only a certain restricted form of recursion and linear types, the result of which is that the algorithms expressible in this language are precisely the polynomial-time algorithms.

2) attempt to solve your favorite NP-hard problem by writing a program for it in this language. If you succeed, or else prove (maybe with ordinal analysis or something like that) that it’s impossible, you’re done ;-).
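For what it’s worth, the normal/safe argument split at the heart of the Bellantoni-Cook characterization can be mimicked in ordinary code. The sketch below is my own toy illustration, not the actual formal system (and Python enforces none of the discipline): in predicative recursion on notation, the recursive call’s value is passed only in a “safe” position, where the step functions may consume it but never recurse on it.

```python
# Toy illustration (NOT the actual Bellantoni-Cook system) of
# "predicative recursion on notation". The normal argument x may drive
# recursion; the safe arguments (a and the previous value r) may only
# be consumed. In the real system this split is enforced syntactically,
# and the definable functions are exactly the polytime ones.

def rec_on_notation(g, h0, h1):
    """Builds f satisfying:
         f(0, a)      = g(a)
         f(2x, a)     = h0(x, a, f(x, a))   for 2x > 0
         f(2x + 1, a) = h1(x, a, f(x, a))
       The previous value f(x, a) appears only as the safe argument r."""
    def f(x, a):
        if x == 0:
            return g(a)
        step = h0 if x % 2 == 0 else h1
        return step(x // 2, a, f(x // 2, a))
    return f

# Example: the binary length |x|, a polytime function, via the scheme.
length = rec_on_notation(
    lambda a: 0,
    lambda x, a, r: r + 1,   # even case: one more bit
    lambda x, a, r: r + 1,   # odd case: one more bit
)
```

Recursion runs on the *notation* (halving x each step), so the depth is |x|, which is the intuition behind why the scheme can’t escape polynomial time.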

Comment #37 August 14th, 2010 at 4:51 am

Oops, reference for above:

H. Schwichtenberg, “Program extraction from proofs”, chapter 3.

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.125.8891&rep=rep1&type=pdf

Comment #38 August 14th, 2010 at 5:21 am

[This is a test of whether it was the length of my previous comment that triggered moderation. Sorry for any redundancy.]

Pardon me if I’m being dense, but 3 questions [only 1 of which will appear in this comment…]

1) I must admit I don’t get Ryan Williams’s objection that SAT can be (non-uniformly, and presumably not in polynomial time, of course!) mapped onto SAT0 while keeping the same number of satisfying assignments, clusters, and other major aspects of the solution “structure”. It seems like cheating to me. How does the fact I can *plant a specific solution* into a hard problem mean it’s an easy problem? I hasten to add that this question is spawned by the following not-at-all-rigorous thought experiment:

How much would I want to punch a professor if he said to me after an exam “Oh, you found the exam hard? That’s weird, the exam was quite easy! After all, I made sure the questions had the solutions I wanted them to have. Therefore, the exam shoulda been easy for you even though I, of course, didn’t tell you these solutions in advance!”

I’d want to punch him a lot. Am I wrong?

[BTW, speaking of planting solutions and not changing the statistics, here’s the physicists’ (namely, Zdeborova and Krzakala, who are two key people in the statistical physics of random SAT and who’ve appeared on the thread on Lipton’s blog) take on it: http://arxiv.org/abs/0901.2130 ]
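To make the “planting” idea concrete, here is a minimal sketch (mine, and far cruder than the quiet-planting construction of the linked paper): fix a hidden assignment, then sample random 3-SAT clauses but keep only those the hidden assignment satisfies. By construction the planted string survives as a solution, no matter how hard the instance looks from the outside.

```python
# Minimal sketch of the simplest planted-solution model for random
# 3-SAT (much cruder than "quiet planting"): keep a random clause only
# if the hidden assignment satisfies it.
import random

def planted_3sat(n, m, seed=0):
    """Return (planted_assignment, m clauses). A literal is a pair
    (var, negated); it is true under assignment a iff a[var] != negated."""
    rng = random.Random(seed)
    planted = [rng.random() < 0.5 for _ in range(n)]
    clauses = []
    while len(clauses) < m:
        vs = rng.sample(range(n), 3)
        clause = [(v, rng.random() < 0.5) for v in vs]
        # reject any clause the planted assignment would violate
        if any(planted[v] != neg for v, neg in clause):
            clauses.append(clause)
    return planted, clauses
```

Note the rejection step skews the clause distribution slightly (which is exactly what the more refined planting constructions are designed to avoid).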

Comment #39 August 14th, 2010 at 5:24 am

[This is continuing my attempt to circumvent triggering moderation by breaking my 3-question post into 3 posts of 1 question each. Again, sorry for any redundancy.]

2) Let me preface by saying I realize that Deolalikar’s proof seems to have several fatal flaws presently in the part where it says FO(LFP) applied to SAT (and thus PTIME algorithms via Immerman-Vardi) can only characterize the distribution of solutions of n-bit instances of SAT over the 2^n possibilities down to distributions with only 2^{polylog(n)} “independent parameters”.

That said, how is this *in principle* incompatible with finding k-XORSAT is in P?

Yes, I know random instances of k-XORSAT in certain regimes will almost always exhibit clustering, condensation, and freezing just like random instances of k-SAT almost always do in certain regimes. But owing to the fact k-XORSAT is linear, won’t there be additional correlations in the conditional probabilities of each bit on the others in the solutions that wouldn’t be there in full k-SAT? Is the problem that the FO(LFP) argument (assuming its presently apparently fatal flaws could be fixed) is powerless to encompass algorithms that’d hit on that hidden “structure”?
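The linearity the comment points to is exactly what a polynomial-time algorithm exploits: every XOR clause is a linear equation over GF(2), so satisfiability is just Gaussian elimination. A hedged sketch of my own (rows stored as integer bitmasks):

```python
# Sketch of why k-XORSAT is in P: each XOR clause is a linear equation
# over GF(2), so satisfiability reduces to Gaussian elimination.
# pivots[b] holds a reduced row whose highest set bit is b.

def xorsat_satisfiable(equations):
    """equations: list of (variable_indices, parity), encoding
    x_{i1} XOR ... XOR x_{ik} == parity over GF(2)."""
    pivots = {}
    for vars_, parity in equations:
        mask = 0
        for v in vars_:
            mask ^= 1 << v            # repeated variables cancel mod 2
        while mask:
            bit = mask.bit_length() - 1
            if bit not in pivots:     # independent row: becomes a pivot
                pivots[bit] = (mask, parity)
                break
            pm, pp = pivots[bit]      # eliminate the leading bit
            mask ^= pm
            parity ^= pp
        else:
            if parity:                # row reduced to 0 == 1
                return False
    return True
```

This is the “hidden structure” in question: no matter how cluster-ridden the solution space looks, it is always an affine subspace of GF(2)^n.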

Comment #40 August 14th, 2010 at 5:26 am

3) Finally, I wholeheartedly agree that Deolalikar’s focus on the “complexity” of parameterizing the solutions of a SAT instance is wrong-headed since, for example, Valiant & Vazirani’s randomized reduction from SAT to SAT-with-at-most-1-solution make the set of solutions trivial. But can’t this be circumvented if Deolalikar broadened his focus from the question

How many independent parameters are needed to describe how the solutions to an an n-bit instance of random k-SAT are distributed over all 2^n bit strings?

to

How many independent parameters are needed to describe the distribution of the number of clauses violated over all 2^n bit strings?

[Sidenote 1: I ask this in part since, being a guy who's investigating how local random search heuristics, like classical and quantum annealing algorithms, approximate the solutions to random constrained minimization problems, the important "structure" of constrained minimization problems to me intuitively isn't just where the global minima are, but where the local minima are too, as well as how high the barriers between the global and local minima are.]

[Sidenote 2: I by no means am saying this generalization would be easy. Granted, it might take major new ideas to attack the 2nd question, and if one succeeded, one might merely(!) end up separating P and #P rather than P and NP.]

So, Scott and everybody, how dense am I being? (Or, perhaps better said, how much denser than usual am I being?! )

Comment #41 August 14th, 2010 at 5:27 am

Darn it, my two other questions that IMHO are probably less dense triggered moderation. At the risk of sounding like a cranky customer despite the fact the product’s free, what triggers moderation?

Comment #42 August 14th, 2010 at 5:38 am

Well, since short statements without html seem not to trigger moderation, lemme add 1 question to the #2 and #3 that are awaiting moderation:

It seems no one has tried to make a theory of quantum descriptive complexity. Is there a reason to believe it’d be stupid to try?

Comment #43 August 14th, 2010 at 5:56 am

Blogspot has captcha to deter spam. WordPress relies on the ingenuity of its programmers to decide what’s spam and what’s not. It may even be outsourcing this activity.

Comment #44 August 14th, 2010 at 6:03 am

There is a clear sign a researcher doesn’t value his time: he reads beyond the title papers resolving P vs. NP.

Hey guys, once you’re done with Deolalikar, there are plenty of his colleagues’ postings on arXiv and everywhere else. Have fun!

Comment #45 August 14th, 2010 at 6:40 am

sorry to ruin the 42 comments but you’re soooo cool Scott

Comment #46 August 14th, 2010 at 7:25 am

Scott asserts:

“A clear consensus has emerged that the proof, as it stands, is fatally flawed.”

That’s a fine topic sentence, and yet IMHO, Tim Gowers’ weblog has just posted an equally good one:

“Deolalikar is making a move that I, for one, had not thought of, which is to assume that P=NP and derive a contradiction.”

The most fun aspect about this episode (for me) is that so many weblogs are emitting Great Truths, that is, truths whose *opposite* also is true. To assemble a collection of Great Truths, just read the various weblogs … it’s great fun!

Of all the (many!) commentators, it seems to me that Terry Tao’s posts are providing the most nearly synoptic overview of the Great Truths that are flying around … obviously it ain’t easy to synthesize such an overview.

Complexity theory is turning out to resemble quantum physics in a respect that Feynman memorably remarked upon:

“There is always another way to say the same thing that doesn’t look at all like the way you said it before. I don’t know what the reason for this is.”

As we achieve a better understanding of Feynman’s remark—in physics and/or in complexity theory—then perhaps all these Great Truths will begin to condense into a more compact (and more practically useful) set of truths. And *that* would be the most fun outcome of all.

Comment #47 August 14th, 2010 at 7:57 am

I have this feeling that this whole P and NP thing is not only a profound problem that needs solving, but something that can be infinitely curious to try and wrap your mind around…

Thing is, there’s a whole world of great-minded, genius hackers out here who can’t understand one iota of what anyone is talking about. We’re not your traditional code-savvy hackers; we’re your inventors, life hackers, researchers, scientists … and I think I can speak for most of us when I say: we would love to take the time to really dive into this thread, but we ask that someone (you) write a blog post that breaks this whole thing down into a rest-of-the-world-friendly “P/NP for dummies” … or at least explains it to us like we’re stupid as hell … at this point I’m really okay with even that.

Comment #48 August 14th, 2010 at 9:44 am

Could one add a 9th point: That the author does not prove how his proof evades the known barriers (relativization, natural proofs and algebrization)?

Comment #49 August 14th, 2010 at 10:28 am

Great … now we have:

~~Eight~~ Nine Signs A Claimed P≠NP Proof Is Wrong

Hmmm … perhaps it would be comparably useful to assemble a list of Signs That A Claimed P≠NP Proof Has Enduring Value. And it is trivially easy to start such a list:

—————

Six Signs A Claimed P≠NP Proof Is Valuable

1. Dick Lipton says: “Timothy Gowers, Gil Kalai, Ken Regan, Terence Tao, and Suresh Venkatasubramanian are some of the top people who are trying to unravel what is up.”
2. Tim Gowers says: “The proof makes a move that I, for one, had not thought of, which is to assume that P=NP and derive a contradiction.”
3. Neil Gurvits remarks: “This approach is a very clever, in a good sense elementary, way to push the previous algebraic-geometric approaches.”
4. Bob Sloan (former NSF/TCS Program Director) says: “What a great story for the program director for this area to give to the new AD of NSF.”
5. Scott Aaronson bets that the proof itself is wrong, but does not bet that the proof strategy is wrong.
6. Terry Tao observes (among Tao’s dozens of valuable comments): “It’s also worth mentioning an older observation of Thurston that mathematicians ultimately are not after proofs; they are after understanding.”

—————

Obviously the preceding list could be made considerably longer … and it points to a Scott-style assertion that is provocative yet defensible on the evidence:

Deolalikar’s claimed proof has already exerted a positive impact—even as a (likely) failure—comparable to the aggregate impact of the Wiles proof, the AKS proof, and the Hamilton/Perelman proof.

Now, do we *want* this statement to be true? Perhaps not. But *is* this statement true? Hmmm … well, is it?

Comment #50 August 14th, 2010 at 10:30 am

Comment #46 John Sidles

“Deolalikar is making a move that I, for one, had not thought of, which is to assume that P=NP and derive a contradiction.”

Vinay said yesterday “I was able to fix the issues raised with the earlier versions – the final version should be up in 3-4 days here.”

3-4 days from yesterday is August 15, Independence Day in India. So maybe there is a Phoenix rising from the ashes?

Comment #51 August 14th, 2010 at 10:35 am

If Deolalikar is revising his paper, it would be a really good idea for him to post versions to the arXiv, so that the paper won’t be a moving target.

Comment #52 August 14th, 2010 at 10:42 am

Sid: Thanks, that was really an inexcusable omission! To be honest, I haven’t really used relativization / algebrization / natural proofs much in “applied P!=NP proof-killing”, but they should certainly be in the proof-killer’s toolkit.

Comment #53 August 14th, 2010 at 11:04 am

Scott:

Read this!

http://www.topnews.in/p-vs-np-mathematical-problem-successfully-solved-indian-brain-2268953

It looks like your 200 grand have already inflated to a million….

Comment #54 August 14th, 2010 at 11:06 am

Errata: inflated to 2 million…

Comment #55 August 14th, 2010 at 11:19 am

Is this like the FIFA World Cup of the nerds?

Comment #56 August 14th, 2010 at 12:50 pm

Two things about the quote attributed to me (about the move that I had not thought of). First, it is, with hindsight, a rather basic idea, and certainly not something that would answer the question “Where’s the beef?” Secondly, I have updated my post with an argument that suggests that the idea cannot really help much.

(I realize your comment is not meant to be taken too seriously, but I thought that perhaps the above was worth saying, in case anyone does take it seriously.)

Comment #57 August 14th, 2010 at 12:51 pm

Sorry, that was a response to comment #49 by John Sidles.

Comment #58 August 14th, 2010 at 1:13 pm

The sum of human knowledge is so large that it is easy for an expert in one area to make a fool of themselves in another area. So kudos to Gowers and Tao for risking wading in on this problem and perhaps formalising some new obstructions.

Comment #59 August 14th, 2010 at 1:35 pm

Tim (Gowers), thanks for your comments! My list was intended to be taken seriously to precisely the same extent that (AFAICT) Scott intended *his* list to be taken seriously, viz., by leaving the determination of seriousness mainly up to the reader, not the writer.

In which case, perhaps a mixed strategy is nearly optimal: don’t take either list *too* seriously. By a factor of two, IMHO we should take *most* seriously the comment that Terry Tao attributed to Bill Thurston. `Cuz hey … Terry Tao *and* Bill Thurston!

We cannot know for sure which of Bill Thurston’s writings Terry Tao had in mind, but one candidate is Thurston’s foreword to Daina Taimina’s *Crocheting Adventures with Hyperbolic Planes* (the book that won the 2009 Diagram Prize!!!); Thurston’s essay can be read through the magic of Google Books. As Thurston’s foreword says:

“Mathematics is an art of *human* understanding … Mathematics sings when we feel it with our whole brain.”

So if anyone has felt their brain “singing” in consequence of Deolalikar’s attempted proof (including any of the commentary upon it), then by the Thurston/Tao criterion, the proof has been highly successful!

Comment #60 August 14th, 2010 at 1:48 pm

“The 3 issues that were raised were (a) the use of locality and the techniques used to deal with complex fixed points (b) the difference between 9-SAT and XORSAT/2-SAT and (c) clarification on the method of parametrization. I will address all these in the version being prepared, which will be sent for peer and journal review.”

“peer” = internet?

Comment #61 August 14th, 2010 at 2:33 pm

@John Sidles This probably isn’t what Thurston had in mind, but I’d say that a result of the discussion surrounding Deolalikar’s paper is that my brain has buzzed a bit. Every so often I like to bang my head against the P-NP brick wall, and next time I do so I hope I’ll do so very slightly more interestingly than I would have done without Deolalikar, even if his influence is accidental and indirect.

Comment #62 August 14th, 2010 at 3:30 pm

“known techniques for polynomial-time algorithms, including dynamic programming, linear and semidefinite programming”

Semidefinite programming is only approximately (weak-membership and ϵ-optimality) solvable in polynomial time, but not known to be in NP (in the bit-length model) when it comes to exact membership, exact optimum, and related problems.

Comment #63 August 14th, 2010 at 3:34 pm

“Is this like the FIFA World Cup of the nerds?”

Yes, that’s *exactly* what it’s like. Indeed, here is a photo of me blowing a vuvuzela, in support of P≠NP not having been proved.

Comment #64 August 14th, 2010 at 4:02 pm

Scott:

Would you please write a concise and clear post on how you read a proof? (This was inspired by your passing comment of how “words evaporated …” before sticking to your brain.)

I would be interested in knowing this from you because you write so well, and because you are so smart Scott. So smart.

No seriously, could you? Please?

Comment #65 August 14th, 2010 at 4:03 pm

@Gowers:

“… my brain has buzzed a bit …”

Aye, lad … yer brain buzzes? … now that’s looxury!

(repurposing *Four Yorkshiremen* as Monty Python’s reminiscence of the hardness of proving P≠NP)

Comment #66 August 14th, 2010 at 4:05 pm

“Before you reach for your axes … ”

When I read this, I imagined somebody reaching for an expansive cross made of thin aluminium wire and wondered why Scott was bringing that up in the middle of talk of descriptive complexity theory.

Comment #67 August 14th, 2010 at 4:12 pm

Scotty,

“By putting my life savings on the line, could I give the world a dramatic demonstration of just how high the stakes are with P vs. NP—that when computer scientists say this problem won’t be solved without a titanic advance in human knowledge, without overcoming obstacles like the ones mentioned in points 1-4 above, they’re not kidding? After such a demonstration, would more people get it?”

I’ve not heard you ever advertise the possibility that P!=NP can be proved. Or even other separations. Isn’t it important to do this type of “positive publicity” about the questions in TCS as well, lest the audience leave the TCS theater with the impression that “those guys ask impossible questions … they are not deep, they are just impossible!”?

Comment #68 August 14th, 2010 at 4:39 pm

John wrote: “Deolalikar’s claimed proof has already exerted a comparably positive impact even as a (likely) failure than the aggregate impact of the … [great achievements listed].”

Dear John, this is sheer nonsense.

I don’t see how you can possibly defend such a statement.

Comment #69 August 14th, 2010 at 4:48 pm

Continuing on SDP…

It was shown here that if the SDP decision problem (SDP-DP := Does a given Linear Matrix Inequality system with semidefinite constraints and rational coefficients have a feasible solution over the reals?) belongs to NP, then it’d belong to Co-NP, and vice versa. It follows from this that SDP-DP belongs to NP ∩ Co-NP in the BSS real number model, and also that in the Turing machine model, SDP-DP is not NP-Complete unless NP=Co-NP.

IOW, the popular perception that “SDP is known to be in P” is incorrect, and it remains a challenging open problem in the Theory of SDP to establish, or prove otherwise, that SDP-DP belongs to P.

Comment #70 August 14th, 2010 at 5:15 pm

Gil:

“Dear John, this is sheer nonsense. I don’t see how you can possibly defend such a statement.”

With sincerest respect, Gil … because you too are widely recognized as a *lamed-vavnik* of STEM blogging … it is only necessary that we both be patient, while people (other than myself) make a case that we’re experiencing the Woodstock of TCS.

———————————

*Background for the young-and-globalized:* the 1969 Music Festival at Woodstock, New York was the first—and nowadays many say the greatest—of the world’s rock-and-roll megafestivals. The quality of the music at Woodstock was uneven at best, but somehow the festival was not *only* about the music … and afterwards, nothing was the same.

In this week’s impromptu TCS festival, Deolalikar was the first musician on the stage … now many others have joined him … and the audience is enjoying the music very much.

This is an emerging model of TCS that (perhaps) viably scales to a world population of 10^10 that includes a mathematician population of 10^7; that is why I look forward very much to seeing what people like Michael Nielsen have to say about it (once the dust settles).

Comment #71 August 14th, 2010 at 5:21 pm

Hi Gil!

I wrote a good-natured reply that began “Gil, you too are widely recognized as a lamed-vavnik of STEM blogging” that is snared in awaiting-moderation purgatory. With luck it may emerge soon; in any case I have emailed it to you.

And thank you for your wonderful weblog!

Comment #72 August 14th, 2010 at 9:00 pm

Scott, what in your opinion is the most likely first problem to get a real lower bound? ‘SAT is superlinear’, while an extremely limited statement, feels like it’s obfuscating the issue by dealing with something completely general.

I’m very fond of the particular arithmetic-only circuit which I conjecture requires n^(3/2) time, but while coming up with that conjecture made me think I’d really clarified the core issues, studying it more just convinced me that I have no idea whatsoever how to make any headway on the problem.

Comment #73 August 14th, 2010 at 9:51 pm

QTR #67:

“I’ve not heard you ever advertise the possibility that P!=NP can be proved. Or even other separations. Isn’t it important to do this type of ‘positive publicity’ about the questions in TCS as well, lest the audience leave the TCS theater with the impression that ‘those guys ask impossible questions … they are not deep, they are just impossible!’?”

Check out my talk Has There Been Progress On The P vs. NP Question?, which answers the title question with a resounding yes.

Look, I can’t honestly advertise that “P≠NP is provable in our lifetimes,” because I have no idea if it is! But I can certainly advertise the wonderful, nontrivial *progress* being made in complexity theory. As I see it, there’s no question that we understand the P vs. NP problem substantially better than we did 30 years ago. So it seems reasonable to guess that 30 years from now, we’ll understand it better still (assuming civilization is still around then!).

Science is cumulative: we have our “aspirational problems,” but if we can’t solve them, we’re delighted just to do *anything* that brings the next generation a little closer to solving them than we were. Of course, it helps if the “anything” that we do has interesting spinoffs in the meantime, apart from the aspirational problems (and that’s certainly been the case for theoretical computer science).

Comment #74 August 14th, 2010 at 9:58 pm

Not Scott (Quantum Tennis Racket) Comment #64

Don’t heckle Scott for having a different opinion. You can go ask the herd on the other websites why they bet that Vinay Deolalikar’s paper was more promising for public review than any of the others floating on the internet.

Comment #75 August 14th, 2010 at 10:03 pm

“Scott, what in your opinion is the most likely first problem to get a real lower bound?”

The difficulty with your question is that we already have “real lower bounds”—for example, for monotone circuits, constant-depth circuits, multilinear formulas, etc.

The problems that are currently on the “far horizon” include NEXP vs. P/poly, arithmetic lower bounds for the permanent, derandomizing polynomial identity testing (which is known to be closely related to the first two), lower bounds for ACC^0 (constant-depth circuits with, say, mod-6 gates), lower bounds on tensor rank, and no doubt some others that I’m forgetting right now.

Comment #76 August 14th, 2010 at 10:13 pm

QTR #64:

“Would you please write a concise and clear post on how you read a proof?”

At least in the case of attempted P≠NP proofs, I just told you exactly what I look for when I read one! You can find more in my Ten Signs A Claimed Mathematical Breakthrough Is Wrong post.

Of course, reading “ordinary” proofs (that don’t claim to solve famous open problems) is a very different matter, and a huge topic in its own right. A lot depends on what the purpose is: are you trying to verify correctness? to understand the high-level ideas? to adapt or generalize the proof to a new application? to decide how *novel* the proof is (the usual task of STOC/FOCS program committees)?

Comment #77 August 14th, 2010 at 10:28 pm

7 signs is not enough for you? Damn it, Scott, you have to learn not to overdo things.

Comment #78 August 14th, 2010 at 10:57 pm

There is a great post on http://geomblog.blogspot.com/2004/04/meta-proof.html and I take the liberty of pasting it here.

A meta-proof

inspired by pandagon: a meta-proof of P=/!=NP, coming to a newsgroup near you:

P: I would like to announce my proof of P=/!=NP. The proof is very short and demonstrates how to solve/not solve SAT in polynomial time. You may find a write up of the proof here.

|– V: I started reading your proof and when you claim ‘foobar’ do you mean ‘foobar’ or ‘phishbang’ ?

|—-P: I meant ‘phishbang’. Thanks for pointing that out. An updated version is here.

|——V: Well if you meant ‘phishbang’ then statement “in this step we assume the feefum” is incorrect.

|——–P: No no, you don’t understand. I can assume feefum because my algorithm has a glemish.

|———–V: It has a glemish ? !! But having a glemish doesn’t imply anything. All algorithms have glemishes !!

|—-V': Yes, and in fact in the 3rd step of argument 4, your glemish contradicts the first propum.

|–V”: I think you need to understand some basic facts about complicity theory before you can go further. Here is a book to read.

|—-P: My proof is quite clear, and I don’t see why I have to explain it to you if you don’t understand. I have spent a long time on this.

|——V': Um, this is a famous problem, and there are many false proofs, and so you do have to convince us that the argument using glemishes can actually work.

|——–P: But what is wrong in my proof ? I don’t see any problems with it, and if you can’t point one out, how can you say it is wrong.

|———-V””: I don’t have to read the entire proof: glemished algorithms are well known not to work.

|————V”””: Check out this reference to see why.

P:

|–P: This is what I mean by a glemish. It is really a flemish, not a glemish, which answers your objection.

|—-P': Keep up the good work P. I tried publishing my result, and these people savaged my proof without even trying to identify a problem. All great mathematical progress has come from amateurs like us. See this link of all the theorems proved by non-experts.

|——V': Oh jeez, not P’ again. I thought we had established that your proof was wrong.

|——–P': no you didn’t: in fact I have a new version that explains the proof in such simple language even dumb&%&%s like you can get it.

|——P: Thanks P’, I understand that there will be resistance from the community since I have proved what they thought to be so hard.

|–V': P, I’m trying to understand your proof, with the flemishes, and it seems that maybe there is a problem in step 517 with the brouhaha technique.

P:

|—-P: V’, thanks for pointing out that mistake. you are right. Instead of a brouhaha technique I need a slushpit. The details are complicated, so I will fix it and post a corrected version of the proof shortly. Thanks to all those who gave me constructive advice. I am glad that at least some of you have an open mind to accept new ideas.

This was prompted by the latest P=NP fiasco on comp.theory. I can only express my amazement and wonder at the few tireless souls (and they seem to be the same ones) who patiently try to address each new crackpot proof that comes down the pipe.

Comment #79 August 14th, 2010 at 11:53 pm

Anonymous says:

“I can only express my amazement and wonder at the few tireless souls …”

Hmmm … it *is* a thought-provoking list of souls, isn’t it? … to which we can add the name “GASARCH”:

“My prediction is still that something interesting will come out of this but NOT P ≠ NP.”

It is interesting that most folks’ intuitions are markedly self-fulfilling … indeed, our intuitions would scarcely be useful otherwise. From this we can predict … well … each reader can choose for themselves … guided by (what else?) their intuition.

Comment #80 August 15th, 2010 at 12:54 am

“The problems that are currently on the ‘far horizon’ include NEXP vs. P/poly, arithmetic lower bounds for the permanent, derandomizing polynomial identity testing (which is known to be closely related to the first two), lower bounds for ACC^0 (constant-depth circuits with, say, mod-6 gates), lower bounds on tensor rank, and no doubt some others that I’m forgetting right now.”

Maybe it’s because I’m a programmer, but some of the more mathy problems feel clumsy to me, specifically because the circuits for multiplication, and even addition, are complicated and obtuse. But calculating the determinant of a matrix mod 2 feels clean, and the same goes for the permanent. But isn’t it the case that the actual time for calculating the determinant is conjectured to be n^2, for n the length of a side of the matrix, making it actually linear by normal measures?
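As an illustrative aside (my own sketch, assuming nothing beyond the comment): the “clean” quantity mentioned—the determinant of a 0/1 matrix mod 2—is computable by Gaussian elimination over GF(2), a straightforward O(n^3) bit-operation algorithm:

```python
def det_mod2(matrix):
    """Determinant mod 2 of a square 0/1 matrix, via Gaussian
    elimination over GF(2).  Each row is packed into a Python int
    bitmask, so a row operation is a single XOR."""
    n = len(matrix)
    rows = [sum(bit << j for j, bit in enumerate(r)) for r in matrix]
    for col in range(n):
        # Find a row with a 1 in the current column to use as pivot.
        pivot = next((i for i in range(col, n) if rows[i] >> col & 1), None)
        if pivot is None:
            return 0          # zero column: singular mod 2
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(col + 1, n):
            if rows[i] >> col & 1:
                rows[i] ^= rows[col]   # eliminate below the pivot
    return 1                  # full rank mod 2: determinant is 1

print(det_mod2([[1, 0], [0, 1]]))  # 1: identity is nonsingular
print(det_mod2([[1, 1], [1, 1]]))  # 0: repeated row
```

Mod 2 the determinant is just a nonsingularity check over GF(2); note also that mod 2 the permanent coincides with the determinant (the signs vanish), whereas over the integers the permanent is #P-hard by Valiant’s theorem—part of why the commenter’s intuition singles out this pair.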

Do you think we might find a proof that P=BPP for algorithms whose correctness can be proven in ZFC, while the question stays open for arbitrary algorithms? Intuitively it feels like working up to that point is a problem one could steadily chip away at.

Comment #81 August 15th, 2010 at 12:58 am

Well, at least the paper generated some interest :) … Perhaps Mr Deolalikar should have consulted some experts before going public.

Dr Scott, on the other hand, seems to be more satisfied with the proof being wrong than with finding a proof himself :). Anyway, best wishes to him for solving the problem himself and winning the next Fields Medal.

Thanks

Gopal

Comment #82 August 15th, 2010 at 1:05 am

@Gopal, on the other hand, seems to be more satisfied with Dr. Scott being wrong than Mr Deolalikar’s proof being right …

Comment #83 August 15th, 2010 at 1:13 am

And Gopal cannot deduce the simple fact that it is Dr Deolalikar and not Mr Deolalikar. Gopal is hereby advised to go read the resume if he/she cannot read the proof!

Comment #84 August 15th, 2010 at 1:24 am

Resume for lazy @Gopal

Principal Research Scientist, HP Labs

Education

Ph.D. (Electrical Engineering, jointly with Mathematics), University of Southern California, May 1999. Thesis: On splitting places of degree one in extensions of algebraic function fields, towers of function fields meeting asymptotic bounds, and basis constructions for algebraic-geometric codes

Masters in Electrical Engineering (5-year) Indian Institute of Technology, Bombay, July 1994. Dissertation: New approaches to learning and classification in feedforward neural networks

Comment #85 August 15th, 2010 at 1:52 am

Let’s assume neither Alice nor Bob has read the proof.

If Bob has read the proof and approves of it, then he should be able to convince the reviewers with a technical argument on the Bobside blog.

So obviously Bob above has not read the proof. Just like Alice.

Both Alice and Bob have the equal right to probabilistically believe either way of the correctness of the proof, until a definitive answer is found by Bobside.

Bobside blog maintains its integrity by betting on the proof’s correctness, and Aliceside blog maintains its integrity by betting on the proof’s wrongness.

Either way, it’s just a bet. Neither Alice nor Bob should be hurt in the quest of P versus NP! But Bobside sure seems to be on the losing side as of now … resigned to perpetually processing updates.

Chill!

Comment #86 August 15th, 2010 at 1:58 am

Hypothetically, suppose somebody assumes that the Bobside blog is an infinite-time Turing machine, perpetually processing updates.

Does it have the Halting Problem? Can it be decided in P?

Will it ever stop with a Yes or No answer?

Comment #87 August 15th, 2010 at 4:04 am

@Nice, I am sorry for not being clear in my post. These are indeed testing times for Dr Deolalikar and his proof.

I find Dr Scott’s blog very interesting to read and also his presentations to be very lively.

If it means anything, it was not my intention to cause any confusion, and I am indeed not in a position to make any real statements on the subject. I humbly apologise for the same.

Regards,

Gopal

Comment #88 August 15th, 2010 at 4:58 am

Dear Gopal (#88),

I humbly and sincerely apologize for my outburst towards you. I understand the pressures faced by VD, but diamonds are sometimes made under pressure!

Scotty’s wildest speculations can have no effect on the inherent validity or invalidity of VD’s paper. Hopefully, a new field of knowledge may even emerge out of this gambling exercise, and the credit will someday go to both Scotty and Vinay for brashly venturing into the “Wild West” of knowledge, even if that day P/NP is still elusive …

Regards,

Nice

Comment #89 August 15th, 2010 at 5:25 am

“It’s worth stressing that, even if P=NP was independent, there’s either a polynomial-time algorithm for 3SAT or there isn’t!”

So undecidability would imply that there are algorithms whose run time cannot be analyzed? We could even have P=NP, but you’d never be able to prove that any algorithm pulled it off.

Comment #90 August 15th, 2010 at 5:34 am

So, we could be Gods; we’d just never be sure of it.

Comment #91 August 15th, 2010 at 6:16 am

Come to think of it, proving P=BPP would probably have to involve proving the Riemann Hypothesis along the way, because that’s been shown to just be a derandomization result. And for that matter, it should also be possible for the same result to make short work of the Collatz Conjecture (or at least the part of the Collatz Conjecture stating that no orbit grows forever, without clearly ruling out other loops). Boy is *that* a depressing thought: that one of the most important long-standing conjectures in mathematics is a lemma on the way to a lemma to start showing P != NP, and that the recreational math problem which feels so utterly hopeless that it generally isn’t even presented in high school mathematics should be an easy casualty along the way.

As for the possibility some commenters suggested that maybe P=NP but with a very high exponent, there’s a simple counter to that:

Do you believe that 4SUM is quadratic?

Do you believe that 6SUM is cubic?

Do you believe that 8SUM is quartic?

…
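The pattern behind those questions is the standard meet-in-the-middle upper bound: kSUM for even k is solvable in roughly O(n^{k/2}) time by hashing sums of k/2-tuples, and the commenter is appealing to the conjecture that those bounds are tight. A minimal sketch for 4SUM (my own illustrative code, not from the comment):

```python
from itertools import combinations

def four_sum_zero(a):
    """Do four distinct positions of a sum to zero?  Meet in the
    middle: hash every pairwise sum, then look up each pair's
    complement -- roughly n^2 work for generic inputs, versus n^4
    for the naive quadruple loop."""
    by_sum = {}
    for i, j in combinations(range(len(a)), 2):
        by_sum.setdefault(a[i] + a[j], []).append((i, j))
    for i, j in combinations(range(len(a)), 2):
        for k, l in by_sum.get(-(a[i] + a[j]), []):
            if len({i, j, k, l}) == 4:   # all four indices distinct
                return True
    return False

print(four_sum_zero([1, 2, -3, 0, 5]))  # True: 1 + 2 + (-3) + 0 == 0
print(four_sum_zero([1, 2, 4, 8]))      # False: every quadruple is positive
```

The point of the rhetorical list: if you believe each conjectured Θ(n^{k/2}) bound for kSUM, you already believe a family of problems whose exponents grow without bound, which sits awkwardly with “P=NP, just with a very high exponent.”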

Comment #92 August 15th, 2010 at 6:17 am

Chris:

“So undecidability would imply that there are algorithms whose run time cannot be analyzed?”

It would mean that either there’s no poly-time algorithm for SAT but also no proof of that, or there *is* a polynomial-time algorithm for SAT, but no proof that it works.

Note that, if you just want *some* algorithm whose runtime is independent of ZF set theory, it’s easy to construct that unconditionally. For example, consider the algorithm that terminates after n^2 steps, unless there’s a proof of ZF’s inconsistency with at most log(n) bits, in which case it runs for an additional n^3 steps.

Comment #93 August 15th, 2010 at 6:27 am

Bram: I completely agree that the Riemann Hypothesis is “essentially a derandomization conjecture.” But because it talks about one *specific* approach to derandomizing primality testing (and since 2002, we’ve known there are others), it’s certainly not known to be implied by P=BPP.

In the case of the Collatz Conjecture, I’m afraid I see even less connection: sure, both Collatz and P=BPP are about “deterministic processes that behave like random ones,” but the details are so different that I can’t see how a solution to one would give any insight into the other. (Maybe I’m wrong though.)

Comment #94 August 15th, 2010 at 7:11 am

Re. Immerman’s 1986 proof that FO(LFP) captures P:

Can someone please check again whether his proof is correct? I think his proof is incorrect and incomplete. The likely error is at the end of Immerman’s proof, as I have explained here:

http://rjlipton.wordpress.com/2010/08/08/a-proof-that-p-is-not-equal-to-np/#comment-4818

If the bug in Immerman’s proof is confirmed, then VD’s proof will not stand either.

Comment #95 August 15th, 2010 at 7:27 am

“both Collatz and P=BPP are about ‘deterministic processes that behave like random ones,’ but the details are so different that I can’t see how a solution to one would give any insight into the other.”

Yes, it’s all about “deterministic processes that behave like random ones”. The Riemann Hypothesis states that one very natural pseudorandom number generator has good statistical properties. The Collatz Conjecture states that a slightly quirkier one has good statistical properties. P=BPP states that no matter what problem you’re dealing with, there is a pseudorandom number generator which works well for it. The applied cryptographer in me finds this a very compelling sequence of steps, even though I have no formal relationships between the different parts. My intuitions are a lot less rigorous than yours.

Comment #96 August 15th, 2010 at 7:41 am

Anon: No, I have NO desire to defend Rafee Kamouna’s “outstanding” achievements.

Comment #97 August 15th, 2010 at 7:49 am

Bram cohen,

love bittorrent!

Comment #98 August 15th, 2010 at 7:53 am

@Sameer, what do you think: is there any difference between Rafee and Vinay, except of course the degree of exposure received?

Comment #99 August 15th, 2010 at 8:39 am

http://smarkets.com/current-affairs/clay-prize/next-winner

Comment #100 August 15th, 2010 at 9:25 am

If you think you’re having a bad weekend, guess who’s having the absolute worst weekend ever, with the world and its best computer scientists breathing down his neck … V..y D..r

Comment #101 August 15th, 2010 at 9:52 am

Scott asks:

“Well, how hard could a white rabbit in front of a cave possibly be to kill?”

Here Scott is referring to a scene in the film Monty Python and the Holy Grail, in which King Arthur and his brave knights encounter a fluffy white rabbit—the Rabbit of Caerbannog, as it is called—that evinces a dismaying aptitude for mortal combat.

The Deolalikar episode is teaching us that many mathematicians regard themselves as noble Arthurian Knights who, collectively and in the long run, have a reasonable likelihood of defeating even the most fearsome Caerbannog Rabbits. Furthermore, individual Arthurian Knights are bravely willing to risk decades of effort in hand-to-rabbit combat, against odds of individual victory that are exceedingly low: of order (roughly) 10^-6 against.

In sharp contrast, we engineers regard Python Knight strategy as being nearly optimal: “RUN AWAY !!!” This strategy is low-cost and low-risk … especially because (in the film) it turns out that the Holy Grail never *was* in the rabbit’s cave.

As a concrete example, if P is hard to separate from NP, then a natural Python Knight strategy is to seek a modified class P’ that is similarly useful to P for engineering purposes, and then separate P from P’ … in which eventuality it’s problem solved! … at least, from a Python Knight’s point of view.

Similarly, if quantum simulation is in BQP, then a natural Python Knight strategy is to pullback the dynamics onto a tensor network state-space, for which quantum simulation *is* in P … and then again it’s problem solved! … at least for an exceedingly broad class of practical problems that Python Knights care about.

The fleeing Python Knights leave behind them two caves (and more!) that are still guarded by fearsome Mathematical Rabbits of Caerbannog: the Cave of P≠NP and the Cave of BQP Simulation. But perhaps these two caves aren’t as interesting as the Python Knights first thought … because gosh! those rabbits are fierce! … and after all, it’s entirely possible (even likely!) that the Holy Grail isn’t in either cave.

These natural Python Knight strategies—which in effect are pragmatic engineering strategies—are offered in answer to Gil Kalai’s tough questions.

Comment #102 August 15th, 2010 at 11:21 am

Scott,

I suggest you start a new blog entry with an unrelated topic. It may take the pressure off. Anyway, it may take a couple of days for changes to appear; this is a lull time.

Comment #103 August 15th, 2010 at 11:42 am

John Sidles #101: “strategy is to seek a modified class P’ that is similarly useful to P for engineering purposes, and then separate P from P’ … in which eventuality it’s problem solved! … at least, from a Python Knight’s point of view.”

Let me outline why this plan is not likely to go anywhere.

The thing that makes P interesting, and allows grand concepts such as NP-Complete, is that it contains problems O(n^k) FOR ALL k. Any subset P’ cannot have this property, or it would be just as hard to deal with. So, any P’ is O(n^k) for SOME specific k, which is finite.

Big O approximation works exactly OPPOSITE in TCS than in applied math. In applied math, say power series or Fourier coefficients, you have an infinite series, but each term is getting a lot smaller, so at some point, you ignore the rest. The nuances of this have been worked out in painful detail over the last 400 years.

On the other hand, working in “big O”, O(n^(k+1)) is vastly harder than O(n^k) for each k, so you ignore “lower order terms”, not “higher order terms”.

For those new to the game, in practice anything O(n^k) for k bigger than 10 or 20 is basically useless, certainly for k = 100, 1000, 1000000. Anyone arguing this point just likes to argue.
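Raoul’s “basically useless for k bigger than 10 or 20” claim is easy to make vivid with a quick tally. The machine speed and input size below are hypothetical assumptions of mine, chosen only for illustration:

```python
# Rough wall-clock time for an O(n^k) algorithm at a modest input size,
# assuming (hypothetically) 10^9 elementary operations per second.
n = 1000
ops_per_second = 10 ** 9
seconds_per_year = 3.15e7

for k in (2, 3, 10, 20):
    ops = n ** k
    years = ops / ops_per_second / seconds_per_year
    print(f"n^{k}: {ops:.1e} ops ~ {years:.1e} years")
```

Already at k = 10 the runtime dwarfs the age of the universe, which is the sense in which high-degree polynomial bounds are practically indistinguishable from intractable.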

A key point is that:

(1) P vs. NP IS a great theoretical problem, BUT

(2) P vs. NP has little (probably 0.0) practical (or, engineering) use.

Finally, there is little point in an engineering approximation P’ to P, because (1) it can’t be done, and (2) it wouldn’t be useful for anything.

Comment #104 August 15th, 2010 at 1:01 pm

RO asserts:

there is little point in an engineering approximation P’ to P, because (1) it can’t be done, and (2) it wouldn’t be useful for anything.

LOL … Raoul, historically such intuitions have been embraced more confidently by mathematicians than by engineers.

For example, when Gauss in 1833 famously “feared the outcry of the Boeotians,” he was not concerned with outcries from engineers, surveyors, and navigators … because Gauss appreciated that these professions had already incorporated a mathematically sophisticated understanding of non-Euclidean geometry into their workaday mathematical toolset.

Rather, the Boeotian outcries that Gauss feared were those of his own mathematical colleagues, who were loath to cease their age-old battle with the Rabbit of Euclidia. And Gauss was right … it wasn’t until Riemann provided a natural mathematical toolset for proving theorems in non-Euclidean geometry that it was enthusiastically embraced as natural by the mathematical community as a whole.

Analogously, nowadays it is not quite evident—to us engineers, anyway—that P and NP are the most natural complexity classes to try to separate … or that Hilbert space is quite the best state-space upon which to calculate quantum simulations.

We engineers comfort ourselves with the notion that we provide to mathematics what von Neumann called “the rejuvenating reinjection of more or less directly empirical ideas” … and yet … uhhh … it might be that instead our role is what Tim Allen called in the film Galaxy Quest “plucky comic relief.” To find out, we’ll all have to watch the movie!

Comment #105 August 15th, 2010 at 1:53 pm

John, there are many candidates for P’. In fact, P contains infinitely many of them, by the deterministic time hierarchy theorem.

Raoul Ohio said the same things I would have said at this point, but let’s draw a different conclusion: P’ is whatever makes sense for the application. To manipulate extremely large databases, algorithms with runtime above polylog(n) are irrelevant, whereas for proving theorems about structures with up to 30 elements EXPTIME algorithms can be preferable to those with high degree polynomial bounds. Many applications want quasilinear or perhaps quadratic runtime, with reasonable constants.
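András’s point about small structures is just arithmetic. The exponent 8 below is my arbitrary stand-in for “high degree polynomial bounds” (the comment names no specific exponent):

```python
# At n = 30 (a "structure with up to 30 elements"), a 2^n-step algorithm
# performs far fewer steps than an n^8-step one:
n = 30
exp_steps = 2 ** n    # roughly 1.1e9
poly_steps = n ** 8   # roughly 6.6e11

print(exp_steps < poly_steps)
```

So for fixed tiny inputs, the asymptotic hierarchy says nothing about which algorithm you should actually run.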

P is a nice class because it has many cute properties (perhaps like the Rabbit). To disagree with Cobham (at least in part), P isn’t necessarily a good model for problems we care about in real life.

Comment #106 August 15th, 2010 at 2:28 pm

As a mostly ignorant layman (a software developer with an interest in/some rudimentary knowledge of programming language-related math like type theory and category theory etc.) I find the notion of a deep connection between statistical physics and complexity theory very intriguing. I get the impression this idea didn’t originate with Deolalikar, but how developed and promising do you think this direction is? (not for purposes of proving P=?NP, but in general). If I’m thinking of trying to learn about both complexity theory and statistical (and other) physics over the next few years, is it worth investigating the connections or should that be left to more seasoned folk?

Comment #107 August 15th, 2010 at 2:50 pm

András Salamon, thank you for that fine reply!

For engineers, it sometimes appears that the formal separations associated to the time hierarchy theorem aren’t particularly useful or natural. This is partly for the pragmatic reason that (as is widely acknowledged) coefficients matter in practice, i.e., O(n^3) in practical problems can be preferable to O(n^2), depending on the coefficient.

But documentation and reliability matter also to engineers. When I try (clumsily) to express these considerations in the unfamiliar language of complexity theory, it comes out something like this:

“We engineers are content to live in a universe … let us call it the ingeniarius universe … in which upon each world, a fixed procedure validates all legal-to-run algorithms. Specifically, the validation procedure of any given ingeniarius world accepts algorithms as input and (1) produces either a certificate that the input algorithm runs in PTIME and produces a correct answer, or a declaration that it is unable to produce such a certificate, and (2) the validation procedure itself provably runs in PTIME and produces a correct certificate.”

“Moreover, on every world of the ingeniarius universe, it is a strictly-enforced law that only validated algorithms can be used; the set of such algorithms is called P’.”

Then three, wait a second, four natural questions are: (0) Are there algorithms in P that are illegal on every ingeniarius world? (1) Is there an ingeniarius world (or worlds) for which P’ reasonably approximates our own world’s notion of P? (2) Is there a universal separation between P’, P, and NP, and if so, what is it? In particular … and just for fun … (3) Is there a P’ for which Deolalikar’s proof strategy establishes a separation from NP?

Here the point being made (clumsily) is along the lines of Reinhard Selten’s remark that “Game theory is for proving theorems, not for playing games.” To us engineers, it sometimes seems that Russell Impagliazzo’s celebrated 1995 article

A personal view of average-case complexity—which is a wonderful article that I warmly recommend to every student interested in complexity theory—nonetheless describes a universe of worlds, from algorithmica to cryptomania, that in subsequent years has proven itself more congenial for proving theorems than for doing practical engineering.

It is evident that upon at least some of the (law-abiding) worlds of the ingeniarius universe, the citizens are at least as dumb as the citizens of Impagliazzo’s cryptomania … perhaps dumber … perhaps even uselessly dumber. It all depends on the strength of the validation software.

The bottom line is that Impagliazzo’s worlds are perhaps too smart for engineers to prosper in. Yeah … “dumb, dumber, uselessly dumb” … that describes our engineering conception of computational complexity all right!

Comment #108 August 16th, 2010 at 2:32 am

Sign #9 that a claimed P versus NP proof is wrong

If you see it on this blog, run for your life!

http://rjlipton.wordpress.com/2010/08/08/a-proof-that-p-is-not-equal-to-np/

Comment #109 August 16th, 2010 at 4:55 am

Raoul Ohio Says:

The thing that makes P interesting, and allows grand concepts such as NP-Complete, is that it contains problems O(n^k) FOR ALL k. Any subset P’ cannot have this property, or it would be just as hard to deal with…. [Thus] any P’ would be O(n^k) for SOME specific k, which is finite.

András Salamon Says:

For proving theorems about structures with up to 30 elements EXPTIME algorithms can be preferable … [that is why] P isn’t necessarily a good model for problems we care about in real life.

Just to be clear, Raoul and András are expressing two strong intuitions … and yet (AFAICT) there are engineering-level counterexamples to both of them.

(1) The worlds of the ingeniarius universe include some worlds whose validators accommodate at least some O(n^k) members of P for all k; these particular worlds are counterexamples to Raoul’s intuition.

(2) On every world of the ingeniarius universe, young computer scientists are taught that P is an unrealistic model for computation, not because P is too weak, but because P is so strong that the validity of its algorithms cannot be validated and verified; this intuition is the opposite of András’s.

The broader point is that we are free to conceive of complexity-theoretic hierarchies that do not naturally fit into the phylogenetic hierarchy of the Complexity Zoo, but rather are Archaea to its Bacteria. Whether the ingeniarius universe constitutes a useful example of such a hierarchy, who knows?

In any case, we are free to paraphrase Steve Martin’s character in the film The Jerk as follows: “Waiter, take away those old hierarchies; bring us some new hierarchies!”

Comment #110 August 16th, 2010 at 11:18 am

Dear All,

I believe Sameer (Comment #94) is raising 2 important issues:

a) Sameer and Rafee *seem* to doubt the standing validity of the Immerman-Vardi theorem.

b) Rafee may need help with his proofs.

I have almost always felt sorry for Rafee since I learnt about him on the internet, as he is trying to do science in his own way from a remote country.

So considering the above 2 main issues, with a view towards the future, I think automated theorem verifiability is the way to go. It may take a few decades.

But any computing device in any corner of the universe should be able to download a digital theorem or proof or conjecture file and run a verification on its own. Something like when your anti-virus software downloads an update, it’s smart enough to run a checksum verification!
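Milz’s anti-virus analogy covers file integrity, which is the easy half of the problem; actually verifying a *proof* is the hard half. A minimal sketch of the easy half, using Python’s standard hashlib (the function name and the idea of a separately published digest are my assumptions, not anything from the comment):

```python
import hashlib

def verify_download(path, expected_sha256):
    """Check a downloaded file (say, a digital proof file) against a
    published SHA-256 digest, the way update mechanisms verify payloads.

    Note: this establishes only that the bytes arrived intact; checking
    that the proof itself is *valid* is the much harder task that
    automated theorem verification would have to solve.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

Any computing device anywhere could run this kind of check today; the decades-away part is the proof-checking layer on top.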

This would mean that somebody’s creative output – whether coming from Immerman-Vardi, Sameer, Rafee, me, or a spaced-out alien cockroach – can always be tested beyond reasonable doubt by anybody who has a computing device.

Otherwise knowledge, and its understanding and agreement, would rest with an elite few, potentially doubted by the others.

Scott, I would like to thank you for contributing in your own unique way towards an understanding of this event.

Sorry for the rant

Cheers – Milz

http://amzn.com/1452828687

Comment #111 August 17th, 2010 at 12:34 am

@Scott’s comment #76:

Thanks. I meant reading proofs in general, not specific to famous problems and not for verification.

There are plenty of resources for how to write mathematics (same for programs), but few for how to read mathematics (programs).

So, say you are a mid-level CS graduate student with a basic background in math. You are interested in reading a STOC / CCC / FOCS etc. level paper that you just found.

Can you provide tips on what to pay attention to? Take notes? Skim first, or don’t move before you understand the definition completely? Work out examples? Talk to others?

I think there is much advice that could help students, and I don’t think you’ve ever blogged about it.

Comment #112 August 17th, 2010 at 12:39 am

@Anonymous #74:

Umm …? Did you mistype the comment #? Because I don’t see how I “heckled” Scott in that comment. I was not aware of the slides, where he does advertise the progress made etc. I don’t think my comment heckled Scott, or has even an iota of intention of doing that.

So I am confused and irritated by your comment. Please, papa, don’t preach.

Comment #113 August 17th, 2010 at 12:41 am

@Anonymous 74:

Oops, I meant that in a serious way. I want Scott to write a post on how to read math / proofs / complexity papers. I was hoping that the tone of the post being extra-sweet would compel him to write one, him being only human.

I don’t see how this is heckling. ??

Comment #114 August 17th, 2010 at 1:26 am

Comment #113

You have to also ask the other website, what was the criterion that decided Vinay’s paper was more important than any other paper to consider.

As of now, the results indicate that they might as well have chosen a random sequence of characters filling 100 pages for review.

Since Scott is in a vulnerable position because he is the only one with a strong contrary view, he can easily be heckled by one poster after another into accepting he made a mistake by not reading Vinay’s paper. It is called the Tocquevillian Tyranny of the majority. Reverse elitism. Like a bunch of heckling hyenas. Not that I’m against hyenas, just making a point that without diversity of opinion everybody thinks the same and Houston, we have a problem. Beam me up, Scotty! Scotty should not join the herd.

But since the other website did not pick up on, let’s say Proof #58 on

http://www.win.tue.nl/~gwoegi/P-versus-NP.htm

# [Equal]: In June 2010, Carlos Barron-Romero established P=NP. His paper “The Complexity Of The NP-Class” presents a novel and straight formulation, and gives a complete insight towards the understanding of the complexity of the problems of the so called NP-Class. His main result is a polynomial time algorithm for the two-dimensional Euclidean Travelling Salesman Problem. The paper is available at http://arxiv.org/abs/1006.2218.

(Thanks to Jean Baylon and Dirk Gerrits for providing this link.)

Is it because,

Carlos Barron-Romero does not sound like a stereotypical Computer Science theorist name? He is not a reputable researcher at HP? He does not have a Phd from a good school? He does not come from a country whose specialty is software? He does not come from Indian Institute of Technology? He does not sound that he can speak-a an English?

What ARE the real criteria to read a person as a potential successful theorem prover?

Normally, the criteria would be based on acquired respect and being from a respected educational institution and working as a top researcher in a top school, known personally or being referred to by somebody you respect like Stephen Cook.

Comment #115 August 17th, 2010 at 2:43 am

QTR is surely unaware of L’Affaire Bogdanov

http://en.wikipedia.org/wiki/Bogdanov_Affair

Comment #116 August 17th, 2010 at 2:40 pm

??? I don’t understand where these comments are coming from.

My question is independent of the Deolalikar affair.

I want Scott to write a blog post about reading mathematics. This has absolutely nothing to do with P/NP or Deolalikar or proofs of famous problems.

I am talking about simple STOC/FOCS/journal level papers and tips on reading these. Again, nothing to do with verifying them … simply tips Scott might have for students.

Comment #117 August 17th, 2010 at 9:01 pm

Dear Scott,

Would you bet your $200,000 on this?

Proof of: “P=NP iff P!=NP” – Syntactic.

The proof is by diagonalization as follows:

NP={L_1,L_2,L_3,…}

L_1={w_11,w_12,w_13,…}

L_2={w_21,w_22,w_23,…}

L_3={w_31,w_32,w_33,…}

Each string w_ij can be written as the Turing machine describing it which can be written as two substrings, the first its input and the second is the computation configurations:

w_ij = w_iw_j

Hence we have,

L_1={w_1w_1,w_1w_2,w_1w_3,…}

L_2={w_2w_1,w_2w_2,w_2w_3,…}

L_3={w_3w_1,w_3w_2,w_3w_3,…}

Note that both the diagonal and its complement can easily encode the Kleene-Rosser paradox as:

KR={kk_1,kk_2,kk_3,…}={Not kk_1, Not kk_2, Not kk_3…}

Use the encoding e(kk_i)=w_iw_i and e(Not kk_i) = \bar w_i\bar w_i

You obtain: e(KR) in NP iff e(KR) NOT in NP, of course e(KR) irreducible to SAT iff it is reducible to SAT.

Hence, P=NP iff P!=NP.

This should not be surprising as the lambda calculus is equivalent to Turing machines. By direct encoding of the lambda calculus paradox, you obtain it on Turing machines.

This proof was syntactic; the same result can be reached independently via semantics.

P.S. submitted to the Theory of Computing Journal 6.5 months ago.

The line of argumentation of Cook’s proof of CNF SAT being NP-complete is as follows:

“Let A be a language in NP accepted by a non-deterministic Turing machine M. Fix an input x, we will create a 3CNF formula \phi that will be satisfiable if and only if there is a proper tableau for M and x”

1. Let w be the encoding of the Kleene-Rosser paradox, thus e^-1(w)=Not e^-1(w), where e is an encoding function.

2. M accepts w.

3. SAT is NP-complete, Cook’s theorem.

4. There must exist \phi(w): \phi(w) is satisfiable.

5. Then \phi(w) is satisfiable iff M accepts w iff e^-1(w)=Not e^-1(w)

6. M accepts w is “True”.

7. Then: \phi(w) is satisfiable iff e^-1(w)=Not e^-1(w) – Logical Contradiction.

8. Then there must be no \phi(w) which is satisfiable.

9. Then SAT is (NOT) NP-complete.

Comment #118 August 18th, 2010 at 12:04 am

@Rafee Kamouna

Scott responded with this offer only because of the sudden publicity surrounding Vinay Deolalikar’s paper and the TCS community’s wholesale adoption of VD.

I think your proofs have to go in line here:

http://www.win.tue.nl/~gwoegi/P-versus-NP.htm

there are dozens like yours, and Scott does not have 60 homes yet. If Scott stood to win a home each time he won a bet, the case would have been different, and he would have gladly bet $200,000 on your stuff too! Otherwise Scott is running a one-sided, thankless operation.

Comment #119 August 18th, 2010 at 12:14 am

But I guess Scott’s website can be used as secondary to arXiv and journals, for redundant publication in case everything is destroyed in an ice age. You can also submit hard copies of your proof in notarized triplicate to your nearest Doomsday Vault for preservation.

http://www.cbsnews.com/stories/2008/03/20/60minutes/main3954557.shtml

I understand that you have already alerted local media.

http://www.saudigazette.com.sa/index.cfm?method=home.regcon&contentID=2008062910436

Don’t worry NY Times and Richard Elwes are hot on their way to you already.

Comment #120 August 18th, 2010 at 1:37 am

@Rafee Kamouna

Hmm, “Scientific Discussion”: is this what you mean? Carpet-bombing blogs:

http://lucatrevisan.wordpress.com/2008/12/24/inducing-percussions-in-all-of-mathematics/

Comment #121 August 18th, 2010 at 6:40 pm

Re: the reference to Richard Dawkins: he actually had done the exact same thing in his Royal Institution Christmas Lectures entitled “Growing Up in the Universe”. The first of these great lectures is at http://www.youtube.com/watch?v=jHoxZF3ZgTo and the rest are available as related videos.

On a humourous note: I think it would be quite funny if an overzealous acolyte decided to repeat the experiment but instead of *releasing* the pendulum *pushed* it away. (Alternately, if the ceiling is not sufficiently rigid, the pendulum could easily turn into a projectile that hits one in the unmentionables.)

Comment #122 August 19th, 2010 at 1:40 pm

@John Sidles

TCS Woodstock is minus the mind-enhancing substances, unless …

Comment #123 August 20th, 2010 at 3:18 pm

Comment #70 Says:

@John Sidles TCS Woodstock is minus the mind-enhancing substances …

Hmmm … that’s not so clear.

“A mathematician is a machine for turning coffee into theorems” (Paul Erdos) … and of course, nowadays we have Adderall (a potent prescription amphetamine derivative).

A Modest Proposal: Mathematical articles should receive an asterisk if one or more theorems were proved with prescription pharmaceutical assistance.

An unthinkable policy? As cognition-enhancing drugs become more powerful and more widely available—and as with steroids in athletics, their efficacy becomes more undeniable—perhaps eventually the opposite policy will become simply infeasible.

`Cuz heck, the number of mathematicians is expanding hugely … while the number of Fields Medals and Clay Millennium Prizes remains constant.

Obviously there are some tough moral issues here … and for pure libertarians, there are some mighty interesting academic game-theoretic issues too.

Comment #124 August 21st, 2010 at 8:56 am

Stephen Cook finally had his revenge on American academia …

http://en.wikipedia.org/wiki/Stephen_Cook#Biography

But he had to sacrifice his own reputation for it.

Comment #125 August 22nd, 2010 at 5:23 am

“for pure libertarians, there are some mighty interesting academic game-theoretic issues too.”

Interesting, I agree.

Comment #126 August 22nd, 2010 at 6:03 am

I wonder if #3 is inhibited by the Millennium prize, which looks like less and less of a good thing (the only one that’s been awarded so far seems to have driven a very good researcher completely away from math).

Imagine (ha ha ha) that I come up with an awesome new proof technique (involving transfinite induction over the Bellantoni-Cook formulas for languages in P, say) that lets me show that a certain problem X (where X is obviously in PSPACE) is outside of P. So that shows P!=PSPACE, already a breakthrough result. But X is an artificial concoction painfully constructed to make the induction step work, and despite that, that step is still a combinatorial mess involving a monstrous computer calculation like the 4-color theorem. And yet despite -that-, X doesn’t really seem to be so special, so it looks plausible that with even more grunt work and huge calculations, but not any particularly brilliant new ideas, I could find a problem Y that lets the same proof go through, except Y is also provably in NP instead of just PSPACE, so P!=NP. And if I publish the P!=PSPACE proof, someone else does the obvious but tedious step of finding Y and collecting the Millennium prize.

So do I publish the P!=PSPACE proof? Or do I sit on it quietly for N years while trying to find Y myself? I think these giant prizes and hype around specific open problems are messing up the spirit of scientific collaboration. This is why Perelman turned down the MP, for example–he thought Hamilton should get half of it. I wonder if the Clay foundation thought of taking him up on that.

Meta-question: where are the proof theorists in the P vs NP business? I mean the type of folks who like to read J-Y Girard at bedtime (I’ve tried this but it’s too hard for me). I’d expect someone has tried doing complexity separations like what I described above and it didn’t work, but they’re so quiet about it. I see various papers that look sort of relevant, but none of them even mention P vs NP. Are they even thinking about it? I’d think yes, but they’re maintaining discretion.

Comment #127 August 23rd, 2010 at 7:10 pm

Re. comment # 124 on the denial of tenure to Stephen Cook at Berkeley:

I admire Cook’s achievements very much. However…

Stephen Cook can be GOD himself, but who the bloody hell cares? During the tenure review process, *IF* the decision was made in good faith, after a careful and thorough review, then Berkeley MUST fully stand by its decision to deny him tenure — NO need for Berkeley to feel ashamed or regretful about it.

Of course, many people will say “It was Berkeley’s loss”, however, I must disagree with them. People shine differently at different points in their lives. Some shine brilliantly throughout their lives — some shine brilliantly at some times and lack lustre at other times. As someone mentioned in Lipton’s blog, even brilliant people have just one or two peak “pinnacles” in their career — they DO NOT do brilliant stuff each and every hour.

Comment #128 August 24th, 2010 at 7:41 am

About Gerhard Woeginger’s P-NP webpage: Should a person feel honoured or insulted, if their attempted proof appears on that page?

I mean, this is an extremely hard problem, and even brilliant researchers are bound to make mistakes — so their papers will just remain at the Arxiv.org website, and may never get published in a journal.

It is VERY wrong to ridicule such serious efforts. I don’t know if Woeginger is trying to NAME AND SHAME these people and make fun of them. As anyone knows, important concepts and ideas can be learnt, even from failed attempts. Failure is the first step in the path to success.

I suggest that he go through his list, just keep the serious attempts, and remove the non-serious attempts. He should say very clearly at the top of the webpage that the attempts described on that page are serious. There is NO need to mention the non-serious attempts and ridicule them.

Comment #129 August 25th, 2010 at 5:47 pm

An old, cold post:

“We engineers are content to live in a universe … let us call it the ingeniarius universe … ”

An anonymous poster on Dick Lipton’s blog tipped me to a wonderful (draft) textbook by Oded Goldreich that anticipates the key aspects of the ingeniarius universe. In particular, Oded’s discussion of “proof-oblivious verification procedures,” together with his (immediately following) discussion in Section 2.3.3: Is it the case that NP-proofs are useless?, was just what I had been looking for.

Comment #130 August 25th, 2010 at 9:59 pm

Off-topic, but re: Richard Dawkins/Feynman and pendulums, I first ran across the “scientist standing calmly just out of range of a dangerous pendulum” meme in Carl Sagan’s novel “Contact”. It struck me at the time (high school, I guess?) as a particularly preachy bit of “science as religion” grandstanding in what I consider a pretty horribly written and theologically/scientifically silly novel.

“I resent the idea that we’re in some kind of faith contest, and you’re the hands-down winner. So far as I know you’ve never tested your faith? I’m willing to do it for mine. Here, take a look out that window. There’s a big Foucault pendulum out there. The bob must weigh five hundred pounds. My faith says that the amplitude of a free pendulum–how far it’ll swing away from the vertical position–can never increase. It can only decrease. I’m willing to go out there, put the bob in front of my nose, let go, have it swing away and then back toward me. If my beliefs are in error, I’ll get a five-hundred-pound pendulum smack in the face. Come on. You want to test my faith?”

Now the real question is, how long would the pendulum need to be for the Coriolis effect alone to shift it out of my way by the time it came back? My guess is “way too big”, but *that* would be a *much* cooler demonstration of “faith” in sciency stuff.
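The “way too big” guess above can be checked with a back-of-envelope Foucault calculation. The latitude, swing amplitude, and required “dodge” distance below are all assumed values of mine; the estimate treats the swing plane as rotating at the standard Foucault precession rate and asks for the period after which the return point has shifted sideways enough to miss a head.

```python
import math

# Assumptions (not from the comment): latitude 45 degrees, 5 m swing
# amplitude, and the bob must return 0.25 m to the side after one period.
OMEGA_EARTH = 7.292e-5           # Earth's rotation rate, rad/s
g = 9.81                         # m/s^2
latitude = math.radians(45)
amplitude = 5.0                  # m
dodge = 0.25                     # m

precession = OMEGA_EARTH * math.sin(latitude)   # Foucault precession rate
# The swing plane must rotate by dodge/amplitude radians in one period T:
T = (dodge / amplitude) / precession
# Pendulum length from the small-angle period formula T = 2*pi*sqrt(L/g):
length = g * (T / (2 * math.pi)) ** 2

print(f"period ~ {T:.0f} s, length ~ {length / 1000:.0f} km")
```

Under these assumptions the cable comes out at hundreds of kilometres, so “way too big” is right: no building could house the cooler demonstration.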

Comment #131 August 28th, 2010 at 2:07 pm

Sameer #127

This is what I think. Lee Smolin left the US for Canada too.

A continuous effort is required of any system to solicit and incorporate negative feedback in a healthy manner. This should be used to keep improving the system actively, or even just to maintain the status quo; else systems quickly end up with rot or ruin.

If a part is randomly ejected from your car while it’s running, it could mean you forgot to lubricate it or tighten the bolts, or the design is poor and needs to be improved.

Things seem to boil over in a flash once they go over a “tipping point”. (Malcolm Gladwell etc) They are ignored or undetected till then.

#128

Also I think Gerhard Woeginger is doing a great job. He is not really explicitly passing negative judgment on the proofs. Sometimes to have the “good”, we need to know what is “bad” too! What approaches to avoid etc.

There is a chance that one of them is even correct someday! Some people may choose not to submit to a journal. It has already happened in 2002-2003. We never can tell. After all, a journal takes months and years and the paper disappears until judgment is passed as with V.D.’s manuscript!

There are different topics on the arXiv, G.W. is collecting only the PNP stuff in a convenient place.

I think we should not judge his intentions as it is his bandwidth, his hobby. He is not physically storing copies of the papers on his webserver – just links and descriptions.