For Not even right’s comment #58: if you wanted to prove directly that P ≠ NP, the most obvious approach would be to try to construct a language L that is *not* in P (but hopefully is still in NP). Traditionally, the way one would go about doing that is through some sort of diagonalization process. What Baker-Gill-Solovay showed was that any such approach that relativizes is doomed to failure. You could also try to work in the other direction, starting with a problem you know is in NP and is believed to be hard (no shortage of candidates for that), and then showing that the problem is not in P. Razborov-Rudich has knocked down one line of attack in this direction.
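For readers unfamiliar with the method: the core diagonalization trick is to build a language that disagrees with the i-th machine on the i-th input. A toy Python sketch, where a finite list of functions stands in (very loosely) for an enumeration of Turing machines:

```python
# Toy sketch of diagonalization (illustrative only; a real proof
# diagonalizes against an infinite enumeration of Turing machines).
def diagonal_language(deciders):
    """Given a (finite, toy) list of decision procedures, return a
    decider that differs from the i-th one on input i."""
    def L(i):
        return not deciders[i](i)
    return L

# Usage: L disagrees with each listed decider on its own index,
# so L is "decided" by none of them.
deciders = [lambda n: n % 2 == 0, lambda n: n > 10, lambda n: True]
L = diagonal_language(deciders)
assert all(L(i) != deciders[i](i) for i in range(len(deciders)))
```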

Since attempts at a direct proof of P ≠ NP haven’t been having too much success, I think it’s safe to say that people have been looking at ways to prove this by contradiction (and more rigorously than Craig Feinstein evidently has been). I’m all with you on that point. I just hope that the board of the Clay Mathematics Institute isn’t stacked with a bunch of constructivists.

Deconstruction of a dumb, wrong argument about P vs NP:

http://arxiv.org/PS_cache/arxiv/pdf/0706/0706.2035v1.pdf

Title: Critique of Feinstein’s Proof that P is not Equal to NP

Authors: Kyle Sabo, Ryan Schmitt, Michael Silverman

Comments: 5 pages, 2 definitions

We examine a proof by Craig Alan Feinstein that P is not equal to NP. We present counterexamples to claims made in his paper and expose a flaw in the methodology he uses to make his assertions. The fault in his argument is the incorrect use of reduction. Feinstein makes incorrect assumptions about the complexity of a problem based on the fact that there is a more complex problem that can be used to solve it. His paper introduces the terminology “imaginary processor” to describe how it is possible to beat the brute force reduction he offers to solve the Subset-Sum problem. The claims made in the paper would not be validly established even were imaginary processors to exist.
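The point about beating brute force is a real one: for Subset-Sum, the classical Horowitz-Sahni meet-in-the-middle algorithm beats naive 2^n enumeration, at roughly 2^(n/2) time, without any “imaginary processors”. A minimal Python sketch (function names are mine, not from the critique):

```python
from itertools import combinations

def subset_sum_brute(nums, target):
    # Naive brute force: try all 2^n subsets.
    return any(sum(c) == target
               for r in range(len(nums) + 1)
               for c in combinations(nums, r))

def subset_sum_mitm(nums, target):
    # Horowitz-Sahni meet-in-the-middle: split the items in half,
    # enumerate the ~2^(n/2) subset sums of each half, and check
    # whether some left-sum pairs with some right-sum to hit target.
    half = len(nums) // 2
    left, right = nums[:half], nums[half:]

    def all_sums(part):
        sums = {0}
        for x in part:
            sums |= {s + x for s in sums}
        return sums

    right_sums = all_sums(right)
    return any(target - s in right_sums for s in all_sums(left))
```

Both deciders agree on every instance; the second simply does exponentially less work as n grows.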

(1) The # of bright-person-years spent without solving it rivals that of older conjectures, and P vs. NP itself turned 50 last year.

(2) We have not yet even developed tools to prove meaningful super-*linear* lower bounds on general computational models (or super-n log n for algebraic circuits over Q, R, C…). E.g. no language in E ∪ NP is known to be unsolvable by Boolean circuits of size 5n, and even the Lachish-Raz 4.5n lower bound (you can Google it) “cheats” by not using the full binary basis.

But what really gives people pause are two “structural factors”:

(3) The Baker-Gill-Solovay (et al.!) results giving (“oracle”) languages A, B such that P^A = NP^A but P^B ≠ NP^B. Any PSPACE-complete A and “random” B do that. An oracle A allows machines to write down strings z and get the answer “is z in A?” immediately, “for free”.
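The oracle idea can be sketched in toy Python form (all names here are mine, for illustration only): an oracle is a unit-cost membership test handed to the machine as a black box.

```python
def oracle(A):
    """Unit-cost membership test for a fixed set A: the machine may
    write down any query z and learn 'is z in A?' in one step."""
    return lambda z: z in A

def machine_with_oracle(n, ask):
    # A toy deterministic "machine" whose only nontrivial work is
    # oracle queries; each ask(...) counts as a single step,
    # no matter how hard the underlying set is to decide.
    return any(ask((n, k)) for k in range(n))

# The same machine behaves differently relative to different oracles,
# which is the essence of why a relativizing argument cannot settle
# P vs. NP: P^A = NP^A for one oracle, P^B != NP^B for another.
result_A = machine_with_oracle(5, oracle({(5, 3)}))  # True
result_B = machine_with_oracle(5, oracle(set()))     # False
```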

(4) The Razborov-Rudich obstacle, which blunted 15+ years of progress on circuit lower bounds.

Item (3) “smells like” a formal independence result, while (4) comes with actual independence results. Scott himself has an excellent survey, “Is P versus NP Formally Independent?”, which covers all of these issues.

Your social observations are indeed remarked on. Jin-Yi Cai observed, after Wiles solved Fermat, that complexity theory has not yet developed the kind of *tools* that always existed for Fermat, tools whose incremental progress spurred other interesting mathematics for centuries before Ribet (et al.) cracked the door wide open in 1986. He might say the same of Poincaré. Many of those doing “incremental progress” see themselves as helping to build just such a toolkit, though IMHO Jin-Yi is right that the tools are not *mature* yet. My remarks above show I’ve taken at least some kind of big swing at the big questions (most specifically PERM vs. DET), but I might also be held up as an example of the dangers: it is hard to publish non-results! The Mulmuley-Sohoni program goes much further in developing tools (especially with new papers since last month), and while I haven’t surveyed my own stuff yet, at least I’ve surveyed a non-negligible initial segment of their work. A field needs people of all kinds to be healthy…

The problem is that CS people hardly invest serious time in solving it, as they need to publish many papers. So they are not really interested in big questions, but rather in publishing hundreds of small, feasible, incremental mainstream papers with tiny clever combinatorial ideas.

That is also the case in physics. Physicists publish many papers that may each contain only a slightly different idea. But it is also worthwhile to think about the big problems. If you can propose an important theory like Special Relativity, General Relativity, or Yang-Mills gauge theory, to name a few, your work is without any doubt a milestone in the history of physics. Its importance can’t be matched by a hundred papers that each advance physics research only a little.

Sorry, why?

The problem is that CS people hardly invest serious time in solving it, as they need to publish many papers. So they are not really interested in big questions, but rather in publishing hundreds of small, feasible, incremental mainstream papers with tiny clever combinatorial ideas.

Actually quite a few people have spent a lot of time on big questions in CS and on P=NP.

Mike Fellows comes to mind (one of the two developers of parameterized complexity).

I think you had better revert to lurking.

Oh, that is the only trick I could think of (the one my high school maths teachers taught us for disproving something); perhaps some smart mathematicians can attack or disprove it more “directly”.
