Reasons to believe

More often than I can remember, I’ve been asked some form of the following question: “If you computer scientists can’t prove P=NP or P!=NP, then why aren’t we justified in believing whichever one we want? And why is the ‘consensus’ that P!=NP anything more than a shared prejudice — something you repeat to each other so your work won’t seem irrelevant?”

It’s time to assume the mantle of Defender of the Faith. I’m going to give you ten arguments for believing P!=NP: arguments that are pretty much obvious to those who have thought seriously about the question, but that (with few exceptions) seem never to be laid out explicitly for those who haven’t. You’re welcome to believe P=NP if you choose. My job is to make you understand the conceptual price you have to pay for that belief.

Without further ado:

  1. The Obvious Argument. After half a century, we still don’t know any algorithm for an NP-complete problem that runs in subexponential time. For Circuit-SAT, the canonical NP-complete problem, we don’t know any algorithm essentially better than brute-force search. In math, if decades of research fail to turn up an object, and there’s no a priori reason to suppose the object exists, it’s usually a good strategy to conjecture that the object doesn’t exist. We can all list counterexamples to this thesis, but the examples where it succeeds are much more numerous (though usually less famous, for obvious reasons).
  2. The Empirical Argument. While the argument based on decades of mathematical work can stand on its own, in the case of P versus NP we also have half a century of evidence from the computer industry. In a few cases — like linear programming and primality testing — people wanted fast ways to solve a problem in practice, and they came up with them, long before the problem was proved to be tractable theoretically. Well, people certainly want fast ways to solve NP-complete problems in practice, and they haven’t been able to invent them. The best-known satisfiability algorithms — such as DPLL, GSAT, and Survey Propagation — work surprisingly well on certain instance distributions, but croak (for example) on instances derived from factoring or automated theorem proving.
  3. The Bayesian Argument. Why can’t we turn the last two arguments on their heads, and say that, if our failure to find a fast SAT algorithm is evidence that P!=NP, then our failure to prove P!=NP is likewise evidence that P=NP? The answer is, because lower bounds are harder to prove than upper bounds. Assuming P=NP, it’s difficult to come up with a good reason why an efficient algorithm for NP-complete problems wouldn’t yet have been discovered. But assuming P!=NP, we understand in great detail why a proof hasn’t yet been discovered: because any proof will need to overcome specific and staggering obstacles. It will need to “know” how 3SAT differs from 2SAT, how quadratic programming differs from linear programming, and how approximating set cover within o(log|S|) differs from approximating it within log|S|. It will need to “look inside” computations in a way that doesn’t relativize. It will need to argue that NP-complete problems are hard, not because they look like random Boolean functions, but because they don’t look like random Boolean functions. While we have no reason to think such a proof is impossible — indeed, we have proofs satisfying some of the desiderata — we do have reason to think it will be extremely difficult. Whatever your “naïve prior probability” was that P=NP, the above considerations, together with Bayes’ Rule, suggest revising it downward.
  4. The Multiple-Surprises Argument. Here’s a point that’s not often stressed: for P to equal NP, not just one but many astonishing things would need to be true simultaneously. First, factoring would have to be in P. Second, factoring would have to be as hard as breaking one-way functions. Third, breaking one-way functions would have to be as hard as solving NP-complete problems on average. Fourth, solving NP-complete problems on average would have to be as hard as solving them in the worst case. Fifth, NP would have to have polynomial-size circuits. Sixth, NP would have to equal coNP. And so on. Any one of these statements, by itself, would overturn much of what we think we know about complexity.
  5. The Hierarchy Argument. This argument goes back to the early days of P versus NP. We know that P is strictly contained in EXP by the time hierarchy theorem. It follows that either P is strictly contained in NP, or NP is strictly contained in PSPACE, or PSPACE is strictly contained in EXP. Likewise, since NL is strictly contained in PSPACE=NPSPACE by the space hierarchy theorem, either NL is strictly contained in P, or P is strictly contained in NP, or NP is strictly contained in PSPACE. So in each chain, at least one of the separations must hold; why not all of them? To put the point differently, we know that collapse is not the general rule of the Complexity Zoo: even between P and EXP, there really are infinitely many distinct species. Indeed, for some pairs of species, like E and PSPACE, we know they’re not equal even though we don’t know if either one contains the other! The burden of evidence, then, is on those who believe that two seemingly-distinct species are the same, not on those who believe they’re different.
  6. The Known-Algorithms Argument. We do have nontrivial efficient algorithms for several problems in NP, such as matching, stable marriage, minimum spanning tree, matrix inversion, planarity testing, and semidefinite programming. But every one of these algorithms depends, in a crucial way, on some special combinatorial or algebraic structure of the problem being solved. Is this just a fancy way of repeating that we don’t know yet how to solve NP-complete problems? I don’t think it is. It’s possible to imagine a situation where we knew “generic” techniques for achieving exponential speedups, which worked for objects as complicated as Turing machines, and the only problem was that we didn’t yet know how to apply those techniques to prove P=NP. But this is nothing like the actual situation.
  7. The Known-Lower-Bounds Argument. It could be that the dream of proving superpolynomial lower bounds on circuit size is no more than that: a pipe dream. But the fact remains we can prove superpolynomial lower bounds, albeit in weaker models of computation that are easier to analyze. To give some examples, superpolynomial lower bounds have been proven on the sizes of resolution proofs, monotone circuits, constant-depth circuits, read-once branching programs, and multilinear formulas.
  8. The Self-Referential Argument. If P=NP, then by that very fact, one would on general grounds expect a proof of P=NP to be easy to find. On the other hand, if P!=NP, then one would on general grounds expect a proof of P!=NP to be difficult to find. So believing P!=NP seems to yield a more ‘consistent’ picture of mathematical reality.
  9. The Philosophical Argument. If P=NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in “creative leaps,” no fundamental gap between solving a problem and recognizing the solution once it’s found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss; everyone who could recognize a good investment strategy would be Warren Buffett. It’s possible to put the point in Darwinian terms: if this is the sort of universe we inhabited, why wouldn’t we already have evolved to take advantage of it? (Indeed, this is an argument not only for P!=NP, but for NP-complete problems not being efficiently solvable in the physical world.)
  10. The Utilitarian Argument. Suppose you believe P!=NP. Then there are only two possibilities, both of which are deeply gratifying: either you’re right, or else there’s a way to solve NP-complete problems in polynomial time. (I realize that I’ve given a general argument for pessimism.)

There are several questions that the above arguments don’t pretend to address: first, why is P versus NP a reasonable question? Second, even if P!=NP, why should we expect there to be a proof in ZF set theory? Third, even if there is a proof, why should we expect it to be within reach of the human intellect? I’m really not cut out for this C. S. Lewis role, but look for further installments of Mere Complexity as the need arises…

61 Responses to “Reasons to believe”

  1. Bram Cohen Says:

    You didn’t include the example problem reason. Specifically, the 2k-sum problem obviously requires n^k time 🙂

    I haven’t completely given up hope on combinatorial techniques yet. Natural proofs show that any polynomially computable cost function can’t show P!=NP, but that may just mean that we need an exponential-time cost function (‘number of gates necessary to make this function’ obviously excepted; it needs to be quickly computable for at least some special values). Besides, the non-combinatorial approaches don’t even hold out promise for making distinctions within the polynomial hierarchy.

    The difficulties all seem to be centered around crypto. Specifically, everybody believes that there are cryptographic functions which work, and if you assume such beasts exist then you can use them to foil the obvious techniques for showing P!=NP.

    Then there’s that problem that actually finding such mythical cryptographic functions turns out to be far more difficult than it has any right to be…

  2. Greg Kuperberg Says:

    My own view is some combination of 4, 6, 8, and 9, I think.

    Let me ask about what I thought this post would be about. What are the reasons to believe that BQP is a realistic complexity class; in other words that quantum computers are theoretically possible? I have my own standard arguments for this, but I would like to hear what other people say. (Or who knows, maybe there will be arguments on the other side.) In any case it’s a more challenging debate, because there are quite a few very smart people who are at least quietly skeptical.

  3. Scott Says:

    Greg: I’ve always thought it’s entirely possible that there’s some fundamental reason why quantum computing is impossible, but if so, that’s much more interesting than if it’s possible. Of course, the only real way to find out is to try to build a quantum computer, and at the risk of breaking my month-long no-spouting vow, I think that regardless of its outcome, that task is as important for physics as (say) the search for the Higgs boson.

  4. Andy D Says:

    While people should be aware of the many and strange implications of believing P = NP, I think it’s more important to confront your skeptic’s implicit assumption that P = NP would make complexity theory irrelevant.

    Any serious discipline needs a sense of its real limits; your Reason #5 points out that such limits do exist for algorithmics, someplace in the hierarchy, but where and why remains beyond our ken. Since we’ve got loads of significant positive algorithmic results on the books, getting even one significant (unconditional) negative one ought to be top priority. P vs NP is as good a place to look as any.

    Also, complexity theorists are hardly unable to contemplate a world where P = NP; come the day of a dream-algorithm for SAT, they’ll be the first ones out the door learning circuits with equivalence queries and playing optimal k-round generalized Parcheesi (for constant k).

  5. icedmelonsalad Says:

    “Everyone who could appreciate a symphony would be Mozart”

    Is finding an r-clique in an n-vertex graph really as rich as creativity in general?

  6. Scott Says:

    Is finding an r-clique in an n-vertex graph really as rich as creativity in general?

    It’s as rich as any sort of creativity the fruits of which can be rapidly checked by computer. That includes not only proving theorems (the usual example), but also coming up with succinct models or theories to explain one’s observations. I agree that Mozart is a tougher sell than Gauss and Buffett.

  7. Anonymous Says:

    With respect to “The Obvious Argument”: decades of research have failed to turn up an object, namely, a language in NP that cannot be computed in polynomial time. Is it thus a good strategy to conjecture that this object doesn’t exist?

  8. Scott Says:

    With respect to “The Obvious Argument”: decades of research have failed to turn up an object, namely, a language in NP that cannot be computed in polynomial time.

    I disagree. They have turned up such a language, namely 3SAT. 🙂

  9. Anonymous Says:

    I disagree. They have turned up such a language, namely 3SAT. 🙂

    So they have already proved P!=NP. And I missed that…

  10. Greg Kuperberg Says:

    Scott: I’ve always thought it’s entirely possible that there’s some fundamental reason why quantum computing is impossible, but if so, that’s much more interesting than if it’s possible.

    I agree. But what you are basically saying is that the arguments that QC is realistic are pretty good, or that existing arguments for skepticism are pretty bad. That is, when you say that impossibility would be “interesting”, it would be interesting because it would be surprising, like Bob Beamon’s long jump, not interesting because it would be harmonious, like Roger Federer winning Wimbledon for the nth time.

    So I would still be curious to see the top 10 reasons (or top 5 maybe) to be biased in favor of QC, as you and I obviously are. It is pat just to argue against specific skeptics such as Levin, Laughlin, or (allegedly) Gromov. Brilliant though these people are, they are not necessarily well-prepared to argue this specific question, just because they have a set opinion.

  11. Scott Says:

    So they have already proved P!=NP. And I missed that…

    We have a candidate hard problem. We don’t have a candidate fast algorithm. Why are you unable to make such a basic distinction?

  12. Scott Says:

    Side note: We do know a specific algorithm that finds satisfying assignments in polynomial time if P=NP. (Exercise: What is that algorithm?)

    Unfortunately, if you want that algorithm to decide SAT, then it will run in slightly superpolynomial time and fail on finitely many instances.
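
    For concreteness: the key fact such an algorithm exploits is that candidate assignments can be checked in polynomial time, so one can dovetail over all programs while trusting nothing but the checker. Below is a minimal Python sketch of that dovetailing idea. The toy “programs” stand in for a genuine enumeration of all Turing machines; that substitution is an assumption, made purely so the sketch runs.

        from itertools import product

        def satisfies(cnf, assignment):
            # Polynomial-time check: does this 0/1 assignment satisfy the CNF?
            # cnf is a list of clauses; each clause is a list of nonzero ints,
            # +v for variable v and -v for its negation (DIMACS-style).
            return all(
                any((lit > 0) == bool(assignment[abs(lit) - 1]) for lit in clause)
                for clause in cnf
            )

        def dovetail(cnf, n_vars, programs):
            # Round-robin over candidate-producing programs, trusting only the
            # checker. If any program in the enumeration finds assignments in
            # polynomial time, so does this loop, up to the dovetailing overhead.
            streams = [p(cnf, n_vars) for p in programs]
            while streams:
                for s in list(streams):
                    try:
                        candidate = next(s)
                    except StopIteration:
                        streams.remove(s)
                        continue
                    if satisfies(cnf, candidate):
                        return candidate
            return None  # every program exhausted without a satisfying assignment

        # Toy "enumeration" so the sketch actually runs: a silly heuristic plus
        # brute force. Nothing breaks if a program is wrong, slow, or useless.
        def all_zeros(cnf, n):
            yield [0] * n

        def brute_force(cnf, n):
            for bits in product([0, 1], repeat=n):
                yield list(bits)

        print(dovetail([[1, 2], [-1, 3], [-2, -3]], 3, [all_zeros, brute_force]))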

  13. Anonymous Says:


    We have a candidate hard problem. We don’t have a candidate fast algorithm. Why are you unable to make such a basic distinction?

    You claim (as being fact!) that 3SAT is a language in NP that cannot be computed in polynomial time. But you cannot prove it. Is that your scientific method? I am very well aware that 3SAT is a *candidate* hard problem.

  14. David Says:

    Scott,
    I don’t mean to be negative, but the obvious argument could have been applied to many conjectures, some of which turned out either way. The other arguments are much better reasons for continuing work instead of just stopping.
    Best wishes,
    David

  15. Cheshire Cat Says:

    Anonymous, going by your argument, for any question that’s been open a long time, we should believe equally in the answer being “yes” and the answer being “no”. That’s just silly.

  16. Osias Says:

    Scott… Maybe I’m too much of a moron, but… Your list seemed to me to also fit as a “Top Ten List to believe P=NP, but the exponent is large”

  17. Douglas Knight Says:

    I don’t think #4 is very strong.
    Here’s how I read #4: many of the reasons for believing separation also point towards Cryptomania and against Algorithmica. To believe P=NP is to go to the extreme point opposite the current.

    While that’s all true, the arguments seem much weaker applied to the other gaps than to P vs. NP. Yes, P=NP would have lots of consequences, but those consequences are so much less surprising than P=NP that they hardly seem relevant.

    Also, the pessimism argument yields Pessiland, not Cryptomania. I take factoring by quantum computers more as evidence that factoring is in BPP than as evidence of a separation. How popular is this view?

  18. Scott Says:

    Your list seemed to me to also fit as a “Top Ten List to believe P=NP, but the exponent is large”

    Osias: You raise an interesting point! As I see it, arguments 2, 8, 9, and 10 would indeed fit in such a list, since they talk about the consequences if we could actually solve NP-complete search problems. But arguments 1, 3, 4, 5, 6, and 7 are reasons to believe P!=NP specifically, since they talk only about the theoretical problem of finding or ruling out a polynomial-time algorithm, and the theoretical consequences of one existing.

  19. Scott Says:

    I don’t mean to be negative, but the obvious argument could have been applied to many conjectures, some of which turned out either way.

    David: Often the obvious argument is pretty much all mathematicians have to go on, and they consider it a good enough basis to conjecture something is true (experience having taught them that they’ll be right more often than wrong). My point was that, in the case of P vs. NP, we also have at least nine other arguments.

  20. Scott Says:

    anonymous:

    You claim (as being fact!) that 3SAT is a language in NP that cannot be computed in polynomial time. But you cannot prove it. Is that your scientific method?

    Yes. As a firm believer in science and reason, I strive to distinguish clearly between what I can prove and what I merely know is true.

  21. Scott Says:

    Douglas:

    I don’t think #4 is very strong.
    Here’s how I read #4: many of the reasons for believing separation also point towards Cryptomania and against Algorithmica. To believe P=NP is to go to the extreme point opposite the current.

    While that’s all true, the arguments seem much weaker applied to the other gaps than to P vs. NP. Yes, P=NP would have lots of consequences, but those consequences are so much less surprising than P=NP that they hardly seem relevant.

    As I see it, argument #4 is basically a Bayesian consistency check: your subjective probability for P=NP had better be at most your probability for any of its consequences, and indeed should be at most the product of the probabilities, to the extent that the consequences seem “independent” of each other. If your complexity-theoretic beliefs already satisfy that property, then argument #4 is irrelevant to you.

    Also, the pessimism argument yields Pessiland, not Cryptomania. I take factoring by quantum computers more as evidence that factoring is in BPP than as evidence of a separation. How popular is this view?

    Not very, though you’re welcome to hold it! 🙂

    The trouble is that we don’t really have a useful notion of dequantizing quantum algorithms, as we do of derandomizing randomized algorithms. I.e. if you look inside Shor’s algorithm, it seems to give you no insight whatsoever into how to design a classical factoring algorithm, and indeed the best-known classical algorithms (the quadratic and number field sieves) look nothing at all like Shor’s.

  22. Greg Kuperberg Says:

    Indeed, the right question in response to Shor’s algorithm is not whether just factoring is in BPP, but rather whether period-finding in general is in BPP. Of course oracular period-finding simply isn’t in BPP, so the only leg left to stand on in this direction is that the oracle model is unrealistic in this situation.

  23. Anonymous Says:

    Suppose P = NP and NP-complete problems can be solved by some algorithm X in polynomial time. Do you think that this could happen if the solution was very hard to come by or complex?

  24. Scott Says:

    Do you think that this could happen if the solution was very hard to come by or complex?

    Well, we know the solution would be very hard to come by! As for whether I think it could happen, see the post.

  25. Anonymous Says:

    Side note: We do know a specific algorithm that decides SAT in polynomial time on all but finitely many instances, if P=NP.

    Exercise: What is that algorithm?

  26. Scott Says:

    Side note: We do know a specific algorithm that decides SAT in polynomial time on all but finitely many instances, if P=NP.

    Exercise: What is that algorithm?

    Awesome, awesome problem — thanks so much! Here’s my solution:

    Let M_1, M_2, … be an enumeration of Turing machines with polynomial-time alarm clocks. Given a formula φ of size n, first run M_n for up to n steps, then run M_{n-1} for up to n steps, then M_{n-2}, and so on. For each machine M_i, ask it to output a proof that machines M_1, …, M_{i-1} all fail to correctly decide SAT within their specified time bounds. Let M_j be the first machine found that succeeds at this task. (If no such M_j is found then fail.) Simulate M_j on input φ and output whatever it does.

    Why does this work? By the assumption P=NP, there’s some lexicographically-first machine M_j that decides SAT within its specified time bound. For all sufficiently large n, this M_j will pass the initial testing phase (i.e., it will have enough time to prove that its “rivals” M_1, …, M_{j-1} all fail). Furthermore, for all k>j, M_k will not pass the testing phase, since it won’t be able to prove that M_j fails. So for all sufficiently large n, our algorithm correctly decides SAT by simulating M_j, in whatever the time bound of M_j is plus n^2 steps of overhead.
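
    To make the control flow concrete, here is a short Python sketch of the above (my reconstruction, not code from the thread). The two callables are stand-ins, i.e. assumptions: run(i, x) simulates the clocked machine M_i on x, and proves_rivals_fail(i, steps) asks M_i to output, within the step budget, a checkable proof that M_1, …, M_{i-1} all fail to decide SAT within their time bounds.

        def decide_sat(phi, run, proves_rivals_fail):
            # Try M_n, M_{n-1}, ..., M_1, giving each about n steps to prove
            # that all of its lower-numbered rivals fail; then simulate the
            # first machine that succeeds at this task. Both callables are
            # assumptions standing in for real Turing-machine simulation.
            n = len(phi)
            for i in range(n, 0, -1):
                if proves_rivals_fail(i, steps=n):
                    return run(i, phi)
            raise RuntimeError("no machine passed the testing phase")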

  27. Osias Says:

    >Osias: You raise an interesting point!

    Thanks.

    > But arguments 1, 3, 4, 5, 6, and 7 are reasons to believe P!=NP specifically

    Well, I think they can still be adapted. For example 1: “After half a century, we still don’t know any algorithm for an NP-complete problem that runs in polynomial time with low exponent.”

    Or the hierarchy: Yes, if P=NP, it will collapse, but if the exponent is large enough we will find an adapted version useful, like saying “yes, P=PSPACE, but PSPACE-complete problems are sooo much harder that now we call them [some novel name]”

  28. Osias Says:

    BTW, one of the reasons I find most compelling is NP equaling coNP.

  29. Scott Says:

    Note that this result is “optimal”, in the following sense: for every fixed algorithm A, there exists an oracle O relative to which P=NP but A doesn’t decide SAT in polynomial time.

    This O simply encodes an oracle that separates P from NP long enough to make A fail on some instance, and then starts encoding a PSPACE oracle.

  30. Anonymous Says:

    I find this post both interesting and annoying. (interestingly annoying AND annoyingly interesting.)

    “If you computer scientists can’t prove P=NP or P!=NP, then why aren’t we justified in believing whichever one we want? And why is the ‘consensus’ that P!=NP anything more than a shared prejudice — something you repeat to each other so your work won’t seem irrelevant?”

    One answer to the last question is that we do not need the belief that NP!=P so that our work won’t seem irrelevant, because there are millions of ways that our work will turn out to be irrelevant even if NP!=P. In fact, unless we strongly believe in the scientific endeavor and in the value of hitting one’s head against the wall in trying to slightly improve a lower bound in a very, very restricted complexity model, our work is irrelevant. And if we do believe in the value of our strange endeavor, then there is no reason to believe anything else.

  31. Anonymous Says:

    Scott, I don’t understand your oracle construction. How do you know when to switch from an oracle wrt which P neq NP to an oracle wrt which P=NP?

  32. Scott Says:

    Once you’ve fixed enough of the oracle string to ensure that A will fail on some instance phi, that’s when you switch. By assumption, later changes to the oracle string can’t cause A to succeed on phi. (Indeed, you can assume w.l.o.g. that A doesn’t even query the later parts of the oracle string given phi as input.)

  33. Anonymous Says:

    But what if A doesn’t fail on any instance phi? What if A is a correct algorithm, just not polynomial running time? Switching the oracle at some finite stage may cause A to run in poly time although it wasn’t before, no?

  34. Douglas Knight Says:

    Greg Kuperberg, could you elaborate?
    I think you are saying that there exists an oracle relative to which not only is BQP different from BPP, but the problem that separates them is a particular oracular version of period finding.
    Why should this oracular separation be stronger evidence than other oracular separations? Because it’s a statement about all (or many) oracles, rather than a statement about a carefully chosen oracle?

    (I seem to have switched my opinion on whether the machine depends on the oracle. Anyhow the question is why this is better than run of the mill oracle evidence.)

  35. Douglas Knight Says:

    Osias:
    BTW, one of the reasons I find most compelling is NP equaling coNP.

    I assume you mean the implausibility of that equality.
    I, too, find it implausible, but I find NP cap coNP equaling P to be reasonably plausible. For example, the analogy with the arithmetic hierarchy suggests it.

    What is in NP and coNP, but not known to be in P?
    Graph isomorphism is all I can think of (actually, AM cap coAM), but I think I’m forgetting something. Maybe whether a knot can be untied?

  36. Cheshire Cat Says:

    Douglas, factoring 🙂

  37. Scott Says:

    I find NP cap coNP equaling P to be reasonably plausible. For example, the analogy with the arithmetic hierarchy suggests it.

    Beware the specious analogy between computability and complexity! 🙂 Large parts of number theory, group theory, and lattice theory are in NP cap coNP.

    Maybe whether a knot can be untied?

    That’s NP-complete.

  38. Scott Says:

    But what if A doesn’t fail on any instance phi? What if A is a correct algorithm, just not polynomial running time? Switching the oracle at some finite stage may cause A to run in poly time although it wasn’t before, no?

    Yeah, alright. As stated, my oracle construction only works against algorithms A with polynomial-time alarm clocks. Do you see a way to fix this problem?

  39. Douglas Knight Says:

    I thought that whether a pair of knots are equivalent is NP-complete, but that whether a knot is the unknot was in NP cap coNP.

  40. Scott Says:

    I thought that whether a pair of knots are equivalent is NP-complete, but that whether a knot is the unknot was in NP cap coNP.

    Thanks! According to this paper, deciding unknottedness is at least in AM cap coAM (and hence in NP cap coNP under a derandomization assumption). Do you know a reference for knot equivalence being NP-complete?

    One benefit of having a blog is that, through making a fool of myself daily, I slowly get disabused of misconceptions…

  41. Douglas Knight Says:

    This paper says that computing the genus of knots in general 3-manifolds is NP-complete, and that it’s open whether questions about knots in R^3 are NP-complete:

    math.GT/0205057

    The Computational Complexity of Knot Genus and Spanning Area
    Ian Agol, Joel Hass, William P. Thurston

  42. Douglas Knight Says:

    In case that wasn’t clear, I probably confused the knot-equivalence problem with knot genus in general 3-manifolds.

  43. Scott Says:

    Is there any complexity-theoretic evidence that unknottedness is not in P?

  44. Wim van Dam Says:

    On “We do know a specific algorithm that decides SAT in polynomial time on all but finitely many instances, if P=NP. Exercise: What is that algorithm?” Scott wrote: […] Here’s my solution: Let M_1,M_2,… be an enumeration of Turing machines […]

    Nice problem indeed. I think the following proof is somewhat easier.

    If P=NP then there exists a polytime Turing machine M_j that for each phi in SAT produces a satisfying assignment. Again let M_1, M_2, … be a list of all polytime-bounded TMs. On input phi of length n the algorithm tests the outputs of M_1, …, M_n to see if any of them gave a satisfying assignment. If so, the answer is “phi in SAT”, if not “phi is not in SAT”. This algorithm fails on a finite number of inputs (which are all in SAT).

  45. Scott Says:

    Wim: I don’t think that works. What if M_1 takes linear time, M_2 takes quadratic time, M_3 takes cubic time, etc.?

  46. Wim van Dam Says:

    Wim: I don’t think that works. What if M_1 takes linear time, M_2 takes quadratic time, M_3 takes cubic time, etc.?

    Ouch, you’re right. Never mind.

  47. Anonymous Says:

    >> Wim: I don’t think that works. What if M_1 takes linear time, M_2 takes quadratic time, M_3 takes cubic time, etc.?

    >Ouch, you’re right. Never mind.

    Consider a countably infinite number of algorithms, each like Wim’s, and each subscripted by a polynomial p. The Algorithm A_p, on input phi, would simulate M_1, …,M_n (where n = |phi|) on phi for p(n) steps each, and try to obtain a satisfying assignment. If one such assignment is found, output `phi in SAT’, else output `phi not in SAT’.

    Now there must be some polynomial p for which A_p decides SAT on all but a finite number of inputs.

    Will the above work?

  48. Cheshire Cat Says:

    If we just wanted a countably infinite set of algorithms, one of which works, we could just take the set of all TMs.

  49. Anonymous Says:

    Scott: “Do you see a way to fix this problem?”

    No.

  50. David Says:

    Scott: “As a firm believer in science and reason I try to distinguish between what I can prove and what I merely know to be true.”

    I wish you’d replace know by believe. I’ve had instances where I thought I knew and was trying to prove and then found a counterexample. Sometimes science & math work that way. Thanks for a very interesting post.

  51. Scott Says:

    Scott: “Do you see a way to fix this problem?”

    No.

    OK, I got it, but it’s evil. Ready?

    Theorem: For all algorithms A, there exists an oracle O relative to which P^O=NP^O but A^O doesn’t decide SAT within any subexponential time bound.

    Proof: O will consist of two infinitely-long strings. The first string encodes the standard oracle separating P from NP. Assume for the time being that the second string is blank, and let L be the standard unary language from Baker-Gill-Solovay. Then there are two possibilities: either (1) A^O takes exponential time to decide L, or (2) there exists an input x on which A halts in finite time, having failed to decide whether x is in L. In case (1) we’re done, so assume case (2). Then on input x, A can only have queried finitely many bits of the oracle. On the second oracle string, go past the furthest point where A queried, and then start encoding (i) the standard oracle that makes P=NP, and (ii) an infinitely-long “key” to unlocking the first string (i.e., for each exponentially-long region of the first string, whether or not it contains a ‘1’ and if so where it is).

  52. Bram Cohen Says:

    Scott, I think (and I could easily be wrong) that determining the unknottedness of a loop was an open problem for quite some time, but that the proof of the geometrization conjecture should in principle lead to an algorithm for un-knotting if that’s at all possible. At least, that’s what I read one time. It doesn’t make all that much sense to me because the inverse space of a simple loop is a torus, not a sphere.

  53. Bram Cohen Says:

    3sat is a bad example problem, for the simple reason that finding hard examples of it is nontrivial. k-sum is a lot better that way – instances of it are very homogeneously difficult.

    Graph isomorphism, on the other hand, seems to have no hard instances at all, which makes one wonder, how can a problem be hard if it has no hard instances?

  54. scott Says:

    Scott, I think (and I could easily be wrong) that determining the unknottedness of a loop was an open problem for quite some time, but that the proof of the geometrization conjecture should in principle lead to an algorithm for un-knotting if that’s at all possible

    Bram: We know there’s an algorithm for unknotting; Haken proved that in 1961. Indeed, we know it’s in AM intersect coAM. The question is whether it’s in P. I’d be astonished if Perelman’s work had anything to say about that, so do let me know if you find a reference.

  55. scott Says:

    3sat is a bad example problem, for the simple reason that finding hard examples of it is nontrivial.

    You can always start with hard instances of your favorite NP problem and then reduce. 🙂 But you’re right that (particularly because of survey propagation) random 3SAT doesn’t seem nearly as hard as previously thought. Interestingly, random k-SAT for k>>3 seems much harder; apparently survey propagation fails badly on it.

    Graph isomorphism, on the other hand, seems to have no hard instances at all, which makes one wonder, how can a problem be hard if it has no hard instances?

    Indeed, for exactly that reason, many people conjecture that GI is in P.

  56. Bram Cohen Says:

    Scott, I think I’m just babbling about knots – I’m drawing from old memories of pop science articles, which has possible failure points both in my memory and in what the articles actually said.

    Anyhow, what I know for a fact is that the problem of knot classification is still open, and that it’s intimately linked with the problem of 3-manifold classification, because the inverse space of a knot forms a 3-manifold, and at some point around 1990 or so it was shown that if two knots have the same manifold then they’re the same knot (but that isn’t true of links).

    The possible link to Perelman’s work is that articles I read on knots said that the big problem with classification was proving the Poincaré conjecture. Of course, it’s entirely possible that this was speculative wishful thinking: it sure seemed like a first step in the direction of classification was to be able to prove Poincaré, but there wasn’t a formal argument about why that should help.

    Intuitively, given the vague descriptions of Perelman’s work which I’ve read, it seems possible that one could take any old manifold and reduce its Ricci flow as much as possible, with some kind of surgery thrown in there, and in the end have a very nice algorithm for canonicalization, which could then be used for comparison. Keep in mind, of course, that I have no idea what I’m talking about here.

    Oddly, web searching for knot classification turns up a bunch of nut job web sites, of people who don’t understand the difference between writing a program to compare knots using a combination of classification techniques, and conjecturing that the combination of those techniques never turns up any false positives.

    By the way, an interesting computational complexity result related to a proof of a long-standing theorem (and I’m really just saying this to contribute an actual fact to this thread, not because it has anything to do with the topic at hand) is that the old ‘proof’ of the 4-color theorem (the one which was never actually proven) yielded a quartic-time algorithm for finding a coloring, while the newer (and thoroughly accepted) proof yields a quadratic-time algorithm. It seems not inconceivable that a linear-time algorithm will be found, possibly without even substantially reworking the proof, since it wasn’t really designed with asymptotic runtime in mind.

  57. Bram Cohen Says:

    Scott: my intuition/experience is that 4sat is indeed much harder than 3sat, but the comparison isn’t exactly apples to apples: the number of clauses necessary to generate a hard problem of 4sat is much greater for the same number of variables, so the problem size is actually a lot larger.

    What is ludicrously hard for not that much larger of a problem size is problems of the form ‘of this assignment to all variables, at least half plus one must be correct’. Anything Davis-Putnam-based falls over laughably on those problems. However, there is a trick which I suspect works (see the sketch after this comment): generate a random assignment, use linear programming to find the minimum sum of that assignment, then round up to generate a new restriction (because the sum of a bunch of values which are all 0 or 1 and which must be at least 2.4 must of course be at least 3) and then throw that back into the pool of restrictions. Repeat until rounding off one of the minima yields a real solution, or you have enough restrictions to show the problem is insoluble.

    This technique superficially appears similar to the thing whose name I forget which uses the same round-up technique to prove things insoluble, but it’s actually quite different in that it’s being used randomly and doesn’t guarantee termination. I suspect that it works quite efficiently on problems which are otherwise completely intractable.

    I’d test it myself if I could get a linear programming package which works (and yes, I’ve tried, several times).
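
    Here is a rough Python sketch of the rounding loop Bram describes, under my reading of it (a reconstruction, not his code; scipy.optimize.linprog plays the role of the LP package he couldn’t get working). Constraints of the form “agree with a given 0/1 assignment in at least t places” are kept as linear inequalities over variables in [0,1]; each round picks a random assignment, asks the LP for the minimum possible agreement with it, and, whenever that minimum is fractional, rounds it up to the next integer and throws the bound back into the pool. The rounding is valid because any 0/1 solution has integer agreement.

        import math
        import random

        import numpy as np
        from scipy.optimize import linprog

        def agreement(target):
            # Returns (row, offset) with row @ x + offset equal to the number
            # of positions where the 0/1 vector x agrees with `target`.
            row = np.array([1.0 if t else -1.0 for t in target])
            offset = float(sum(1 for t in target if not t))
            return row, offset

        def cut_loop(n, pool, rounds=50, seed=0):
            # pool: list of (row, offset, lb) constraints, meaning
            # row @ x + offset >= lb.
            rng = random.Random(seed)
            for _ in range(rounds):
                target = [rng.randint(0, 1) for _ in range(n)]
                c, c0 = agreement(target)  # minimize agreement with the target
                A_ub = np.array([-row for row, _, _ in pool])
                b_ub = np.array([off - lb for _, off, lb in pool])
                res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
                if not res.success:
                    return "insoluble"  # the pool of restrictions is contradictory
                x = [int(round(v)) for v in res.x]
                if all(row @ x + off >= lb - 1e-9 for row, off, lb in pool):
                    return x  # rounding off this minimum gave a real solution
                lo = res.fun + c0
                cut = math.ceil(lo - 1e-9)
                if cut > lo + 1e-9:  # fractional minimum: add the rounded-up bound
                    pool.append((c, c0, cut))
            return None  # no guarantee of termination, as Bram notes

        # Example: each listed assignment must be matched in at least n//2 + 1
        # positions (the "at least half plus one must be correct" form).
        n = 4
        targets = [[1, 0, 1, 0], [0, 1, 1, 0], [1, 1, 0, 0]]
        pool = [(*agreement(t), n // 2 + 1) for t in targets]
        print(cut_loop(n, pool))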

  58. Bram Cohen Says:

    er, I meant ‘is much greater than for 3sat’, not ‘is much greater than the number of variables’

  59. Jan Says:

    What about the uncomputability of Magnus’s 2-generator, 2-word problems? Do the knot problems have any relation to that, or do braid groups have too much structure to exploit?

  60. Shtetl-Optimized » Blog Archive » You know you’ve made it when… Says:

    […] … Lance Fortnow and Bill Gasarch perform a Talmudic exegesis of one of your blog posts, taking more time to do so than you took to write the post. Listen to Bill and Lance dissect my Ten Reasons to Believe P!=NP, and then offer their own reasons that are every bit as flaky as mine are. (Indeed, Lance’s reason turns out to be almost identical to my reason #9, which he had previously rejected.) […]

  61. Shtetl-Optimized » Blog Archive » Reasons to believe II: quantum edition Says:

    […] At Greg Kuperberg’s request, I’ve decided to follow my Ten Reasons To Believe P!=NP with… […]