NP-complete Problems and Physics: A 2019 View

If I want to get back to blogging on a regular basis, given the negative amount of time that I now have for such things, I’ll need to get better at dispensing with pun-filled titles, jokey opening statements, etc. etc., and resigning myself to a less witty, more workmanlike blog.

So in that spirit: a few weeks ago I gave a talk at the Fields Institute in Toronto, at a symposium to celebrate Stephen Cook and the 50th anniversary (or actually more like 48th anniversary) of the discovery of NP-completeness. Thanks so much to the organizers for making this symposium happen.

You can watch the video of my talk here (or read the PowerPoint slides here). The talk, on whether NP-complete problems can be efficiently solved in the physical universe, covers much the same ground as my 2005 survey article on the same theme (not to mention dozens of earlier talks), but this is an updated version and I’m happier with it than I was with most past iterations.

As I explain at the beginning of the talk, I wasn’t going to fly to Toronto at all, due to severe teaching and family constraints—but my wife Dana uncharacteristically urged me to go (“don’t worry, I’ll watch the kids!”). Why? Because in her view, it was the risks that Steve Cook took 50 years ago, as an untenured assistant professor at Berkeley, that gave birth to the field of computational complexity that Dana and I both now work in.

Anyway, be sure to check out the other talks as well—they’re by an assortment of random nobodies like Richard Karp, Avi Wigderson, Leslie Valiant, Michael Sipser, Alexander Razborov, Cynthia Dwork, and Jack Edmonds. I found the talk by Edmonds particularly eye-opening: he explains how he thought about (the objects that we now call) P and NP∩coNP when he first defined them in the early 60s, and how it was similar to and different from the way we think about them today.

Another memorable moment came when Edmonds interrupted Sipser’s talk—about the history of P vs. NP—to deliver a booming diatribe about how what really matters is not mathematical proof, but just how quickly you can solve problems in the real world. Edmonds added that, from a practical standpoint, P≠NP is “true today but might become false in the future.” In response, Sipser asked “what does a mathematician like me care about the real world?,” to roars of approval from the audience. I might’ve picked a different tack—about how for every practical person I meet for whom it’s blindingly obvious that “in real life, P≠NP,” I meet another for whom it’s equally obvious that “in real life, P=NP” (for all the usual reasons: because SAT solvers work so well in practice, because physical systems so easily relax to their ground states, etc.). No wonder it took 25+ years of smart people thinking about operations research and combinatorial optimization before the P vs. NP question was even explicitly posed.


Unrelated Announcement: The Texas Advanced Computing Center (TACC), a leading supercomputing facility in North Austin that’s part of the University of Texas, is seeking to hire a Research Scientist focused on quantum computing. Such a person would be a full participant in our Quantum Information Center at UT Austin, with plenty of opportunities for collaboration. Check out their posting!

81 Responses to “NP-complete Problems and Physics: A 2019 View”

  1. Sebastian Oberhoff Says:

    Pardon my ignorance. But what risks did Cook take 50 years ago?

  2. Scott Says:

    Sebastian #1: Just working on complexity theory years before it was considered a serious topic. Legendarily, shortly after Cook wrote “The Complexity of Theorem Proving Procedures” (which formulates the P vs. NP problem and proves 3SAT is NP-complete), Cook was turned down for promotion by the Berkeley math department. That’s when he moved to Toronto, where he’s been ever since.

  3. I Says:

    enjoyed your talk, and learnt a lesson about scientific presentations. Also, the idea of a Zeno computer is both absurd and delightful, so thanks for mentioning that. In light of that, do you know of any lists that compile these speculative models of computation? Your work contains some of these, like the CTC model, but surely there’s a lot more out there.

  4. Scott Says:

    I #3: Thanks!! I tried to compile speculative models of computation in my NP-complete problems and physics survey, which I referenced in the post. I don’t know who else has tried. However, a fundamental difficulty here is that the more comprehensive you want such a list to be, the more obvious garbage it will need to include! 😀

    (There’s a web page that tries to compile all claimed solutions to the P vs. NP problem, and which faces exactly the same difficulty.)

  5. Bill Kaminsky Says:

    Scott, why does your slide for the AdS/CFT Correspondence include clip art of beloved-but-dim Simpsons character Cletus “The Slack-Jawed Yokel”?

    (SIDENOTE: For those “following along at home,” Scott’s AdS/CFT discussion is between 45:57 and 47:50 in the video and comprises slide 23 of his PowerPoint. Plus, on the off chance you aren’t intimately familiar with all the Simpsons side characters, here’s the relevant Wikipedia entry)

    Personally, I’m imagining Cletus at the blackboard going “Hurmmm… I reckon that certain gauge theories could be dynamically equivalent to type IIB string theory, but… darn… the sign of the cosmological constant is wrong. And… double darn!… I’m going to have to take a large N limit for the gravity-less gauge theory, essentially forcing its gravity-ful dual to its classical limit. And… triple darn!… I can’t really prove the duality is true. Maybe I’ll still get tenure through claiming I’m doing ‘quantum gravity’ nonetheless! And then finally my kids can eat something for dinner… ya know Tiffany, Heller, Cody, Dylan, Dermot, Taylor…” (comically long-continuing list of Cletus’s kids’ names is omitted for brevity’s sake, but I will mention one of Cletus’s kids seems to be named ‘Functor’… at least YouTube’s automatic closed captioning for the classic roll-call scene says so: https://www.youtube.com/watch?v=PGNVJV3Imlg)

    Of course, personally I reckon that’s unfair (or, at least, uncharitable) to thousands of AdS/CFT-phile physicists and mathematicians undoubtedly more adept at QFT and string theory than I am.

  6. Scott Says:

    Bill #5: There’s an excellent reason why my slide about quantum gravity contains an image of Cletus the Slack-Jawed Yokel.

    And that reason is … let me think.

    Oh yes. The reason is that I start the discussion on that slide by pointing out that, since we don’t yet have a full quantum theory of gravity that describes the real world, some people might take the opportunity to project their wildest computational hopes and speculations onto that not-yet-existing theory. To some extent, this is a subject where we’re all slack-jawed yokels.

  7. Michaël Cadilhac Says:

    For those who want to see it, the Sipser/Edmonds discussion is at the 19′ mark in:
    http://www.fields.utoronto.ca/video-archive/static/2019/05/2774-20374/mergedvideo.ogv

  8. Bill Kaminsky Says:

    Thanks for the quick reply, Scott.

    I feel humbled.

    That is, I didn’t realize just how much prejudice I must have been harboring toward “slack-jawed yokels”. Due to this prejudice, I simply assumed the depiction of one must be a bit of snark about *someone else* rather than a bit of wisdom about how *all of us* are slack-jawed yokels in the face of the outstanding challenges of CS, theoretical physics, and mathematics. 😉

  9. Daniel Says:

    Noob here, but could we translate the P = NP problem itself into a problem that lies in P or in NP?

  10. Scott Says:

    Daniel #9: That question has come up again and again in this comment section for some reason.

    The short, literal answer is no: P vs NP is about infinite families of questions (“given an arbitrary graph G, is G Hamiltonian?”), whereas P=?NP itself is just a single question with a yes-or-no answer.

    A longer answer would discuss the many cases where it is fruitful to look at complexity questions that arise from complexity theory itself, as well as the Natural Proofs framework, which uses complexity theory itself to (partly) explain the difficulty of P vs NP.

  11. jonathan Says:

    I can’t say I think much of either Edmonds’ interjection or Sipser’s reply. It seems completely obvious to me that theoretical explorations of mathematical structures play a crucial role in the development of practical applications, and also that the present and historical development of mathematics has been intimately connected to its applications. This is not to deny that mathematical research has a certain intrinsic value — though I find this not unrelated to its utility, since beautiful mathematical structures are often the most interesting and fruitful.

    Though perhaps I am missing particulars of the context that make the exchange more interesting.

  12. fred Says:

    Given that it took a really long time for carbon based matter to be able to answer a question as simple as 1+4=?, complexity could be quite relative!

    If not, what do you think was the major breakthrough in human brains that made us achieve such a massive jump in problem solving capability? Mastering boolean logic?

  13. Scott Says:

    jonathan #11: I didn’t say there was anything deep or original in the exchange—it was just entertaining to watch.

  14. Scott Says:

    fred #12: Are you talking about the evolution of hominid brains millions of years ago? The birth of civilization and agriculture more recently? Or some other “singularity”? Whatever the case, the answers (if known at all) are probably too big to fit in this comment section.

  15. fred Says:

    Scott #14,

    do you think that the human mind is “big” enough to access/hold all the mathematical tools that are out there, or that we could be fundamentally limited in some ways (and we can’t even realize it)?
    It’s already true that the vast majority of all human brains can’t comprehend very advanced math.

    In other words, do you think it’s possible that AIs could one day come up with proofs we will simply never understand? (I remember a recent post where you were citing an automatic proof that was doing some really strange manipulations.)
    It could be that there are just not enough (or the wrong type of) resources in a human brain to represent and manipulate certain mathematical objects.

  16. Andrew Says:

    I just saw this today in the news: https://www.nature.com/articles/s41586-019-1287-z — I’m curious what this means for quantum computing (I had a friend who did her PhD work in quantum error correction in the early 2000’s, basically this sounds like a neat way to correct those kinds of errors), and quantum theory, generally. I presume that the non-deterministic nature of most quantum events is unaffected (i.e. which way a photon will “go” in the double-slit experiment, as an example), but does this mean that, perhaps, there is some sort of “hidden variable” after all that we’ve stumbled onto in a way? I am not smart enough to get the implications of this yet.

  17. Bunsen Burner Says:

    fred #15

    I don’t see this as a very interesting question. I can already produce a program that, by permuting axioms, can after a long time produce a result that will be too complex and convoluted for you to understand. One that requires billions of operations just to check the proof. Mathematics that mathematicians can’t comprehend just isn’t mathematics in any meaningful way. An AI would need to produce deep and meaningful results to be interesting, similar to the insights that Ramanujan produced. Otherwise it will be seen to be producing nothing but noise and switched off.

  18. Zirui Wang Says:

    “SAT solvers work so well in practice.” But CDCL solvers will never make it to P, because they rely on resolution. If you use weaknesses of resolution, such as the pigeonhole principle, even the best SAT solvers are painfully slow. P vs. NP is about the worst case, and so is algorithm design. Maybe in practice there are mostly easy cases; but then practice is not about the worst case, and therefore not relevant to the discussion at all. It’s like saying quicksort works so well in practice.
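
    For concreteness, here is a minimal illustrative sketch of the pigeonhole family mentioned above: it writes PHP(n+1, n), the claim that n+1 pigeons fit into n holes with at most one pigeon per hole, as a CNF in DIMACS format. The formula is unsatisfiable, but by Haken’s theorem every resolution refutation of it has exponential size, which is why resolution-based CDCL solvers bog down on it already at modest n.

    # Minimal sketch: PHP(n+1, n) as a DIMACS CNF. Unsatisfiable, but every
    # resolution refutation is exponentially long (Haken 1985), so CDCL solvers,
    # which ultimately search for resolution refutations, struggle on it.

    def pigeonhole_cnf(n):
        def var(i, j):                   # pigeon i in 0..n, hole j in 0..n-1
            return i * n + j + 1
        clauses = []
        for i in range(n + 1):           # every pigeon sits in some hole
            clauses.append([var(i, j) for j in range(n)])
        for j in range(n):               # no hole holds two pigeons
            for i1 in range(n + 1):
                for i2 in range(i1 + 1, n + 1):
                    clauses.append([-var(i1, j), -var(i2, j)])
        return clauses

    n = 12                               # typically already painful for plain CDCL solvers
    cnf = pigeonhole_cnf(n)
    print(f"p cnf {(n + 1) * n} {len(cnf)}")
    for clause in cnf:
        print(" ".join(map(str, clause)), 0)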

  19. Scott Says:

    Zirui #18: I said it’s an argument that one hears all the time, not that it’s correct! Whenever anyone starts telling me how hard modern SAT-solvers are to stump in practice, how they’ve basically proved “P=NP in real life,” I ask them (as I asked in the talk in a slightly different context): so then why aren’t you rich from mining Bitcoin?

  20. fred Says:

    Bunsen #17,

    brute force only gets you so far, even for an AI.

    If we consider AlphaGo Zero, it’s not using brute force, and it found moves that were unknown to human champions, but those moves inspired human champions, who were then able to expand human knowledge of go.
    What is not clear is whether an AI could come up with new moves that humans just can’t “see” (for a lack of a better word).

  21. Nick Says:

    Scott: At the risk of sounding condescending, it seems to me that your public speaking ability has improved in the last few years. So good job on that.

    I’ve been reading a lot of Knuth lately. He thinks that there very well might be a polynomial time algorithm to solve NP-complete problems: “Almost all polynomial time algorithms are so complicated that they lie beyond human comprehension, and could never be programmed for an actual computer in the real world.” It makes sense that he would think that, since he’s an algorithmist. He’s devoted his life to devising and analyzing algorithms, and he’s impressed by the power of algorithms in general and also by their vastness. There’s a whole world of algorithms out there, so why shouldn’t there be one to solve a hard problem efficiently?

    Then I read an interview with Martin Davis (I think it was posted on this blog a while ago). He says, “People say ‘polynomial’, but they mean with an exponent no higher than 3. I sort of see it as a reprise of the situation with Hilbert’s Tenth Problem, where people didn’t have any real imagination about what a polynomial with high degree could do.” And in regards to the Tenth Problem: “The point is that people talked about polynomial equations, but what they were thinking of was equations of degree maybe 3 or 4, and with maybe four or five unknowns. The idea of equations of arbitrary degree and arbitrary number of unknowns — people had no intuition or experience with those. It was too hard.”

    Apparently any algorithm can be encoded in a Diophantine equation of 9 variables. That’s the magic number, 9. How many examples are there of O(n^9) algorithms? Is there a magic number for algorithms?

    Am I nuts for being nudged slightly towards the side of P = NP? These are not novel ideas, of course, just something I’ve been thinking about.

  22. Scott Says:

    Nick #21: No, you’re not nuts, just probably wrong. 🙂 Today it’s not that uncommon to see bounds like n^30 or 1/ε^50—and of course the people who prove such bounds are criticized for regarding such things as ‘efficient,’ oblivious to real-world constraints. But notice that, if they weren’t proving such bounds, they’d instead be criticized (and probably by the same people) for ignoring a whole universe of possibilities, for the laughable tunnel vision of tacitly identifying polynomial with “quadratic or at most cubic”! In any case, it’s important to understand that, for a problem like CircuitSAT, anything better than 2^n would be revolutionary—even 1.99^n, to say nothing of 2^√n. There are dozens of layers that would have to be breached in the containment building before you could get P=NP, and breaching even the first one or two of those layers would easily be the highlight of someone’s career.

  23. Job Says:

    If P != NP is like the speed-of-light limit, then you can solve NP-Complete problems in polynomial time as long as you don’t transmit any information.

    Basically, if you already know the answer, or never bother to check it, then you can solve NP-Complete problems as fast as you like.

    We’re just missing that clause really.

  24. Scott Says:

    Job #23: It’s actually possible to refine that thought into something more sensible. Namely: there are many ways to solve NP-complete problems in polynomial time “implicitly,” in such a way that you never actually learn the answer. E.g. where the answer is just one component in a superposition, or is an expectation value that takes exponentially many samples to estimate. This sort of thing is actually important to know about when reviewing QC papers: a common failure mode for the bad ones is that they won’t tell you this stuff at the beginning.

  25. Job Says:

    The real test would be if you could show that any version of SAT where you don’t learn the actual answer is easy.

    For example, a version of SAT where you are asked to produce a boolean formula that’s equally satisfiable relative to an input formula.

    And actually, that one is easy.

    I can believe that “implicit SAT” is in P.

  26. Malcolm Says:

    One takeaway from your talk is that there may be a natural principle that NP-hard problems are hard to solve.

    Yet, as you point out, practical problems are often easier to solve than theory would suggest, at least to get a good enough solution. (Problems occurring in cryptography, as with bitcoin, are an exception, maybe because they’re designed to be hard.) Your example of protein folding shows a case where there’s a Darwinian reason that the naturally-occurring instances ought to be easy.

    Could there be an underlying natural principle that practical problems are easy to solve?

  27. Scott Says:

    Job #25: Can’t you solve “implicit SAT” by just outputting whatever SAT instance you got, doing nothing whatsoever to it? Yes, I agree that that task is in P. 🙂

  28. Scott Says:

    Malcolm #26: It’s even better than that. I’d say that there’s a universal principle of logic stating that all easy problems are easy. 🙂 The challenge, of course, is to figure out which problems are easy. Certainly there’s no shortage of examples in crypto, machine learning, quantum simulation, etc. that have given people lots of trouble in practice.

  29. fred Says:

    If there’s a multiverse it’s very easy for anyone to subjectively solve any NP-complete problem: you check a random solution and blow your brains out if it doesn’t work.

  30. Job Says:

    Can’t you solve “implicit SAT” by just outputting whatever SAT instance you got, doing nothing whatsoever to it?

    Hopefully you’d at least reorder the clauses, to show effort in some way.

  31. Klaes Ashford Says:

    Fred #29,

    Good point! Actually I did construct a quantum computer (in my garage) that does something exactly like that: it picks a tentative solution at random, checks in polynomial time whether that’s a correct answer, and either keeps it (iff correct) or otherwise picks something else and cycles.
    Since every possible solution is in quantum superposition, I’m sure at least one branch has the correct answer already after one cycle. Since each branch can pick another try (iff the first attempt is wrong), I’m sure the proportion of branches with the correct solution increases with time. So, I just have to wait long enough to be more and more likely to actually measure the solution.
    I’ve tried this with random instances and it worked well. I’ve tried this with instances from cryptographic problems and it didn’t work. The problem is that the time I need to wait seems to increase exponentially (or high-polynomially?) with the size of the instance. Could you suggest some clues to solve this technicality?

  32. Bunsen Burner Says:

    Scott, I have a question on the mathematics possible given restrictions on computability and complexity. I know that Pour-el and Richards showed that you can pretty much get most of applied mathematics using computable analysis. But what happens if we restrict ourselves to mathematics with operations whose time complexity is in P (for example). What kind of mathematical structures do we have to give up then? And does it matter from the perspective of say an engineer, or a theoretical physicist?

  33. Jonas Says:

    What’s your opinion about this?
    https://www.nature.com/articles/s41586-019-1287-z

  34. Scott Says:

    Jonas #33: Very cool experiment by serious people, but zero fundamental change to quantum mechanics, despite the implicit invitation to popular writers to pretend there is one. One of probably hundreds of Science and Nature papers to which such a description could be applied.

    In the future, please, no more drive-by linkings asking me to respond to them.

  35. Scott Says:

    Bunsen Burner #32: That’s a huge, complicated, interesting question. I’m not an expert on these issues, but I know enough to know that the answer is too big to fit in a blog comment.

    Briefly, though, Nash’s theorem that every finite game has a Nash equilibrium is an excellent example of something that has a “computable proof,” but almost certainly no “polynomial-time computable proof” (unless FP=PPAD). Another good potential example, brought to my attention by Dan Roy, could be de Finetti’s theorem in probability theory.

  36. Bunsen Burner Says:

    Yeah, I felt it was too broad a question as well. It’s part of a crazy thought I had a long time ago when I read Pour-El and Richards. Basically, is there a minimal effective mathematics that is constrained only by physical reality? Thinking about it now, I’m not sure this can ever be stated in a well-defined enough way to be useful.

  37. Michael P. Says:

    About “in real life, P=NP” (for all the usual reasons: because SAT solvers work so well in practice), I wonder whether the smoothed complexity of practical algorithms for NP-hard problems has been investigated.

  38. Don McKenzie Says:

    Hey Scott. “Quantum computing in one slide”, in your talk, really was for me. Overlain on the section of QCSD that I “almost” got, I think I finally get it. Thanks.

  39. Jr Says:

    Great to hear you want to blog more.

    As a slightly malicious comment, I wonder how many mathematicians applauded Sipser’s statement while having talked grandly about possible applications in grant proposals.

  40. James Warren Says:

    Zirui Wang #18,

    Couldn’t someone create a ‘universal’ family of CNFs which encoded every SAT instance of a given size and use a stronger proof system like extended resolution to search for polynomial length proofs? Perhaps someone could search for patterns in the proofs and stumble upon a polytime algorithm.

    All,

    Time travel seems to be able to solve hard problems. Suppose that that result could be strengthened to show that quantum gravity could only exist if gravity was solving NP-hard (or PSPACE-hard problems) in polynomial time. Then, we would have a good physical argument that P=NP, but we would have no mathematical proof of P=NP.

  41. ibmhal Says:

    Scott,

    the argument that, if anyone had an (almost) polynomial algorithm for NP problems, he would already be rich, e.g. by exploiting/manipulating Bitcoins, is not as convincing as you may think. Did you ever try to formulate the ECDLP (goal: manipulating Bitcoins) or the SHA-256 function (goal: mining Bitcoins faster) as, e.g., a 3-CNF instance? ECDLP as 3-CNF is almost infeasible, and double SHA-2 will still lead to a few million clauses. Now if the polynomial SAT algorithm had e.g. a running time of n^3, it would still be a very tough calculation.
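
    As a rough back-of-envelope check of that last point (the clause count below is an illustrative assumption in line with “a few million”, not a measured figure):

    # Even a hypothetical n^3-time SAT algorithm is hopeless on a
    # multi-million-clause encoding of double SHA-256.
    n = 5_000_000                        # assumed size of the CNF encoding
    ops = n ** 3                         # ~1.25e20 elementary steps
    ops_per_second = 1e10                # a generously fast single machine
    years = ops / ops_per_second / (3600 * 24 * 365)
    print(f"{ops:.2e} steps, roughly {years:,.0f} years")   # on the order of centuries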

  42. L Says:

    Is there proven advantage of quantum communication and quantum information theory or does any such advantage rely upon speculation like quantum computing and quantum complexity theory?

  43. Scott Says:

    ibmhal #41: Often these people claim to have a linear or quadratic-time algorithm. But even more importantly, typically a dramatic new algorithm will do much, much more in practice than it can initially be guaranteed to do in theory. What’s good for the goose is good for the gander: if, on one side of the argument, people get to appeal to practical considerations like the probable blowup in the number of clauses, then surely they get to appeal to practical considerations on the other side as well.

  44. Scott Says:

    L #42: The answer is yes. In quantum communication complexity (to take one of your examples), there are proven unconditional exponential speedups. See for example this paper by Gavinsky et al., or this one by Raz.

  45. Frank D. Says:

    The cosmological parallelism slide may now be outdated, because asymptotic de Sitter models with constant dark energy have now been falsified at >5 sigma by disagreements between Hubble constant measurements from the CMB and from supernovae.

  46. nosy snoopy Says:

    Scott, did you watch the movie “Dark City” (1998) and read Strugatsky’s “A Billion Years Before the End of the World”?

  47. asdf Says:

    Bunsen Burner #32, there are certainly tons of exponential-time algorithms used in daily life. That means sufficiently large instances aren’t feasible to solve, but that is ok. It’s why NP-completeness is interesting, after all: because we currently use brute search on those problems.

    At high complexity, IIRC the Knuth-Bendix completion algorithm’s proof of termination is non-constructive, equivalent to Kruskal’s tree theorem on graphs if I understand it right, i.e. much worse than exponential.

  48. asdf Says:

    Bunsen Burner #34, regarding computable analysis, there’s actually a complexity theory for real-valued computation (don’t know if it’s active these days) that looks somewhat different from the complexity theory we’re used to. There’s a good book “Complexity and Real Computation” by Blum, Cucker, Shub, and Smale about it. See also: https://en.wikipedia.org/wiki/Blum-Shub-Smale_machine

  49. Scott Says:

    nosy snoopy #46: No.

  50. Yash Says:

    SAT Solvers are very easy to stump in practice. One could very easily hide parity+counting encodings in a CNF and a SAT solver would suffer and wouldn’t even know why.
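
    For illustration, here is a minimal sketch of hiding a parity constraint in a CNF by chaining auxiliary XOR variables. A single chain like this is trivially satisfiable and easy on its own; the standard hard instances (for example Tseitin formulas over expander graphs, or systems of random XORs) combine many such constraints into an inconsistent linear system over GF(2), which resolution provably cannot refute in polynomial size, even though Gaussian elimination handles the same instances instantly.

    # Minimal sketch: encode x1 xor x2 xor ... xor xk = 1 as plain CNF clauses
    # by introducing auxiliary variables t with t <-> (acc xor x).
    # (Illustrative only; variable numbering and sizes are made up.)

    def xor_clauses(a, b, c):
        # the four clauses expressing c <-> (a xor b)
        return [[-a, -b, -c], [a, b, -c], [a, -b, c], [-a, b, c]]

    def encode_parity(xs, next_var):
        """Clauses forcing xs[0] xor ... xor xs[-1] = 1, using fresh auxiliaries."""
        clauses = []
        acc = xs[0]
        for x in xs[1:]:
            t = next_var                 # t <-> (acc xor x)
            next_var += 1
            clauses += xor_clauses(acc, x, t)
            acc = t
        clauses.append([acc])            # force the overall parity to be odd
        return clauses, next_var

    clauses, top = encode_parity(list(range(1, 41)), next_var=41)
    print(f"p cnf {top - 1} {len(clauses)}")
    for clause in clauses:
        print(" ".join(map(str, clause)), 0)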

  51. asdf Says:

    Sorry to make 3 posts in a row on same topic, but it also occurs to me that abstract strategy games like chess and go are supposed to be microcosms of real life, military conflict, etc.; and generalized versions of the games are PSPACE-hard. So I’d infer that we probably really do concern ourselves in real life with problems outside NP, and we’ve evolved something called “intelligence” which amounts to heuristics for solving them. Much of the current hotness in applied math is in stuff like neural networks to implement similar heuristics on computers.

  52. Sniffnoy Says:

    asdf #47:

    Hold on, if it’s equivalent to Kruskal’s Tree Theorem, is that really nonconstructive?

    Now, certainly the original proof of Kruskal’s Tree Theorem was nonconstructive. But my understanding is that, for a WPO X, a proof of “The type of X is at most α” (for some ordinal α) is necessarily also a constructive proof of “X is a WPO”. In which case, Kruskal’s Tree Theorem was proved constructively by Diana Schmidt in 1979, when she proved that the type of the WPO of plane trees is at most the small Veblen ordinal (a matching lower bound was later supplied by Herman Jervell, although annoyingly he didn’t phrase it in those terms).

    Now there is perhaps a bit of sleight of hand here — what do we mean by “constructive”? This is where I, not being an actual logician, get a little fuzzy. Like, you’re not looking at the (transfinite) type of the WPO here; you’re looking at some finite function of natural numbers we get that’s somehow associated to the WPO (or perhaps to the statement “X is a WPO”). I’m not entirely clear on how these are related, and if being constructive in one setting necessarily translates to the other. So maybe this isn’t constructive in the relevant sense. Still, I’d at least hope it would be?

    This is probably a good time to ask a related question I’d been wondering about. It seems that, for a number of important computable WPOs X, the type of X, and the proof-theoretic ordinal of the statement “X is a WPO”, are equal. (E.g., Michael Rathjen and Andreas Weiermann proved that the proof-theoretic ordinal of Kruskal’s Tree Theorem is equal to the small Veblen ordinal — and they did so by making use of Schmidt’s methods!) Now obviously this can’t be true for all X (I already mentioned we need to assume computability; another obvious condition would be that, say, o(X) shouldn’t be a successor ordinal), but could it perhaps be true with some minor requirements on X or its type?

    I asked Weiermann about this some time ago, and his response was that yes, he’d also noticed this phenomenon, but not only did he not know how to prove it, but that, due to encoding issues, he wasn’t even sure how to formalize it! That was certainly surprising. But I wanted to put that out there as like — hey, this is something people should be thinking about. Like, there’s been a fair amount of work computing types of WPOs (quite a bit of it by me, though not written up yet 🙂 ). But there’s been a bunch more work computing proof-theoretic ordinals. If these are actually the same (given reasonable conditions), then that means that (implicitly) a lot more WPOs have their type known than I thought!

  53. asdf Says:

    Riffing even further on that idea, maybe the schism between Platonic dialectic and Aristotelian rhetoric (at least in the kiddie version from Zen and the Art of Motorcycle Maintenance) is really about IP vs NP. Dialectic is an interactive presentation of an argument, while rhetoric is presenting the whole argument all at once (like a P-time checkable witness to a proposition). The schism was never really resolved in the Pirsig book (no idea whether philosophers ever came to a conclusion) but these days, we CS geeks know that IP=PSPACE so in all likelihood the Platonic method really is more powerful. Maybe this could even be played into a political ideology thing, wanting to run stuff by a fixed set of rules vs. letting people work stuff out as it happens.

  54. Mitchell Porter Says:

    James Warren #40 said

    “Time travel seems to be able to solve hard problems. Suppose that that result could be strengthened to show that quantum gravity could only exist if gravity was solving NP-hard (or PSPACE-hard problems) in polynomial time. Then, we would have a good physical argument that P=NP, but we would have no mathematical proof of P=NP.”

    P and NP by definition do not involve time travel. It’s the distinct complexity class “P_CTC”, meaning polynomial time in the presence of closed timelike curves (first defined by Scott Aaronson, I think), which is equivalent to PSPACE.

    If you want your statements to be clear, you need, first of all, to have some way of talking about “polynomial physical time, taking advantage of whatever the not-yet-fully-known physics of the world allows”, as opposed to “polynomial time, within the computational paradigm of discrete deterministic computation”.

    If I call that “P_physical”, then what are you saying? In effect, you ask us to imagine, what if quantum gravity “could only exist” if P_physical = NP or even P_physical = PSPACE. But I am confused about what kind of argument you are imagining. Is this an argument about quantum gravity in all logically possible forms, and not just quantum gravity as constrained by empirical data?

    This seems likely to get into a question of semantics, as I am sure there are quantum field theories, whose proponents would regard them as models of quantum gravity, that don’t have the properties you imagine; but perhaps you would not regard them as full-fledged quantum gravity…

    I have a feeling that your line of thought is motivated by some specific hope or idea to do with hypercomputation, possibly fallacious, but I can’t quite make out what it is.

  55. asdf Says:

    Sniffnoy, you know more about this stuff than I do. I hadn’t heard of Diana Schmidt’s proof but I’ll try to find out more about it. Do you know if anything similar has been done for the Robertson-Seymour (graph minor) theorem? That has even higher arithmetic strength than Kruskal’s theorem, and I thought its nonconstructivity was still significant in CS. I.e. (according to Wikipedia) it nonconstructively proves that for a given graph G, there is a P-time algorithm for finding if G is a minor of some other graph H. So it shows that a certain family of problems is in P but doesn’t give any algorithm or say that there is a uniform algorithm.

    Meanwhile (don’t know if it’s relevant) these guys claim you need full second-order arithmetic to prove at least one approach to the Feynman path integral: https://arxiv.org/abs/1711.08939

    That is way outside what is used in traditional analysis or anything where someone has managed to write down an explicit proof-theoretic ordinal.

  56. Zirui Wang Says:

    Replying to James #40, optimistically someone could stumble upon a pattern. Even a conclusion that no pattern is possible is a good result. But it’s easier said than done. To me, there is too much theory in theory and not enough “roll up our sleeves and analyze”-kind of results. I mean SAT instances are all there is to analyze. Can’t we classify them? I’m being more direct than your approach. AlphaGo struck me because it doesn’t traverse the implicit search space but takes the game board at face value and says “I remember through complicated neural nets that this move is correct.” If we could have a precise version to see the solution, of course after finding short certificates of unsatisfiable instances, then P = NP!

  57. Zirui Wang Says:

    It’s not important to stump SAT solvers. There are 2 important problems: 1) Solve important real-world instances; these mean money. I don’t care whether they are easy or hard. The SAT solver may fail miserably on your test problems, but if they solve MY problems, that’s good enough. 2) Work in asymptotic polynomial-time, or come up with a counterexample that falsifies this. This is important in theory.

  58. Sniffnoy Says:

    asdf #55:

    Sniffnoy, you know more about this stuff than I do.

    Well, I wouldn’t assume I know more about this than you; I probably know more than you about computing types of WPOs/WQOs, but probably not the other aspects. 🙂

    Do you know if anything similar has been done for the Robertson-Seymour (graph minor) theorem?

    Regarding the graph minor theorem, no, the type of the WPO of finite graphs under the minor ordering has not been computed (or bounded above). I mean, that’s as far as I’m aware, anyway, but I’m quite certain it hasn’t been done. That said, as you mention, the proof-theoretic strength has been determined, so if we assume for a moment my speculation that these ought to be the same, then the type has been determined implicitly. 🙂 But since apparently no such theorem is actually known, in reality it has not been determined.

    That has even higher arithmetic strength than Kruskal’s theorem, and I thought its nonconstructivity was still significant in CS. I.e. (according to Wikipedia) it nonconstructively proves that for a given graph G, there is a P-time algorithm for finding if G is a minor of some other graph H. So it shows that a certain family of problems is in P but doesn’t give any algorithm or say that there is a uniform algorithm.

    Ah, careful here! Here we’re getting into a different sort of constructivity vs nonconstructivity. Although you seem to have gotten a bit mixed up as to the content of the theorem; there is an explicit algorithm for determining if G is a minor of H, and while it’s not polynomial-time, it is polynomial time for fixed G. This is often discussed together with the Robertson-Seymour theorem, but it in no way depends on it. Right, the relevance of the Robertson-Seymour theorem is that it tells us that finite graphs under the minor ordering form a WPO, so any minor-closed set of finite graphs can be characterized by a finite set of forbidden minors, which then (by the above fact) nonconstructively yields a polynomial-time algorithm for determining membership in that set. But note where the nonconstructivity comes in; it’s purely in determining the forbidden minors, not anywhere else.

    But anyway, the thing is, it doesn’t matter how constructive your proof of WPOness is — that sort of nonconstructivity is, I’m pretty sure, impossible to eliminate. Like, what would it even mean to make that constructive? You’d have to give some procedure for, given an upward-closed set in your WPO, finding its set of minimal elements. And how are you going to do that? And then if you want to turn this into an algorithm, then we get the question of, wait, how is the input being represented? And uhhh I think you see the problem there.

    But the other thing is, this touches on the question of, what do we even mean by WPO, or, for that matter, well-ordering? Because my understanding is, in constructive mathematics, they use a different definition. Like, normally we define a well-ordered set as one where every nonempty subset has a minimum, but in constructive mathematics they define it as a total order where you can do induction instead. And classically these definitions are equivalent, but constructively they’re not. The problem with using the minimum-based definition in a constructive setting is that it’s too strong, that you can’t constructively prove anything to be well-ordered in that sense (well, unless it has at most one element). Because if 2 is well-ordered, you can recover LEM from that. So the minimum-based definition is no good in a constructive setting!

    So when I said “a proof of an upper bound on o(X) is a constructive proof that X is a WPO”, I meant that assuming some induction-based definition of WPOs that makes sense constructively, not a minimal-elements-based definition, since (in nontrivial cases) those can’t be proved constructively. (I’m assuming all this transfers over from well-orderings to WPOs; I assume all the same reasoning should apply!)

    I mean, if you want to see this more clearly, forget finite graphs and minors, just look at, say, NxN. This is a WPO, with type ω². But given an upward-closed set in this order, how are you going to determine its set of minimal elements? Let alone as an algorithm? Even such small, simple cases of well partial ordering can’t be made constructive in the sense you seem to want.

    But like I said, not being an expert here, I’m not sure how this relates to the sense of constructiveness used to generate those sequences you mention. So uhhh yeah ultimately I guess I’m unsure of the answer to the original question. Or in short, Kruskal’s Tree Theorem, has it been proven constructively?
    1. In the sense of, has the type been determined, yes.
    2. In the sense of, can you constructively get the minimal elements of an upward-closed set, no, and it never will be (barring inconsistency).
    3. In the sense originally under discussion, relating to certain sequences of numbers, I have no idea!

  59. L Says:

    @Scott 44 Quantum communication complexity is not quantum information theory. I would categorize communication complexity as a sub area of computational complexity.

  60. Scott Says:

    L #59: Quantum communication complexity, like quantum query complexity, is similar to information theory in that the number of “internal computational steps” is irrelevant—all that matters is what gets passed back and forth between Alice and Bob, or between algorithm and oracle. On the other hand, it’s similar to computational complexity in that you typically care about whether a complexity measure is asymptotically polynomial or exponential. So it’s really in between the two fields.

    In any case, the examples I gave—which are unconditional quantum communication advantages, no matter how you verbally categorize them—sufficed to answer your original question in the affirmative. If you wanted examples with a more “information-theoretic” flavor—well, information theorists typically care about pinning down what we complexity theorists might call “mere constant factors.” But there are many examples, such as superdense coding and quantum fingerprinting using mixed states, where quantum communication indeed gives (in those cases) a factor-of-2 improvement over any possible classical encoding.

  61. Sniffnoy Says:

    Er, in that last post of mine (#58), at the end, where I wrote “has the type been determined”, what I really mean is “bounded above”, not “determined” (although it’s certainly been determined!), and a better way of phrasing what I mean is, “In the sense of, has a version of the statement that a constructivist would use instead of the standard formulation been proved constructively, yes” (because I’m fairly sure that’s the same thing?). But again note how such a version (in constructive logic) is weaker than the standard formulation, etc., I’ve already said this. Or another way of saying this is, has it been proved inductively. So, once again, yes to that notion of constructive proof (i.e. proof by induction/upper bound on type), no to the really strong version which is impossible, IDK regarding the finite sequences one.

  62. Bunsen Burner Says:

    asdf #48

    Thanks, that looks very interesting. However, apparently real computation is stronger than Turing. What I was after is slightly different. Can computation and complexity constrain the kind of operations we can describe Nature by? For example, how real are real numbers? Does anyone believe that the value of a physical constant could be uncomputable? What would that even mean? Surely an experiment is just an algorithm for calculating a physical value. So in a sense, I am curious whether, by restricting ourselves to only physically possible computation, we can deduce any interesting physical laws.

  63. Job Says:

    The SAT solver may fail miserably on your test problems, but if they solve MY problems, that’s good enough.

    If a SAT solver is able to solve all instances that you care about then you can probably do better than a SAT solver.

  64. adsf Says:

    Sniffnoy I’ll try to digest that last post after my next coffee cycle ;). Meanwhile I’ll mention that Levin’s universal search is an explicit P-time algorithm for graph minors, now that we know that a P-time algorithm exists for any given G.

    Bunsen Burner, here is a somewhat modified idea of Leonid Levin. Flip a coin a million times and write down the resulting bit string (or maybe if you want, generate the string by looking at the positions of stars). The string has a Kolmogorov complexity K, which is uncomputable. Quantum mechanics says the coin flips are close to random, so the string is (almost) incompressible. But we have no way to disprove the unlikely theory that the string can be compressed to a few hundred bits, since our universe is actually a simulation running on a computer with an RNG whose state contains only that many bits.

  65. Jon Lennox Says:

    What do cryptographic problems look like for SAT-solvers and the like? Has anything been written about what characteristics they have that make them hard?

  66. MauriK Says:

    Hi:

    Looking at the video, I couldn’t stop laughing about the Zeno computer and the overclocking comparison … Do the energy requirements still hold if the gates are reversible, as Fredkin proposed :-)?

    MauriK

  67. I Says:

    Scott #4

    That was a good read. If your goal here was to drive home the audacity of the claim that P=NP to non-computer scientists, then you can add one more to your score.

    A few questions from someone who’s not a computer scientist, or even a scientist yet:

    1) In the second paragraph of section 2, you say collapsing the polynomial hierarchy is almost as bad as P=NP. But doesn’t the former imply P=NP anyway?

    2) Could you expand on the bit about physicists thinking computation in QFT (QFC?) is exactly as powerful as QC? It sounds plausible, but the fact that you can get infinitely many interactions feels like it should change things.

    3) You call P=NP an “almost” metaphysical breakthrough. Is the “almost” there because you’re comparing it to questions like “what is time?”, “why does the universe exist?” and thinking it comes up a touch short? As a comparison, do you think the Incompleteness theorems are “almost” metaphysical?

    4) Related to 3), do you think hyper-computation would be a meta-physical breakthrough? It just seems like it would require so many earth-shaking results to be plausible.

  68. Mike Hanley Says:

    Hey, I like your jokes!

  69. Andrew Says:

    Scott: My apologies, as I did something of a run-by fruiting with the Nature article about observing a quantum “jump” in action too. This was the explanation I was looking for, it’d be really really cool if science news outlets had more of this kind of writing in them:

    https://physics.stackexchange.com/questions/421919/quantum-made-easy-so-what-is-quantum-mechanics-all-about/421939#421939

    Anyways, everything is as it was. Now I get, a bit better at least, why this doesn’t change things.

  70. Andrew Says:

    Er, I meant this link: https://physics.stackexchange.com/questions/484675/does-the-new-finding-on-reversing-a-quantum-jump-mid-flight-rule-out-any-inter

  71. Stella Says:

    Bunsen Burner #32: Sorry to bump this question so late, but that question is addressed by the Ultrafinitist project. Ultrafinitism is a metaphysical position, more extreme than constructivism, that says not only must all numbers be constructible, but they must be efficiently constructible. All (two) of the ultrafinitists that I know identify “efficiently” with “polynomial time” in this context. They’re more famous for denying extremely large numbers, but the motivation is explicitly that extremely large numbers cannot be efficiently constructed.

    Interestingly, ultrafinitists reject extremely long proofs in addition to extremely large numbers. The paper On Feasible Numbers studies the connection between these ideas.

  72. Zirui Wang Says:

    Job #63: What I have in mind is model checking. For example, Intel CPUs need verification. Such SAT instances contain millions of clauses, so you can’t do it by hand. But they are not hard for SAT solvers. What a happy coincidence! Half-good algorithms meet simple instances of real-world value. That’s what Turing awards are for.

  73. James Warren Says:

    “P and NP by definition do not involve time travel. It’s the distinct complexity class “P_CTC”, meaning polynomial time in the presence of closed timelike curves (first defined by Scott Aaronson, I think), which is equivalent to PSPACE.”

    True, but if you try to approximate a causal PDE by a CNF, the result should be a Horn formula and thus easily solvable given initial conditions. Doing the same with a PDE describing CTCs should introduce additional constraints representing violations of causality and should describe general hard CNFs. If you add randomness to the model as Deutsch did, then CTCs should be described by Papadimitriou’s ‘Games against Nature’ SSat which is PSPACE-complete. (One would hope that you characterize BPP/BQP with quantified Horn formulas, but there are difficulties with doing so.)

    “I have a feeling that your line of thought is motivated by some specific hope or idea to do with hypercomputation, possibly fallacious, but I can’t quite make out what it is.

    Almost. Hypercomputation in general relativity likely requires P=PSPACE for there to be strong enough error correction to prevent ‘hyperstructures’ from being destroyed by noise. My actual thinking is from realist/classical interpretations of quantum mechanics (can we think of QM as broken time travel?) and that any computations should actually take place somewhere in spacetime.
    If PSPACE computations in time travel don’t take place in spacetime, where do they happen? Is it just magic or is there something beyond spacetime where the calculations are done?

    “Is this an argument about quantum gravity in all logically possible forms, and not just quantum gravity as constrained by empirical data?”

    It’s an argument for quantum gravity defined by a spin 2 gauge field and not something that just approximates it. A starting point for a proof might be showing that what makes quantum gravity hard is that conventional methods think P!=NP (and perhaps gravity is finite in string theory because strings “know” that P = NP).

    I imagine that P = NP = PSPACE, but that the simplest algorithms for solving NP problems are significantly more complicated and more vulnerable to noise than the naive algorithms and solving PSPACE problems requires another leap in algorithmic complexity over solving NP problems.

    The “specific hopes” I have are more like this:

    1) QM is formed by “causal turbulence” in the presence of CTCs. There is an energy cascade to smaller and smaller length scales and the resulting fractal structure is exponentially large. None of this complexity can be used for a computational advantage, however, because the difficulty of nature finding a polytime SAT solver means that noise destroys this complexity and you are confined to unitary evolution.

    2) Gravity can solve NP-hard problems, but gauge fields can’t. So, gravity will be much weaker because P=NP algorithms are fragile and the gravitational constant will be zero if P!=NP.

    3) The emergence of spacetime likely depends on P=PSPACE. Simulating different spacetime dimensions or other data structures seems like it would require arbitrarily complex structures with good error correction, which SSat. It seems natural to try to explain the emergence of spacetime with fixed dimension and signature as a product of PSPACE being “harder” (to find a polytime algorithm) than NP. (Imagine an n-dimensional Turing machine simulating a m-dimensional Turing machine. Doing so in spacetime at all length scales seems like it would require really good error correction.) I also suspect that violations of conservation of energy predicted in GR will be destroyed by noise without P=PSPACE algorithms.

  74. Stella Biderman Says:

    The question “what mathematics do we get if we restrict our attention to things that have efficient proofs” is precisely the question of the ultrafinitist program. It’s not quite the same thing as “polynomial time proofs”, because ultrafinitists tend to dislike arbitrarily large coefficients, but if that’s of interest to you, definitely go read some ultrafinitist mathematics.

  75. anon Says:

    Sniffnoy #58

    >Like, what would it even mean to make that constructive? You’d have to give some procedure for, given an upward-closed set in your WPO, finding its set of minimal elements.

    I would buy an algorithm that eats a formal proof that some set is closed under minors, and spits out its set of minimal forbidden minors. Obviously “constructive” is relative to “proof”: The weaker the permitted proof system / axioms in the input, the easier such a task would be.

    Are you aware of any known results there?

  76. Sniffnoy Says:

    Anon #75: Interesting question. I don’t know of any results of that form, but this is rapidly getting away from things I know well. My expertise when it comes to WPO is really just in computing their types! Algorithms taking proofs as inputs sounds dangerously like actual logic. 😛

    As always, if I did want to look at this, I’d probably start not with trees or with graphs but with N×N… or maybe just with N… I think N itself might be enough to give you trouble here. In short, while this is definitely getting outside of my expertise, I’m skeptical that this is possible.

  77. Sniffnoy Says:

    anon #75:

    Yeah, thinking about it more, what you’re asking for definitely seems impossible. Consider what this translates to for N; what you’re asking for, in that context, would be an algorithm that takes in a proof that a subset of N is finite, and returns its maximum element. That’s impossible. Like, say, for a Turing machine T, we define S(T) = {0} if T halts on empty input and S(T) = {0,1} otherwise. (So we didn’t even need to go to ω, we only needed to go to 2.)

    Or, to make it more like the original context, with the context being 2 rather than ω, our two possibilities could be {1} and {0,1} so that they’re upward-closed instead of downward-closed. Regardless, it’s not possible.

    And if you try to rescue this by requiring that the input be a constructive proof, well, then the algorithm is unnecessary, right?

  78. Sniffnoy Says:

    OK, so, I may have been wrong about the constructivity thing.

    Which is to say: I had assumed that to put an upper bound on a WPO’s type was to constructively prove that it is a WPO, for some appropriate constructive definition of a WPO.

    But, I don’t actually know if there is a standard constructive definition of a WPO! I had assumed there was, but the nLab doesn’t give one, only a classical definition. (Since I’m not actually a logician, I wouldn’t know if there is one if it’s not on nLab. 😛 )

    So maybe at least the analogue is true for a WFPO? That to put an upper bound on a WFPO’s height is to constructively prove it to be well-founded? There is, at least, a standard constructive definition of a WFPO, so at least that isn’t a problem here.

    (Note that a constructive definition of a WFPO does not immediately yield a constructive definition of a WPO via the power set definition, because, well, my understanding is that constructivists don’t like power sets. 😛 )

    But somehow I failed to notice a way that this is trivially false — say you have a countable WFPO (which you have proved to be a WFPO in a nonconstructive manner). Then, obviously, its height is less than ω_1. That’s an upper bound, that is easily proved, but in no way provides a constructive proof that your order was well-founded. Oops.

    IDK; it still might be possible to rescue this. Like, say we restrict to computable orders, and require that the upper bound be a computable ordinal. Maybe it could be true then? IDK, I am really not a logician.

    So, while Diana Schmidt certainly did show in 1979 that the type of the WPO of plane trees is at most the small Veblen ordinal, this may not in fact constitute any sort of constructive proof of any form of Kruskal’s Tree Theorem. I may have just been wrong about that.

    (…I’d kind of hope it does, but…)

  79. TheDividualist Says:

    We outside of academia solve our NP-complete problems by approximating them with genetic algorithms. Solve them in the business sense of “good enough approximation”, not in the mathematical sense. You just enter what goods you want to deliver to which locations and how many trucks you have, and run it, and when the pace of generating better and better approximations slows down enough that you run out of patience, you just stop the software, and you have a plan for how to load the goods onto the trucks and what routes they should drive.
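
    Here is a toy sketch of that workflow, with made-up data: a single closed route over random points stands in for the real trucks-and-goods problem, and the loop simply stops after a fixed number of generations, i.e. whenever patience runs out.

    # Toy genetic algorithm for a single delivery route (a TSP stand-in).
    # All data and parameters are invented for illustration.
    import random

    random.seed(0)
    N = 30                                        # number of delivery stops
    stops = [(random.random(), random.random()) for _ in range(N)]

    def route_length(order):
        # total length of the closed tour visiting the stops in this order
        return sum(((stops[a][0] - stops[b][0]) ** 2 +
                    (stops[a][1] - stops[b][1]) ** 2) ** 0.5
                   for a, b in zip(order, order[1:] + order[:1]))

    def crossover(p1, p2):
        # order crossover: keep a random slice of p1, fill the rest in p2's order
        i, j = sorted(random.sample(range(N), 2))
        middle = p1[i:j]
        rest = [s for s in p2 if s not in middle]
        return rest[:i] + middle + rest[i:]

    def mutate(order):
        # reverse a random segment of the route
        i, j = sorted(random.sample(range(N), 2))
        order[i:j] = reversed(order[i:j])
        return order

    population = [random.sample(range(N), N) for _ in range(100)]
    for generation in range(500):                 # stop whenever patience runs out
        population.sort(key=route_length)
        survivors = population[:20]               # keep the best routes
        population = survivors + [
            mutate(crossover(*random.sample(survivors, 2))) for _ in range(80)]
    best = min(population, key=route_length)
    print("best route length found:", round(route_length(best), 3))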

  80. Scott Says:

    TheDividualist #79: Wow, that sounds amazing! I’ve been in computer science for 25 years, and I’d never once heard of these “genetic algorithms,” or the possibility that you could solve an NP-complete problem only on some ill-characterized subset of instances or only approximately. It sounds like there ought to be whole branches of CS theory to study these possibilities. You know, questions like whether, given some NP-hard optimization problem, it’s at least in P to get within 10% of the global optimum, or whether that’s already NP-hard. Why didn’t anyone think of that before?

  81. IdPnSD Says:

    The Binary Linear Programming (BLP) problem is considered an NP-complete problem. But there is an article that solves BLP using n iterations, where n is the number of variables in the BLP. If the algorithm is proven correct then it will show the non-existence of NP-complete problems. Will someone give some time to check out that paper? I will give the link if you need it.