## My 116-page survey article on P vs. NP: better late than never

For those who just want the survey itself, not the backstory, it’s here. (*Note:* Partly because of the feedback I’ve gotten on this blog, it’s now expanded to 121 pages!)

Two years ago, I learned that John Nash—*that* John Nash—was, together with Michail Rassias, editing a volume about the great open problems in mathematics. And they wanted me to write the chapter about the P versus NP question—a question that Nash himself had come close to raising, in a prescient handwritten letter that he sent to the National Security Agency in 1955.

On the one hand, I knew I didn’t have time for such an undertaking, and am such a terrible procrastinator that, in *both* previous cases where I wrote a book chapter, I singlehandedly delayed the entire volume by months. But on the other hand, John Nash.

So of course I said yes.

What followed was a year in which Michail sent me increasingly panicked emails (and then phone calls) informing me that the whole volume was ready for the printer, *except* for my P vs. NP thing, and is there any chance I’ll have it by the end of the week? Meanwhile, I’m reading yet more papers about Karchmer-Wigderson games, proof complexity, time/space tradeoffs, elusive functions, and small-depth arithmetic circuits. P vs. NP, as it turns out, is now a *big* subject.

And in the middle of it, on May 23, 2015, John Nash and his wife Alicia were tragically killed on the New Jersey Turnpike, on their way back from the airport (Nash had just accepted the Abel Prize in Norway), when their taxi driver slammed into a guardrail.

But while Nash himself sadly wouldn’t be alive to see it, the volume was still going forward. And now we were effectively honoring Nash’s memory, so I *definitely* couldn’t pull out.

So finally, last February, after more months of struggle and delay, I sent Michail what I had, and it duly appeared in the volume Open Problems in Mathematics.

But I knew I wasn’t done: there was still sending my chapter out to experts to solicit their comments. This I did, and massively-helpful feedback started pouring in, creating yet more work for me. The thorniest section, by far, was the one about Geometric Complexity Theory (GCT): the program, initiated by Ketan Mulmuley and carried forward by a dozen or more mathematicians, that seeks to attack P vs. NP and related problems using a fearsome arsenal from algebraic geometry and representation theory. The experts told me, in no uncertain terms, that my section on GCT got things badly wrong—but they didn’t agree with each other about *how* I was wrong. So I set to work trying to make them happy.

And then I got sidetracked with the move to Austin and with other projects, so I set the whole survey aside: a year of sweat and tears down the toilet. Soon after that, Bürgisser, Ikenmeyer, and Panova proved a breakthrough “no-go” theorem, substantially changing the outlook for the GCT program, meaning yet more work for me if and when I ever returned to the survey.

Anyway, today, confined to the house with my sprained ankle, I decided that the perfect was the enemy of the good, and that I should just *finish* the damn survey and put it up on the web, so readers can benefit from it before the march of progress (we can hope!) renders it totally obsolete.

So here it is! All 116 pages, 268 bibliography entries, and 52,000 words.

For your convenience, here’s the abstract:

In 1955, John Nash sent a remarkable letter to the National Security Agency, in which—seeking to build theoretical foundations for cryptography—he all but formulated what today we call the P=?NP problem, considered one of the great open problems of science. Here I survey the status of this problem in 2017, for a broad audience of mathematicians, scientists, and engineers. I offer a personal perspective on what it’s about, why it’s important, why it’s reasonable to conjecture that P≠NP is both true and provable, why proving it is so hard, the landscape of related problems, and crucially, what progress has been made in the last half-century toward solving those problems. The discussion of progress includes diagonalization and circuit lower bounds; the relativization, algebrization, and natural proofs barriers; and the recent works of Ryan Williams and Ketan Mulmuley, which (in different ways) hint at a duality between impossibility proofs and algorithms.

Thanks so much to everyone whose feedback helped improve the survey. If you have additional feedback, feel free to share in the comments section! My plan is to incorporate the *next* round of feedback by the year 2100, if not earlier.

**Update (Jan. 4):** Bill Gasarch writes to tell me that László Babai has posted an announcement scaling back his famous “Graph Isomorphism in Quasipolynomial Time” claim. Specifically, Babai says that, due to an error discovered by Harald Helfgott, his graph isomorphism algorithm actually runs in about 2^{2^{O(√log n)}} time, rather than the originally claimed n^{polylog(n)}. This still beats the best previously-known running time for graph isomorphism (namely, 2^{O(√(n log n))}), and by a large margin, but not quite as large as before.
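For intuition about how these three bounds stack up, here’s a quick numerical sketch. All hidden constants are (unrealistically) taken to be 1 and the polylog exponent is arbitrarily set to 2, so only the asymptotic ordering is meaningful; with those choices, the originally-claimed quasipolynomial bound overtakes the scaled-back one only for enormous n:

```python
import math

# Compare the three graph-isomorphism running-time bounds on the
# scale log2(log2(T(n))) with n = 2^k, so that the astronomically
# large values stay representable as floats.

def loglog_quasipoly(k, c=2):
    # Originally claimed: T = n^((log2 n)^c), so log2 T = k^(c+1)
    return (c + 1) * math.log2(k)

def loglog_corrected(k):
    # Scaled-back claim: T = 2^(2^sqrt(log2 n)), so log2 T = 2^sqrt(k)
    return math.sqrt(k)

def loglog_previous(k):
    # Previous best: T = 2^sqrt(n * log2 n), so log2 T = sqrt(2^k * k)
    return 0.5 * (k + math.log2(k))

k = 4096  # i.e., n = 2^4096: the asymptotic ordering needs a huge n
assert loglog_quasipoly(k) < loglog_corrected(k) < loglog_previous(k)
print(loglog_quasipoly(k), loglog_corrected(k), loglog_previous(k))
# prints: 36.0 64.0 2054.0
```

(For smaller n, say n = 2^100, the comparison between the first two actually flips; that’s the sense in which the corrected bound still wins “by a large margin, but not quite as large as before.”)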

Babai pointedly writes:

I apologize to those who were drawn to my lectures on this subject solely because of the quasipolynomial claim, prematurely magnified on the internet in spite of my disclaimers.

Alas, my own experience has taught me the hard way that, on the Internet, it is do or do not. There is no disclaim.

In any case, I’ve already updated my P vs. NP survey to reflect this new development.

**Another Update (Jan. 10):** For those who missed it, Babai has another update saying that he’s fixed the problem, and the running time of his graph isomorphism algorithm is back to being quasipolynomial.

**Update (Jan. 19):** This moment—the twilight of the Enlightenment, the eve of the return of the human species back to the rule of thugs—seems like as good a time as any to declare my P vs. NP survey officially **done**. I.e., thanks so much to everyone who sent me suggestions for additions and improvements, I’ve implemented pretty much all of them, and I’m not seeking additional suggestions!

Comment #1 January 3rd, 2017 at 9:58 pm

I AM ACTUALLY GOING TO ATTEMPT TO READ THIS.

I did tell you the story about John Nash’s trying to strangle a boyfriend of mine, didn’t I? (It’s the only John Nash story I have.)

Comment #2 January 3rd, 2017 at 10:12 pm

Hi Scott,

Awesome!

Minor typo on page 13: “giving order to what might have seemed like .” The sentence just ends with a word missing. (Or is there some clever joke or something that I’m missing here?)

Comment #3 January 3rd, 2017 at 10:21 pm

amy #1: Glad to hear that, and hope you like it!! And if you don’t, no need to hold back. Whatever you say, it could at worst lead to the second-most notorious disagreement you and I have had here. 😉

No, I don’t think you ever told me your John Nash story—it sounds like a good one.

Comment #4 January 3rd, 2017 at 10:25 pm

Noah #2: Thanks!!! Fixed.

Comment #5 January 3rd, 2017 at 10:45 pm

Scott, I read your contribution to the Nash-Rassias volume several weeks ago and was impressed. Now you tell me you have an improved version! :::grumble grumble:::::

Seriously, I’ll have to read your revised survey ASAP.

Best wishes for the three of you in the new year!

PS Amy, can we hear more about your Nash story?

Comment #6 January 3rd, 2017 at 11:10 pm

I’ve quickly skimmed through Section 6 of the revised survey, especially Section 6.6.6 (heh). Your (new?) comments about classifying problems by their natural symmetry groups are interesting. Can you say more about this?

I’m going to shut up now because I’m at a bar in P’ton (hic) and I’m misspelling too much. Bravo on your survey and the work there!

Comment #7 January 3rd, 2017 at 11:39 pm

Thanks, lots and lots of work!!

I guess there are many, many different NP-complete problems, but to me the Manders/Adleman results from their 1978 paper, “NP-Complete decision problems for binary quadratics” always stood out. Couldn’t find them mentioned.

Cheers!

Comment #8 January 4th, 2017 at 1:13 am

Would you mind posting the latex source for this document? I would like to recompile it with larger font size.

Comment #9 January 4th, 2017 at 1:56 am

I greatly enjoyed my first read of the early copy and I’m certain I will be reading this version again and utilizing its invaluable compilation of references. Thanks for all the hard work!

Comment #10 January 4th, 2017 at 2:03 am

There are three figure captions, but no actual figures. Did these get accidentally omitted during rendering?

Comment #11 January 4th, 2017 at 2:42 am

Dear Scott,

I was impressed by the 116 pages and the contents. However, when I found you saying that a proof of “P!=NP” rules out the stunning consequences of “P=NP”, I wondered: why do you confirm this?!?!

Is this the nature of mathematical thought you are addressing?

What do you think of your readers? Do they believe that a correct proof of the conjecture one way rules out the other?

Did Hilbert die believing this?

Did Godel die believing this?

Turing?

Church?

Please tell us, as I’m continuing to read the first few pages of your paper. I appreciate your precious time in writing it despite the GIANT SHOCK.

Best regards,

Rafee Kamouna.

Comment #12 January 4th, 2017 at 2:57 am

How does one pronounce the “=?” symbol ?

Comment #13 January 4th, 2017 at 4:49 am

killdash #9: Here you go!

Comment #14 January 4th, 2017 at 4:52 am

Taymon #10: You see, this is why I put the paper on my blog before submitting it to the arXiv and ECCC. 🙂 Thanks—I’ll either create those figures tomorrow or else eliminate the references to them.

Comment #15 January 4th, 2017 at 4:57 am

Rafee #11:

when I found you saying that a proof of “P!=NP” rules out the stunning consequences of “P=NP”, I wondered why do you confirm this?!?!

The reason, you see, is that if P≠NP can be proven (in a sound way), then P=NP is not true.

Comment #16 January 4th, 2017 at 4:58 am

Simon #12: I suggest pronouncing it “versus.”

Comment #17 January 4th, 2017 at 5:25 am

Dear Scott,

Any mathematical statement can have both a proof and a disproof. This is a direct consequence of Godel’s incompleteness theorems.

Godel proved that a mathematical system can prove its own consistency if and only if it is inconsistent.

Clearly, a correct proof of P!=NP can never rule out another correct proof of P=NP. This is mathematical logic.

It is a trivially obvious observation that none of the surveys you mentioned had addressed.

Neither you nor any prover can monopolize the problem. If you pull it towards “Not Equal”, then he will push it towards:”Equal”.

Best,

Rafee Kamouna.

Comment #18 January 4th, 2017 at 5:47 am

Why did you put it at the same address where “Is P Versus NP Formally Independent?” was? You’ve broken the links of everyone who referred to that paper. And that includes yourself. The PostScript version is up, so it’s not lost forever, but still…

Comment #19 January 4th, 2017 at 6:08 am

Rafee #17: I’m going to cut this discussion off, given its limited interest to me (and my preference for discussing actual research on P vs NP), but maybe it’s worth explaining for other readers:

Gödel’s Second Incompleteness Theorem (or technically, Rosser’s improvement of it) says that a consistent, powerful enough formal system can’t *prove* its own consistency. It doesn’t say that it can’t *be* consistent—and indeed, we just assumed in the last sentence that it is! The distinction between provability and truth is sort of the whole point with Gödel. And if we’re not willing to grant that our basic methods of mathematical reasoning are sound, then we’d seem to lack a basis for discussing math at all—and indeed, for appealing to Gödel’s Theorem itself, as you did. Certainly the soundness of those methods is not the thing that Gödel’s Theorem challenges, contrary to a widespread and forehead-banging misconception.

But there’s an even more basic issue, what the logicians call *definiteness*. Even if we imagined, let’s say, that Peano arithmetic or ZF set theory were discovered tomorrow to be inconsistent, there would still either be a polynomial-time algorithm for SAT, or else there wouldn’t be one. (Just like there would either be an odd perfect number, or not.) The trouble would “just” be that the methods of mathematical reasoning everyone had accepted up to that point wouldn’t suffice to establish which.

(A fact more people should know: Gödel himself was an arch-Platonist, who believed in the definiteness not only of arithmetical statements like P vs NP—something he correctly regarded as too self-evident to require much discussion—but even of, e.g., the Axiom of Choice and the Continuum Hypothesis, which is further than I would go.)

Comment #20 January 4th, 2017 at 6:19 am

Thank you very much for the great survey! I have recently read it in the book with a great pleasure. As far as I understand, this is a newer version comparing to the book? Could you please advise which sections (except GCT) are significantly updated so it makes sense to re-read them?

Comment #21 January 4th, 2017 at 6:35 am

Anonymous #18: (gasp) sorry about that, and thanks for pointing it out!!

I’d actually noticed a year ago that there was a conflict involving “pnp.pdf”, but then forgot about it when it came time to put up this post. I guess this is the sort of thing that can happen, when you’ve been around long enough to have written *two* survey papers for which the same filename suggests itself. 🙂

Still, since I don’t want to break the link to the new survey, I don’t see any solution except to give the new one the right of way. So I’ve now put the old survey at:

http://www.scottaaronson.com/papers/indep.ps

http://www.scottaaronson.com/papers/indep.pdf

and updated my links accordingly.

Comment #22 January 4th, 2017 at 6:40 am

Bogdan #20: Compared to the version that’s in the book, there’s hardly any new material—mostly just clarification and bugfixes of the old material, and various added references. So sure, reread the GCT section if you want a better and more up-to-date account, but other than that, don’t expect to be blown away by new insights.

Comment #23 January 4th, 2017 at 6:50 am

Footnote 12 “f(n) time”

Comment #24 January 4th, 2017 at 7:05 am

anon #23: Sorry, I’m probably missing something—what’s the mistake there?

Comment #25 January 4th, 2017 at 7:13 am

Taymon #10: Duhhh, the figures were present in the TeX file after all; they just weren’t appearing because I compiled using pdflatex rather than xelatex. They’re there now (but the survey is now *117* pages rather than 116 🙂 ). Thanks again!

Comment #26 January 4th, 2017 at 7:53 am

Scott #21: I’d suggest doing the opposite. Tim Berners-Lee agrees.

Comment #27 January 4th, 2017 at 8:07 am

Anonymous #26: The trouble is that, by the time I was reminded about the issue, the link to pnp.pdf was already all over social media, in reference to the *new* survey. In other words, because of my earlier mistake, *any* choice that I make requires breaking some links. In such a case, it seems obvious that the better option is to break a few links to a 14-year-old paper, rather than to the new, vastly bigger, and more “mature” work.

Comment #28 January 4th, 2017 at 8:25 am

I’ve only briefly skimmed the paper but I’ve found it extremely enlightening, thank you.

I was a bit confused about some of the motivation for section 6.2.1. While it’s true that Haken showed that resolution has exponential lower bounds for the pigeon hole problems, it’s been known for a while that extended resolution techniques overcome this to get back a polynomial sized refutation for the pigeon hole problems.

I’m just investigating this now so forgive me if I’ve got this wrong, but from the survey you link to by Beame and Pitassi, it looks like people are having difficulty proving any superpolynomial lower bounds for Frege systems. Doesn’t this suggest, at least empirically, that coNP problems might have short proofs?

Comment #29 January 4th, 2017 at 8:44 am

Abe #29: Yes, of course more powerful proof systems can give polynomial-size proofs for PHP. I think a major reason to care about resolution, besides its simplicity and its connection to DPLL algorithms, etc., is just that it’s a good laboratory for lower bound techniques. I.e., the same reason why we still study constant-depth or monotone circuits, even though we know that more general circuits are stronger.

Also, it’s usually a bad idea to guess the existence of an algorithm based merely on the difficulty of ruling one out (with Grover’s algorithm and a few other things standing as exceptions that prove the rule). My remarks about the “asymmetry of ignorance,” in Section 3, directly apply here.

Comment #30 January 4th, 2017 at 9:11 am

It’s a bit tangential, but let me use this quote as an opportunity to rant about something.

The linear speed-up theorem is nonsense.

I’m not saying it’s not *true*. It’s a perfectly valid statement, of course. I just think the name is misleading, and I disagree with the usually-drawn ‘philosophical’ implication that with better hardware computations are faster (I recall Papadimitriou’s textbook as saying something like that). When you look at the proof of the theorem, you see that the tape is converted to one in which each cell contains information about the state of several neighbouring cells of the old tape, and then the new machine simulates the old one. You end up with a machine which may perform the computation in fewer steps (since in each step the machine can access more information at once, and perform several ‘old’ steps at once), but in each ‘new’ step more work is performed, because the new machine has to maintain coherence of the information on the tape; on physical hardware, this need not represent a speed-up at all. We’ve just traded the number of steps for the complexity of the machine itself.

So in my opinion the linear speed-up theorem doesn’t say anything fundamental about the power of improved hardware, but rather about the inadequacy of our definitions (and the tape compression theorem even more so). I’d ditch the usual definition of time complexity for (number of steps) × (1 + log₂(size of tape alphabet) + log₂(number of machine states)). This is a meaningful number: it’s the number of bits you’d need to write down a ‘computation record’ which you could then verify for correctness (i.e. for being coherent with the machine’s definition). I don’t know that much about the physics of computing hardware, but I conjecture it could also be used as a coarse estimate of the energy used up by the computation (I’m assuming flipping a bit in a register/on the tape and moving the head take constant energy). I think ‘energy complexity’ could be a good name for it (especially since it gets your mind closer to the answer to the ‘Turing machine travelling at near-lightspeed’ problem).

Similarly, space complexity could be redefined as (number of cells) × log₂(size of tape alphabet) (maybe also plus log₂(number of states)?), which corresponds to the number of bits of memory you need to perform a given computation.

On one hand, this could complicate some things, but on the other, these definitions make growth rates somewhat less relevant, and may give us something closer to representation-independent measures of complexity. I regret that I haven’t really had time to investigate them more deeply.
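To make the proposal concrete, here’s a minimal Python sketch of the two suggested measures. The machine sizes below are hypothetical figures invented purely for illustration (in the standard speed-up construction, the state count does blow up with the packing factor, but the exact numbers here are made up):

```python
import math

def energy_complexity(steps, alphabet_size, num_states):
    # The proposed measure: bits needed to write down a verifiable
    # "computation record" of the run.
    return steps * (1 + math.log2(alphabet_size) + math.log2(num_states))

def space_complexity(cells, alphabet_size):
    # The proposed redefinition of space: bits of memory actually used.
    return cells * math.log2(alphabet_size)

# A hypothetical machine over {0,1} with 4 states, versus a "sped-up"
# version packing 8 old cells per new cell (alphabet 2^8), paying for
# it with a much larger state set (2^24 -- an invented figure).
slow = energy_complexity(steps=800, alphabet_size=2, num_states=4)
fast = energy_complexity(steps=100, alphabet_size=2**8, num_states=2**24)
print(slow, fast)  # prints: 3200.0 3300.0
```

With these made-up numbers the 8× step reduction doesn’t pay off; whether the trade comes out this way depends entirely on the invented state count. The point is only that the measure charges for machine complexity, which the plain step-count definition ignores.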

Comment #31 January 4th, 2017 at 9:11 am

Scott, in recent weeks I have so thoroughly agreed with your *Shtetl Optimized* essays and comments — both scientific and politic — that it’s getting to be darn hard (for me at least) to write a comment that has any reasonable probability of keeping you, or any *Shtetl Optimized* readers, creatively and enjoyably (as I hope) awake at night.

With a view, therefore, to inducing a creative and enjoyable state of wakefulness among *Shtetl Optimized* readers, here is a suggested minimal extension/emendation of your survey’s “Section 3.1 Independent of Set Theory?”

As further sleep-obliterating reading, I’d like to echo Lance Fortnow and Bill Gasarch in recommending Gideon Lewis-Kraus’s recent *New York Times Magazine* article “The Great A.I. Awakening” (of December 14, 2016). The Chaitin-esque — even Mysterian? — implication of the research that Lewis-Kraus describes is that the universe of neural-net algorithms is seemingly too vast for rationalist explanations to encompass it.

As further reading, the (admirable) weblog *Off the Convex Path*, in a recent post by Sanjeev Arora and Tengyu Ma, “Back-propagation, an introduction” (December 20, 2016), surveys the rapidly developing class of sleep-denying answers to tough questions like “How could we program a neural net, absent rational understanding of its workings?”

Further research directions are suggested by Boaz Barak and David Steurer’s on-line lectures / course-notes / book-in-progress “Proofs, beliefs, and algorithms through the lens of sum-of-squares” — in particular their first lecture, “Introduction,” helps us to appreciate how thrillingly math-positive the post-rationalist “lens of Chaitin” can be. As Barak and Steurer put it:

The Barak / Steurer perspective helps us appreciate the Borges quote whose complexity-theoretic implications are so engagingly described by Gideon Lewis-Kraus (op. cit.).

That neural nets translate Borges’ aphorism so naturally as “You are not what you write, but what you have read” has come as a sleep-denyingly Chaitin-esque algorithmic surprise to many folks (including me).

More broadly, rigorously proving that P can (or cannot) be separated from NP by any member(s) of the enormous and (as it seems) astoundingly capable class of neural-net algorithms — algorithms of whose internal workings we presently have little or no rational appreciation, and whose workings we may perhaps *never* rationally grasp in any useful sense (even in principle) — is a complexity-theory challenge that (needless to say) entirely escapes all of the proof techniques that we presently possess.

It’s fun, isn’t it? So perhaps we shouldn’t complain, when this flood-tide of new ideas and results keeps us awake at night! 🙂

Comment #32 January 4th, 2017 at 9:31 am

Anonymous #31: I completely agree with your comments about the speedup theorem.

There’s also the physics perspective: namely, if you kept speeding up your computation by larger and larger constant factors, eventually you’d exceed the Schwarzschild bound and your computer would collapse to a black hole. 🙂

Comment #33 January 4th, 2017 at 9:33 am

Scott #24. Isn’t it supposed to be “f(n) space”?

Comment #34 January 4th, 2017 at 9:50 am

anon #34: Duhhhh, yes! Thanks very much; fixed.

Comment #35 January 4th, 2017 at 9:54 am

OT, but in Italian, “yellow books” are thriller/horror stories and, in Chinese, erotica. Maybe Springer should branch out…

Comment #36 January 4th, 2017 at 10:19 am

Page 17: “a proof of P = NP would mean there were no NP-intermediate problems, since every NP problem would then be both NP-complete and in P”.

If P = NP, not *every* NP problem would be NP-complete: the two trivial decision problems (corresponding to the universal and empty languages) would not be NP-complete.

(I think this is a standard nit-pick and therefore perhaps not worth correcting — anyone who knows enough to spot the nit-pick also knows enough not to worry about it.)

Comment #37 January 4th, 2017 at 10:22 am

Springer should branch out from yellow books so that we can get random access to the books on a bookshelf.

Some more minor things:

* There’s a stray period before P in footnote 25.

* In the discussion after Conjecture 27, I think you mean to say that L’ is NP-hard, not that L’ is in NP.

* “and finding short nonzero vectors in lattices” isn’t quite true, unless you’re willing to base security on assumed quantum hardness. We only know how to base public-key cryptography on the DECISION problem of approximating the length of a shortest non-zero vector in a lattice. Probably the easiest way to make the statement true is to write “finding planted short nonzero vectors in lattices.” This is no harder than approximating the length of a shortest nonzero vector, so we can base crypto on the hardness of this problem, though maybe it doesn’t sound that impressive.

* “reasonable doubt would still remain” -> “reasonable doubt could still remain” ? It seems like the current wording implies knowledge of a potential proof. For all we know (or at least for all I know), P =/= NP could be proven specifically by showing that factoring is hard (maybe via algebraic circuit complexity or something…).

Comment #38 January 4th, 2017 at 10:24 am

“. . . on the Internet, it is do or do not. There is no disclaim.”

I LOVE this.

Comment #39 January 4th, 2017 at 10:30 am

Gareth #37: In my survey, I avoid even that nitpick, because I explicitly state that I’m talking about NP-completeness under polynomial-time Turing reductions, not just many-one reductions.

Comment #40 January 4th, 2017 at 11:43 am

If you are concerned about the information being rapidly outdated, why don’t you just post the plain tex on github and let anyone edit it?

Comment #41 January 4th, 2017 at 12:47 pm

Nick #41: I tried that with the Complexity Zoo—I made it into a wiki. But the result of the experiment was that hardly anyone updates it.

Now with a paper, much like with a book (with a few exceptions like dictionaries and encyclopedias), it’s important to maintain clarity about authorship—otherwise people could add all sorts of claims that I don’t agree with and implicitly attribute them to me.

Still, if (let’s say) ten years from now, people who I trusted as experts offered to update my survey, I’d certainly be all ears!

In the meantime, I did post a link above to the TeX source, in case you want to change the formatting, add your own notes, or whatever (but please get permission from me before circulating any modified version of the article).

Comment #42 January 4th, 2017 at 1:48 pm

Wow! I’ve read this (mostly — skimming a bit) cover to cover, and it’s great. A really modern overview of our understanding, and especially the progress that’s been made over the past 25 years.

Comment #43 January 4th, 2017 at 3:56 pm

On page 91 you write: “given that we can prove P ≠ EXP but can’t even prove NEXP is not in TC0, it seems conceivable that uniformity could help in proving P ≠ NP.”

Isn’t TC0 ⊆ P known, and EXP ⊆ NEXP known? So shouldn’t EXP ⊄ P imply NEXP is not in TC0? Sorry if I’m confused.

Comment #44 January 4th, 2017 at 4:19 pm

Begin #44: Sorry, I was talking about *nonuniform* TC^{0}, which certainly has problems outside P (as all nonuniform classes do)! Thanks; will clarify in the survey.

Comment #45 January 4th, 2017 at 4:19 pm

Noah #38: Thanks so much for the extremely helpful corrections! I’ve now implemented all of them and acknowledged you.

Comment #46 January 4th, 2017 at 5:08 pm

Minor thing: On page 23, third sentence (the one starting with “But this means that L \in EXP …”) at the end I think you mean run the algorithm for L’ not L.

Comment #47 January 4th, 2017 at 6:05 pm

Random Oracle #47: Thanks, fixed!

Comment #48 January 4th, 2017 at 6:19 pm

Small nitpick on page 56: “Fundamental Theorem of Algebra: a nonzero, degree-d univariate polynomial has at most d roots”. The fundamental theorem is the fact that nonconstant complex polynomials have at least one root. The “at most d roots” statement follows from the factor theorem (which isn’t about complex numbers and is easier to prove).

Comment #49 January 4th, 2017 at 6:28 pm

g #49: Thanks!! Fixed.

Comment #50 January 4th, 2017 at 7:51 pm

Will it be possible to compile an ebook friendly version?

Comment #51 January 4th, 2017 at 8:15 pm

While we’re nitpicking that particular topic: I think that in most (all?) spots where you claim to use the Fundamental Theorem of Algebra (or now the factor theorem), you’re really using the Schwartz–Zippel lemma.

Comment #52 January 4th, 2017 at 8:18 pm

Marijn #51: How do I do that?

Comment #53 January 4th, 2017 at 8:20 pm

Noah #52: Yeah, but it’s the degenerate special case of Schwartz-Zippel that just says that a nonzero degree-d univariate polynomial has at most d roots.
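For readers following along, that degenerate case is already enough for randomized univariate identity testing; here’s a small self-contained sketch (the prime, coefficients, and trial count are arbitrary illustrative choices, not anything from the survey):

```python
import random

def poly_eval(coeffs, x, p):
    # Horner evaluation of coeffs[0] + coeffs[1]*x + ... modulo a prime p
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def probably_zero(coeffs, p, trials=20):
    # A nonzero polynomial of degree <= d has at most d roots mod p,
    # so a random evaluation point in [0, p) exposes a nonzero
    # polynomial with probability >= 1 - d/p per trial.
    # "False" answers are always correct; "True" can err with
    # probability at most (d/p)^trials.
    return all(poly_eval(coeffs, random.randrange(p), p) == 0
               for _ in range(trials))

p = 10**9 + 7                            # a prime vastly larger than the degree
assert not probably_zero([5, 0, 3], p)   # 5 + 3x^2: nonzero
assert probably_zero([0, 0, 0], p)       # the zero polynomial
```

The multivariate Schwartz-Zippel lemma generalizes exactly this trick, with d/p replaced by (total degree)/(size of the evaluation set).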

Comment #54 January 4th, 2017 at 9:07 pm

But wait Scott – hold the presses – no need for a review! – I just got an advert for a book with the highlight:

The book offers a new proof of the equality of the complexity classes “P” and “NP”

The book is:

Advances in Combinatorial Optimization

Linear Programming Formulations of the Traveling Salesman and Other Hard Combinatorial Optimization Problems

By (author): Moustapha Diaby (University of Connecticut, USA), Mark H Karwan (University at Buffalo, The State University of New York, USA)

Sold out on Amazon.

Bummer for all of your wasted time – I hope your ankle is better

anyway.

Dan

Comment #55 January 4th, 2017 at 9:42 pm

Regarding Proposition 38 (Shannon [222]): you might also cite Kolmogorov. Compressing the $2^n$ outputs of the function must take nearly $2^n$ bits with high probability.

Regarding Babai’s GI algo: I came up with a faster subroutine than “When m drops below a threshold l(n) that is polylogarithmic in n, we perform brute force enumeration over the symmetric group S(Γ) at a multiplicative cost of l(n)!.” See https://oeis.org/A186202. Ironically, Babai wrote the rejection letter when I submitted it for publication.

Comment #56 January 4th, 2017 at 10:03 pm

Chad #56: So what exactly are you claiming—that you can improve the complexity of Babai’s algorithm from exp(2^{√(log n)}) to something smaller? If so (and given that you seem to want to make it public), providing links to both your submission and the rejection letter you received could help readers evaluate the situation for themselves.

Comment #57 January 4th, 2017 at 11:22 pm

The proof sketch for P^B != NP^B defines a language L which references B’s bit representation.

I thought that, in relativization arguments, the oracles were treated as blackboxes, but i guess that’s not true?

Is there also a proof of P^B != NP^B where B is a blackbox? Would a P != NP proof that relativizes only for blackbox oracles also be automatically invalidated by a relativization argument?

Comment #58 January 4th, 2017 at 11:40 pm

Never mind, I guess I misread “the first 2^n bits of B contain a run of n consecutive 1’s”. It’s the first 2^n values returned by B that contain n consecutive 1’s?

Comment #59 January 5th, 2017 at 12:48 am

About the broken link business: I hereby suggest an awkward workaround: in the new thing, put a little note somewhere on the first page saying “if you’re looking for the old thing, go to this URL”. Or vice versa: break links to the new thing, and put a note on the front page of the old thing. Manual redirects!

Comment #60 January 5th, 2017 at 2:20 am

I don’t think you should consider the independence of P=NP to be too farfetched. I can sympathize with assigning it a 10% probability, but not a 1% probability.

Some thoughts:

1. You talk about existing independence results, and rightly point out that independence of P=NP would be completely unprecedented. The critical problem with this argument is that “we can prove that X is independent” is a *radically* stronger claim than “X is independent.” Indeed, proving that X is independent normally requires it to sit in some “sweet spot” between two axiom systems, a narrow and somewhat strange band of results. Yet there may be a vast domain of statements beyond the reach of any reasonable axioms.

2. A priori, it seems reasonable for statements to be “true just because,” especially when they contain a quantifier like “for every input x…” That is to say, if we imagine simply generating a massive combinatorial structure (the natural numbers), many conjectures might turn out to be true by chance rather than because they were obligatory. Actually I would expect this to be a very generic outcome. I think it is quite magical that it seems to happen so rarely, that we can just make up conjectures and then so reliably settle them—for example, it is mind blowing that we can prove Fermat’s last theorem (rather than it just happening to be true, with only probabilistic evidence in favor). That said:

(a) I am hesitant to generalize too confidently from our many successes at proving challenging conjectures. We never get to learn that a conjecture can’t be proved (setting aside the provably independent cases, which I argued above ought to be rare).

(b) If there were going to be exceptions to the surprising regularity of “conjectures are provable,” computational lower bounds seem like a relatively natural place for them to be. The absurd difficulty of making any headway seems like it constitutes evidence, if anything should. At the meta level, I’m not aware of any area as broad/natural as “computational lower bounds” where the theoretical community has been so thoroughly stuck, without eventually coming unstuck.

3. The fact that there are algebraic oracles relative to which P = NP makes it feel quite plausible that we’d have P != NP “just because.” Namely, if that oracle happened to be sitting somewhere in the natural numbers, we’d end up with P = NP. Such a structure seems to be astronomically implausible (in some sense it’s infinitely unlikely). But I can easily imagine having no recourse other than to say that it seems implausible.

Comment #61 January 5th, 2017 at 3:51 am

Marijn #50, Scott #52: It is apparently a non-trivial process to convert from math tex to epub and/or mobi files. The pdf version won’t convert easily, either. Libre software like Calibre, Pandoc, etc can be used to facilitate the conversion process to epub, but some or a lot of human processing seems necessary in most cases. Conversion to html and mathjax looks like the most promising first step…

I’m not highly experienced with math conversions, but apparently arxiv.org does this kind of conversion on a large scale with some success. You may find a true expert there, somewhere!

Comment #62 January 5th, 2017 at 6:09 am

Any thoughts on this Scott?

https://arxiv.org/abs/1701.01107

Comment #63 January 5th, 2017 at 6:40 am

Chad #56: Do you mean Babai rejected your “The exact classical query complexity of the hidden subgroup detection problem” paper in 2008, and it never appeared in a peer-reviewed journal? Or do you mean he rejected your suggestion to include an improvement of his algorithm based on your paper from 2008? You say yourself that your approach is significantly faster than n! but still exponential in n, so why should Babai include it in his algorithm?

Comment #64 January 5th, 2017 at 8:07 am

Can someone clarify whether the issue in Babai’s algorithm is purely in the analysis of the algorithm (and thus it might be still quasipolynomial but Babai’s proof doesn’t prove that much) or whether the issue renders the algorithm itself actually not quasipolynomial?

Comment #65 January 5th, 2017 at 8:18 am

Job #58, #59: Thanks, I’ll clarify in the paper. Yes, “the first 2^n bits of B” just means “the first 2^n bits returned by B.” This oracle, like all oracles, is black-box.

Comment #66 January 5th, 2017 at 8:31 am

Paul #61: Thanks! In the meantime, Harvey Friedman also reminded me of a couple important independence results that I’d left out:

(i) the Well Quasi-Ordering Theorem from graph minors theory, which is provable in countable set theory (i.e., a small fragment of ZF) but not in PA, and

(ii) statements about measure theory, metric spaces, etc. (e.g. Borel Determinacy) that either require pretty much the full strength of ZF to prove, or actually require large-cardinal axioms.

On reflection, I think you’re right. I stand by the arguments about why a proof that P vs. NP is independent would be such an unprecedented development. But regarding independence itself, how about if I tone down the word “farfetched” to “unlikely”? 🙂

Comment #67 January 5th, 2017 at 8:37 am

Jonas #63: I haven’t read it carefully yet, but it looks interesting and is on my stack! I could easily see it taking decades to sort out the speculations about complexity that Lenny and his friends have been throwing out for the last few years. 🙂

Comment #68 January 5th, 2017 at 9:18 am

I too greatly enjoyed Paul’s comment (#60), in particular the observation:

This observation provokes a smile, when we reflect that (2.b) is itself a mathematically natural conjecture, which by “the surprising efficacy of mathematics in mathematics”, constitutes suggestive evidence that the independence of P vs. NP itself ought to be provable.

In a nutshell, the belief that the independence of P vs. NP is true and provable is itself — by Paul’s “2b” principle — comparably well-founded to the belief that the separability of P and NP is true and provable. Regarding independence itself, don’t these considerations naturally carry us from “farfetched” \(\to\) “unlikely” \(\to\) “plausible” (or even further)? 🙂

Comment #69 January 5th, 2017 at 10:15 am

John Sidles #69: In Bill Gasarch’s 2012 P vs. NP poll, you confidently predicted that P vs. NP would be proven undecidable by 2014 (“by a graduate student”). Does the failure of that prediction do anything to temper your confidence in making future predictions in this subject? 🙂

Comment #70 January 5th, 2017 at 12:10 pm

It would be very interesting to know what undergrad/grad level math classes you think would be beneficial to an aspiring theory CS student. Complex analysis yes, but algebraic geometry is not such an obvious choice for a CS major.

Comment #71 January 5th, 2017 at 12:23 pm

Hi Scott, referring to “1.2.2 The Polynomial-Time Objection”, could you expand on why closure of the complexity class under multiplication is important? There are people who are happier with quasi-linear bounds than polynomial ones…

Comment #72 January 5th, 2017 at 12:25 pm

Thanks for taking the time and for your usual candour about your difficulties with the writing process. It’s good to know that one may be prone to procrastination and still produce 100-page documents!

A minor typo, during my first skim-reading: page 71, just before Theorem 69, $\tau(4)$ should be 4 if I’ve understood correctly.

Comment #73 January 5th, 2017 at 12:29 pm

To answer Joshua Zelinsky’s question:

If you take Babai’s algorithm literally as it stands in the version in the arxiv, then it runs in exponential time (or at least you cannot be assured that it won’t take exponential time, and it seems reasonable to believe it actually would, for at least some inputs).

In order to partially get around the problem that was found, Babai changes the internal parameters in a procedure and introduces a simple, but rather expensive, iteration. (He managed to do this overnight.) Then the bound his methods give is subexponential, but not quasipolynomial, and it seems plausible that the running time of the modified algorithm would actually be more than quasipolynomial for some inputs.

Comment #74 January 5th, 2017 at 12:30 pm

Incidentally, here is my statement from yesterday:

https://valuevar.wordpress.com/2017/01/04/graph-isomorphism-in-subexponential-time/

Comment #75 January 5th, 2017 at 1:48 pm

Very nice survey! I’m particularly excited to read up more about the GCT approach. Meanwhile, here are a couple of minor comments about the 2nd section.

In Proposition 4, “for all x in L” should be moved to the middle of the sentence because for each x we can find a (possibly different) witness w. It is too much to ask that the same w works for all x simultaneously! It should also be explained what n is (or just write |x|).

On page 12, when you define the running time of a nondeterministic Turing machine, would it make any difference if you said “minimum number of steps” rather than maximum? Thinking in terms of witnesses, can’t we always assume that we are given the most helpful witness rather than one that is the most cumbersome to verify?

On page 15, you say that P != NP is a Pi_2 rather than a Sigma_3 sentence. But don’t you need to unwind also the definition of PT, the set of poly-time Turing machines? That might involve extra quantifiers (as in: there exists a polynomial p such that for all x the machine halts in at most p(|x|) steps).

Comment #76 January 5th, 2017 at 2:29 pm

venky #71:

Definitely linear algebra, abstract algebra, and probability theory. Also potentially helpful: logic, combinatorics, real and complex analysis, representation theory, and number theory.

Since we were discussing Laci Babai, let me quote one of his favorite aphorisms: “the only math I never used is math I never learned.”

Comment #77 January 5th, 2017 at 2:35 pm

Michael #72: Closure under multiplication is nice because then, if you have an f(n)-time subroutine that you want to call g(n) times, your overall procedure will be efficient as long as f and g are efficient themselves.

Yes, the quasilinear functions are at least closed under addition and composition, though not multiplication, and they also play an important role in complexity theory, although not as important as polynomials. (E.g., it was a big deal when people first achieved quasilinear-time integer multiplication algorithms, or much later, quasilinear-size PCPs.) There was even a serious attempt, by Gurevich and Shelah, to rebuild complexity theory around the notion of quasilinear time, although I haven’t heard of much that it’s led to.
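The closure argument can be spelled out explicitly (my own rendering, not a quote from the survey):

```latex
% Polynomial bounds compose: if the subroutine and the number of calls
% are each polynomially bounded, so is the total running time.
T_{\text{sub}}(n) \le c\,n^{a}
\quad\text{and}\quad
g(n) \le d\,n^{b}
\;\Longrightarrow\;
T_{\text{total}}(n) \le g(n)\cdot T_{\text{sub}}(n) \le cd\,n^{a+b}.

% Quasilinear bounds lack this closure:
(c_{1}\,n\log^{k_{1}} n)\cdot(c_{2}\,n\log^{k_{2}} n)
  = c_{1}c_{2}\,n^{2}\log^{k_{1}+k_{2}} n,
% which is still polynomial, but no longer of the quasilinear
% form n\log^{k} n.
```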

Comment #78 January 5th, 2017 at 2:47 pm

Minor point, but regarding the comment by Nick #40 and Scott #41: putting the plain .tex source (and compiled pdf) on GitHub wouldn’t exactly be like making it a wiki. You would have it under some url like github.com/scottaaronson/pnp-survey (or whatever) so that authorship was clear, but others would be able to post “Issues”, or offer edits as “Pull Requests” (that would be up to you to accept or not). Anyone could also see a history of “commits” or incremental changes made.

Not saying this is worth doing, just pointing out that it’s not exactly like a wiki. I think people are already pointing out issues in comments here (which is more convenient anyway), and there’s not much need for collaboration, so probably not worth doing.

Comment #79 January 5th, 2017 at 2:50 pm

svat #79: OK, thanks for clarifying. I think I prefer to have readers submit their “Issues” and “Pull Requests” as comments right here, or as emails to me. 🙂

Comment #80 January 5th, 2017 at 2:55 pm

Vijay #73: No, τ(4) is 2, because it’s a circuit, not a formula, so you’re allowed to reuse numbers you’ve already generated. Thus, the first operation gives 1+1=2, and the second gives 2+2=4 (or 2×2=4).

Comment #81 January 5th, 2017 at 3:11 pm

Māris #76: Thanks!

In defining NP, yes, I suppose you’re right, you could also take the minimum time used along any accepting path. Interesting observation, which I’ll add to the survey!

To encode the notion of “polynomial-time Turing machine,” you say something like:

P≠NP iff for all Turing machines M and polynomials p, there exists an input x such that either M fails to decide 3SAT on x, or else M(x) runs for more than p(|x|) steps.

I’ll edit the survey to make this more explicit.

Comment #82 January 5th, 2017 at 4:08 pm

Minor typo, p. 50, second paragraph, second line:

> First, the circuit needed to built

should be

> First, the circuit needed to be built

Comment #83 January 5th, 2017 at 6:02 pm

Arun #83: Thanks!! Fixed.

Comment #84 January 5th, 2017 at 6:06 pm

I’ve skimmed sections of your paper and am excited to sit down with it and read it deeply at some point. Unfortunately due to work obligations this isn’t likely to happen for a week. Well done writing it!

Comment #85 January 5th, 2017 at 8:20 pm

Scott #82:

Are you sure? And in case there is no accepting path, the time is …? (I just remember that it is sometimes explicitly mentioned that one cannot take the minimum time, but I forgot the explicit reason given for that claim.)

Comment #86 January 5th, 2017 at 9:05 pm

Great article! A few questions:

1) Did you coin the phrase ‘ironic complexity theory’ (ICT) right here in this article? If so, do you expect it to catch on?

2) You motivate ICT with the simple (but still new to me!) observation that P != NP follows from the existence of an algorithm for efficiently verifying exponential-time computations. Was this example (or one like it) the actual starting motivation for Williams or Mulmuley?

3) In footnote 75, you say that “GCT played no role in the proof of NEXP ⊄ ACC”. But GCT and ICT ultimately pursue the same strategy, right? Is this a coincidence or what?

4) (Comment, not a question) In the conclusion, you say “if a hands-off, aprioristic approach sufficed for P != NP, then why did it apparently not suffice for all the weaker separations that we’ve surveyed here?” This sounds to me like a circular argument, since someone who thought they could dive right into P/NP would presumably also believe they could dive right into the other separations (I don’t find that idea plausible, but still). Maybe it would be better to cast it in non-counterfactual form, e.g. “if a hands-off, aprioristic approach sufficeS…, then why DOES it apparently not suffice…”.

Comment #87 January 5th, 2017 at 9:29 pm

gentzen #86: Well, we could define NP as the set of languages L for which there exists a nondeterministic Turing machine M, and a polynomial p, such that for all inputs x∈L, there exists a path that causes M(x) to accept in at most p(|x|) steps; whereas for no input x∉L is there a path that causes M(x) to accept in any number of steps. Implicitly, then, we’ve defined the running time with respect to the “shortest” accepting path. But this clearly gives no more than NP, because we could always simulate M by cutting off all the paths after at most p(|x|) steps.

In other words, the point is that by imposing a specific polynomial upper bound on the length of the shortest accepting path, we’ve implicitly imposed the same bound on all paths, since we’ve promised that if you go that far and you haven’t yet found an accepting path, then there isn’t one.

Comment #88 January 5th, 2017 at 9:46 pm

Nick #87: Thanks!

(1) Yes, a Google search for “ironic complexity theory” turns up only my survey and extracts from it, so I suppose I did coin that term, and I hope it catches on! But I was certainly inspired by my friend Rahul Santhanam’s “ironic complicity”.

(2) No, that NP=EXP implies P≠NP is just a decades-old folklore observation; I doubt either Williams or Mulmuley drew inspiration from it. I started my “ironic complexity” section with that only for pedagogical reasons.

(3) The fact that Williams and GCT both use “ironic complexity” might be pure coincidence, or it might be convergent evolution—i.e., lower bounds via upper bounds being just objectively a good idea, and something you’ll inevitably hit on if you spend long enough thinking about P vs. NP! 🙂

Beyond that one similarity, the two approaches are radically different—e.g., GCT uses a terrifying amount of “heavy-duty math” (geometric invariant theory, representation theory, etc.), but no hierarchy theorems or pretty much any other “traditional complexity theory.” Whereas the Williams approach uses basically no “heavy-duty math,” but a Cirque-du-Soleil level of complexity-theoretic acrobatics.

(4) In that case, an extremely good way for those would-be P≠NP provers to convince the experts that they were on the right track would be to first show how to use their techniques to reprove all the known lower bounds that are strictly weaker than P≠NP! Yet strangely, they never seem to do that…

Comment #89 January 5th, 2017 at 11:12 pm

What if the set of oracles for which P=NP is really large and diverse?

In a sense, the P != NP proof will need to know exactly which oracles are in that set, because it has to avoid them?

As an analogy, if oracles were numbers, and prime numbers were oracles relative to which P=NP, then a proof separating P and NP would have to describe, even if only implicitly, an algorithm for primality testing?

Comment #90 January 6th, 2017 at 12:55 am

Job #90:

In a sense, the P != NP proof will need to know exactly which oracles are in that set, because it has to avoid them?

Not really. The proof has to fail (or not even make sense) for such oracles, but it might also fail or not make sense for additional oracles—even oracles that make P≠NP. And then, of course, it has to work for the world with no oracle.

As I explain in the survey, while this sort of requirement rules out many of the “obvious” proof ideas, we know that it’s not an insuperable obstacle, because IP=PSPACE and some other known results already get around it.

Comment #91 January 6th, 2017 at 4:35 am

As it seems to me, Scott’s and my views regarding complexity theory have evolved to be substantially compatible (even nearly identical), differing nowadays chiefly in emphasis. Consider for example our respective evolutions from our 2012 starting-point:

Here my “Maimonides” hyperbole was (by deliberate intent) a parallel construction to Scott’s “Venusian marmoset” hyperbole. Had these two remarks been posted side-by-side in 2012 (rather than separated by 57 intermediate answers), Scott’s and my parallel hyperboles might have been more evident! 🙂

Removing the hyperbole yields complexity-theoretic considerations that have been meiotically addressed in MathOverflow questions/answers such as “For which Millennium Problems does undecidable \(\to\) true?”, and TCS StackExchange questions such as “Are runtime bounds in P decidable?” (answer: no), “Do the undecidable attributes of P pose an obstruction to deciding P versus NP?” (answer: maybe), and most recently “Does P contain languages whose existence is independent of PA or ZFC?”.

These four StackExchange discussions have been well-rated, instructive, and enjoyable. In particular, the complexity classes associated to the fourth-and-final question — a question that has since been promoted to a TCS wiki — were deliberately constructed to provide a concrete, mathematically natural, Hartmanis-inspired parallel to Section \(5\) of Scott’s survey, “Strengthenings of the \(\textsf{P} \ne \textsf{NP}\) Conjecture” — namely, a Section \(5\frac{1}{2}\), “Weakenings of the \(\textsf{P} \ne \textsf{NP}\) Conjecture” (as it might be parallel-titled).

Perhaps Shtetl Optimized readers can propose further natural weakenings of the \(\textsf{P} \ne \textsf{NP}\) postulate? As motivation, doesn’t the not-inconsiderable cohort of complexity-theory researchers who contemplate that \(\textsf{P} \overset{?}{=} \textsf{NP}\) may be undecidable find in this conviction ample reason to be comparably interested in the constructions of Section \(5\frac{1}{2}\) as in the constructions of Section \(5\)? Are any strong reasons known to regard weakenings of the \(\textsf{P} \ne \textsf{NP}\) postulate as less mathematically natural than strengthenings?

Here I will say too that I owe, equally to Scott and to Bill GASARCH, the insight that weblogs like Shtetl Optimized and Computational Complexity serve as a nurturing-ground for good StackExchange questions. Thank you both very much! 🙂

As for mathematical “Yellow Books”, Scott and I share a long-standing interest in them. In particular, Scott’s 2012 Yellow Book survey-comments are paralleled in his 2016 survey article — and even considerably extended! — as follows:

Hmmm … it’s not hard to identify GAGA-esque passages in the 2016 survey, which indicate (to me at least) that in the four years since 2012, Scott has been reading plenty of Yellow Books and/or talking with plenty of Yellow Book authors. After all, what Huck Finn says of Pilgrim’s Progress surely is true too of Yellow Books: “The statements is interesting, but tough.” Tough enough to keep an entire generation of researchers (definitely including me) enjoyably awake at night! 🙂

Fortunately Yellow Books are replete with inspirational passages, like mathematician André Weil’s 1940 letter to his philosopher-sister Simone Weil:

Martin Krieger’s article “A 1940 Letter of André Weil on Analogy in Mathematics” (Notices of the AMS, 2005) discusses Weil’s views in depth. The phrase “diverse models for computational processes” is my own interpolation; for me this phrase capsulizes the GAGA-centric worldview of “Team Yellow Books”, as contrasted with the Turing-centric worldview of “Team Caffeinated Alien Reductions”.

More recently, Vladimir Voevodsky’s Heidelberg lecture “Unimath” (of September 22, 2016, the talk-slides being available on Voevodsky’s web page) provides a modern counterpoint to Weil’s worldview, particularly in answer to the natural questions

Please let me say that I agree entirely with Scott’s Voevodsky-compatible view (as I appreciate their respective writings), namely that “Team Yellow Books” and “Team Caffeinated Alien Reductions” are (1) formally equivalent, yet (2) very different in practice, and (3) pretty evenly matched at present. As for their relative future prospects, here as generally in the STEAM enterprise, “It were not best that we should all think alike; it is difference of opinion that makes horse-races.” Surely in the theory of computation, the race between the Turing-horse and the GAGA-horse holds universal interest and elicits passionate opinions. Which is good! 🙂

In summary, plenty of Shtetl Optimized readers have reason to be grateful that Scott’s Shtetl Optimized essays and comments have so often catalyzed good StackExchange questions. Specifically in regard to the present \(\textsf{P} \overset{?}{=} \textsf{NP}\) thread, a great many Shtetl Optimized readers (definitely including me) would read with appreciation and enjoyment any and all StackExchange questions that may eventually arise from Shtetl Optimized comments (like Paul Christiano’s #60, as a fine example).

In a nutshell, the foremost lessons (as it seems to me) that Shtetl Optimized readers can glean from Scott’s magisterial \(\textsf{P} \overset{?}{=} \textsf{NP}\) survey, and from the comments here on Shtetl Optimized, are (1) a shared capacity to read the Yellow Book literature without fear, (2) a joyful and collegial sharing of diverse views upon that literature, and (3) a collective ambition to write our own Yellow Books.

In these three crucial respects there is (as it seems to me) an emerging consensus extending throughout the 21st-century STEAM community — one that compatibly encompasses both Scott’s views and my own in regard to computational issues.

Comment #92 January 6th, 2017 at 7:09 am

Scott,

Thank you very much for the new exciting survey.

May I take the opportunity to point out a typo in the old survey on the independence of P=NP: the reference [58] should read “Sazonov”, not “Sazanov” (pages 7, 21).

Comment #93 January 6th, 2017 at 8:12 am

[…] There’s a little about the background to the piece and some responses to comments on Aaronson’s blog here. […]

Comment #94 January 6th, 2017 at 8:32 am

Thank you for this survey. It really seems like a good starting point, because it explains succinctly what areas I don’t understand clearly enough but should, such as why NP != coNP is such an important question.

Comment #95 January 6th, 2017 at 10:18 am

Scott #56: Babai didn’t do the review, he merely communicated the rejection. This was back in 2008, I would have to dig it out.

I proved you could do optimal brute force isomorphism checking by only testing primitive permutations. It makes that one subroutine exponentially faster when $n$ is not prime.

https://oeis.org/A186202 is there for all to see, as is my current work extending the technique from groups to semigroups, https://github.com/chadbrewbaker/endoscope.

The results on permutation/transformation polynomials were already shown by Manjul Bhargava in 2000. Hoping the lower bounds on boolean matrix multiply yield something of interest. Anyone wishing to contribute is welcome.

Comment #96 January 6th, 2017 at 1:11 pm

As a follow-up, the above abstract considerations (of comment #91) in regard to “Yellow Book” mathematics are illustrated concretely in a comment upon tensor-network rank-jumping that is posted on Lance Fortnow’s Computational Complexity essay “Learning About Learning” (of Jan 05 2017).

I will attempt to compose these two perspectives — abstract-with-concrete and Yellow Book-with-pragmatic — in a MathOverflow and/or TCS StackExchange question regarding “Yellow Book” descriptions of rank-jumping in practical computational simulations. Such attempted compositions — from me or anyone — can rightly be appreciated as tributes to a small-yet-vital community, namely the proprietors of mathematical weblogs.

Math weblogs require of their proprietors a sustained personal commitment that (as it seems to me and many) crucially nourishes the vitality of the 21st century’s diverse STEAM enterprises. In particular, math weblogs crucially nourish the hopeful enthusiasm of Yellow Book Era STEAM-students — hundreds of millions of 21st century STEAM-students, YIKES! 🙂 — who will inherit and, as we can hope and even reasonably foresee, apply fresh Yellow Book understandings in marvelously extending our 21st century’s great STEAM-enterprises.

This New Year’s appreciation of math weblogs, and heartfelt gratefulness for the sustained efforts of their oft-underappreciated proprietors (including Scott especially), is therefore extended.

Comment #97 January 6th, 2017 at 2:02 pm

p. 71 I am confused by the sentence “At present, no implications are known among the P ?= NP, P_R ?= NP_R and P_C ?= NP_C questions, although it is known that P_C \neq NP_C implies NP \not \subset P/Poly.” Since P \subset P/Poly, doesn’t this show P_C \neq NP_C implies NP \not \subset P, and thus NP \neq P?

Comment #98 January 6th, 2017 at 2:07 pm

PS I would replace all instances of \not \subset in your paper by \not \subseteq . But obviously that is an issue of style.

Comment #99 January 6th, 2017 at 2:52 pm

David Speyer #98: Duhhhh, sorry! It should have read: P_C = NP_C implies NP ⊂ P/poly, which is consistent with the first part of the sentence. Thanks; will fix.

Comment #100 January 6th, 2017 at 3:02 pm

(As of #99) I count 2 “=?” symbols and 9 “?=” symbols.

I have always used ?= (due to the analogy with !=, which I think is good).

Like Scott in the paper, when writing on a whiteboard, I put the ? on top of the =. This can often be done with some dramatic pause as you present a topic to a class.

Anyway, I am in favor of ?= over =?. All in favor? All opposed?

How about a good snappy name for ?=.

John Sidles, I am counting on you to produce some ideas and/or references from Russell/Whitehead, Lemony Snicket, or somewhere like that.

Comment #101 January 6th, 2017 at 3:17 pm

p. 90 Probably not worth mentioning in a survey, but there is a definition of border rank that does not use any topology on the field F: a tensor T has border rank \leq r iff every polynomial that vanishes on tensors of rank \leq r also vanishes on T. The cool theorem is that, over the complex numbers, this is the same as being the limit of tensors of rank \leq r.

Comment #102 January 6th, 2017 at 3:18 pm

(Continued) I just noted that Scott suggests “versus” as a name for ?= or =?. I totally disagree with that, particularly in this context, because people will think you mean “versus”. I do not think that “P versus NP” captures the idea at all.

I understand the symbol to mean “might be equal to”: either we don’t know, or we are presenting the topic to be investigated and intend to find out.

I think if you want to mean “versus” you should write “versus” as opposed to using a two character symbol for a six character word.

Comment #103 January 6th, 2017 at 3:23 pm

John Sidles #96: Why don’t you start a math weblog then, if you believe that they are so crucially important for the vitality of the 21st century’s diverse STEAM enterprises?

Comment #104 January 6th, 2017 at 4:12 pm

(continued yet again) In reviewing this thread, I see that “P versus NP” appears to be in common use. What is the deal with that?

I am pretty sure that in the 70’s you always saw “P = NP?”, which exactly captures the idea in seven characters.

Google offers the following definitions for versus: (1) “against (especially in sports and legal use)”, and (2) “as opposed to; in contrast to”. Using definition (2), “P versus NP” is SLIGHTLY related to the idea, but does not capture the idea anywhere near as well as “P = NP?” or “P ?= NP”.

Thus I am in favor of deprecating “P versus NP”. I’d like to know how it became common. Probably KGB hackers planted the suggestion to throw off American work on the topic. And look what that led to!

Comment #105 January 6th, 2017 at 4:18 pm

dan gusfield #54:

That appears to be a Yellow Book, but World Scientific, not Springer!

Comment #106 January 6th, 2017 at 4:46 pm

Raoul #105: No doubt, if we could just agree to use the word “versus” correctly, we would’ve solved P vs. NP—oops, I mean P=NP?—decades ago.

Comment #107 January 6th, 2017 at 5:22 pm

p. 105 “X. Cheng and X. Deng” -> “B. Chapman and R. Williams”

Comment #108 January 6th, 2017 at 5:34 pm

In what sense does Nash’s paper come close to posing the P vs. NP problem? Looks like a complete nonsense claim to me. He indeed noticed that it doesn’t matter if a code is theoretically breakable if one cannot efficiently break it. This is not the P vs. NP problem and never was.

Comment #109 January 6th, 2017 at 8:02 pm

I would like to bring this back to the main point. Amy, you must tell this Nash story. I’m beginning to believe that there’s not a lot to it. 😎

Comment #110 January 6th, 2017 at 8:09 pm

Bruce #109: He explicitly raises, as a mathematical conjecture (about which one could at least ask for a proof), that breaking “sufficiently complex” types of enciphering will require a number of computational steps that grows exponentially with the length of the key, regardless of what algorithm is used for deciphering.

That particular conjecture is stronger than P≠NP in several respects, but it certainly implies P≠NP. (From today’s perspective, we’d say it’s more-or-less equivalent to the existence of exponentially-secure one-way functions.)

Comment #111 January 7th, 2017 at 6:04 pm

Could geometric complexity theory be (in some sense) more helpful for proving separations between quantum complexity classes than “classical” classes?

I’m not really familiar with GCT and this survey (which is awesome btw!) is my first true exposure to it. So I only ask because of the very naive (and possibly misleading) observation that GCT deals with symmetries and representation theory and these also play an important role in quantum mechanics.

Comment #112 January 7th, 2017 at 6:31 pm

Random Oracle #112: Well, firstly, to separate interesting quantum classes, you also must separate classical classes—e.g., any proof of BQP≠QMA would imply P≠PSPACE.

But secondly, for quantum classes like BQP, as I pointed out in the survey, we don’t have a symmetry-characterization analogous to what we have for P, NP, VP, and VNP—so GCT doesn’t even have the basic ingredient that it needs to get started. The reason for that probably has less to do with quantum mechanics, than simply with the fact that the quantum classes are “semantic” (i.e., they have no known complete languages, only complete promise problems). The same issue arises for probabilistic complexity classes like BPP and AM.

Having said that, yes, some similar representation theory does show up both in GCT and in quantum computing. People who could explain more about that include Jon Yard and Matthias Christandl, and do not include me. 🙂

Comment #113 January 7th, 2017 at 7:41 pm

Interesting observation from Dick Lipton’s blog, which I think is worth repeating here since no one else has: the thing about this new upper bound on graph isomorphism is that, while it is subexponential, it’s still of exponentiality 1, whereas if it were actually quasipolynomial, it would have been exponentiality 0.

Comment #114 January 7th, 2017 at 8:46 pm

Sniffnoy #114: Yeah, I made that point to Erica Klarreich when she interviewed me about Babai’s walk-back announcement. (Probably wisely, Erica left out that part, but kept in my boxing analogy. 🙂 ) The way I put the point is that the new bound is on the “wrong side” of the half-exponential functions, whereas quasipolynomial was on the “right side.” Anyway, still the biggest progress on GI in 30+ years.

Comment #115 January 8th, 2017 at 8:32 am

What do you think of this idea?

https://golem.ph.utexas.edu/category/2014/08/logically_perfect_theories.html

Indeed, only finite-sized mathematical structures can be ‘nailed down’ up to isomorphism by theories in first-order logic. After all, the Löwenheim–Skolem theorem says that if a first-order theory in a countable language has an infinite model, it has at least one model of each infinite cardinality. So, if we’re trying to use this kind of theory to describe an infinitely big mathematical structure, the most we can hope for is that after we specify its cardinality, the axioms completely determine it.

And this actually happens sometimes. It happens for the complex numbers! Zilber believes this has something to do with why the complex numbers show up so much in physics. This sounds very implausible at first, but there are some amazing results in logic that one needs to learn before dismissing the idea out of hand.

Comment #116 January 8th, 2017 at 11:30 am

On page 53, below Theorem 52, you say that a language L would not be in NLOGSPACE if L would require AC0-circuits of large size. But Theorem 52 also requires the NLOGSPACE-machine to halt in polynomial time. So am I correct in thinking that you assume that this language L is in P? Or am I misunderstanding something (which seems very possible)?

Comment #117 January 8th, 2017 at 11:43 am

I just found out that NL is contained in P (indeed, 2-SAT is complete for NL), so that makes my question moot. Whoops!

Comment #118 January 8th, 2017 at 2:56 pm

Hi Scott, thanks for the excellent survey. Was a fun read over the weekend and I felt it gave me new understanding for some areas I haven’t read much into (like NEXP vs ACC0 and GCT).

One minor point: in footnote 70, usually 0 and 2x would indeed be considered identical when working over F_2[x] (since 2 = 0 in F_2, the coefficient of x is actually the same). Maybe 0 and x^2+x would be a better example of distinct polynomials that are identical as functions.
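For readers following along, the corrected example is easy to verify directly. Here's a quick sanity-check sketch (plain Python, coefficients reduced mod 2): over F_2, the polynomial 2x already has all-zero coefficients, i.e., it *is* the zero polynomial, while x^2+x has nonzero coefficients yet vanishes at every point of F_2.

```python
# Over F_2, coefficients are taken mod 2, so 2x reduces outright to the
# zero polynomial; x^2 + x, however, has nonzero coefficients yet
# evaluates to 0 at every point of F_2 (since a^2 = a mod 2).

def eval_poly(coeffs, x, p=2):
    """Evaluate a polynomial given as [c0, c1, c2, ...] at x, mod p."""
    return sum(c * x**i for i, c in enumerate(coeffs)) % p

two_x = [c % 2 for c in [0, 2]]        # 2x: coefficients reduce to all zeros
x2_plus_x = [0, 1, 1]                  # x^2 + x: genuinely nonzero coefficients

assert two_x == [0, 0]                                     # the zero *polynomial*
assert any(c != 0 for c in x2_plus_x)                      # not the zero polynomial...
assert all(eval_poly(x2_plus_x, a) == 0 for a in (0, 1))   # ...but the zero *function*
```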

Comment #119 January 8th, 2017 at 3:21 pm

Luke #119: Duhhhh, you’re right! Thanks; fixed.

Comment #120 January 8th, 2017 at 7:27 pm

Hi Scott. Re the 3SAT threshold, 4.25 should be 4.2667; and, the most up-to-date work on Survey Propagation algorithms working close to threshold is Marino–Parisi–Ricci-Tersenghi’s “The Backtracking Survey Propagation Algorithm for Solving Random K-SAT Problems”.

Comment #121 January 8th, 2017 at 7:32 pm

Ryan #121: OK, thanks!

Comment #122 January 9th, 2017 at 12:01 am

So now having actually read the thing… wow, that’s quite the survey!

My own small corrections:

Page 11: Normally to say a machine “decides” a language, it has to reject on strings outside it, right? Otherwise it just “recognizes” the language.

Page 67: The gcd should be an lcm, right?

Page 90: I’m guessing the exponent in the definition of E should be q-1 rather than q?

Also… thanks for introducing me to the “circuit” vs “formula” distinction; I’d seen these terms before, but somehow was never quite certain of the distinction. This despite the fact that I’ve done work on circuits and formulae for computing natural numbers! I just never put it in those terms, instead manually explaining, “in this context we’re allowed to reuse things we’ve computed, in this context we’re not”. Hell, just the other day someone was asking me about integer complexity, “So, this could be considered a sort of circuit, right?”, and I replied, “Well, they’d have to be circuits where all the gates had fanout 1…” Well — now I know how to talk about my work if I want to, say, apply for a job in a CS department rather than a math one. 🙂
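The fanout-1 point can be made concrete with a toy count (a sketch; the function names are just illustrative): computing x^16 by multiplication gates takes only 4 squarings as a circuit, where subresults may be reused, but 15 multiplications as a formula, where fanout 1 forces every shared subresult to be recomputed.

```python
# Toy illustration of circuits (gates may be reused: fanout > 1) versus
# formulas (fanout 1: shared subresults must be duplicated), using the
# computation of x^(2^k) from x by multiplication gates as the example.

def circuit_mults(k):
    """Multiplications to reach x^(2^k) by repeated squaring—reuse allowed."""
    return k          # square k times: x -> x^2 -> x^4 -> ... -> x^(2^k)

def formula_mults(k):
    """Multiplications in a fanout-1 (tree) version of the same computation.
    Each squaring must duplicate its entire subtree, so the count satisfies
    T(k) = 2*T(k-1) + 1 with T(0) = 0: a full binary multiplication tree."""
    return 2**k - 1   # equivalently x * x * ... * x with 2^k leaves

assert circuit_mults(4) == 4     # x^16 as a circuit: 4 squarings
assert formula_mults(4) == 15    # x^16 as a formula: 15 multiplications
```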

Comment #123 January 9th, 2017 at 2:32 am

Sniffnoy #123: Thanks!! I hope you found slogging through the whole thing worth your time.

Fixed the errors you caught on pages 67 and 90. Regarding decide vs. recognize, yes, you’re right, but I don’t think I even needed the concept of “recognize” anywhere in the survey.

Comment #124 January 9th, 2017 at 2:34 am

Incidentally, if anyone who’s an expert in TikZ wants to do something super-duper helpful to me … I need the class NLOGSPACE added in between LOGSPACE and P in Figure 3 on page 118. Here’s the TeX source for my survey, which contains all the TikZ code for the figure.

(For some reason, the graphical TikZ editor that I used to create the figure is no longer working.)

Comment #125 January 9th, 2017 at 12:45 pm

Erdös should be Erdős, LaTeX Erd\H{o}s.

Comment #126 January 9th, 2017 at 2:25 pm

Jon #126: Thanks! Will fix.

Comment #127 January 9th, 2017 at 3:38 pm

This is somewhat tangential, but what were the books where you wrote the chapters “Why Philosophers Should Care About Computational Complexity” and “The Ghost in the Quantum Turing Machine”?

Comment #128 January 9th, 2017 at 5:06 pm

Maximilian #128: Here and here.

Comment #129 January 9th, 2017 at 6:16 pm

Babai now claims to have fixed GI, really, this time 🙂

http://people.cs.uchicago.edu/~laci/update.html

Comment #130 January 9th, 2017 at 8:42 pm

Scott #124 or #125:

(Your comment is numbered 124 to me but everyone else seems to be numbering one step larger, not sure why, probably a comment in moderation which hasn’t been posted.)

If you want a tikz example, check out this github gist: https://gist.github.com/DanielTakeshi/565887b3b6b1b443f17b7ff434da9c62

This code should generate the Figure 3 that’s in your paper, except with the NLOGSPACE added where I think you wanted it. Otherwise it should be correct … hope I didn’t make any typos. I think you should be able to copy and paste this into your document (well, aside from the preamble and package loading, but this example assumes a minimum of these).

There are probably better ways to do this, but this is the way I’ve done it before, so it was fairly straightforward to adjust. It’s easier to edit in vim, which lets me highlight and modify vertical slices of text easily.

Comment #131 January 9th, 2017 at 9:21 pm

Thanks Scott

Comment #132 January 9th, 2017 at 10:13 pm

I’m wondering just what the constant is in that exp (log^O(1) n). Well — this fixed version nobody’s seen yet, so I guess no one can say. I’d ask if anyone managed to extract it from the old version, but I guess that’s really a meaningless question…

Comment #133 January 9th, 2017 at 11:25 pm

Daniel #131: Thanks!! Will try it out shortly.

Comment #134 January 9th, 2017 at 11:30 pm

Sniffnoy #133: For the original version, I believe Laci said in a talk that the exponent on the log was about 8.

I wish him the best with the new version he’s preparing (and once it’s out, will update my survey with Babai/GI news for the third time!)

Comment #135 January 10th, 2017 at 12:41 am

Just a side note on section three of the paper (Beliefs about P?=NP): if you want to find at least one other competent mathematician who actually thinks it’s 50-50, check page 568 of the following article: ams.org/notices/200805/tx080500560p.pdf (to actually understand his point, one has to read the previous section about Diophantine equations as well).

cheers 🙂

Comment #136 January 10th, 2017 at 3:13 am

Pavel #136: Thanks for the link! Of course I disagree with Davis about P vs. NP—and for that matter, with almost everything he says in the last third of the interview!—but I still found it a wonderfully engrossing read, a new window into topic after topic that interests me enormously. I don’t know how I missed it earlier.

(To expand on one of the disagreements a bit: it’s simply not true, as Davis claims, that when theoretical computer scientists say “polynomial” we really mean “at most n^3.” In fact, many people level precisely the opposite criticism at TCS: that we’ll happily publish n^20 or n^50 algorithms, if that’s what we can find, as if such things were supposed to be efficient! But that’s rare. Usually it turns out that what you can do in polynomial time, you can actually do in n^2 or n^3 time—if not right away, then maybe next year or the year after that, with the lowerings of the polynomials sometimes driven by the conference culture that Davis dislikes so much. In other words, in contrast to the situation with Diophantine equations that he discusses, in this case the preponderance of low-degree over high-degree polynomials is just an observed fact, one that could’ve turned out otherwise, not an aesthetic preference that we get to impose or a choice in what to study.)

Comment #137 January 10th, 2017 at 4:10 am

There’s another possibility for the P=NP question and its possible solution. Maybe there is an algorithm for, e.g., Euclidean TSP that solves almost all instances in polynomial time, apart from a few instances that occur with probability of, e.g., O(2^(-n)). So the expected value of its running time would be polynomial. But this could still mean that P!=NP.

Comment #138 January 10th, 2017 at 4:39 am

I think the question here is whether the fact that most practical algorithms in the polynomial landscape finally end up somewhere around O(n^3) implies that no weird creatures exist in, say, the n^20 realm.

After all, Davis was studying a very weird problem (if you look at it from the perspective of what Diophantine equations are usually used for) to get to high-degree polynomials.

So (to play devil’s advocate 🙂 ): is the reduction you talk about a sign that nothing interesting lives in the O(n^20) space, or that our problems are not ‘weird’ enough to reach there?

Comment #139 January 10th, 2017 at 10:41 am

Pavel #139: Well, the trouble is that we do have some problems that are weird enough to reach into enormous-degree polynomials (or that do for all we know—of course we can almost never prove lower bounds for them). See for example this paper by Sergey Bravyi, which gives an O(n^59) algorithm for simulating the ferromagnetic transverse Ising model (Bravyi dryly remarks, “Clearly, this leaves a lot of room for improvements”).

Yet even when we do encounter these strange beasts, they’re still subject to the “invisible electric fence” phenomenon that I explained in Section 3 of the survey. That is, they still “magically avoid” ever linking up with any of the known NP-complete problems, typically including slight variants of themselves that are known to be NP-complete.

I can easily explain that fact—i.e., the studious avoidance between the P and the NP-complete regions of parameter space, including even the n^59 parts of the P regions—on the supposition that P≠NP. How does Davis explain it on the supposition that P=NP?

Comment #140 January 10th, 2017 at 10:48 am

Hansi #138: The difficulty is that “hardness on average” questions are extremely sensitive both to which NP-complete problem you’re talking about, and to the probability distribution over instances.

E.g., there are some NP-complete problems, like 3-Coloring, that are actually solvable in O(1) time on a “random” instance! (Why? Because with overwhelming probability, a “random” graph is so far from being 3-Colorable that you see that after looking at only O(1) vertices.) But then there are other distributions over graphs for which 3-Coloring is believed to be extremely hard.
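To make the parenthetical concrete: with overwhelming probability, a dense random graph contains a K4 (a clique on four vertices) among its first few vertices, and any graph containing a K4 cannot be 3-colored. Here's a minimal sketch (toy code of mine, not tied to any particular random-graph model) that verifies the K4 witness exhaustively:

```python
from itertools import combinations, product

# Any graph containing K4 (the complete graph on 4 vertices) as a subgraph
# has no proper 3-coloring—so spotting a K4 among the first O(1) vertices
# of a dense random graph is already a rejection certificate.
k4_edges = list(combinations(range(4), 2))

def is_proper(coloring, edges):
    """A coloring is proper iff no edge joins two same-colored vertices."""
    return all(coloring[u] != coloring[v] for u, v in edges)

# Try all 3^4 = 81 assignments of three colors to K4's vertices: none works.
assert not any(is_proper(c, k4_edges) for c in product(range(3), repeat=4))

# By contrast, a triangle (K3) is 3-colorable.
k3_edges = list(combinations(range(3), 2))
assert any(is_proper(c, k3_edges) for c in product(range(3), repeat=3))
```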

More broadly, there’s a plausible strengthening of the P≠NP conjecture, extremely important for cryptography, which would imply that for every NP-complete problem, there’s some efficiently-samplable distribution over instances with respect to which the problem is hard on average. For more see Section 5.3 of the survey (what, you think such things had escaped notice? 🙂 ).

Comment #141 January 10th, 2017 at 11:06 am

Daniel Seita #131: Wow, thank you so much!! Incredibly, it compiled on the very first try and it looks perfect. (And the code is so much smaller than the code my thing generated.) You earned a spot in the acknowledgments. 🙂

Comment #142 January 10th, 2017 at 1:18 pm

Rafee #17: are you going to pay Jeffrey Shallit? (http://recursed.blogspot.com/2015/08/rafee-kamouna-owes-me-500-today-but.html)

Comment #143 January 10th, 2017 at 1:53 pm

About high degrees: the claimed proof by Gordeev and Haeusler that NP=PSPACE gives a construction of certificates for QBF membership which can be verified in O(n^210) time. The proof is in submission to a journal, and I think it is unlikely to hold up (although I give it more than a 0.2% chance of holding up and therefore made a bet with Scott on the issue at 500:1 odds), but it’s another data point indicating that degrees can get pretty high quite quickly.

Comment #144 January 10th, 2017 at 3:34 pm

@Polymath: what? I thought Scott’s wife has forbidden Scott to make any more bets about correctness of proofs.

Comment #145 January 10th, 2017 at 4:08 pm

Scott 141:

Thanks, glad it was useful. This is probably the biggest contribution to theoretical CS that I’ll ever make. 🙁

Comment #146 January 10th, 2017 at 4:50 pm

[…] article in Quanta titled Complexity Theory Strikes Back. Scott Aaronson picked a lousy day to release his new book-length survey on the P v NP […]

Comment #147 January 10th, 2017 at 6:24 pm

jonas #145: I seem to remember that I got special permission from her. It was a small amount, and she and I both agreed that the NP=PSPACE claim was exceedingly unlikely to stand.

Comment #148 January 10th, 2017 at 6:28 pm

210, huh? Interesting number. Does it come up because it’s a primorial or something?

Comment #149 January 10th, 2017 at 8:53 pm

Scott #137: I don’t think Davis is claiming that when theoretical computer scientists say polynomial, they really mean O(n^3). In my reading, Davis is saying that pragmatists (not necessarily theorists) would like an algorithm to be O(n^3) to be considered “good”. He agrees it’s very unlikely that there is an O(n^3) algorithm for SAT. In contrast, he feels there is a 50% chance of a poly-time algorithm for SAT, because we don’t really understand polynomials of high degree. In my opinion, 50% is too high, but we’re all speculating.

Comment #150 January 10th, 2017 at 9:08 pm

Ravi #150: Well, he was claiming that we lack experience or intuition with high-degree polynomials, and analogizing that to the mathematicians who had only studied Diophantine equations of degree 2 or 3, and so had no idea what degree-8 equations could express. I was simply pointing out that in our case, we don’t make a choice to focus on degree 2 and 3; on the contrary, the degrees usually (but not always) turn out to be small, and the fact that they do constitutes a kind of data.

Anyway, while I disliked a few parts of that interview, I actually loved the majority of it! Of all the memorable anecdotes that Davis relays, maybe my favorite is the one where he’s teaching a mathematical logic course at UIUC in the 1950s, drawing Turing machines on the blackboard, and one of the electrical engineering students comes up to him and says, “You should come to our building across the street! We’ve got one of these there!”

Comment #151 January 10th, 2017 at 9:26 pm

Actually, I misremembered it as 210, but it is actually 270. Here is Haeusler’s quick running-time analysis:

“1- Consider a QBF formula A=Qmy_m…Q1y_1B(y_1,…,y_m) in prenex form with only \bot and –> occurring in B. Svejdar reduction produces a propositional formula of size 8*len(B)*m that is intuitionistically valid, iff, A is valid.

Eliminate \bot to get a purely minimal logic formula A^{\star} of size

(8*len(B)*m)^2. This formula is a minimal tautology, iff, A is valid. Observe that A^{\star} is a purely implicational formula.

2- According to our corollary 12, if A^{\star} is valid, then it has a dag-like proof of size (number of vertexes) O( len(A^{\star})^5). In terms of A=Qmy_m…Q1y_1B(y_1,…,y_m), itself, this dag-like proof is of size O(m^10*len(B)^{10}). The encoding of any dag-like proof G results in a string of size O(||G||^3), lemma 21, in the article. Thus, the size of the dag-like encoded proof is of size O(m^{30}*len(B)^{30}). Finally, a naive algorithm to verify that a dag-like proof G has $A^{\star}$, the minimal implicational formula, as its conclusion runs in time O(||G||^{9}), where ||G|| is the size of the dag-like proof.

As you see for any QBF formula A of size n, there is a dag-like proof of size n^30 that can be verified in time (n^{30})^9. We hope this answer your question. Of course, the degree 270 is quite a big one !!! It can be reduced on the basis of what is detailed in the article, however, this is the easiest upper-bound to show from what is in the article. This is may not be the best. There is much work to do in this direction…”
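As a sanity check on the exponent bookkeeping in the quoted analysis (a sketch; the stage labels are mine, and n loosely stands for m·len(B), as in the quote): the degree 270 is just the product of the per-stage blowups, and the cheaper, linear ⊥-elimination suggested in a later comment would halve it to 135.

```python
from math import prod

# Per-stage degree blowups from Haeusler's quoted analysis, measuring
# everything as a power of n ~ m * len(B). Stage names are just labels.
blowups = {
    "bot_elimination": 2,   # A* has size (8*len(B)*m)^2: a squaring
    "dag_proof":       5,   # corollary 12: dag-like proof of size O(len(A*)^5)
    "encoding":        3,   # lemma 21: encoding a dag G costs O(||G||^3)
    "verification":    9,   # naive check runs in time O(||G||^9)
}

assert prod(blowups.values()) == 270   # the quoted overall degree: 2*5*3*9

# A linear (rather than squaring) elimination of \bot would drop the first
# factor from 2 to 1, halving the exponent—matching the 135 claimed below.
blowups["bot_elimination"] = 1
assert prod(blowups.values()) == 135
```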

Comment #152 January 10th, 2017 at 11:09 pm

Scott #140: My argument was about all possible instances of, e.g., TSP, not carefully chosen distributions. Of course you can’t simply list all of them, but it may be possible to estimate the number of situations in which you wouldn’t get the optimal solution (without further branching etc.).

3-Coloring seems to be one of the most unintuitive SAT-problems, compared to TSP or Clique. I always resort to “How would the reduction to 3-SAT look like?” in such cases.

Comment #153 January 10th, 2017 at 11:28 pm

This reasoning extends naturally as follows.

In the eight years since Davis gave his 2008 interview, one area of computer science has demonstrated capabilities so greatly improved as to deserve to be called “transformational”. That area is (broadly) the specification of data and/or algorithms in terms of polynomials of markedly high order (hundreds or even thousands), whose coefficients are adapted by algorithms whose runtimes in practice exhibit markedly low polynomial degree (commonly n^2 or n^3).

There’s no shortage of computational research that exhibits these traits and is directly relevant to the quest to demonstrate Quantum Supremacy. As one example among many, see Andrew Darmawan and David Poulin’s “Tensor-network simulations of the surface code under realistic noise” (2016, arXiv:1607.06460). Every technically plausible path to the demonstration of Quantum Supremacy (that I can imagine) requires the intensive use of comparable simulations to these.

More generally, for reasons not entirely apparent to me or anyone, the STEAM community is entering into a Golden Era in which many research disciplines are “having their cake and eating it too.” Namely, more and more algorithms run polynomially fast with small degrees, yielding well-tuned models having tremendously large built-in polynomial order.

In Scott’s phrase, “the fact that they do [these remarkably useful computations] constitutes a kind of data” … mighty mysterious data! 🙂

Davis’ interview provides no shortage of “Yellow Book” reasons to foresee that it won’t be simple or easy to understand why the modern computational landscape is proving to be, in practice if not (yet) in theory, “the best of all possible worlds”.

Comment #154 January 11th, 2017 at 2:14 am

Polymath #151: Hm, that analysis seems to me like it runs into Scott’s “Chinatown” flag. I’m not much of a logician, but just offhand, passing from classical to intuitionistic to minimal logic just doesn’t seem like the sort of step that should be doing that much work…

Comment #155 January 11th, 2017 at 9:58 am

Student-friendly “Yellow Book” elaborations of Martin Davis’ Notices of the AMS remarks (that were cited in Pavel Sanda’s remark circa #135, for which thanks are given) are Andreas Gathmann’s on-line course notes Commutative Algebra and Algebraic Geometry. In the latter we read, as Remark 0.7, the passage about smooth versus singular objects quoted below.

Gathmann’s remarks are reminiscent of a favorite story of Lyndon Johnson, who told of a student teacher who was asked in a Texas job interview: “Do you teach that the world is flat or the world is round?” To which the eager job-seeker answered: “I can teach it either way!” 🙂

In particular, Gathmann’s teaching that “our theory must be strong enough to include singular [algebraic] objects and make statements how things vary when we pass from smooth to singular objects” applies to our various notions of entropy. Shall we teach entropy:

• the Boltzmann way?

• the von Neumann way?

• the Shannon way?

• the Onsager way?

• the Perelman way?

An article of faith for “Team Yellow Book” — as Scott’s survey article “\(\textsf{P} \overset{?}{=} \textsf{NP}\)” calls this nascent STEAM community — is that the universal “Yellow Book” answer is “These are all the same way.”

However, this conceptual and/or pedagogic unification requires that we understand our various notions of entropy in light of a “Yellow Book” appreciation of representational singularities and the resolutions of those singularities that thermodynamics provides.

Here there are no outstandingly good, outstandingly broad, undergraduate-level STEAM textbooks (that are known to me anyway; suggestions are welcome), and there is an evident need for them.

Comment #156 January 11th, 2017 at 10:23 am

Dang! upon reflection, the very first entry in the entropy-list (of the preceding comment) should have been

• the Gibbs way [of defining/teaching entropy]

Indeed, my own first exposure to the notion of entropy — via Mark Zemansky’s Heat and Thermodynamics (1968) — presented Gibbs’ ideas and (pretty much) solely Gibbs’ ideas.

Few if any undergraduates, now or ever, receive their degree with exposure to more than a small subset of the many ways in which notions of entropy are defined and employed. How have Shtetl Optimized readers been first introduced to notions of entropy? How does that first exposure condition later understanding?

And most crucially and contentiously, what initial exposure(s) to the notion(s) of entropy should “Team Yellow Book” endorse? 🙂

Comment #157 January 11th, 2017 at 12:22 pm

Scott #151: Fair enough. Davis was my next-door colleague at NYU in the 1990’s, and I’m kicking myself for not asking him about these fascinating historical anecdotes at that time. By the way, congratulations on your excellent survey!

Comment #158 January 11th, 2017 at 4:33 pm

John Sidles #156:

In undergraduate machine learning when we talk about the basics of information theory.

Comment #159 January 11th, 2017 at 6:38 pm

Polymath #151: Here is Haeusler’s original message with the analysis. I think I can eliminate \bot much cheaper than by squaring the size of the formula. Just introduce a new propositional variable p, and prepend the formulas expressing that any propositional variable follows from p before the formula where \bot is replaced by p:

(p->y_1)->()->…->()->(p->y_m)->(formula where \bot is replaced by p)

This only adds a small multiple of m to the size of the formula. So I have decreased the exponent from 270 to 135, no? Note that p->A implies p->(B->A). Since any possible formula is built up from propositional variables and (…->…) terms, this is sufficient to ensure that p implies any possible formula.

Comment #160 January 12th, 2017 at 4:19 am

Sniffnoy #154: Passing to the purely implicational fragment of intuitionistic logic allows you to get to a PSPACE complete problem with a nice logical structure. Going from intuitionistic logic to minimal logic should change very little in my opinion, except for making the presentation of the PSPACE complete problem even more uniform. Compressing a natural deduction proof tree to a (polynomial size) dag is also easy, but the complexity now hides in the paths through that dag. For any given node in the dag, a path to the root discharges a certain set of assumptions. And a “conjugate” set of paths from a node to the leaves gives a set of undischarged assumptions. Now for each possible path from a node to the root, you have to verify that there is a corresponding “conjugate” set of paths to the leaves, such that the only undischarged assumptions are those discharged by the given path to the root.

You can put various sorts of “road signs” into the dag, to reduce the set of possible paths. Those “road signs” are a sort of lossy compression of the relevant set of paths. Independent of whether one can prove some collapse of the polynomial hierarchy to NPSPACE in the end, it would be interesting whether one can provide enough power to these “road signs” to translate proofs using higher order features like the cut-rule into the dag-like natural deduction proof system without an exponential size explosion.

This would be interesting, because functions are the typical higher order feature, and an implication a->b->c->v just reads like the type signature of a function with parameters of type a, b, and c returning a value of type v. And since compression and higher order features should be closely related in my opinion, I think that such dag-like natural deduction system could turn out to be instructive.

A paper title like “NP vs PSPACE” is a bad idea, even if it would work out in the end. It just prevents even the people who have read the paper from saying anything substantially positive about it. Instead of saying something negative myself, I quote Arnaud Spiwack saying

I guess it is an accurate judgment of the presentation given back then. I initially wanted to write something about paths through a dag, which pass or don’t pass along certain points labeled A_1, …, A_m. Then you could ask whether there is a path not passing by any of A_1, …, A_m, and that problem would be in NLOGSPACE. But if you ask whether there is a path passing through all of A_1, …, A_m, then the problem is only in NP and in NSPACE(m log n).

Comment #161 January 12th, 2017 at 8:41 am

What a great survey! Thank you very much for sharing it. Just a small comment on an aspect I found confusing: on page 4 you define NP-intermediate languages as those that are in NP but neither in P nor NP-complete. Later (e.g., on p. 35 for discrete logarithm) you more liberally use the term NP-intermediate problem for those problems that are currently not known (or believed) to be either in P or NP-complete.

On page 37 it gets a bit messy though, since you write that one needs to assume “the hardness” of a “specific NP-intermediate problem” for safe cryptography. It’s not clear to me what “hardness” is supposed to mean here, since if P != NP and thus NP-intermediate problems do exist, none of them can be NP-hard by definition. If you just mean “harder” than any problem in P, then that’s what you already get by assuming it really is an NP-intermediate problem. It could very well be that I misunderstood something, however. Thanks again for sharing this work!

Comment #162 January 12th, 2017 at 12:40 pm

Max #162: Sorry for the imprecision! Of course I meant that one needs to assume that a specific problem (like factoring or discrete log) that’s thought to be NP-intermediate actually is NP-intermediate (or better yet, that it requires exponential time), as opposed to being in P. Will fix as soon as I’m back at my computer.

Comment #163 January 12th, 2017 at 7:43 pm

My view, which I (and others) have remarked upon in SO over the years, is that P ?= NP is a great problem, but NO ONE HAS ANY IDEA if N = NP would be useful or not. Thus the stock answers about applications are somewhere between pure speculation and false advertising.

A frequent counter-argument, addressed in the survey and the comments above, is based on the observation that many O(n^p) algorithms discovered with a high power p have soon been improved to p = 2 or 3. I say SO WHAT?

I don’t know if any statistics have been compiled, but even if this turned out to be true 99% of the time, that is pretty weak evidence. It is very plausible that the only problems that anyone can find a polynomial algorithm for are basically simple (say, optimally p = 3) problems for which the early attempts led to a non-optimal algorithm, and no one is smart enough to figure out an algorithm for an actual Omega(n^100) problem.

The point is that we are basing our guess on easily solvable examples, because that is all we know. We have no clue if there are truly harder problems with polynomial algorithms, or not. Thus we have no clue if high p algorithms can usually or always be reduced to low p algorithms, or not.

Here is an analogous issue. I have long expressed a contrarian view about numerical quadrature (NQ): unless you have strong evidence that the data is smooth, there is no reason to think that a high-order NQ method is better than a low-order method. Thus the simplest method (trapezoid rule) is as good as Simpson’s method, and everything else. Now, everyone has probably done an experiment integrating a smooth function, such as cos(x), and shown that the error is in accordance with the order of the method. BUT, these examples are all smooth functions, with rapidly decaying expansion coefficients. The error term for an order-k quadrature method is of the form

ErrorBound = C * M_k * h^k

where C is a constant, M_k is a maximum of the k-th derivative, and h is the step size. But, for nonsmooth data, there are no higher derivatives, so M_k should be taken to be infinity, so the error bound tells us nothing.

Or, consider Gaussian quadrature. Does your intuition really think that evaluating a function at oddball places is better than at regularly spaced places? If you do, you are making a hidden assumption about the smoothness of the data.

My point here is that conventional wisdom about NQ comes from the ultra nice behavior of toy problems, the only ones that can be analyzed analytically.
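Raoul's point is easy to see numerically. Here's a small sketch (plain-Python composite rules, with integrands chosen by me for illustration): on smooth cos(x), Simpson's rule crushes the trapezoid rule, but on |x − c| with a kink at an irrational c, the two land within a couple orders of magnitude of each other, exactly as the vacuous M_k bound predicts.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals."""
    h = (b - a) / n
    s = (f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))
    return s * h

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

n = 1000
# Smooth integrand: the higher-order method pays off enormously.
exact_smooth = math.sin(1.0)
err_trap_smooth = abs(trapezoid(math.cos, 0.0, 1.0, n) - exact_smooth)
err_simp_smooth = abs(simpson(math.cos, 0.0, 1.0, n) - exact_smooth)

# Kink at an irrational point: no second derivative there, so M_k is
# effectively infinite for k >= 2 and both rules stall at roughly h^2.
c = 1 / math.sqrt(2)
f = lambda x: abs(x - c)
exact_kink = (c**2 + (1 - c)**2) / 2
err_trap_kink = abs(trapezoid(f, 0.0, 1.0, n) - exact_kink)
err_simp_kink = abs(simpson(f, 0.0, 1.0, n) - exact_kink)

assert err_simp_smooth < 1e-10 < err_trap_smooth       # Simpson wins big when smooth
assert err_simp_kink > 1e-9                            # Simpson stalls at the kink
assert 0.01 < err_trap_kink / err_simp_kink < 100.0    # now roughly comparable
```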

BTW 1: It would have been better if I had not outlined this view to numerical analysts at job interviews back in the day.

BTW 2: As a grad student, rather than doing my research, I tried to construct some functions where C * M_k * h^k grows instead of shrinking for large k, or is infinite. I encountered some big surprises. My plan was to define f(x) to be the sum from m = 0 to m = infinity of

a^m * cos (b^m x).

You can compute the antiderivative, and by picking values of a and b, you can make high derivatives shrink or grow, and produce a function where different-order NQ methods give various results.

Running some numerical experiments, I encountered bizarre results. These turned out to be spurious, and led me to discover a lot about how cos(x) and sin(x) are calculated for large x. Namely, if x is big enough that the IEEE 754 representation of x is the same as that of x + PI, you have zero significant digits of accuracy. Obvious in retrospect!
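Raoul's last observation is easy to reproduce (a sketch; `math.ulp` requires Python 3.9+): near 10^17 the spacing between adjacent doubles is 16, wider than 2π, so x and x + π literally round to the same float, and any trigonometric value of such an x carries no information about where x truly sits modulo 2π.

```python
import math

x = 1.0e17
# The gap between adjacent doubles near 1e17 is 16.0—wider than 2*pi—so
# the floating-point value x cannot be distinguished from x + pi:
assert math.ulp(x) == 16.0
assert x + math.pi == x            # adding pi rounds right back to x

# Hence cos(x) and "cos(x + pi)"—which should differ in sign!—are computed
# on literally the same input.
assert math.cos(x + math.pi) == math.cos(x)
```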

Comment #164 January 12th, 2017 at 9:32 pm

Raoul #164: Is it a fair guess that someone who calls it “N=NP” probably hasn’t spent too many sleepless nights on it? 😉

OK, you’re a valued commenter here, and I apologize for taking such a cheap shot. More seriously: as I pointed out in the survey, if we don’t even know whether a function grows polynomially or exponentially, then far less do we know whether it grows like n^1000 or 1.0000001^n. So, while of course we ultimately want to know the latter, it makes sense to ask the former first—i.e., to find out which planet we’re on, before the continent, city, and school district, all of which could of course also have huge effects on our quality of life.

Comment #165 January 12th, 2017 at 11:47 pm

Raoul #163:

With the advent of computational science and engineering in general and simulation techniques like FEM in particular, some of the aspects you pondered over in your graduate school days have already been incorporated in the standard treatments in text-books and courses.

1. What you are pointing out can perhaps be simplified this way (the by now standard way). Observe that conceptually, numerical integration consists of two stages: (i) first fitting a curve (usually a polynomial) to the given discrete set of data-points, and then (ii) carrying out integration on the expression so obtained for the fitted curve. Now, it’s well known that right in the first stage, viz. of curve-fitting, you always go in for the lowest possible order that you can use; otherwise, you are introducing (a) unnecessary oscillations within the interpolation range, and (b) wilder-and-wilder fluctuations in any extrapolation you might perhaps be tempted to make.

2. For non-smooth data, if the location of the sharp fronts in the solution can be anticipated in advance, the straight-forward advice would be to use the sub-domain collocation, i.e., partition the domain at the jumps and use lower-order quadrature within each of those sub-domains.

3. FEM courses routinely discuss the above-mentioned issue in terms of h- vs. p-refinement. The h-refinement means making the cell-size smaller, i.e., the mesh finer (i.e., increasing the number of nodes) but using lower-order polynomials within each cell; the p-refinement means keeping the mesh coarse but using higher-order polynomials. (h refers to the cell size, p to the order of the polynomial; hence the names.)
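To make points 2 and 3 above concrete, here is a minimal Python sketch (a toy example of my own, not from any textbook): composite trapezoidal quadrature, i.e., pure h-refinement, applied to an integrand with a kink, once on a uniform mesh that misses the kink and once with the domain partitioned at the jump, as in sub-domain collocation.

```python
def trapezoid(f, a, b, n):
    # composite trapezoidal rule with n equal cells on [a, b]
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

f = lambda x: abs(x - 0.3)   # piecewise-linear integrand with a kink at x = 0.3
exact = 0.045 + 0.245        # integral of f over [0, 1] is 0.29

naive = trapezoid(f, 0.0, 1.0, 7)                              # uniform mesh; the kink falls inside a cell
split = trapezoid(f, 0.0, 0.3, 3) + trapezoid(f, 0.3, 1.0, 7)  # partition the domain at the jump
```

On the mesh that straddles the kink, the accuracy is capped by the non-smoothness, and no amount of p-refinement would fix that; partitioning at the jump makes even the low-order rule exact here, since f is linear on each sub-domain.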

In core engineering, people used to be unreasonably enthusiastic about the higher-order methods. (That was about 2–3 decades ago.) By now, people have come to realize that higher-order might mean fewer nodes but not necessarily lower computational cost, because while higher-order does mean fewer nodes, it also means more terms per cell anyway. (Yeah, we core engineers tend to learn our computational complexity lessons at a rather leisurely, personalized pace!) Surprisingly, no general-purpose or fundamental mathematical guidelines exist to pick one over the other; it all depends on the type of problem at hand, and the matter still remains one of “empirical” testing. (In fact, no one has so far even tried to create even a simplistic map of the regimes of suitability.)

Personally, I tend to favor the h-refinement. My reasons are similar to yours. Practically, for nonlinear problems of the Navier-Stokes type (i.e., for problems in which sharper field changes are likely, unlike, say, in harmonic-analysis problems like the Poisson equation, where everything is guaranteed to be smooth), the current favorite is FVM (see the popularity of the OpenFOAM software), which amounts to more or less nothing but using the trapezoidal rule between adjacent cell centers. FEM (involving higher orders and all) tends to give artificial jumps in the solutions (at the element boundaries), and so the trend is to go in for lower-order FVM, exactly for the basic reason you pointed out.

3. I didn’t have any intuition about GQ; I came to develop some only later on, after using it. When I read about GQ, I simply marveled at how ingeniously it makes the weighting function carry the burden too. In FEM, it’s a standard topic to point out the reduction in complexity (i.e., reducing the order, roughly speaking, by half) achieved by using GQ.

Once I developed some intuition about GQ, I spotted the structural similarity between the weights of GQ and the weighting function in the method of weighted residuals (MWR). I tried to find if this similarity had been treated in a formal theory. So far, I have not run into it.
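For readers who haven’t met GQ: the “weights carrying the burden” point shows up already in the 2-point Gauss-Legendre rule (a standard textbook fact, sketched here from memory): with just two function evaluations it integrates every cubic exactly, i.e., degree 2n-1 with n points, which is the rough “halving of the order” mentioned above.

```python
import math

def gauss2(f):
    # 2-point Gauss-Legendre rule on [-1, 1]: nodes +/- 1/sqrt(3), weights 1.
    # Exact for all polynomials up to degree 3 (= 2n - 1 with n = 2 points).
    x = 1.0 / math.sqrt(3.0)
    return f(-x) + f(x)

def trap2(f):
    # interpolatory 2-point rule on fixed endpoints (trapezoid): exact only up to degree 1
    return f(-1.0) + f(1.0)

cubic = lambda t: t**3 + t**2
exact = 2.0 / 3.0   # integral of t^2 over [-1, 1]; the odd t^3 part contributes 0
```

The free choice of the nodes and weights is what buys the extra degrees of exactness; the endpoint rule, with its nodes pinned down, has nothing left to optimize.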

Already too long. [But what to do? This topic is one of so few at SO that I can at all comment on!]… But, no, today, you would no longer be seen as a contrarian. … Anyway, bye for now.

Best,

–Ajit

[E&OE]

Comment #166 January 13th, 2017 at 4:44 am

Scott #164:

Just a quick recap to keep my sanity:

You write a breathtaking 116-page survey on one of the most intriguing contemporary math/compsci problems, striking a marvelous balance between deep formal results, transparent explanations, and witty commentary, and go on to share it for free with the world. And then, upon the mention of a random minute quibble, you reply with: “Sorry for the imprecision. Will fix it as soon as possible!”

Hmmm…!? I guess, I am now to say: “I accept your apology. Don’t worry, you can take your time with the update!”. Right? 😛

No seriously, thanks again for the brilliant effort!

Comment #167 January 13th, 2017 at 7:06 pm

I think that P=NP is true inside black holes, and false everywhere else. Is it possible that some laws of Nature are neither pure mathematical theorems nor pure physical laws, but a mixture of both? Indeed, we have the Church-Turing-Deutsch thesis, which states that physics and computer science are one and the same thing. And then we have the Curry-Howard isomorphism, which states that computer science and mathematics are the same thing too. By transitivity, there shouldn’t be much difference between physics and math; it’s just a matter of scale… One can imagine that gravity is, on the quantum scale, a manifestation of computational complexity, and then try to expand General Relativity to computer science! We’re used to having a clear-cut distinction between the theorems of math and the laws of physics. But quite possibly, complexity theory requires physico-mathematical laws if we’re to explain, for instance, why it seems about as hard to build a quantum computer as it is to find a classical algorithm that factors integers in polynomial time. What do you think of this suggestion?

Comment #168 January 14th, 2017 at 5:27 pm

Serge #167

I think you’re grossly overstating your case. Those theses don’t say that there isn’t much difference between the three fields.

Do you have any reason to believe P=NP holds in black holes and fails outside of them? I would love to see it.

I don’t see any reason to think that factoring numbers and building a quantum computer are equally difficult. Again, evidence please.

Comment #169 January 14th, 2017 at 5:39 pm

Scott, I recently stumbled upon your old blog post about the Sensitivity Conjecture. I thought you might be interested to know that, in the special case of k-uniform hypergraph properties, it was recently proven: https://arxiv.org/pdf/1608.06724v1.pdf . This is a topic of personal interest to me as well (I’m an author of [1]). IMO, this result isn’t likely to be optimal, because the lower bound on s(f) given is the number of vertices of the k-uniform hypergraph, so the exponent of the polynomial grows with k.

Interestingly, the paper is mostly about low-sensitivity example functions, and they find examples of k-uniform hypergraph properties that have lower sensitivity than my coauthors and Laszlo Babai (who set us to work on the sensitivity of k-uniform hypergraph properties) had thought possible. Their function neither meets the lower bound for sensitivity that they give, nor does it break an exponent of 2 for the gap between sensitivity and block sensitivity, so I still consider it quite plausible that the true exponent is 2. There’s definitely a lot of room for future work, but it’s an exciting development!
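For readers curious what a sensitivity/block-sensitivity gap looks like in the small, here is a brute-force toy illustration of mine (not from the paper above): for the 4-bit function f(x) = (x0 AND x1) OR (x2 AND x3), at the all-zeros input no single bit flip changes f, yet two disjoint blocks of bits do.

```python
from itertools import combinations

def sensitivity_at(f, x):
    # number of single bit positions whose flip changes f(x)
    n = len(x)
    return sum(f(x) != f(x[:i] + (1 - x[i],) + x[i+1:]) for i in range(n))

def block_sensitivity_at(f, x):
    # max number of pairwise disjoint blocks B with f(x with B flipped) != f(x),
    # found by exhaustive search -- fine for tiny n
    n = len(x)
    flip = lambda y, B: tuple(1 - b if i in B else b for i, b in enumerate(y))
    sensitive = [frozenset(B) for r in range(1, n + 1)
                 for B in combinations(range(n), r) if f(flip(x, B)) != f(x)]
    best = 0
    def search(i, used, count):
        nonlocal best
        best = max(best, count)
        for j in range(i, len(sensitive)):
            if not (sensitive[j] & used):
                search(j + 1, used | sensitive[j], count + 1)
    search(0, frozenset(), 0)
    return best

f = lambda x: int((x[0] and x[1]) or (x[2] and x[3]))   # (x0 AND x1) OR (x2 AND x3)
zero = (0, 0, 0, 0)
```

Here sensitivity at the all-zeros input is 0 while block sensitivity is 2 (the blocks {0,1} and {2,3}); scaled-up versions of this OR-of-ANDs construction are exactly how quadratic gaps between the two measures are exhibited.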

Comment #170 January 14th, 2017 at 9:49 pm

@Raoul #164

That’s actually really interesting as an idea, and I am quite fond of it. It ties into some recent conversations I’ve had about the philosophy of mathematics. Why is mathematics remarkably applicable to science? Because humans are bad at doing math that isn’t. There are tons of theorems in number theory that I’m sure are fascinating that start “let n > 2^2^2^2^2^2^2…”, but I don’t know any of them, because theorems like that are very hard to prove.

Comment #171 January 14th, 2017 at 9:52 pm

@Raoul #164

That’s actually really interesting as an idea, and I am quite fond of it. It ties into some recent conversations I’ve had about the philosophy of mathematics. Why is mathematics remarkably applicable to science? Because humans are bad at doing math that isn’t.

A great example is topology. High dimensional manifolds are extremely complicated and we don’t really understand them much. Or polynomial algebra: there is a theory analogous to linear algebra for higher order maps, but no one really knows much about it.

Comment #172 January 15th, 2017 at 8:15 am

Scott #168

The macroscopic laws of physics are the result of quantum phenomena which can be interpreted as quantum processes. The wave-function itself is a sort of program which the particle must obey, and the wave is that process, the execution of that program. Whenever you observe a particle, there’s an interaction between the process of the particle and the observing process on your own mind.

Everything is a program – even roads are. Whenever you’re driving your car, you find it computationally easier to follow the road than to run into the basement. And maybe also, every single body is submitted to complexity theory when moving along the curvature of space-time. I believe that the difference between classical physics and computer science is only in the huge gap between the scale of the phenomenon and that of its observer. On the other hand, quantum physics is at the same scale as the observer: process to process – or, if you prefer, wave-function to mind… For me, the oddities of quantum mechanics are just the consequence of this special circumstance: a congruence of scales between the phenomenon and its observer!

The similarity between computer science and mathematics is even easier to show. A mathematical reasoning is a computer program that you run on your mind. A computer process is a mathematical reasoning that you run on an external, artificial processor. So the difference between CS and math isn’t in the scale but rather in the point of view. And whenever you come to understand something for the first time, in fact you’ve managed to run on your mind a process that had been previously running only in Nature or on someone else’s mind. That’s why I believe the probability of understanding something, and of mathematical discovery, is governed by the very laws of quantum mechanics!

I don’t have evidence for it, but I find it appealing and natural to suppose that the technical difficulties encountered in building quantum computers must belong to the same complexity class – in a suitable sense – as the most difficult problems that are known to be in BQP but not yet known to be in P.

You’ve written somewhere that it would be possible in theory to take advantage of the twin paradox to solve NP-complete problems in polynomial time, but only in theory, because in order to create the necessary acceleration you’d have to make use of so much energy that your spaceship would collapse into a black hole! So I came up with the idea that maybe the inside of black holes is indeed where, and only where, P=NP holds, and probably even P=PSPACE. In any case, such a possibility would provide us with an elegant explanation for the undecidability of whether those classes are different, without resorting to the axioms of ZFC but to physics only.

Comment #173 January 16th, 2017 at 12:47 pm

@Stella #171 Re: Topology

Actually topology is a particularly tricky example 🙂

A lot is known for manifolds of dimension n >= 5 that is hard for n = 3, 4. The best-known example is probably the Poincaré conjecture: the version for n >= 5 was proved in 1961, and n = 4 was settled in 1982. n = 3 is famously harder.

This is by no means an accident – lots of things become easier for n>=5.

Comment #174 January 16th, 2017 at 7:35 pm

Well, the summary of the correction is now up, so now I can truly ask if anyone’s determined what the power on the log is. 🙂

Comment #175 January 17th, 2017 at 4:35 pm

Re section 1.2.7 The Constructivity Objection, I think this may be relevant to note…

If I remember correctly, if P = NP then we already have an example of an algorithm that solves 3SAT in polynomial time. I’m having trouble Googling for the source.

The idea is, if P = NP then some algorithm A can compute satisfying assignments to 3SAT in polynomial time. So our algorithm A’ can just spiral through an enumeration of (algorithm i, timestep j) and attempt to verify each time some algorithm terminates with a solution candidate. It will get to (algorithm A, timestep poly()) in polynomial time and terminate.

Granted, anything we’d call a “constructive proof of P vs. NP” has a good chance of offering up a more useful construction than this trivial one. Still, it seems that knowing a construction ahead of time is another interesting way that P vs. NP is unique among conjectures.
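For concreteness, here is a rough Python sketch of that dovetailing idea (Levin-style universal search). The “enumeration of all algorithms” is faked with an explicit list of candidate generators; the `junk` and `brute` programs and the 2-clause instance are assumptions of mine, for illustration only.

```python
from itertools import count, product

def universal_search(verify, candidates):
    # Dovetail over (candidate i, step budget j): in round j, give each of the
    # first j candidates j more steps; return the first output that verifies.
    gens = []
    for budget in count(1):
        if budget <= len(candidates):
            gens.append(candidates[budget - 1]())   # "discover" the next program
        for g in gens:
            for _ in range(budget):
                try:
                    out = next(g)
                except StopIteration:
                    break
                if out is not None and verify(out):
                    return out

clauses = [(1, 2), (-1, 3)]          # toy instance: (x1 OR x2) AND (NOT x1 OR x3)
def verify(a):                       # a: dict var -> bool; checking is the easy direction
    return all(any(a[abs(l)] == (l > 0) for l in c) for c in clauses)

def junk():                          # stands in for a program that never helps
    while True:
        yield None

def brute():                         # stands in for the hypothetical good algorithm A
    for bits in product([False, True], repeat=3):
        yield {i + 1: bits[i] for i in range(3)}

sol = universal_search(verify, [junk, brute])   # finds {1: False, 2: True, 3: False}
```

The key point survives the toy setup: if some candidate finds a verified solution in time T, the dovetailer finds one in time polynomial in T (times a constant depending on where that candidate sits in the enumeration), without ever knowing which candidate is the good one.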

Comment #176 January 17th, 2017 at 4:54 pm

Oh never mind, it’s right there in your next footnote (Levin’s Universal Search).

Comment #177 January 18th, 2017 at 8:17 am

Scott, out of curiosity, have you read Jeff Atwood’s blog post “They Have To Be Monsters” at “https://blog.codinghorror.com/they-have-to-be-monsters/”? If you haven’t, then don’t read it now, postpone it to when your bubble is thoroughly broken.

Comment #178 January 18th, 2017 at 9:43 am

jonas #177: No, I hadn’t read it; I just read it now. I confess that I don’t understand what you mean by “my bubble.” Whatever my other faults, refusing to read things that I might disagree with has never been one of them!

In any case, though, I found very little to disagree with in that post. Harassing the Sandy Hook parents, with claims that their children weren’t actually murdered, strikes me as probably the most despicable form of trolling that I’ve ever heard of. But why would anyone even imagine that I’d think otherwise?? Partly inspired by my own experience, I’ve spoken out vociferously the past couple of years against online harassment and shaming campaigns, regardless of whether they’re directed against women, gays, Muslims, nerdy males, or any other undeserving target. And I’m happy to reiterate that stance today.

Comment #179 January 18th, 2017 at 3:25 pm

Hi Scott,

looking at the section about permanent vs determinant, is there any parallel between the signed term in the determinant formula and the sort of interference (cancellation) we observe in QM algorithm speedup?

Comment #180 January 18th, 2017 at 4:50 pm

Does it have any meaning to define something like the “cyberspace temperature”?

E.g., as the mean temperature of the equipment that is connected and constitutes the “cyberspace”, or as the temperature associated with some form of the information-content entropy of the “cyberspace” at each instant of time?

Comment #181 January 18th, 2017 at 5:12 pm

In regard to Levin’s Universal Search, as referenced in footnote 4 of Scott’s \(\textsf{P} \overset{?}{=} \textsf{NP}\) survey (see also Liron’s comment circa #176), Marcus Hutter substantially strengthened Levin’s algorithm in a very readable (and much-cited) article, “The Fastest and Shortest Algorithm for All Well-Defined Problems” (2002).

A notable innovation of Hutter’s analysis, an innovation essential to his proof, is the following Hartmanis-style restriction

(emphasis as in the original)

A comparably rigorous separation (or not) of the restricted complexity classes \(\textsf{P}’ \overset{?}{=} \textsf{NP}’\), where the “\({}’\)” denotes Hutter-Hartmanis restriction, would generate vigorous discussion of whether the Clay Institute’s Millennium Prize for resolving the “P vs NP Problem” could credibly be awarded for such a weakened separation.

In summary, analyses like Hutter’s provide concrete reason to contemplate not only strengthenings of \(\textsf{P} \overset{?}{=} \textsf{NP}\) (in the sense of Chapter 5 of Scott’s analysis), but also weakenings to \(\textsf{P}’ \overset{?}{=} \textsf{NP}’\) (in the sense of Hutter-Hartmanis restrictions imposed upon the complexity classes \(\textsf{P}\) and \(\textsf{NP}\)).

—-

Also, greetings to Shtetl Optimized readers from QIP 2017 in Seattle! (Don’t hesitate to say hello if you run into me.)

In particular, Scott’s student Shalev Ben-David just finished a well-received talk, “Sculpting quantum speedups”, regarding which perhaps Scott and/or Shalev might consider posting a Shtetl Optimized followup (which many would read with interest, including me).

Comment #182 January 18th, 2017 at 5:27 pm

fred #180:

is there any parallel between the signed term in the determinant formula and the sort of interference (cancellation) we observe in QM algorithm speedup?

I don’t think the explanation for why the determinant is easier than the permanent has anything to do with QM, but there are certainly deep connections here.

In particular, in QM, the determinant arises as the transition amplitude for identical non-interacting fermions, and the permanent as the amplitude for identical non-interacting bosons. (Among many other things, that observation was the starting point for BosonSampling.) And the fact that the determinant has that minus sign in its definition, and the permanent doesn’t, could be seen as the explanation for why fermions satisfy the Pauli exclusion principle while bosons don’t. In particular, in any process that would cause two or more fermions to pile up in the same state at the same time, that minus sign causes all the contributions to the process’s amplitude to interfere destructively and cancel each other out, so that the process never happens.
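A toy calculation (my own, just to make the cancellation vivid) illustrates the Pauli point: in a 2-particle transition matrix where both particles aim at the same final state, the two rows coincide, so the signed sum (determinant) vanishes identically while the unsigned sum (permanent) does not.

```python
from itertools import permutations
from math import prod

def _sign(p):
    # parity of a permutation via its inversion count
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(M):    # sum over permutations WITH the alternating sign (fermions)
    n = len(M)
    return sum(_sign(p) * prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def perm(M):   # the same sum WITHOUT the sign (bosons) -- #P-hard in general
    n = len(M)
    return sum(prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))

# toy 2x2 amplitude matrix with identical rows: both particles headed for the same state
M = [[0.6, 0.8],
     [0.6, 0.8]]
```

Here det(M) = 0.6·0.8 − 0.8·0.6 = 0 (the process destructively interferes and never happens), while perm(M) = 0.96; the brute-force n! sums above are of course only for illustration, since the determinant admits polynomial-time algorithms and the permanent, so far as we know, does not.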

Comment #183 January 19th, 2017 at 4:05 am

kill yourself kike

Comment #184 January 19th, 2017 at 5:42 am

Scott: re #178, by bubble I meant the bubble you mention in the blog post from 2016-12 http://www.scottaaronson.com/blog/?p=2984 :

> After this […], I plan to encase myself in a bubble, stop reading news, and go back to thinking about quantum lower bounds, as if we still lived in a world where it made sense to do so

The blog articles you have posted since then were about mathematics, so I assumed you’re still in that bubble.

Comment #185 January 19th, 2017 at 8:52 am

“Aaron” #183: Thanks for sharing your thought! For the benefit of my other readers, I’m letting your comment through, since it expresses the same basic sentiment as various other comments stuck in my moderation queue, but has the virtue of being far more succinct.

Comment #186 January 19th, 2017 at 12:41 pm

Scott, I would like to hear your thoughts about Michael Wehar’s dissertation (briefly summarized on Dick Lipton’s blog).

Comment #187 January 19th, 2017 at 12:49 pm

Aula #186: It looks like an interesting piece of work in parameterized complexity and formal language theory, but I haven’t read it carefully enough to have any useful comments.

Comment #188 January 19th, 2017 at 1:00 pm

jonas #184: OK, thanks for clarifying. It turns out that isolating myself from all Trump-related news isn’t possible. The best I can do is to tell myself that:

(1) humanity (or at least, a large enough portion of it) has decided it’s had enough with reason, progress, and enlightenment, and that the Dark Ages were underrated—all they were missing was the destructive technologies we have now,

(2) but at least none of this is my fault, and

(3) while we’re on the way down, I can at least prove some cool theorems, and, if I’m lucky, raise children who might contribute in some tiny way to maintaining pockets of civilization (with the chance of their doing so needing to be weighed against the moral questionableness of bringing new people into such a precarious world).

Comment #189 January 20th, 2017 at 12:19 am

Scott:

1. WRT #188, I hope that the Trump era is more like China’s “Cultural Revolution” than the Dark Ages. I think there is a good chance he will be impeached in a couple of years. Why? He will not be able to fulfill any of his promises other than to f*ck things up and help the ultra-rich get richer. A lot of voters will feel swindled.

Also, while he might be intentionally selling out to the Russians, he is much more likely to be an overconfident buffoon who thinks he can out-deal the Russians, while actually Putin is driving him like a car. People will figure this out. Most Republicans, and conservatives in general, hate Communists and post-communist fascists. Recall that after Nixon got reelected in a landslide, within about a year those who voted for him figured out that he lies like Pete Rose, and they were pretty p!ssed off, and his support vanished. Of course, by today’s standards, Nixon is much better than many Republicans.

2. It is essential that the Democratic Party get some leaders with some common sense. Supporting fringe progressive, but controversial, topics has to go. For background, an article in the New Yorker from about last September discusses this, particularly how Hillary lost OH, PA, WV, and KY for the party by stating “We are going to put a lot of miners out of work.” The reality is that coal cannot compete with natural gas, so coal mining is dying OF ITS OWN ACCORD, so why in the world would you tell the miners “this is our fault”? Earth to Hillary! There are 100 other examples.

END POLITICS. START NP STUFF.

3. Since Ajit R. Jadhav told “the rest of the story” that my intuition about quadrature is now conventional thinking, I am emboldened to post my intuition about NP in general, which goes back to my reaction to a talk on the topic I attended in the 1970’s. It can be summarized succinctly as follows:

There are a huge number of known NP-complete problems. These problems are all over the place. If the tool you use cannot distinguish between the wild zoo of NP-complete problems, that tool is pretty blunt.

Comment #190 January 20th, 2017 at 4:30 am

“I offer a personal perspective on what it’s about, why it’s important, why it’s reasonable to conjecture that P≠NP is both true and provable, why proving it is so hard, the landscape of related problems, and crucially, what progress has been made in the last half-century toward solving those problems.” My guess is that P≠NP is true but unprovable in Zermelo-Fraenkel set theory. I think that the P≠NP problem might be a variant of the difficulty in refuting a clever charlatan. Suppose that a clever charlatan claims to have invented a perpetual motion machine. Can we be sure that the charlatan’s claim is bogus? To find a hidden battery or some other trick might be very difficult. If the machine has more and more components, it might be more and more difficult to refute the charlatan. Similarly, if a clever charlatan claims to have a proof that P=NP, it might be more and more difficult to refute the alleged “proof” as the “proof” is longer and longer. In other words, my guess is that to prove P≠NP might literally require an infinitely long proof.

Comment #191 January 20th, 2017 at 9:08 am

Hi Scott,

you wrote

“Of course, one can also wonder whether the physical world might provide computational resources even beyond quantum computing”.

On that note, is there some formal theory about the computational power of “analog computers”, where the inputs and states wouldn’t be represented and manipulated as bits but directly mapped to physical values, like position?

The difficulty being that error handling has to be intrinsic to the theory (a bit like in QC).

Or is it mostly a matter of saying that all the values are encoded in “unary notation” (e.g. making subset sum look polynomial)?
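On the unary-notation point: this can be made precise with the standard pseudo-polynomial dynamic program for subset sum, whose running time is polynomial in the magnitude of the target (i.e., in its unary length) but exponential in the number of bits of the target. A quick sketch:

```python
def subset_sum(nums, target):
    # reachable[s] is True iff some subset of the numbers seen so far sums to s;
    # time O(len(nums) * target): polynomial if target is written in unary,
    # exponential in the binary length of target
    reachable = [False] * (target + 1)
    reachable[0] = True
    for x in nums:
        for s in range(target, x - 1, -1):   # descending, so each number is used at most once
            if reachable[s - x]:
                reachable[s] = True
    return reachable[target]
```

So an "analog" device whose resource usage scales with the numerical values involved, rather than with their digit counts, is in the same boat as this DP: it only looks polynomial because the input has implicitly been blown up to unary size.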

Comment #192 January 20th, 2017 at 2:32 pm

David Brown #190: It seems to me your reasoning would apply equally well to any unsolved problem.