## Why Philosophers Should Care About Computational Complexity

**Update (August 11, 2011):** Thanks to everyone who offered useful feedback! I uploaded a slightly-revised version, adding a “note of humility” to the introduction, correcting the footnote about Cramer’s Conjecture, incorporating Gil Kalai’s point that an efficient program to pass the Turing Test could *exist* but be computationally intractable to *find*, adding some more references, and starting the statement of Valiant’s sample-size theorem with the word “Consider…” instead of “Fix…”

I just posted a 53-page essay of that name to ECCC; it’s what I was writing pretty much nonstop for the last two months. The essay will appear in a volume entitled “Computability: Gödel, Turing, Church, and beyond,” which MIT Press will be publishing next year (to coincide with Alan T.’s hundredth birthday).

Note that, to explain why philosophers should care about computational complexity, I also had to touch on the related questions of why *anyone* should care about computational complexity, and why computational complexity theorists should care about philosophy. Anyway, here’s the abstract:

One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay, I offer a detailed case that one would be wrong. In particular, I argue that computational complexity theory—the field that studies the resources (such as time, space, and randomness) needed to solve computational problems—leads to new perspectives on the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume’s problem of induction and Goodman’s grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest. I end by discussing aspects of complexity theory itself that could benefit from philosophical analysis.

Weighing in with 70 footnotes and 126 references, the essay is basically a huge, sprawling mess; I hope that at least some of you will enjoy getting lost in it. I’d like to thank my editor, Oron Shagrir, for kicking me for more than a year until I finally wrote this thing.

Comment #1 August 8th, 2011 at 4:01 pm

This will be a fun read. Your writing has been making me think along these lines for a while.

Comment #2 August 8th, 2011 at 4:02 pm

Really interesting essay!

It is 1:30 in the morning and I can’t stop reading. I like its organization.

Comment #3 August 8th, 2011 at 5:19 pm

Interesting stuff!

The CTC paradox feels to me like it should have a solution along the same lines as your argument that black hole computers can’t exist. If there were a CTC out there somewhere, is there reason to think that we can push objects into it in time polynomial in the size of the object?

A really speculative way of putting this is that there should be a “force” pushing objects away from a CTC “caused” by the fact that it takes exponential computing resources for the object to find a fixed point. (I don’t know physics, but if I did I want to test this idea by trying to figure out whether the information theoretic properties of black holes “explain” their strong gravitational pull.)

Comment #4 August 8th, 2011 at 5:39 pm

Noah: That’s a strange and interesting speculation!

Unfortunately, I don’t know of any physical reason to believe that if CTCs existed, there would exist a “force” “pushing” objects away from them.

The main *physical* argument against CTCs that I know about is that stable wormholes would apparently require matter with negative energy density, which has never been observed experimentally (note that this would be completely different from antimatter).

(Of course, besides “physical” arguments, there’s also the obvious “logical” argument—i.e., that you need to deal with grandfather paradoxes—as well as the “computational” sharpening of the logical argument that I discuss in the essay.)

Comment #5 August 8th, 2011 at 6:16 pm

Great work Scott.

I look forward to studying it sometime soon. On this particular occasion, I am grateful.

Comment #6 August 8th, 2011 at 8:27 pm

Hi,

Regarding time travel computation being impossible: you address the case where you get the whole computation ‘for free’ (by giving yourself the answer in the past and remembering to then go back to the past and give yourself the answer), but what about the case where you use travel through time as a way to replace time with space? In this scenario you have a Turing machine traveling back and forth through time, rewriting the tape so that it says different things at the same moment (in parallel universes, perhaps). Essentially this will allow P to have the power of PSPACE by letting time be rewritten, and therefore let P=NP. This is nice because it doesn’t allow infinite free computation, but merely an increase in power.
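As a concrete aside on how grandfather paradoxes get resolved in the fixed-point account of CTC computation that the essay discusses: classically, every stochastic evolution has at least one fixed point, and the toy sketch below (my own illustration, not code from the essay or the Aaronson–Watrous paper) finds one by iteration.

```python
# Toy illustration (mine, not from the essay): in the fixed-point account of
# CTC computation, Nature hands the CTC register a distribution that is left
# unchanged by the machine's evolution.  Classically such a fixed point always
# exists; here we find one by iterating from the uniform distribution.

def fixed_point(P, iters=1000):
    """Approximate a stationary distribution of the column-stochastic matrix P."""
    n = len(P)
    v = [1.0 / n] * n  # start uniform (already the fixed point for symmetric toys)
    for _ in range(iters):
        v = [sum(P[i][j] * v[j] for j in range(n)) for i in range(n)]
    return v

# A two-state "grandfather paradox": the machine maps state 0 -> 1 and 1 -> 0.
# The unique fixed point is (1/2, 1/2): the paradox is resolved probabilistically.
paradox = [[0.0, 1.0],
           [1.0, 0.0]]
print(fixed_point(paradox))  # -> [0.5, 0.5]
```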

Comment #7 August 8th, 2011 at 8:37 pm

Is it a good idea to encourage philosophers to start looking at your field? 😉

Comment #8 August 8th, 2011 at 8:52 pm

I have discussed some related topics, such as “Can We Accelerate Computation by General Relativity?”, on my WordPress blog.

Obviously, we can understand more about the features of computation from the viewpoint of physics, especially relativity and quantum mechanics, whether or not computation can be accelerated by general relativity.

Comment #9 August 8th, 2011 at 9:34 pm

pat #6: Both of the things you mentioned—“sending the answer back in time”, and “using time just like another spatial dimension”—are things that I explicitly discussed in the article, in order to explain that both are NOT what I meant by CTC computation! (See footnote 64, and the paragraph above it that begins “It is important to realize…”)

If you want to understand what I *did* mean by CTC computation, you’ll have to reread Section 10.3! 🙂 Or check out my paper with Watrous, which goes into more detail.

Comment #10 August 8th, 2011 at 9:51 pm

Vlad Levin #7: Well, *whenever* you build a bridge between disciplines, there’s a risk that someone somewhere is going to drive a truckload of foolishness over it. But as a general rule, I think that the benefits of open borders outweigh the risks. When philosophers start arguing about modern TCS as much as they currently argue about physics or neuroscience, that’s when I’ll feel like we’ve made it as a field.

Comment #11 August 8th, 2011 at 10:08 pm

That’s what I get for blabbing my mouth before actually reading your paper – my apologies!

Comment #12 August 8th, 2011 at 10:19 pm

On page 23: “Theorem 2 (Valiant [117]) Fix a finite hypothesis class H, a Boolean function f : S → {0, 1} in H….”

Is “Fix” in that sentence math jargon or a typo? My intuition is on typo, but I would hate to be wrong and miss something, as it appears to be a fairly subtle definition.
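For readers who want to see the quantitative content behind a theorem of this shape, the textbook sample-size bound for a finite hypothesis class can be evaluated directly. The constants below are the standard ones and may differ from those in the essay’s Theorem 2; this is my own illustration, not a transcription of it.

```python
import math

# Hedged illustration: the textbook PAC bound for a finite hypothesis class H
# (the constants in the essay's Theorem 2 may differ) says that
#     m >= (ln|H| + ln(1/delta)) / eps
# samples suffice for any hypothesis consistent with the data to have error
# at most eps, with probability at least 1 - delta.

def pac_sample_bound(h_size, eps, delta):
    """Sufficient sample size for a consistent learner over a finite class."""
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / eps)

# E.g. a million-odd hypotheses, 1% error, 1% failure probability:
print(pac_sample_bound(2**20, 0.01, 0.01))  # -> 1847
```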

Comment #13 August 8th, 2011 at 10:24 pm

Terence Cole: It just means, “*Let* H be a finite hypothesis class, *let* f be a Boolean function…” I suppose it was jargon that I didn’t even realize was jargon.

Comment #14 August 8th, 2011 at 10:38 pm

Ah, interesting — that is the meaning I gave it by substituting in “For” for “Fix” :-).

Comment #15 August 9th, 2011 at 12:43 am

Footnote 26 – about different verbs meaning “to know” in Hebrew – doesn’t address the probable real reason for that distinction: since Biblical times, yodeyah, when used to a person, has generally meant to have sex with.

I think the section on PAC is probably going to be too technical for most philosophers. This may be projection/biases on my part since that’s also the area with which I’m least familiar. In general, you seem to be aiming at a very mathematically knowledgeable set of philosophers, and while they do exist, they seem to be a very small subset of that population.

Overall, this was very well-written and probably provides some decent food for thought even for the people more directly on the math/comp-sci end of things.

Comment #16 August 9th, 2011 at 12:53 am

I can’t sort n integers faster than n log(n); therefore I am.

Comment #17 August 9th, 2011 at 3:01 am

It’s funny, going into work today I was really craving a bit of philosophy (ideally of the STEM variety) to help alleviate some minor existential anxiety. I visit your blog and BOOM! A whole, heaping helping of exactly what the doctor ordered. So thank you!

Reading about your thoughts on the implications of complexity theory for artificial intelligence, I started wondering what your take would be on a couple of other philosophies of mind.

In particular, I’m very curious to know what you think about the idea of embedded cognition, which as far as I understand it, says we are who we are precisely because of what we are (i.e., agents inextricably bound up with the natural world, to the point that our cognitive processes are as dependent upon the nature of our form and interactions as they are on our particular brains).

I’m also wondering what you would think about the dynamical-systems argument, or the “centrifugal governor” analogy for cognition, which in its most well-articulated form posits a model of cognition that is truly non-computational.

How would complexity theory speak to these (now that I’ve written them down back-to-back somewhat similar) models of cognition and their implications for AI?

Comment #18 August 9th, 2011 at 5:46 am

For the record, philosophers do sometimes think about these issues already.

http://philpapers.org/browse/philosophy-of-computing-and-information

Comment #19 August 9th, 2011 at 7:50 am

Thanks Seamus, that link is awesome. Should keep me going for months.

Comment #20 August 9th, 2011 at 8:58 am

Joshua Zelinsky #15:

> about different verbs meaning to know in Hebrew doesn’t address the probable real reason for that distinction- since Biblical times yodeyah when used to a person has generally meant to have sex with.

Thanks, I didn’t *know* that (but it makes sense)! I just found it amusing how a language can take an implicit position on the philosophical issue of “what is knowledge?” by how it carves up words. The idea that “knowing” a person is the same sort of thing as “knowing” a fact is so ingrained in us, that it might take a language that distinguishes the two (for whatever reason!) to help us realize how weird it is.

> In general, you seem to be aiming at a very mathematically knowledgeable set of philosophers, and while they do exist, they seem to be a very small subset of that population.

It’s true that I didn’t want to dumb things down—e.g., I wanted to *show* how PAC-learning relates to Occam’s Razor, not just assert that it does.

Also, as pointed out here, I was certainly aiming at analytic philosophers rather than Continental ones. I assumed that the Continental, Heidegger-loving types wouldn’t care about computational complexity no matter how it was explained to them. (Though on the other hand, they might find it more profound the *less* they understood it, in which case I’d win both ways! 🙂 )

Comment #21 August 9th, 2011 at 9:10 am

Ian Finn #17: I confess that I’m not able to make much sense of *either* of the ideas you mention! But they remind me of *another* idea I’ve never been able to make sense of: John Searle’s insistence that the brain just “excretes” consciousness in the same way the liver excretes bile, and that a computer simulation of the brain wouldn’t be conscious because it wouldn’t have the bile.

As I see it, if you want to go this direction, then like it or not, you’ve now left philosophy and entered the falsifiable realms of neuroscience and physics. I.e., the burden is now squarely on you to explain *what it is* about the physics or biology of the brain, or about the world in which it’s embodied, that *makes* it wrong to think of the brain as a complicated classical computer. It’s to Roger Penrose’s enormous credit that he clearly understands that, even if I find his more specific ideas crazy.

Comment #22 August 9th, 2011 at 9:16 am

Seamus #18: Thanks so much for the extremely helpful link! I’ll take a look at the articles there, and if I find ones that are relevant, there’s still time to add discussion of them to my essay.

Comment #23 August 9th, 2011 at 9:19 am

Sorry! I guess I’m pretty terrible at explaining myself. Here are some links that hopefully make a little clearer what I was getting at:

Embodied cognition:

http://en.wikipedia.org/wiki/Embodied_cognition

Cognition without computation:

http://people.bu.edu/pbokulic/class/vanGelder-reading.pdf

Comment #24 August 9th, 2011 at 9:26 am

I read the first 5 pages of your manifesto with something approaching ecstasy; then I saw this sentence: “[e]ssentially all complexity theorists proceed (reasonably, in my opinion) on the assumption that P≠NP, even if they publicly claim open mindedness on the question.” As you must know, Dick Lipton believes P = NP. He is also one of the biggest names in computational complexity. Hence, your statement strikes me as misleading and possibly academically dishonest. I want desperately to keep reading your manifesto but until this sentence is amended, I simply can’t. Please don’t take offense. I’m a big fan of yours. But for the sake of Dick’s reputation I have to take a stand on this issue.

Comment #25 August 9th, 2011 at 9:44 am

Arnie: With all due respect, Dick Lipton is a *perfect* example of what I was talking about! He’s a brilliant and wonderful person, but he doesn’t have a *scientific* reason to believe P=NP, and he doesn’t base any actual research on that belief (no one does!). Rather, like the quasi-religious people who “believe in belief,” he thinks it’s a good idea to *say* he believes P=NP—that it makes him more open-minded and humble, or something of that kind.

Comment #26 August 9th, 2011 at 10:17 am

Ian #23: I wasn’t suggesting that the lack of clarity was *your* fault! 😀

Comment #27 August 9th, 2011 at 10:38 am

I read this in about 45 minutes. I didn’t find any new idea and it appears that you are behind the times on some other aspects.

However, it was easy and not extremely boring to read.

Comment #28 August 9th, 2011 at 10:39 am

What is a ‘yellow book’ (p. 5)?

Comment #29 August 9th, 2011 at 11:00 am

Donny, high level books in math often have yellow covers.

Comment #30 August 9th, 2011 at 11:05 am

Donny #27: I’m assuming these are what Scott is referring to — http://en.wikipedia.org/wiki/Graduate_Texts_in_Mathematics

Comment #31 August 9th, 2011 at 11:26 am

Marc Owens: It’s a *survey*, not an original research paper. If you already know everything there, no reason to read it!

Still, I’m curious: *do* you know of another survey that covers the same ground? If not, then what are you on about? Also, what specifically did you find “behind the times”?

Comment #32 August 9th, 2011 at 11:44 am

This was a terrific essay, and (for me) among the most enjoyable sentences was the final one:

The above exchange between Arnie Kott and Scott suggests that Scott’s (very enjoyable) section “3.1 The Entscheidungsproblem Revisited” might have as its future-essay converse a section titled “[n].[m] The Excluded Middle Revisited.” Here one lesson of philosophy is to heed the past, present, and future cases in which the Fallacy of the Excluded Middle has misled mathematicians and complexity theorists.

Past exclusion: “Mathematical propositions are either true or false.”
Present exclusion: “Formal languages are either in P or not.”
Future exclusion: “Sampling algorithms are either in P or not.”

A thought-provoking lesson from philosophy is that when the words “provably,” “demonstrably,” and “verifiably” are included in the above propositions, the Law of Excluded Middle fails … or at the very least, further clarifications are required … and such clarifications (historically) have been beneficial to philosophers and mathematicians alike.

In particular, while Scott may be correct in asserting “[Dick Lipton] doesn’t base any actual research on that belief [that P=NP] (no one does!),” it is helpful for students to be aware that there *does* exist a rather large body of complexity-theoretic research (and serious researchers like Juris Hartmanis) that rigorously and productively analyzes the “excluded middle” notion that P versus NP is formally undecidable.

In summary, a very welcome section in the (future) converse essay would be one that broadly discussed the role of the excluded middle in complexity-theoretic proofs, separations, and verifications … all of which will foreseeably play a central role, not only in 21st century complexity theory, but throughout the STEM enterprise.

Comment #33 August 9th, 2011 at 12:32 pm

Magnificent achievement, Scott. I would caution you, however, not to venture further into the realm of philosophy. Doing so might expose you to unhealthy amounts of atheism, materialism, and nihilism–each of which could easily derail your ability to make progress in your field. Instead, I recommend you redirect your mental energies towards solving purely mathematical and computer science problems. That’s assuredly a safer and more productive route.

Comment #34 August 9th, 2011 at 12:40 pm

Manfred: And, uh, within math and CS I’ll be *safe* from atheism and materialism? 😀

Comment #35 August 9th, 2011 at 1:08 pm

Very enjoyable read! You pointed out issues in several philosophical arguments that have always bothered me, but in a way I could never articulate. I’m particularly happy that I now know a sensible way to respond to variants of the Chinese room.

Comment #36 August 9th, 2011 at 1:25 pm

Don’t wake the sleeping tiger! Let philosophers prate about Gödel, QM and its interpretations, and AI, but please, save computational complexity for theorists only. Otherwise in a few months hordes of people will be discussing P vs NP and determinism vs nondeterminism, citing Leibniz and Hume. That’s not what you want to happen.

Anyway, I enjoyed reading your essay very much (even if, well, 70 footnotes are definitely too many 🙂 ). Nothing new for your assiduous readers but a nice reading anyway.

Comment #37 August 9th, 2011 at 1:44 pm

> He’s a brilliant and wonderful person, but he doesn’t have a scientific reason to believe P=NP, and he doesn’t base any actual research on that belief (no one does!).

I believe on balance that P=NP, and I don’t think it’s fair to characterise everyone that feels that way as just being open-minded for the sake of it; the evidence I would put forward is the existence of a polynomial-time algorithm for calculating the determinant (and the FKT algorithm for perfect matchings on planar graphs) – being able to calculate the product of an exponential number of terms like that just seems too powerful to fit in a P!=NP world.

I don’t *base* my research on the belief exactly, but I am actively exploring the area of holographic algorithms/Pfaffian circuits to find an appropriate representation (even if it feels somewhat like looking for a needle in a haystack).
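To make the determinant/permanent contrast James is leaning on concrete, here is a sketch of my own (not his code): `det` uses exact Gaussian elimination, which collapses the n!-term signed sum to polynomial time, while `perm` uses Ryser’s formula, the best known general approach, which still does exponential work.

```python
from fractions import Fraction
from itertools import combinations

# Sketch (mine, not James's): the determinant, a signed sum over n!
# permutations, collapses to polynomial time via Gaussian elimination,
# while the best known general method for the permanent, Ryser's formula,
# still needs ~2^n work.

def det(M):
    """Determinant by exact Gaussian elimination over the rationals: O(n^3) field ops."""
    A = [[Fraction(x) for x in row] for row in M]
    n, sign = len(A), 1
    for k in range(n):
        piv = next((i for i in range(k, n) if A[i][k] != 0), None)  # find a pivot
        if piv is None:
            return Fraction(0)
        if piv != k:
            A[k], A[piv] = A[piv], A[k]
            sign = -sign
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
    result = Fraction(sign)
    for k in range(n):
        result *= A[k][k]
    return result

def perm(M):
    """Permanent by Ryser's inclusion-exclusion formula: exponential in n."""
    n = len(M)
    total = 0
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            prod = 1
            for i in range(n):
                prod *= sum(M[i][j] for j in S)
            total += (-1) ** (n - r) * prod
    return total

M = [[1, 2], [3, 4]]
print(det(M), perm(M))  # -> -2 10
```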

Comment #38 August 9th, 2011 at 1:46 pm

Thanks for the nice paper, Scott. Probably I’m being dense, but why doesn’t Cobham’s composition axiom (#3) immediately imply that recursions of unlimited depth are efficiently computable?

Comment #39 August 9th, 2011 at 2:00 pm

> calculate the product of an exponential number of terms

(I meant sum, obviously!)

Comment #40 August 9th, 2011 at 2:23 pm

Thanks for the paper. This is why I love computer science authors. This further validates my belief that you lot are the most empathetic (among all academics) to lay readers. I’m not a computer scientist, mathematician, or philosopher; I’m an electrical engineer. Surely, electrical engineering authors have a lot to learn on this front. 🙂 I enjoyed every word of the article. Hats off!

Comment #41 August 9th, 2011 at 2:40 pm

Seamus #18, just a quick update:

I’ve now gone through all 34 (!) of the papers listed under “computational complexity” in the PhilPapers archive. Thanks again for pointing me there!

Some first impressions:

The vast majority are (often quite interesting) technical papers in logic, semantics, and model theory that incorporate computational complexity issues. Of course, I cited a number of such papers in my essay, but also (knowing that I couldn’t possibly do justice to this area!) pointed readers to, e.g., the books of Fagin et al. and Immerman for more.

Some papers used the *term* “computational complexity” in talking about various resource constraints, but had no apparent use for the *field*. (Simple litmus tests: no mention of P and NP, no references to any nontrivial results.) Others used the term “computational complexity” in a much more general way than I would, to encompass other fields such as computability theory, Kolmogorov complexity, and information-based complexity (which is basically query complexity for continuous problems).

I did, however, find three papers that I’ll definitely want to cite in the final version:

Computational Complexity and the Universal Acceptance of Logic, by Christopher Cherniak.

Epistemic Virtues, Metavirtues, and Computational Complexity, by Adam Morton.

Jakub Szymanik’s PhD thesis (and his various related research papers)

Thanks again!

Comment #42 August 9th, 2011 at 2:44 pm

Stephen #38:

> Probably I’m being dense, but why doesn’t Cobham’s composition axiom (#3) immediately imply that recursions of unlimited depth are efficiently computable?

Because you only get to apply the axiom a *fixed* number of times, not a number of times that depends on the input size. If you want to define recursions whose depth depends on the input size, you need to use axiom 5.

Comment #43 August 9th, 2011 at 3:00 pm

Congrats on this great paper!

Sections 1–4 and 9 are crystal clear and very fun to read. Sections 5 and 6 are a bit dry for my taste. On page 32, after the mention of Holevo’s theorem, a footnote about superdense coding would be interesting (sorry Luca!).

Imho the two aspects that should most impress a philosopher are the existence of zero-knowledge proofs, and the extra flesh-on-the-bone CT adds regarding randomness. I guess you did not discuss the latter because Impagliazzo’s five worlds already address the issue, but I’m sure you’d do a great job at it. 😉

Comment #44 August 9th, 2011 at 3:07 pm

Scott: I suggest you look at http://www.livestream.com/worldsciencefestival and find the ‘Rebooting the Cosmos’ video.

Some or perhaps all of these ideas exist on paper, but I have not read all of the material of all the authors in that room.

Comment #45 August 9th, 2011 at 3:36 pm

James #37:

> I believe on balance that P=NP, and I don’t think it’s fair to characterise everyone that feels that way as just being open-minded for the sake of it; the evidence I would put forward is the existence of a polynomial-time algorithm for calculating the determinant (and the FKT algorithm for perfect matchings on planar graphs)

If that’s *really* your reason for believing P=NP, then maybe I can have the honor of converting you right now! 🙂

To me, the fact that there are amazing polynomial-time algorithms (for various special problems like matching and determinant) is rightly understood as an *explanation* for why proving P≠NP is so hard. Had there *not* been such algorithms, one could have said: ah, if P≠NP were true, then someone really ought to have proved it by now! Since no one has, maybe that failure gives us evidence for P=NP.

But suppose we’ve internalized just how rich the class P is—how any P≠NP proof would need to explain what it is about the permanent that makes it *different* from the determinant, what it is about matching that makes it *different* from max-clique, etc. In that case, our failure to prove P≠NP so far no longer seems particularly mysterious at all!

On the other hand, *if* you believe P=NP, then I claim you *do* have a huge mystery on your hands. Namely: considering all the amazing polynomial-time algorithms that have been discovered (including the two you mentioned), as well as all the amazing NP-completeness reductions, *why is it that a polytime algorithm and an NP-completeness reduction have never—not once—been found for the same problem?* I.e., why have these two enormous blobs of polynomially-equivalent problems, both of which have “unexpected tendrils” in nearly every corner of our field, stubbornly refused to meet for 40 years, if indeed they’re one and the same blob?

To summarize, I’m in total agreement with you about the conceptual importance of non-obvious polynomial-time algorithms—but to me, the algorithms we have are evidence *against* P=NP rather than for it!

Comment #46 August 9th, 2011 at 3:39 pm

In footnote 14 you assert that Cramer’s conjecture implies the Riemann Hypothesis. Is that true? I don’t think I’ve encountered that claim before.

Comment #47 August 9th, 2011 at 3:43 pm

If you’re trying to reach philosophers you need to do it on their turf. That’s difficult if you’re not academically certified as a philosopher. You might of course find allies in the field (the Churchlands, for instance). But then you’d be carrying their baggage as well as your own.

WRITE A BOOK! For intelligent general reader consumption. Even Einstein wasn’t above that sort of thing. But nothing along the Harel line. Maybe “Improve Your Sex Life Through Computational Complexity, or, How To Make P=NP In Bed If Nowhere Else”. If that’s unfunny it’s because my humor module is still being debugged.

Comment #48 August 9th, 2011 at 4:16 pm

Marc Owens #44: That’s the best you can do—*seriously?* A link to a panel discussion about whether the universe is a computer?

Look, I have nothing against the four panelists (Seth Lloyd’s a colleague, Fotini Markopoulou-Kalamara’s a great friend, I’ve had stimulating discussions with Fredkin, and I loved Schmidhuber’s essays in grad school). But I confess that personally, I’ve always been allergic to the question of whether the universe “is” a computer—simply because it’s hard to imagine any scientific or mathematical discovery ever shedding light on the question! The notion of computation is so broad that you can regard just about *anything* as a computer, if you choose to.

So it seems to me that the first step to wisdom is to ask not whether the universe *is* a computer, but what *kind* of computer it is. What exactly can it compute? What are its limitations? That’s where complexity theory comes in.

Unless I missed it, the panel discussion touched on *computational complexity theory* only once and briefly, in the context of quantum computing. Fredkin, of course, is a “deterministic diehard,” who’s convinced both that the laws of physics are classical and that quantum computers can be efficiently simulated classically (P=BQP)—a logically-consistent (if perverse 😀 ) position that I discuss in Sections 8 and 12 of my essay.

Besides that, I didn’t see much overlap with the issues I was writing about. Do you really not see the difference between *computational complexity theory*—i.e., the specific field involving P and NP—and *any* sort of philosophical discussion involving computers? In the 45 minutes it took you to read my essay, is it conceivable that you missed something? 😀

Comment #49 August 9th, 2011 at 4:25 pm

Scott: I don’t see why we should expect poly-time algorithms to be easy to find for all problems if P=NP is the case – PRIMES was hardly an underresearched area before 2002! I think there’s more “texture” to the complexity of problems than just whether they’re polynomially solvable or not.

The reason that SumPerfMatch interests me so much is the sheer expressiveness that constructing Pfaffian graphs allows (note that they don’t have to be planar, and we don’t even have a characterisation of Pfaffian graphs for the non-bipartite case). To me, stating that P!=NP is stating that a vast range of algebraic properties (which we’re only at the start of exploring, let alone categorising thoroughly) don’t hold, over any field. I find that less likely than us simply not being very good at finding algorithms; arguing otherwise feels like just an appeal to simplicity based on the poly/exp-time dichotomy which (while obviously theoretically important) has I think blinded us somewhat.

In short, with my P=NP hat on (I’m not entirely certain – I’m playing devil’s advocate here a little!) my answer to:

> why is it that a polytime algorithm and an NP-completeness reduction have never—not once—been found for the same problem?

is that some things are easy and some things are hard, but that doesn’t mean some things are computationally easy and computationally hard. Perhaps there’s a cognitive issue with defining problems in P as “easy” – expecting them to be “easy” to find poly-time algorithms for! 🙂

> what it is about the permanent that makes it different from the determinant

From where I’m standing, the reasoning is that there are obvious simple and useful operations that can be performed on a matrix that preserve the determinant.

Comment #50 August 9th, 2011 at 4:27 pm

> and computationally hard

“and others are.”

(Sorry, that’s twice now – I should really proofread more thoroughly!)

Comment #51 August 9th, 2011 at 4:30 pm

Joshua #46:

> In footnote 14 you assert that Cramer’s conjecture implies the Riemann Hypothesis. Is that true? I don’t think I’ve encountered that claim before.

Duhhhh, thanks! On Googling I can’t find a source for that either; I’ll correct it and acknowledge you.

In retrospect, I was misled by the facts that (1) RH is known to imply a weaker form of Cramer’s Conjecture,

p_{n+1} − p_n = O(√(p_n) · log p_n),

and (2) RH is *equivalent* to a statement reminiscent of the weaker form above,

|π(x) − Li(x)| = O(√x · log²x).

But of course, the missing step is that one would need to show that (for example) the weaker form of Cramer’s Conjecture implied the statement about π(x).

I’ll be grateful for any other relevant info.

ADDENDUM: Also, the way I stated Cramer’s Conjecture in that footnote was wrong … will fix after lifting head from desk.

Comment #52 August 9th, 2011 at 4:35 pm
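As an empirical aside on Cramér’s Conjecture (my own toy check, not part of the correction above): the conjecture says the gap p_{n+1} − p_n is O(log² p_n), and a quick computation shows the ratio gap/log² p stays small for all consecutive primes below 10^5.

```python
import math

# Toy empirical check (mine): Cramer's Conjecture asserts
# p_{n+1} - p_n = O(log^2 p_n).  Below we compute the largest observed value
# of gap / log^2(p) over all consecutive primes p < 10^5.

def primes_below(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * n
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n, i)))
    return [i for i in range(n) if sieve[i]]

ps = primes_below(100_000)
ratio = max((q - p) / math.log(p) ** 2 for p, q in zip(ps, ps[1:]))
print(round(ratio, 3))  # the maximum occurs at the very first gap, from 2 to 3
```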

James #49: To me, PRIMES is a perfect example of an exception that proves the rule! Long before we knew it was in P, we already knew it was in P assuming the GRH, *and* we knew it was in DTIME(n^{log log n}), *and* we knew it was in ZPP, *and* we strongly believed that P=ZPP anyway. In short (as I think Juris Hartmanis put it), PRIMES was hanging above P by the tiniest hair. Can you say anything comparable for NP-complete problems?

Comment #53 August 9th, 2011 at 4:55 pm

> Can you say anything comparable for NP-complete problems?

Obviously not (other than “similarity” arguments like the determinant/permanent), but that’s the appeal to simplicity again – and I don’t have the confidence that the implications would actually result in a simpler world in the end, despite how it may seem given our current model.

Comment #54 August 9th, 2011 at 5:11 pm

Scott, your essay left me thinking: should complexity theorists care whether philosophers care about computational complexity?

Comment #55 August 9th, 2011 at 5:15 pm

Anyone care to explain what a computable number is? I always assumed it was the same as efficiently computable or computable in polynomial time.

Comment #56 August 9th, 2011 at 5:23 pm

notaphilosopher: I guess the more basic question is, *are the issues the essay talks about interesting or not?* If they *are* interesting, then *someone* should care about them—if not philosophers, then presumably complexity theorists themselves! But it seems sporting to give philosophers the first crack. 🙂

Comment #57 August 9th, 2011 at 5:45 pm

Clement #55: Well then, “computable” is sort of like “computable in polynomial time,” just without the “polynomial time” part. 😀 It means computable in *any* finite amount of time: exponential, triple-exponential, whatever.

Comment #58 August 9th, 2011 at 5:45 pm

Scott: There are all kinds of implications from the material in that video which you don’t fully comprehend yet.

Given the fact that you reply like an asshole, I will just stop discussing with you, as the only thing you seem to want is to market a dead field like complexity theory to the even more dead field of philosophers.

Comment #59 August 9th, 2011 at 6:09 pm

Marc: Let me get this straight … you write in with insults, and then I’m an asshole for trying to elicit some actual arguments behind those insults (rather than just ignoring or blocking you)?

Everyone else: I know I shouldn’t fall for trollbait, but having done so, I’d really appreciate some support if you’d like to offer any. Every time I encounter someone like this, it makes me wonder why I spend time writing for a non-specialist audience.

Comment #60 August 9th, 2011 at 6:34 pm

> the only thing you seem to want is to market a dead field like complexity theory to the even more dead field of philosophers.

Marc, I’m confused by what you mean here. What do you think is “dead” in complexity theory or philosophy? Philosophy has something of a problem in that whenever a subject becomes rigorous enough, we often stop classifying it as philosophy (math, physics, linguistics, economics, psychology: pretty much all of these split off from philosophy). But I don’t know what it means for a field to be dead. Does it mean it isn’t producing interesting results? That’s an argument that can be made for philosophy, although I’d disagree with it.

It seems much harder to make that argument for complexity theory. You are apparently using a more general notion of “complexity theory” to mean a great many areas, and if we use your more general form, the claim seems trivially false. But even if we stay restricted to computational complexity, the claim remains deeply problematic. For example, we have Ryan Williams’s recent result that NEXP is not in ACC^0, which actually gave almost explicit lower bounds on circuit sizes in NEXP. And minor complexity results appear all the time.

This isn’t an exception either. There’s been a steady stream of progress since the field started in the 1970s, and many of the more interesting results connect to other branches of mathematics. For example, the AKS result that PRIMES is in P is in some sense a number-theoretic statement, but one that only seems interesting when one has the broader context of computational complexity. And there have been deep connections between computational complexity and the foundations of mathematics, relating space and time constraints to what theorems can be proven in various theories weaker than Peano Arithmetic.

I’m forced to conclude that you either don’t know much about the field you are talking about or you are using another, non-obvious meaning of what it means for a field to be dead. Which is it?

Comment #61 August 9th, 2011 at 6:42 pm

(Putting a pretend skeptical philosopher’s hat on)

It is not clear whether philosophically meaningful complexity classes have been developed yet. Throughout the essay you treat P as if it’s _the_ class that stands for all things _efficiently doable_, but in the section titled Criticisms of Complexity Theory you admit that it only makes asymptotic statements.

In other words, the truncated Entscheidungsproblem can be in P but with such astronomical constants that it may not matter. Or there can be significant and pervasive biological processes in nature that are not in P, but also never reach asymptotic behavior.

Thus, I’m not sure whether the current classifications made by complexity theory are as sharp and as meaningful as decidable v. undecidable, or the impossibility results in distributed systems.

In Future Directions, why do you think that the progress humans have made in mathematics is “enormous”?
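The asymptotics caveat above can be made concrete with a toy computation (an editorial sketch; the exponents are invented purely for illustration):

```python
# A running time of n**1000 steps is "in P", yet for every input size below
# the crossover computed here it does MORE work than a 2**n-step
# "exponential" algorithm -- asymptotic class membership alone doesn't
# settle practicality.
def crossover():
    """Smallest n at which 2**n finally overtakes n**1000."""
    n = 2
    while n ** 1000 > 2 ** n:
        n += 1
    return n

print(crossover())  # a five-digit crossover: the "exponential" algorithm wins until then
```

So for this (admittedly cooked-up) pair of running times, the exponential algorithm is the practical one on every input anyone would ever run.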

Comment #62 August 9th, 2011 at 6:45 pm

Scott, in summary: if such trivia became the reason for you to stop such a fantastic science blog, that’d be a great loss.

Someone resorts to profanity; how does it matter to you? Just ignore it.

Comment #63 August 9th, 2011 at 6:50 pm

If Marc wants to understand why and how Scott and computational complexity can be technically and otherwise interesting, I recommend “NP-complete Problems and Physical Reality”. A breath of fresh air. Marc might be forgiven for thinking him not so exciting on the basis of this latest rather orthodox and risk-averse paper.

Scott really should do a book for a general audience. I’d suggest avoiding snarks about people who enjoy white wine and French cheeses, however. Some are lefties.

Comment #64 August 9th, 2011 at 7:00 pm

Thanks so much, Joshua (and others)! Here are three things I’d like to understand better:

1. What explains the animus many people seem to feel against complexity theory—a small field that steps on few obvious religious or ideological toes (unless your religion commands P=NP), and that you’d think would just be ignored by anyone who disliked it? Is there a similar animus against (say) representation theory or operator algebras? (An animus against philosophy or physics I can at least understand, even if I don’t agree with it.)

2. What does it mean when your academic colleagues tell you your essay is too mathematical and will go over outsiders’ heads, while many outsiders themselves tell you the essay is too obvious, and they already knew everything in it? Does it mean you hit the sweet spot? 🙂

3. Is there any point trying to avoid angering people with controversial blog topics, if people will even get angry over P vs. NP and grue?

Comment #65 August 9th, 2011 at 7:27 pm

I keep going back to one article of yours, titled “Shor, I’ll do it”, time and again. Fantastic post, as is each one of your entries.

My favorite from Aaronson’s collected work, however, is a poem: H(p) = -p log p - (1-p) log (1-p) 🙂

On Q1: I do remember seeing similar animus toward, say, category theory (I can’t remember where).

On Q3: doubts, debates, and controversies are common pebbles forming the meadow of the sciences, alongside the likes of logic. So why avoid them?

Comment #66 August 9th, 2011 at 7:33 pm

Koray #61:

> It is not clear whether philosophically meaningful complexity classes have been developed yet.

One could equally well say: it’s not clear whether any philosophically-meaningful scientific theories have been developed yet. General relativity? Quantum mechanics? Evolution? Who cares! All those theories have well-known problems—so when we do something as exalted as Philosophy of Science, why not just ignore them and start from scratch like Aristotle? Of course, many philosophers have taken that viewpoint, and some still do. But it seems to me like a recipe for irrelevance. If you don’t like the best scientific theory we currently have in some area, replace it! Otherwise, if philosophy isn’t going to engage the best current scientific theories, what should it engage?

> Thus, I’m not sure whether the current classifications made by complexity theory are as sharp and as meaningful as decidable v. undecidable, or the impossibility results in distributed systems.

But you can quibble with those things as well! For example, maybe the halting problem is decidable for all input lengths of conceivable human interest—nothing in Turing’s proof says otherwise.

As for impossibility results in distributed systems—that’s a weird example in this context, since unlike (say) the halting problem or NP-complete problems, problems such as Coordinated Attack are extremely solvable in practice! How to reconcile that with the impossibility theorems is a wonderful question in its own right. But whatever the answer, would you really call two generals’ inability to coordinate a battle plan more “sharp and meaningful” than their inability to factor 10,000-digit numbers?? 🙂

> In Future Directions, why do you think that the progress humans have made in mathematics is “enormous”?

I just meant, enormous compared to what you might naïvely expect, if all you knew was that deciding propositional tautologies was NP-hard.

Comment #67 August 9th, 2011 at 8:02 pm

@Scott 59

Your blog is fantastic, and it’s your blog. Don’t feel obligated to publish offensive posts, any more than you would invite rude strangers into your home; and be aware that 99% of those who thrill to your writings will never post any comment.

And avoid French-cheese bashers; they all suffer from dysgeusia.

Comment #68 August 9th, 2011 at 8:12 pm

If there is any enduring effect from this Shtetl Optimized post, it will come not from the occasional angry or abusive post—which IMHO are best ignored and soon forgotten—but from Scott’s sharing of his preprint, and from his generous willingness to provide wonderful comments upon it like (from #45): We can turn to (who else?) Winston Churchill for one answer:

Here I have taken the liberty of writing “decidable” for “true”, such that the image (adapted from Hartmanis) is that every language in P that is recognized by Turing machines whose runtimes are decidably in P is also recognized by an umbra of Turing machines that are in P but undecidably so. And moreover, even that umbra is surrounded by a penumbra of languages that are in P, but recognized only by Turing machines whose runtimes are undecidably in P.

The hard problem then is to separate the brightly-lit set of (decidably) NP-complete languages from the languages that reside in P’s undecidable penumbra … and (in my imagining) it is this dementor-dark penumbra of undecidability that for many decades now has separated Scott’s two “blobs.”

Does this imagining have any elements of truth? Don’t ask me! But I am very grateful to anyone who (like Scott) takes the trouble to provide everyone with raw materials for such imaginings. Because this generosity is (for me) the very best part of what the STEM enterprise is all about. So thank you, Scott!

Comment #69 August 9th, 2011 at 9:04 pm

Hi Scott,

I liked the article. (I laughed at how you slipped in a little swipe at creationism.) Don’t sweat the trolls. Never follow bad money with good. If I had more time I’d offer some nitpicky suggestions. But it’s definitely a success… despite how behind the times you are on some other aspects. 🙂 Cheers.

Comment #70 August 9th, 2011 at 9:38 pm

Thanks, everyone!

Aris #63:

> Marc might be forgiven for thinking him not so exciting on the basis of this latest rather orthodox and risk-averse paper.

I confess that’s a thought that hadn’t occurred to me: the reason this essay is so controversial is precisely that it’s not controversial enough. 🙂

Comment #71 August 9th, 2011 at 9:41 pm

“1. What explains the animus many people seem to feel against complexity theory—a small field that steps on few obvious religious or ideological toes (unless your religion commands P=NP), and that you’d think would just be ignored by anyone who disliked it?”

For the computationally devout it seriously muddies the waters. To paraphrase a trope from another (not unrelated) discipline: Shut Up And Compute. For others — probably for the most part outsiders — complexity theory isn’t the real problem they have with you; it’s your inability or refusal to understand arguments against machine intelligence and the supercilious way in which you dismiss them. I’ve had opportunities to recommend “NP-complete Problems and” to smart people who’ve never heard of you and bitten my tongue at the last moment. I’m not just an asshole: I’m a professional asshole.

Comment #72 August 9th, 2011 at 9:55 pm

Some recent comments Sean Carroll can read on his blog:

“I don’t buy it. There’s no reason to think that water exists anywhere else in the universe.”

“As a scientist and astrophysicist, I’m embarrassed that this is the way my profession was portrayed.”

“Sean is utterly confused about entropy and thermodynamics and time. His confusion seems to be the origin of all these papers, rather than there being a real physical issue to solve.”

Maybe you should meet for some drinks? My point is that the behavior that makes you upset has in fact nothing to do with complexity theory. And don’t mix up white wine with French cheeses.

Comment #73 August 9th, 2011 at 9:58 pm

Aris #71:

> For others — probably for the most part outsiders — complexity theory isn’t the real problem they have with you; it’s your inability or refusal to understand arguments against machine intelligence and the supercilious way in which you dismiss them … I’ve had opportunities to recommend “NP-complete Problems and” to smart people who’ve never heard of you and bitten my tongue at the last moment.

I’m honestly not sure what you’re talking about here, or how it relates to the criticisms of my essay that other people leveled. And NP-complete Problems and Physical Reality says next to nothing about AI, so I have no idea what that has to do with this either.

But OK, I’ll bite: where, exactly, did I demonstrate an “inability or refusal to understand arguments against machine intelligence”? (Do I demonstrate that simply by not agreeing with the arguments? 😀 )

It’s funny: when I teach the strong-AI debate in my CS classes, I almost always end up playing devil’s-advocate for Searle and Penrose, since the students (unlike me) consider their arguments so wrong as not to merit a reply, and probably wonder why I’m wasting their time on such things.

Comment #74 August 9th, 2011 at 10:03 pm

Jiav #72: Yeah, you’re right; thanks for that. 🙂 I did have drinks with Sean a few years ago, and it was great.

But when did I “mix up white wine with French cheeses”?? (I could very well have done so; I just don’t remember.)

Comment #75 August 9th, 2011 at 10:07 pm

You misunderstand. Although it doesn’t deal with Strong AI, “NP-complete and” advances an idea (the NP-hardness assumption) that could apply equally to machine intelligence. I simply dislike hypocrisy.

Comment #76 August 9th, 2011 at 10:19 pm

aris, now you’re heading for troll territory. Of course the NP-hardness assumption would (if true) apply to machine intelligence. That whole article was about the conjectured limitations of machines, with one or two sentences about the limitations of human problem-solving at the end. I don’t get it. What’s the “hypocrisy”?

Comment #77 August 9th, 2011 at 11:51 pm

Is computability theory still an active field?

Comment #78 August 10th, 2011 at 12:12 am

A mouse: Yes, it still exists, but my personal impression is that it’s a victim of its own tremendous success, most of its main open problems having been solved in resoundingly clear ways by the 1960s.

To clarify, there are still lots of wonderful open questions about the computability of specific problems—solving polynomial equations over the rationals is an outstanding example—and probably also in computable real analysis, computable probability, computable Ramsey theory, etc. Also, here is a list of open problems in Kolmogorov complexity.

But I fear that research in “computability theory proper” might be or become almost as quaint as research in trigonometry. Which is a pity.

(If someone wants to dispute me, I’d love to be proved wrong on this one!)

Comment #79 August 10th, 2011 at 2:46 am

A while back I remember testing a clique solver (by a Shtetl-Optimized commenter whose username I forget) that was solving random clique instances of up to 300 nodes without significant slowdown.

In terms of nodes, do you have any intuition as to how optimized these algorithms would have to be before we start seeing some of the qualities we assign to P = NP? 500? 1000 nodes?

If the landscape of mathematical theorems is scattered with instances both near and far, then a number of these might be within reach.

It reminds me of space exploration, in a way: if we can now afford to send satellites out into the mathematical solar system, then why aren’t we?
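For a sense of scale, here is a hypothetical brute-force baseline (my sketch, not the solver the comment describes); any solver that handles 300-node instances must be doing something far cleverer than this, since exhaustive search is already hopeless around 30 nodes:

```python
from itertools import combinations

def max_clique_size(nodes, edges):
    """Brute-force maximum clique: test every subset of nodes, largest first.
    Up to 2**len(nodes) subsets are examined -- feasible at ~20 nodes,
    astronomically out of reach at the 300 nodes mentioned above."""
    adj = set(edges) | {(v, u) for (u, v) in edges}
    for k in range(len(nodes), 1, -1):
        for subset in combinations(nodes, k):
            if all((u, v) in adj for u, v in combinations(subset, 2)):
                return k
    return 1 if nodes else 0

# Triangle {1,2,3} plus a pendant vertex 4: the maximum clique has size 3.
print(max_clique_size([1, 2, 3, 4], [(1, 2), (2, 3), (1, 3), (3, 4)]))  # → 3
```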

Comment #80 August 10th, 2011 at 3:15 am

> 2. What does it mean when your academic colleagues tell you your essay is too mathematical and will go over outsiders’ heads, while many outsiders themselves tell you the essay is too obvious, and they already knew everything in it? Does it mean you hit the sweet spot?

LOL. It means that your colleagues underestimate outsiders and outsiders overestimate themselves, but it doesn’t tell you much about your essay. 🙂

Comment #81 August 10th, 2011 at 3:42 am

I am curious, Scott: in your opinion, will it ever be possible for machines and/or robots to develop a full-blown intelligent and emotional consciousness, or be self-aware, given enough time and advancements in technology as well as other areas?

Granted, there may never be a test that can prove without a shadow of a doubt that a machine/robot is conscious or self-aware in the way that we are (or can prove that its intelligence is “programmed” or has developed on its own), but regardless of that: is that notion even a minute possibility in your view? Does it break any known laws or rules of science, physics, or plain psychology as we currently understand it?

Obviously, many don’t think this is or ever could be possible…

http://ieet.org/index.php/IEET/more/4803

http://ieet.org/index.php/IEET/more/4813

http://ieet.org/index.php/IEET/more/4805

Comment #82 August 10th, 2011 at 3:44 am

> If someone wants to dispute me, I’d love to be proved wrong on this one!

Well, if you follow the papers in computability, you will see that it is far from dead. Yes, it was a very successful enterprise and many open problems were solved by amazingly intelligent people, but there are new problems.

The pure part of it is not as popular as it once was, since complexity theory took over in the US (look at what STOC was and what it has become over the decades), and in Europe type theories and PL took over (look at CiE, LiCS, LC). Both areas share the heritage of computability theory.

Comment #83 August 10th, 2011 at 4:45 am

Scott asserts:

> But I fear that research in “computability theory proper” might be or become almost as quaint as research in trigonometry.

LOL … Scott, not only is computability research still vital, even trigonometry research is still vital! 🙂

See, for example, the very readable survey by Martin Mohlenkamp and Lucas Monzon titled “Trigonometric identities and sums of separable functions” (Mathematical Intelligencer 27(2), 2005).

Is the Mohlenkamp/Monzon article about formal proofs? Or is it about complexity class separations? Or is it about sampling and simulation? The answer is all three … depending upon how one reads the article, and how one applies the ideas in it. Starting from these seemingly stodgy trigonometric ideas, Mohlenkamp and his colleagues have gone on to discover striking practical applications that substantially extend the class of quantum simulation algorithms known to be in P.

The point is that ideals of mathematical universality and naturality nowadays are spanning a steadily broader range of topics, and illuminating these topics to a steadily increasing depth. Paradoxically, what’s becoming quaint is the old-fashioned view that some mathematical disciplines are “quaint.”

Comment #84 August 10th, 2011 at 8:02 am

Scott,

Maybe you should not assume that the reader knows what the “Turing test” is? (This remark may apply to other concepts which are used.)

Comment #85 August 10th, 2011 at 9:05 am

Hi Scott,

As a regular reader of your blog, I have a few questions / comments:

1) Complexity theory is undoubtedly a deep and beautiful subject, with lots of very interesting open problems. But it seems complexity theory researchers often take their model too seriously. P != NP is a great open problem. Still, logically, it is quite possible that nothing will change in the real world even if P=NP. What if it turns out that there are poly-time algorithms for NP-complete problems that run in O(n^{10^9999}) time?

2) Don’t you think theoretical computer science, as a field, should worry a little less about the polynomial/exponential divide, and pay a little more attention to exact exponential-time algorithms?

3) Philosophers, it seems, have got a bad reputation. For example, major critics of string theory often say that the field is philosophy, not science. Do complexity theorists really need to convert theoretical computer science into philosophy in an effort to publicize its achievements to the general public?

Thanks.

An obscure grad student working on Algorithms

Comment #86 August 10th, 2011 at 9:22 am

J. Warner #81: I think that the “default scientific view” on your questions—i.e., the view that has force until someone proves otherwise—is that we are intelligent, self-aware machines, and therefore we have an existence proof that yes, such machines are possible! 🙂

I also think that, if you want to argue against that view, it won’t do to stick to philosophy (as John Searle does), or to talk about various shortcomings of AI in the first 60 years of its existence (as Hubert Dreyfus and most AI critics do). Instead, you’ll first need to clarify what you mean by “machine,” and then explain what it is about the physics or biology of the brain that makes us NOT machines in that sense.

This is exactly what Roger Penrose tried to do (in my opinion, unsuccessfully). Whether it can be done successfully is a very interesting question, to put it mildly…

Comment #87 August 10th, 2011 at 10:07 am

CS grad student #85:

1) I address exactly that question in the essay. (Short answer: yes, if it happened, then P vs. NP would’ve turned out to be the “wrong question.” But it almost certainly won’t happen! And even if it did, most of the insights we’re gaining in TCS don’t have much to do with the precise details of the polynomial/exponential divide—e.g., if you wanted to take n^{5} or n^{log n} as your criterion of efficiency, most insights would carry through.)

2) Plenty of attention is paid to exact exponential algorithms! (Cf. Ryan Williams’ recent breakthrough.) But sure, more attention would be great—as would more attention on complexity theory inside P, and so many other interesting corners of our field besides P vs. NP (many of which I talk about in the essay).

3) I wasn’t proposing to “convert theoretical computer science into philosophy,” just to get analytic philosophers a little more interested in what we do (and maybe vice versa).

Comment #88 August 10th, 2011 at 10:46 am

This is rather like naming an essay ‘Why Philosophers Should Care About Thermodynamics’. If a philosopher hasn’t figured out that it’s important on their own, then they suck.

Comment #89 August 10th, 2011 at 11:08 am

Bram #88: Well, I try to be a little more charitable. 😀 Still, I agree that more accurate titles would have been:

“Why Philosophers Should Care Much More About Computational Complexity”

“What Within Computational Complexity Philosophers Should Care About, And How They Should Use It”

But they just didn’t have the same ring.

Comment #90 August 10th, 2011 at 2:35 pm

Scott,

Sure, you can quibble with decidability et al. as well, but we currently do not know whether halting is decidable for every program that fits on an iPhone, and we do know that P doesn’t necessarily mean efficiently doable.

We also know that even in practice distributed commit does not work.

Comment #91 August 10th, 2011 at 3:26 pm

Personally, I don’t oppose the possibility of AI; but I wonder if one can add, to (i) and (ii) “you can conjecture an astronomical lower bound on the resource requirements of such a program” (from p. 13), also:

(iii) you can conjecture an astronomical lower bound on the resource requirements to find an efficient such program.

(iii) is a little different from (ii). Both are related to computational complexity, but (ii) is about the complexity needed to simulate the brain, while (iii) is about the complexity needed to create a brain (or its equivalent), or to simulate the process of creating the brain.

Option (iii) may have a cryptographic aspect: the process of constructing the precise program our brains use may involve applying some one-way functions based on secrets we can no longer recover.

(While less appealing, this may be of some relevance to (ii) as well: you need astronomical resources unless you know some secrets we cannot recover.)
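Gil’s cryptographic aside can be illustrated with a standard toy (my choice of example, not his): multiplying two secret primes is a cheap “construction step”, while recovering them from the product is the factoring problem, believed intractable at realistic key sizes.

```python
import random

def rand_prime(bits):
    """Sample an odd prime with the top bit set (toy trial-division test)."""
    while True:
        p = random.getrandbits(bits) | 1 | (1 << (bits - 1))
        if all(p % d for d in range(3, int(p ** 0.5) + 1, 2)):
            return p

def construct(bits=32):
    """The easy direction of a conjectured one-way step: multiply two
    secret primes. Undoing it is factoring -- believed hard at real key
    sizes (~1024-bit primes; 32 bits here is purely a toy)."""
    p, q = rand_prime(bits), rand_prime(bits)
    return p * q, (p, q)

n, (p, q) = construct()
assert n == p * q  # the forward direction checks instantly
```

On Gil’s picture, evolution may have applied steps like this in “constructing” the brain’s program, with the secrets (p and q here) long since lost.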

Comment #92 August 10th, 2011 at 3:28 pm

Koray #90:

We don’t know how to solve the halting problem in practice for every program that fits on an iPhone, but we also don’t know how to solve satisfiability in practice for every Boolean formula that fits on an iPhone! P doesn’t necessarily mean “efficiently doable in practice,” but computable doesn’t necessarily mean “doable in practice” either! The two cases really are very analogous. Your argument is fallacious, relying on asking a different question about P than you ask about the computable functions.

And in practice, people (at least people I know) DO agree to meet each other at a certain time and place—even without being 100% sure that the other person knows to meet there, that the other person knows that they know to meet there, that the other person knows that they know that the other person knows to meet there, etc. Sure, sometimes they fail, but not as often as when they try to factor 10,000-digit numbers! 🙂
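Scott’s analogy can be made concrete with an editorial toy (not code from the thread): exhaustive search shows satisfiability is computable, while its 2**n loop bound shows why computable doesn’t mean doable in practice.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Decide CNF satisfiability by trying all 2**n_vars assignments.
    This always halts, so SAT is computable -- but a formula with a few
    hundred variables (far smaller than an iPhone's memory) would need
    more iterations than there are atoms in the universe.
    Literal v means variable v is true; -v means it is false."""
    for assignment in product([False, True], repeat=n_vars):
        def lit_true(lit):
            val = assignment[abs(lit) - 1]
            return val if lit > 0 else not val
        if all(any(lit_true(l) for l in clause) for clause in clauses):
            return True
    return False

# (x1 or x2) and (not-x1 or x2) and (not-x2 or x3) is satisfiable; (x1) and (not-x1) is not.
print(brute_force_sat([[1, 2], [-1, 2], [-2, 3]], 3))  # → True
print(brute_force_sat([[1], [-1]], 1))                 # → False
```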

Comment #93 August 10th, 2011 at 8:09 pm

@Gil Kalai,

That is a very interesting idea. (iii) also looks like Kolmogorov complexity. We have working models to build on (real brains) and working models of how they are built (human reproduction), so we are reverse engineering. Do you think that reverse-engineering the brain will not work because of some cryptographic difficulty? I.e., we may be unable to invert a given one-way function or understand how it is designed, but we can still copy the structure of a one-way function if we just want to build one.

Comment #94 August 10th, 2011 at 11:31 pm

An excellent, even thrilling, essay. Thanks for writing it.

It’s approximately the one year anniversary of the Deolalikar uproar that started me reading computational complexity blogs, and yours has certainly been one of the most entertaining and enlightening.

Comment #95 August 11th, 2011 at 1:22 am

Anonymous, as I said, I don’t oppose the possibility of AI at all. But if we play the game of justifying why AI could be impossible (which is not so different from the question of why AI is difficult), especially with respect to the Turing test, which requires not only that the computer do what humans can do but that it do so in a way that we cannot distinguish a computer from a human being, then we can distinguish between two computational complexity reasons.

The first (which is (ii) in Scott’s paper) is that it takes exponential resources to simulate the brain.

The second (which is my (iii)) is that the brain runs an efficient algorithm (let’s call it A), but it takes exponential resources (in the size of this algorithm) to find an efficient algorithm (let’s call it B) such that we cannot distinguish A from B.

In both cases we can talk about computational complexity in terms of polynomial/exponential resources, and there is no need to invoke Kolmogorov complexity.

The cryptographic issue may apply to both (ii) and (iii). It is possible that the algorithm the brain runs (which we can assume is efficient) involves a cryptographic ingredient that we would need to “break” in order to simulate it (at “Turing test” quality). It is also possible that the (evolutionary) process required to reach the efficient algorithm A involves a cryptographic ingredient, so that without breaking it, creating algorithm B is exponentially hard.

Comment #96 August 11th, 2011 at 2:23 am

One more remark about the Turing test, in a different direction. Suppose we want to apply the Turing test not to humans but rather to a different computer. Say you have a chip (or a set of many chips with similar capabilities), and you want to create a computer program that acts in such a way that you would not be able to distinguish the action of the program from the action of a chip in your set. This looks like a difficult task; in fact, it looks at least as difficult as program verification, and perhaps more so. Here you don’t want to test whether the chips have bugs; rather, you want to create a computer program that perfectly simulates the behavior of a chip in your set.

Comment #97 August 11th, 2011 at 5:32 am

Here is a nice story regarding replacing the math jargon “fix” with “consider”. There is a famous paper by Anscombe and Aumann about subjective probability. http://www.econ.ucsb.edu/~tedb/Courses/GraduateTheoryUCSB/anscombeaumann.pdf

In this paper they consider the terms “roulettes”, “horse races” and “horse lotteries”, in certain technical senses. In the original version of the paper there were a few instances of the phrase “Fix a horse race…”. The referee asked that the word “fix” be replaced, and the authors agreed.

Aumann regards not leaving the phrase “fix a horse race” in the paper as the regret of his life.

Comment #98 August 11th, 2011 at 7:43 am

Philosophers in particular might more easily appreciate Scott’s essay Why Philosophers Should Care About Computational Complexity if it were edited to respect the traditional philosophical usages of double quotation marks (“[…]”, 351 usages), single quotation marks (‘[…]’, only 3 usages), italic emphasis (several hundred usages), and the word “obvious” (19 usages).

The recommendations of The Chicago Manual of Style (CMS, 14th edition) are specific, philosophically reasonable, and straightforward (but ofttimes painful) to follow. For example, “Terms having special philosophical or theological meaning are, by convention, sometimes enclosed in single quotation marks,” and “Key terms in a discussion […] often are italicized on first use.” The use of quotation marks to suggest irony or special word usage is deprecated by the CMS; emphasis arising from sentence structure is preferred (per CMS sections 6.62–6.92, The distinctive treatment of words).

As an example of these considerations, on page 9 of Scott’s essay there is a key passage in which the phrase lower bounds is italicized, as though it were a technical term (per CMS 6.72), and yet the term “lower bounds” never appears again. Ouch! And why is “obvious” quoted? Is it a technical term too? The reason surely isn’t obvious. As for P!=NP being “well-defined,” there is no consensus among complexity theorists that the conjecture is even decidable (per TCS Stack Exchange’s Community Wiki on the topic). Thus a simple characterization of the P!=NP conjecture as obvious and well-defined is potentially misleading to students and philosophers alike. After all, philosophers have long appreciated that there is no shortage of conjectures that are obvious and well-defined, yet irrelevant in consequence of being (in Pauli’s celebrated phrase) “not even wrong.”

For these reasons, an editorial pass through the essay, with systematic (but not slavish) consideration of traditional stylistic conventions relating to the distinctive treatment of words, would (IMHO) substantially improve the essay’s readability, logical flow, and usefulness for philosophers and students alike, without substantially altering its main conclusions.

Complexity theory has one other philosophical aspect that is not touched upon in the essay, and the essay perhaps might benefit from a paragraph asserting that this philosophical aspect will not be discussed: the central role that math, science, and technology historically have played, and that complexity theory in particular now plays, in the still-evolving cognitive, moral, and technological enterprise that philosophers and historians call the Enlightenment (called Aufklärung by Kant). What will be / may be / could be the various roles of complexity theory in the 21st century’s evolving notions of Aufklärung? The role of complexity theory in Aufklärung might be the topic of a terrific essay, but it is not (in any explicit sense) the topic of the essay that Scott has written.

Comment #99 August 11th, 2011 at 11:10 am

John Sidles #98:

> Thus a simple characterization of the P!=NP conjecture as obvious and well-defined is potentially misleading to students and philosophers alike.

Even if P≠NP turned out to be independent of (say) ZF set theory, it would still be perfectly well-defined! That is, there would still either exist a Turing machine that solved SAT in polynomial time, or else there wouldn’t exist one. The fact that you couldn’t prove which way it went in ZF has no bearing whatsoever on the question’s well-definedness! In more detail, what makes the P vs. NP question well-defined is that it’s *arithmetical* (a Π₂-sentence)—i.e., ultimately just a statement about checkable properties of various positive integers.

If you would acknowledge this simple point, it would help a lot in facilitating dialogue about your newfound passion for the logical status of P vs. NP…

As for the *Chicago Manual of Style* stuff, I’ll leave that up to my editor. 🙂

Comment #100 August 11th, 2011 at 1:12 pm

Maybe John’s question is: what if a proof of the P!=NP conjecture were independent of any sound set of axioms?

Comment #101 August 11th, 2011 at 2:03 pm

@Gil Kalai,

Thanks for the additional explanation.

Comment #102 August 11th, 2011 at 2:28 pm

Scott,

I don’t think you read my comment right. I never said “computable means doable in practice”. You’re the one who posited that maybe halting was solvable for programs of human interest. I said that we didn’t know that, but we did know that halting in general was not solvable and that was one thing we knew for sure while P v NP doesn’t have the same kind of certainty when it comes to “doable in practice”.

In any case I don’t think I am adding anything to this thread any more. Good job on the essay.

Comment #103 August 11th, 2011 at 2:45 pm

Jiav #100:

> Maybe John’s question is: what if a proof of the P!=NP conjecture were independent of any sound set of axioms?

It can’t be. If P≠NP then you can just add in “P≠NP” as a new axiom to any sound set of axioms and still get a sound set, while if P=NP then you can add in “P=NP.”

The central point here, which maybe I didn’t make clear enough, is that P=NP (like Fermat’s Last Theorem, Goldbach’s Conjecture, etc.) is a statement with a *definite meaning* independent of any axiom system. Namely, if there exists a polytime 3SAT algorithm then P=NP is true, and if there doesn’t exist such an algorithm then P=NP is false. We might or might not be able to prove which one holds within some formal axiomatic theory, but we don’t need any such theory to tell us what the question *means*! That we already know.

For more on this theme see this old post.
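To spell the arithmetical form out, here is one standard way (a sketch; the details of the machine encoding are elided) to write P≠NP as a Π₂ sentence, quantifying over encodings ⟨M, c⟩ of Turing machines with claimed polynomial time bounds:

```latex
P \neq \mathrm{NP}
\;\Longleftrightarrow\;
\forall \langle M, c \rangle \;\; \exists x \;
\Big[\, M\text{, run on input } x \text{ for } |x|^{c} + c \text{ steps,}
\ \text{fails to decide correctly whether } x \in \mathrm{3SAT} \,\Big].
```

The bracketed predicate is mechanically checkable for any fixed M, c, x (just simulate M for the stated number of steps and compare against a brute-force 3SAT check), which is what makes the whole sentence a statement about checkable properties of integers.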

Comment #104 August 11th, 2011 at 3:39 pm

There are strong reasons (with as much strength as an accepted mathematical proof can lend) to believe that the P/NP question cannot be settled by a diagonalization-only line of argument, or by constructing a ‘natural’ (in the Razborov–Rudich sense) quantitative property… and so on.

Is there even a weak reason (indeed, even very weak ones would still be up for consideration), other than the fact that nobody has gotten a viable clue yet, to believe that the P/NP question is independent of ZFC etc.? I am not an expert… so, just asking.

Comment #105 August 11th, 2011 at 3:48 pm

notaphilosopher:

> is there even a weak reason, other than the fact that nobody got a viable clue yet, to believe that the P/NP question is independent of ZFC etc.?

Despite the fact that I wrote a whole survey article about this question 8 years ago, my personal belief is that the answer is no! But check out the survey and then form your own opinion if you want. 🙂

Comment #106 August 11th, 2011 at 3:56 pm

Scott:

If you want high-class feedback take a chance and try to interest Evan Thompson at the U of Toronto. His angle of attack is radically different from yours but he’d understand all the issues the paper covers.

You might work it via Dennett. Thompson was a junior associate of DD’s back in the ’90’s. I believe there’s still mutual respect.

Comment #107 August 11th, 2011 at 4:42 pm

Thanks for the pointer, Scott. I’ll indeed read it up.

Comment #108 August 11th, 2011 at 5:29 pm

Scott writes:

If you have time, I’d be interested to hear anything you can say about how to play devil’s-advocate for Searle and Penrose. I admit I had the same reaction as your students, so I’d enjoy hearing the counter-arguments!

Comment #109 August 11th, 2011 at 6:54 pm

Curious: Well, I probably don’t do a very good job of it. 🙂 To my mind, arguments like Searle’s and Penrose’s get off the ground when, and only when, we’re able to point to some concrete *property* of the physics or biology of the brain that “breaks the symmetry” between it and a computer.

If we don’t have such a property, then it’s obvious that any philosophical argument that anyone gives to show some limitation of machine intelligence can be repeated verbatim to show the same limitation for *human* intelligence! So then the only way to break the symmetry is by appealing to prejudice:

“*I’m* a human, and *I* think! So I’m willing to grant that other humans think because they seem similar to me. But machines don’t think, because I can’t imagine what it would be like to be one.”

Compare:

“*I’m* a white male, and *I* think! So I’m willing to grant that other white males think because they seem similar to me. But blacks and women don’t think, because I can’t imagine what it would be like to be one.”

If, on the other hand, we *do* have a real candidate for a differentiating property between humans and machines, then all the details of Searle’s and Penrose’s arguments seem completely dispensable! At that point, we might as well just talk directly about the property itself.

So, discussion in my class often quickly moves to the question of what such a differentiating property could be. For Penrose, of course, the property has something to do with quantum gravity effects in microtubules—but it seems to me that that idea has to overcome 4 or 5 *different* severe problems before one can take it seriously! For Searle, the differentiating property is simpler: it’s the brain’s unique “causal powers,” end of story! And I hope you won’t be such an impertinent moron as to ask what those causal powers *are*, or how we could tell whether a given physical system has them or doesn’t.

To my mind, the best (only?) candidate for a differentiating property we know is that computer programs can be *copied*. That has lots of interesting implications: for example, you can predict everything an AI will do ahead of time by running it on a faster computer. You can “murder” an AI and then immediately resurrect it from a backup copy. You can pore over an AI’s source code to see exactly why it made the decisions it did. You can start lots of identical runs of the same AI, so that each run has no possible way of knowing “which one it is.”

Now, it’s at least *conceivable* that principles of physics (such as the No-Cloning Theorem) would prevent us from doing the same sorts of things with human brains—and that this *lack* of ability to copy a brain state is somehow relevant to the strong-AI debate. I actually agreed recently to write *another* essay, for *another* Alan Turing centennial volume, that will explore that possibility in more depth. If you’re interested, look for it in a few months at a Shtetl near you!

Comment #110 August 11th, 2011 at 9:24 pm

Scott #103

Ok, I think I understood. In addition, we could disguise these axioms using “3SAT does (not) reduce to PRIME”, which would be a not-too-trivial way to include P(!)=NP as an axiom.

Having read your “Floating in Platonic heaven,” I wonder: could finding a proof of P!=NP be uncomputable? Would this mean both that P!=NP and that we can’t even prove we can’t prove it?

… but I’m off topic here. Maybe it’s time for the 24 hour Ask-Scott-Anything event? 😉

Comment #111 August 11th, 2011 at 9:36 pm

Joshua Zelinsky,

I’m skeptical that sex is the “real reason” Hebrew has two words for “know.” Lots of languages have two words! In particular, German and Spanish do, and they translate the biblical usage as know-a-person. (More precisely, Reina uses “conocer”, while Luther does not directly use “kennen” but adds the prefix “er-“. Modern translations don’t render it “know” at all, much as modern English translations don’t.)

Comment #112 August 11th, 2011 at 10:45 pm

Scott,

why can’t we say that there are AIs that can’t be copied exactly? Or why should an AI act the same way every time it is run? These seem like assumptions about AI that don’t have strong support.

Likewise, why is not being able to copy a brain exactly a problem? A brain changes all the time, and damaging a very small part may not have any effect on its actions.

I will be looking forward to reading your article.

Comment #113 August 11th, 2011 at 10:48 pm

By the way, is there a reason that you don’t have comment threading on your blog? Your posts receive lots of comments on different topics, and following a discussion is not easy without threading.

Comment #114 August 12th, 2011 at 12:31 am

Doug, that’s a good point. In this case the use of it to mean to have sex with is very old. See for example Genesis 4:1. I don’t know enough about other languages to discuss what is going on with them. And for that matter, for other languages, I don’t know how much of whatever they do might have been influenced by the issues presented by the Biblical translation (English for example has been influenced a lot by the KJV and the decisions its translators made.)

Comment #115 August 12th, 2011 at 2:01 am

Yes, languages change, so I checked that these old translations did have two verbs. They do, but the situation with German is complicated. The two Spanish translations consistently use the two words for “know” the way I was taught for modern Spanish. Everything in Genesis 3 about the tree of good and evil is know-facts. In Genesis 2, when the tree is named, one translation makes it the “tree of good and evil” – no knowledge. Another makes it the “tree of the science of good and evil.”

I said that Luther uses “erkennen” in Genesis 4:1. My dictionary translates this as “recognize,” but says that one can recognize not just people, but errors, which is edging towards knowledge of facts. Moreover, it says that “Erkenntnis” means “knowledge” and even puts “wissen” (know-facts) in parentheses!

In Genesis 2:17, Luther uses “Erkenntnis” for the tree. Hoffnung für alle goes farther and uses the verb – “the tree that lets you recognize.” But in Genesis 3:5, both translations use “wissen,” both for what people who have eaten of the tree and for God knowing what it does. If they use both verbs for the same fruit, they can’t have a clean delineation.

Anyhow, the point is that both languages already had two verbs before these influential translations. They weren’t the first translations, so it’s possible that Hebrew had earlier influence, but I doubt it. Certainly, the sexual usage did not lead to the split in the same way as Hebrew, as the sexual usage was different.

Comment #116 August 12th, 2011 at 2:51 am

Ian Parberry published in 1992 a paper entitled ‘Knowledge, Understanding, and Computational Complexity’, see http://larc.unt.edu/ian/pubs/knowledge.pdf , in which he refutes the Chinese Room argument of Searle in light of complexity theory. It may deserve a reference.

P.S. You mention going to Norway; may I ask if it is a professional visit?

Comment #117 August 12th, 2011 at 6:23 am

The problematization of the concept of “largest known prime” is awesome! Thinking about it, there is probably an even simpler way of making the problem arise in anybody’s mind: noting that the very expression “the largest known prime number” is meaningful, while “the largest known integer,” “the largest known even number,” and “the largest known power of two” are not… maybe it could be interesting to explore where the line is drawn for the meaningfulness of “the largest known number with property P.” For instance, I guess that to all the readers of this blog the expression “the biggest known perfect square” sounds pretty silly, but the perception may vary among the population… Couldn’t that be work for some experimental psychologist?

Comment #118 August 12th, 2011 at 8:24 am

Doug,

That’s very interesting, thanks. Now I’m wondering if English has less distinction in this regard than other languages. I sort of assumed that Hebrew was the exceptional example, but what you’ve shown suggests that it isn’t an uncommon thing to do.

Comment #119 August 12th, 2011 at 9:02 am

Jan Arne Telle #116:

Thanks very much for the Parberry reference! I’ll add it to the essay and acknowledge you.

(It seems the observations I made about the need to rule out exponential-sized tables in the Turing Test and Chinese Room are natural enough that they occurred to lots of people independently! Besides Parberry, someone emailed me this paper by Stuart Shieber, which makes similar points.)
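For a sense of scale, a back-of-the-envelope calculation shows why such tables are ruled out physically. The particular numbers below (transcript length, alphabet size, atom count) are purely illustrative assumptions of mine:

```python
# Why a lookup table passing the Turing Test is physically absurd.
# Illustrative assumptions: a test transcript of ~5000 characters over a
# 27-symbol alphabet (letters plus space), and ~1e80 atoms in the
# observable universe.
from math import log10

alphabet = 27
transcript_len = 5000
atoms_in_universe_log10 = 80

# Number of possible transcripts, on a log10 scale: 27^5000 = 10^(5000*log10(27))
table_entries_log10 = transcript_len * log10(alphabet)

print(f"table needs ~10^{table_entries_log10:.0f} entries")  # roughly 10^7157
print(f"universe has ~10^{atoms_in_universe_log10} atoms")

# The table dwarfs the universe by thousands of orders of magnitude.
assert table_entries_log10 > atoms_in_universe_log10
```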

As for Norway: I’ll be attending this FQXi conference, which will take place on a cruise (!) departing from Bergen, Norway and ending in Copenhagen.

Comment #120 August 12th, 2011 at 10:07 am

Re: #76. Sorry if I’m getting into troll country and “hypocrisy” may be a bit excessive. But can Scott understand why a statement like this might induce cognitive dissonance in light of other stuff the writer has said?

“Personally, I see no reason at all why the brain couldn’t be simulated by computer (neuron-by-neuron if necessary), and P≠NP does nothing to challenge that belief.”

http://www.scottaaronson.com/blog/?p=686

Comment #121 August 12th, 2011 at 10:33 am

Scott requests:

I’m happy to oblige, Scott … speaking as an engineer, we cheerfully acknowledge it!

But on the other hand — still speaking as an engineer — in any essay that aims for readability and logical clarity, it’s daunting to discover that ordinary words such as “simple” and “obvious” have been freighted with unusual technical and philosophical meanings, often with no cue to the reader other than the sporadic application of “quotation marks.”

For example, the first five uses of quote-marks are “other,” “big,” “math barrier,” “dissolves,” and “philpapers archive.” Four of these have no special meaning (recognizable by me), but one of them can be read as an oblique reference (intended or accidental) to a celebrated-by-philosophers maxim of Wittgenstein. Gee, now I wonder whether the other four quote-mark usages *also* are oblique references to celebrated philosophical maxims … is it good to make readers work so hard, by using quote marks so inconsistently?

With specific regard to decidability, is it entirely “simple” and “obvious” that truth and decidability have to be carefully distinguished, in both complexity theory and philosophy? If this distinction is entirely simple and obvious, then why is there a vast and still-growing mathematical and philosophical literature upon it?

It is illuminating to reflect that engineers have a centuries-old passion for verifying and validating every kind of computation, including but not limited to decision, function, search, optimization, sampling, and decidability computations. So is it surprising that engineers view decidability as one more concrete computational attribute (namely, the verifiability of claimed truth)? And this verification-centric engineering perspective on decidability extends not only to grand challenges like the decidability of P vs NP, but also to practical challenges like the decidability of claimed membership in P, or claimed sampling from a specified distribution.

From a historical point of view — given that philosophical and mathematical luminaries including Wittgenstein, von Neumann, and Turing all earned engineering degrees and/or worked as practical engineers — isn’t it “simply and obviously” the case that decidability is the computational attribute that engineers traditionally have cared *most* about?

What’s new to 21st-century engineering isn’t a passion for understanding decidability; what’s new is the increasing ability to apply that understanding. That is why engineers are appreciative and grateful for the ever-more-powerful tools that complexity theorists and philosophers are providing, which serve so well to help verify and validate engineering computations of ever-increasing variety, scope, and consequence.

Comment #122 August 12th, 2011 at 1:34 pm

> Sorry if I’m getting into troll country and “hypocrisy” may be a bit excessive. But can Scott understand why a statement like this might induce cognitive dissonance in light of other stuff the writer has said?
>
> “Personally, I see no reason at all why the brain couldn’t be simulated by computer (neuron-by-neuron if necessary), and P≠NP does nothing to challenge that belief.”

aris #120: I stand by that sentence, but I’m almost certain that you read more into it than I actually wrote.

Firstly, I didn’t say I know that the brain *can* be simulated computationally neuron-by-neuron, just that I don’t know of any reason why it *can’t* be! And indeed, while people can reasonably disagree here, I do think that the burden of proof rests squarely on the skeptics to explain what it is about the physics or chemistry of the brain that makes simulation *not* feasible.

For example, is the brain a quantum computer? An infinitely-accurate analog computer? Is it sensitive to quantum-gravity effects in microtubules, as Penrose believes? Or to spirit forces outside the physical world entirely? In any case, it would need to be *something* beyond the well-understood physics and chemistry that current science, rightly or wrongly, considers relevant to brain function!

The second point is that the context of the remark was *not* a discussion about AI—if it had been, I would’ve been more careful to ward off confusion. Rather, it was a discussion of what I called the “forehead-bangingly common misconception” that P≠NP would imply that human creativity can’t be automated. Whatever you think about that question, it has nothing to do with P vs. NP, and that’s the real point I was trying to make.

Comment #123 August 12th, 2011 at 2:06 pm

> Now, it’s at least conceivable that principles of physics (such as the No-Cloning Theorem) would prevent us from doing the same sorts of things with human brains—and that this lack of ability to copy a brain state is somehow relevant to the strong-AI debate. I actually agreed recently to write another essay, for another Alan Turing centennial volume, that will explore that possibility in more depth. If you’re interested, look for it in a few months at a Shtetl near you!

This sounds absolutely fascinating! I hope you aren’t deterred by some of the negative feedback in the above blog comments, and don’t feel it necessary to tone down the speculation or controversy in the upcoming write-up. And for every layperson out there who dismisses your work there are a hundred others who are benefiting greatly from your popular writing.
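The copyability properties quoted above are easy to demonstrate in miniature. In the sketch below (a toy illustration only; the “agent” is just a seeded pseudorandom program, not a model of anything), a perfect copy predicts the original’s every decision, and a backup “resurrects” it:

```python
import copy
import random

class ToyAgent:
    """A deterministic 'AI': its behavior is a pure function of its state."""
    def __init__(self, seed):
        self.rng = random.Random(seed)  # all 'spontaneity' comes from a seeded PRNG
        self.memory = []

    def decide(self, prompt):
        choice = self.rng.choice(["explore", "wait", "flee"])
        self.memory.append((prompt, choice))
        return choice

# "Predict everything it will do ahead of time" by running an identical copy first.
agent = ToyAgent(seed=42)
oracle = copy.deepcopy(agent)                  # perfect snapshot of the agent's state
prediction = [oracle.decide(p) for p in ["a", "b", "c"]]
actual     = [agent.decide(p)  for p in ["a", "b", "c"]]
assert prediction == actual                    # the copy foretells the original exactly

# "Murder" the agent and resurrect it from a backup copy: nothing is lost.
backup = copy.deepcopy(agent)
del agent
resurrected = copy.deepcopy(backup)
assert resurrected.memory == backup.memory
```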

Comment #124 August 12th, 2011 at 6:38 pm

You mentioned in your article Valiant’s paper on evolvability, which is fantastic – I’m blown away that people are even asking such questions, let alone seriously describing answers to them. But Valiant’s model of evolvability seems entirely classical. Is it known whether some nontrivial quantum computation is going on in cells, and if so, have Valiant’s ideas been extended to “quantum evolvability”?

Comment #125 August 12th, 2011 at 7:02 pm

Qiaochu: No and no. (We don’t know whether nontrivial quantum computation goes on in cells, and Valiant’s evolvability model has not been extended to quantum.)

Comment #126 August 13th, 2011 at 1:15 am

Great writing. There is one improvement I’d like to suggest. Section 7.2, on simplicity, ‘bleen’ and ‘grue’: I was disappointed that this passage wasn’t tied to the work of Kolmogorov and Vitányi; I think that direction would offer a simpler and more convincing argument. Please consider. Thanks for this very interesting work!

Comment #127 August 13th, 2011 at 1:01 pm

Speaking of no-cloning, are you aware of Aaron Fenyes’ work on this issue in classical mechanics?

Limitations on Cloning in Classical Mechanics

http://arxiv.org/PS_cache/arxiv/pdf/1010/1010.6103v2.pdf

There’s no cloning in symplectic mechanics

http://math.ucr.edu/home/baez/dual/no-cloning.pdf

Comment #128 August 13th, 2011 at 3:00 pm

[…] Aaronson’s new paper Why philosophers should care about computational complexity (there’s a post about it with lots of comments on his […]

Comment #129 August 13th, 2011 at 3:07 pm

You’ve stated the bottom of page 47 (v2) in different terms before. I kind of like it better:

Shor’s factoring algorithm presents us with a choice. Either:

1. the Extended Church-Turing Thesis is false,

2. textbook quantum mechanics is false, or

3. there’s an efficient classical factoring algorithm.

All three seem like crackpot speculations. At least one of them is true!

Comment #130 August 13th, 2011 at 5:25 pm

@Scott #45 & James #37.

The problem with P=NP is that there is still too little known about polynomial algorithms. For example, what are the limits of semidefinite programming? What do we know about minima of quartic polynomials? Only that not all non-negative polynomials are representable as sums of squares. The latter property is sufficient to find minima efficiently by a semidefinite program. That is not a constructive statement. For example, principal component analysis is a quartic problem, and is solved by singular value decomposition. Let me give you another example. The partition problem asks whether a sequence of integers $a_i$ can be divided into two parts that have equal sums. That is equivalent to asking whether the quartic polynomial $p_p = \sum_{k=1}^n (x_k^2-1)^2 + (a^T x)^2$ is zero or not. If it is zero it is a sum of squares; if it is not zero one needs to show that for all t for which $p_p - t$ is non-negative, it is a sum of squares. One can see that the quartic diagonal $\sum x_k^4$ can be represented by a sum of quadratic forms $\sum x_k^4 = \sum (x^T A_i x)^2$. Moreover, one can always choose real matrices $A_i$ so that they span the whole space of symmetric quadratic matrices, i.e., any quadratic form of the same dimension can be written as a linear combination of the $A_i$. That means that any $p_p$-like polynomial (ignoring the constant/free term for a moment) can be expressed as $p = \sum x_k^4 + x^T Q x + const = \sum (x^T A_i x - b_i)^2$ (Q is any real symmetric matrix).

A Givens rotation in the space of quadratic monomials reads as $(x^T A_1 x - b_1)^2 + (x^T A_2 x - b_2)^2 = (x^T A_1 x \frac{b_1}{\sqrt{b_1^2+b_2^2}} + x^T A_2 x \frac{b_2}{\sqrt{b_1^2+b_2^2}} - \sqrt{b_1^2+b_2^2})^2 + (x^T A_1 x \frac{b_2}{\sqrt{b_1^2+b_2^2}} - x^T A_2 x \frac{b_1}{\sqrt{b_1^2+b_2^2}})^2 = (x^T A'_1 x - b)^2 + (x^T A'_2 x)^2$; i.e., repeating those rotations ~n^2 times one can write $p = \sum x_k^4 + x^T Q x + t = \frac{1}{t}(x^T Q x/2 - t)^2 + \sum_i (x^T A'_i x)^2$.

Now suppose that we chose t so that $\min p_p(x^*) = 0$. Then, looking at properties of local minima of $p$, $t = -\frac{1}{2} {x^*}^T Q x^*$, and $p$ is representable as a sum of squares. That is a sufficient condition to find t in time log($\epsilon$). Scott, I hope you can see that in the worst-case scenario, when the difference in the sums is 1, the minimum of $p_p$ is polynomial in $a^T a$, which means the semidefinite program will distinguish between 0 and 1 in polynomial time.

Comment #131 August 13th, 2011 at 7:30 pm

Chris #127:

No, I hadn’t seen that work by Fenyes; thanks for the link!

Unfortunately, in the same paper where he proves the “classical no-cloning theorem,” Fenyes also very helpfully and honestly explains why it’s not physically relevant in the sense that the quantum no-cloning theorem is.

(Namely: the classical version breaks down in the presence of ancilla bits, whereas the quantum version doesn’t.)

Comment #132 August 13th, 2011 at 8:24 pm

Fenyes’ article, and also John Baez’ fine essay upon it, are both nicely complemented by Henry Cohn’s *Random Mathematical Thoughts* (hosted on Microsoft Research’s web page), which includes an essay titled “Why symplectic geometry is the natural setting for classical mechanics.”

What’s particularly nice about Cohn’s essay (apart from its clarity and concision) is that the word “classical” plays no essential role in it, such that Cohn’s essay can be read as a concise geometric description of quantum dynamical flows. That’s why it’s natural that no-cloning theorems exist in both classical and quantum dynamics.

Comment #133 August 13th, 2011 at 10:29 pm

@mkatkov, Comment #130:

Are you implying that you are proving P=NP in that comment?! If so, why don’t you just say it?

Comment #134 August 13th, 2011 at 11:30 pm

@Amirali.

You are from the right group. Look at it and tell me. It should be normal LaTeX, unfortunately unparsed here.

Comment #135 August 14th, 2011 at 2:22 am

@mkatkov,

I hope you don’t find this offensive, but I wouldn’t really expect people to take your “proof” of P=NP seriously, when you don’t seem to even be willing to claim it with conviction yourself (e.g., write a paper with the title P=NP (!!)). (I just Googled and found a paper of yours, where you claim to solve MAXCUT in poly time with SDP. I find that even more speculative because people have been stuck at doing better than a 0.87 approximation forever and you jump all the way to optimality without explaining what happens for those intermediate “easier” problems.)

Also, unlike claimed proofs of P≠NP, there’s a really easy way of getting the world to read a P=NP proof: just factor one of the RSA challenge numbers! (In your case, since the “proof” is based on SDP, it’s not the case that you have a poly-time algorithm with a prohibitively large exponent.)

—————————-

Because of these reasons, I didn’t read your argument. Nevertheless, I can give you this general response:

If one gives a (correct!) reduction from an NP-hard problem to the problem of deciding nonnegativity of a restricted class of quartic polynomials, then one of three things can happen:

1. Within this restricted class, there are nonnegative polynomials that are not sums of squares (sos).

2. Deciding feasibility of semidefinite programs is not in P.

3. P=NP.

By all we know, the first option above has always been the case, though it may not be easy to find examples of nonnegative polynomials that are not sos within the restricted class. There are many examples: see e.g. the matrix copositivity problem, or the problem of testing convexity of quartics:

Hardness of copositivity:

http://www.springerlink.com/content/m5457r63016421j8/

Example where sos on quartics fails to check copositivity:

http://www.math.uiuc.edu/~reznick/hil17.pdf

(Horn form on p. 10)

Hardness of convexity:

http://mit.edu/~a_a_a/Public/Publications/convexity_nphard.pdf

Example where sos on quartics fails to check convexity:

http://mit.edu/~a_a_a/Public/Publications/convex_not_sosconvex.pdf
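As a supplement to option 1 above (and not from the linked papers), the textbook example of a nonnegative polynomial that is not a sum of squares is the Motzkin polynomial $x^4y^2 + x^2y^4 - 3x^2y^2 + 1$ (degree 6, so outside the quartic class discussed here). Nonnegativity follows from AM–GM on the monomials $x^4y^2$, $x^2y^4$, and $1$; the sketch below numerically checks only that nonnegativity half, while the not-sos half is a separate classical proof:

```python
# Motzkin polynomial: nonnegative everywhere (by AM-GM:
# (x^4 y^2 + x^2 y^4 + 1)/3 >= (x^6 y^6)^(1/3) = x^2 y^2),
# yet provably NOT a sum of squares of polynomials.
def motzkin(x, y):
    return x**4 * y**2 + x**2 * y**4 - 3 * x**2 * y**2 + 1

# Grid check of nonnegativity on [-3, 3]^2 (illustration, not a proof).
vals = [motzkin(i / 10, j / 10) for i in range(-30, 31) for j in range(-30, 31)]
assert min(vals) >= 0

# The minimum value 0 is attained at (x, y) = (+-1, +-1).
assert abs(motzkin(1, 1)) < 1e-12
```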

————-

If I had time (which I don’t in the next couple of months), I would look for a counterexample for your class of polynomials too. But from a quick look, it seems like you have a polynomial “positivity” problem instead of a “nonnegativity” problem. So, you have to see when your polynomial is positive, what is a bound on the minimum value in terms of the integers a_i and subtract that from your polynomial. This is a standard “perturbation” argument that is needed and I don’t see it in your comment.

Comment #136 August 14th, 2011 at 6:17 am

You’re welcome, Scott! Yes, I wasn’t sure of the relevance to anything you may be working on (with particular regard to the article you mentioned for the Turing Centennial Volume), but you may want to mention it in passing, as it brings some of the subtlety of the issue to light. Always nice for a well-known and respected name to cite a young researcher too, I would think!

Comment #137 August 14th, 2011 at 6:32 am

P.S. I loved the article, by the way! I’m fascinated by the intersection of philosophy and science/mathematics and wish more people such as yourself would engage in this kind of dialogue! I’m a big believer in the use of both philosophy and science/mathematics to clarify and expand each other’s horizons. This has happened more readily in foundational issues in quantum theory and general and special relativity (and to an extent quantum field theory). This is also true of the cognitive and neurosciences (cf. Thomas Metzinger, the Churchlands, Shaun Gallagher, Evan Thompson, Francisco Varela, etc.). It would be wonderful for a growing and fascinating area of thought such as computational complexity to interact with philosophical conundrums on a more frequent basis. Have you read any of Jakub Szymanik’s papers? He’s applied computational complexity to natural language. Might be of some passing interest to an intellectually voracious fellow such as yourself!

Comment #138 August 14th, 2011 at 7:34 am

@ Amirali #135

Dear Amirali, thanks, for long reply. I know part of you work on the sos and convexity. I hope my response is also not offending, in any case, I apologize.

I do not expect people to look at it seriously, especially given that Pablo’s group (I think the most reputable in this field) does not believe there is anything interesting to look at. I do not care if someone will look at it; I do not even care about priority, prizes, money, fame, etc. On every corner people cry that it is the most important problem. You are given a chance to look at the problem from a different angle; if you dismiss it, take your long way. Writing a paper on P=NP, given the bias toward the opposite, requires it to be perfect. If you’ve seen the max-cut paper, you can see it is far from perfect; it is more or less mumbling. I do not have those skills, and therefore give it to the public in the hope that there is another unreasonable person with the technical skills to write it perfectly, and get the fame (what Scott calls building the city). By the way, the same arguments given in the comments apply for max-cut after eq. 39, or whatever, depending on the version. I did some simulations for max-cut, and got it working for small dimensions (up to 8). Moreover, I have the feeling that I understand practically all aspects of the solution I got from the simulations. I had a long and boring (for Noah) discussion with Noah, where he was asking for an explanation of why quartic diagonals plus quadratic forms are sums of squares when non-negative; the arguments above are meant to explain it.

I’m not interested in contests (factoring is also hard here, since the SDP is on the quadratic monomials, and the matrix is dense, i.e., it requires solving an SDP in n^4 dimensions; there are no such solvers for reasonable n, and the max I can get on my computer is n=14, computing overnight). It is your field; you define the rules. I’m an outsider who happened to find some regularities. Spare me learning an assembly language that I do not need any more in my life.

As to the last point, it is sufficient to show (by N.Z. Shor 1987) that a polynomial minus its minimum is representable as a sum of squares (that is, non-negativity) for SDP to find a tight bound (that is the third paragraph). Anyway, in case Scott finds Martian monsters, and you would be ready to give boring-for-you explanations to an outsider, e-mail me; you are the authority: if it does not pass your group, it will not pass anywhere. I think we need to close this discussion, until Scott gets a better understanding of the computations (or meets Martians).

Comment #139 August 14th, 2011 at 12:27 pm

@mkatkov,

I just want to say that I didn’t dismiss your approach. I think your approach is a legitimate one. I myself have thought about (more like “dreamed about”) similar things in the past; I was just giving you my impression as to why I think this shouldn’t work.

The “problem” with sos relaxation is that in low dimensions and degrees, it’s quite hard to find instances where it doesn’t work. So, I believe you when you say you tried MAXCUT up to n=8 and it always worked. Take a look at this paper which does exactly what you are suggesting for SAT problems:

http://jsat.ewi.tudelft.nl/addendum/SDPpaperforJSATaddendum.pdf

Look in particular at Section 8, where they say they tried 1500 instances with 10 variables and couldn’t find an instance where they could reliably say sos failed. But this doesn’t mean the same would hold if we could solve instances with, say, 1000 variables (which we currently can’t because of solver limitations). In fact, Blekherman has shown that when you have “millions” of variables, most nonnegative polynomials are not sos:

http://front.math.ucdavis.edu/0402.5158

Unfortunately, I have my defense in 10 days and have to leave this discussion here. I will try to look again at your papers later in September. Let me just leave you with one more reference on a new approach for solving large scale SDPs:

http://math.ucsd.edu/~njw/PUBLICPAPERS/regpop.pdf

They have solved sos problems with 100 variables. You may want to try it on your polynomials.
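The Gram-matrix idea behind the sos test discussed above can be sketched in a few lines. This is a minimal hand-worked illustration, not the large-scale solvers cited above: a polynomial p is a sum of squares exactly when p = zᵀQz for some positive semidefinite matrix Q over a monomial basis z, and here the polynomial and the matrix Q are chosen by hand as an example.

```python
import numpy as np

# Example polynomial: p(x, y) = x^4 + 2 x^2 y^2 + y^4, with monomial basis
# z = (x^2, x*y, y^2). One valid Gram matrix for p in this basis:
Q = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])

# Q is PSD (eigenvalues 0, 0, 2), which certifies that p is sos;
# indeed p = (x^2 + y^2)^2.
assert np.linalg.eigvalsh(Q).min() >= -1e-12

# Spot-check that z^T Q z really reproduces p at random points.
rng = np.random.default_rng(0)
for x, y in rng.standard_normal((5, 2)):
    z = np.array([x * x, x * y, y * y])
    p = x**4 + 2 * x**2 * y**2 + y**4
    assert abs(z @ Q @ z - p) < 1e-8
```

In a real sos relaxation the solver searches for such a PSD Q (a semidefinite feasibility problem) rather than being handed one; that search is exactly what blows up in dimension.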

Comment #140 August 14th, 2011 at 1:30 pm

Thanks, and good luck with your defense. Nie and Wang’s regularization works for block-diagonal problems, whereas my case is dense; nevertheless, I will look at it more, thanks. I completely agree with your second paragraph, and it is unfortunate that we cannot test large dimensions.

Good luck!

Comment #141 August 14th, 2011 at 6:10 pm

Very nice essay, I enjoyed every section, especially the one about logical omniscience.

I’ve always been intrigued by the strong (and missing) links between theoretical physics, theoretical computer science, and metaphysics; thanks for throwing in some additional ideas!

As a very minor comment, if I did not misunderstand: on page 25 you seem to suggest that a Bayesian Occam’s razor has to be implemented by a particular choice of the prior (e.g. by giving higher prior probability to shorter-description hypotheses).

This is clearly possible, but I think it’s worth noting that there is also a built-in Occam’s razor in Bayesian model selection, which is quite philosophically pleasing and comes for free (even though it might be very hard to compute in practice).

Namely, the probability of a model M given the dataset D, P(M|D), is proportional to P(D|M) P(M).

It then happens that more complex models are naturally penalized: they can typically explain (fit) more datasets, but P(D|M), being a probability distribution over datasets, must be normalized. So, roughly speaking, the more datasets a model can explain, the lower the probability density it can assign to any specific dataset (and this penalizes the overall posterior probability of a complex model).

Notice that this has nothing to do with the (arbitrary) choice of P(M); you could pick a uniform prior probability over the models and still benefit from Bayesian Occam’s razor.

So if by qualitative you mean non- or weakly-informative, you can be qualitative as well in a Bayesian setting, as long as you can set a proper noninformative or weakly informative prior over the models.
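The built-in razor described above can be shown with a tiny numerical toy. All numbers here are made up for illustration: four possible datasets, a “simple” model that can only generate two of them, and a “complex” model that can generate all four, with a flat prior over the two models.

```python
import numpy as np

# Likelihoods over 4 possible datasets; each row is normalized, so the
# complex model must spread its probability mass more thinly.
p_d_given_simple = np.array([0.5, 0.5, 0.0, 0.0])       # sums to 1
p_d_given_complex = np.array([0.25, 0.25, 0.25, 0.25])  # sums to 1
prior_simple = prior_complex = 0.5                       # uniform prior

observed = 0  # a dataset that BOTH models can explain
num_simple = p_d_given_simple[observed] * prior_simple
num_complex = p_d_given_complex[observed] * prior_complex
posterior_simple = num_simple / (num_simple + num_complex)
print(posterior_simple)  # 2/3: the simpler model wins despite the flat prior
```

The simple model ends up with posterior 2/3 even though the prior P(M) was uniform; all the work is done by the normalization of P(D|M).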

Comment #142 August 14th, 2011 at 11:58 pm

Re Trolls: Scott, don’t let the little people get you down; you have a great blog and an interesting community. Don’t forget that if you agree with everyone and have no enemies, you stand for nothing.

Re Philosophy: I think that P=?NP is a philosophical question that primarily rests on solving the problem of induction. However, I would not encourage modern philosophers to enter TCS, but would rather encourage TCS members to think more philosophically. Modern philosophy is a mess and has wrecked other sciences such as physics. It has made physicists indifferent and even hostile to any philosophical analysis that would help resolve the QM/GR integration problem (also a problem of not having a theory of induction).

Comment #143 August 15th, 2011 at 9:07 am

Hi Scott, I just read the first three sections, but I feel a great urge to say that I love your popular/philosophical writings. You’re my favourite modern philosopher :-). I especially like the discussion about ‘knowing a number’.

I’m really glad you found time to write it, and I hope for more of this sort in the future.

Comment #144 August 15th, 2011 at 5:28 pm

Congratulations, Scott! This ECCC download count is really impressive. Compare your paper’s current 7100 downloads with the 866 for Reingold’s SL=L paper, and the 598 for Fortnow’s ‘Past Decade’ survey. Are these hits mostly from Reddit? What do your blog’s web analytics say about the local copy?

Some minor comments on the paper:

Footnote 17, on Cramer’s Conjecture: I think most readers will think that the footnote undermines your point. That is, they will say that p’ should be considered unknown, regardless of the status of the Conjecture. I don’t argue for this position, I am just noting that it is probably a popular one.

IMHO the last paragraph of Section 3.2 (lower bounds are hard) has a slightly uncomfortable place in that Section.

I am sympathetic to ultrafinitism, so I am not sure I can agree with paragraph two of Section 12.0 (Platonism about the halting of TMs, and the irrelevance of complexity considerations in such foundational questions). But regardless of my humble opinion, how come you didn’t incorporate the wonderful anecdote from the blog post titled “And they say complexity has no philosophical implications” into a paper titled “Why Philosophers Should Care About Computational Complexity”?

Comment #145 August 16th, 2011 at 9:58 am

I don’t understand the Cobham axioms (and I’m a mathematician). Ad (2): “some standard pairing function”: what does that mean?

Ad (5): is g a function of one or two variables? Or do you mean that for every k, the map x \mapsto g(⟨x,k⟩) is efficiently computable? In any case, I don’t see g properly defined.

Comment #146 August 16th, 2011 at 12:41 pm

Lukasz #145:

A pairing function is just a function that encodes a pair of natural numbers by a single natural number. (Similar to what’s used in Gödel numbering.)

g is a function of a single natural number y, but that y is a “package” encoding two other natural numbers x and k. This is precisely what we use the pairing function for.
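For concreteness, one standard choice is the Cantor pairing function (the axioms only require *some* such bijection N × N → N, so this particular formula is just an example):

```python
import math

def pair(x, k):
    # Cantor pairing: a bijection N x N -> N.
    return (x + k) * (x + k + 1) // 2 + k

def unpair(y):
    # Invert: recover the diagonal index w = x + k, then k and x.
    w = (math.isqrt(8 * y + 1) - 1) // 2
    k = y - w * (w + 1) // 2
    return w - k, k

# The single number y "packages" the pair (x, k), as described above.
y = pair(3, 5)
assert unpair(y) == (3, 5)
```

So g takes the single “package” y and can recover both x and k from it by inverting the pairing.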

Comment #147 August 16th, 2011 at 2:07 pm

[…] and P=NP gave an update on Deolalikar’s announced proof (has it really been a year?) and Shtetl Optimized released an essay on computational complexity and philosophy — together with the yearly “ask me anything” thread Shtetl Optimized received a […]

Comment #148 August 16th, 2011 at 7:11 pm

Hey Scott,

Great essay, I really enjoyed it and was lucky to stumble upon it via Google+ and my nerdy CS friend(s). I must admit I was especially ignorant of the quantum mechanics aspects of computational complexity, and while I was disappointed that NP problems will not be resolved by quantum computers anytime soon, I was happy to hear that time travel might just still be possible. Ahem.

I hope philosophers and computer scientists will indeed increase collaboration in the future – as a CS researcher myself, I would be happy for any “excuse” to enjoy more philosophy on the job.

All the best,

-Shiri

Comment #149 August 18th, 2011 at 8:29 am

The Turing test is a pretty thin and unsatisfactory way of deciding whether someone is human, since there is more to being human than being able to produce text. Even that limited test is now more difficult to pass than it used to be, since we know that computers are able to produce grammatical, idiomatic, and on-topic language up to a point. Thus we now ask for a lot more bits of information. I don’t believe that I’ve ever been fooled on the internet, but if I was, I am pretty sure that it was by a computer programmed to simulate a not-very-smart person obsessed with a few hot-button topics (i.e., a winger troll).

Usually our decisions that the flesh-and-blood people we meet are human are governed by a lot of prior context: first, we normally meet people through direct or indirect introductions from a thick context (work, family, friendships, communities, school, etc.). Furthermore, in our experience (and even in urban legend), people in these larger contexts so far do not introduce their friends to robots. Thus the additional amount of information we ask for is very small indeed. The second context is that, as far as I know, robotics has so far barely managed to get into the uncanny valley, much less out of it.

These contexts can be stripped out, though. First, imagine meeting a real person in a major airline hub, in which case you have no context at all. Second, imagine that they speak English poorly with a foreign accent and seem to have some sort of neurological condition causing strange tics and mannerisms (Tourette’s, Asperger’s, etc.). In effect, this is an actual person who has the misfortune of being in the uncanny valley.

And if it was the case, or rumored to be, that some agency was sending robots to airports for some sinister purpose, you would be still more suspicious. And even though a foreigner with an accent and a neurological problem might be a perfectly wonderful person in some respect, for example a talented computer scientist, he might end up flunking the airport Turing test. Or, on the other hand, if someone was looking for an Aspergy Romanian computer scientist they might end up accepting a robot as human.

Don’t know if this is relevant.

Comment #150 August 23rd, 2011 at 8:40 am

I was very pleased, for a variety of reasons, to see footnote #24 referring to my remark. One reason is the opportunity to recommend the wonderful Israeli movie, written and directed by Joseph Cedar: “Footnote”

http://en.wikipedia.org/wiki/Footnote_(film)

Comment #151 August 26th, 2011 at 2:44 am

Space and time have different signatures. Is there a computational way to distinguish them?

Comment #152 August 26th, 2011 at 3:12 am

space vs time: uhh, that you can reuse space but you can’t reuse time?

Comment #153 August 26th, 2011 at 6:16 am

What you are stating is a conjecture isn’t it? Is it on a rigorous pedestal?

Comment #154 August 26th, 2011 at 6:20 am

Is this a sci-fi scenario? Say aliens abduct you while you are at coordinates (x,t). They put you in a black box with a computer with a probe to the outside. Then they move you through a sequence of steps (x,t) -> (x1,t) -> (x1,t1) -> (x2,t1) -> (x2,t) -> (x,t). At each step, the aliens move you through either space or time, and eventually they bring you back to your starting point. Is there a way to say how many space transitions and how many time transitions you took?

Comment #155 August 27th, 2011 at 10:21 am

space vs time: interesting question. In normal circumstances, you could maybe infer something about the time parameter from the ambient entropy. But if these aliens can move you backwards and forwards in time, maybe they can also violate the Second Law!

In normal circumstances, the distinction is indeed that you can go any direction in space but only one direction in time. Within the framework of our two best theories (relativity and QM), this is not a conjecture but a straightforward fact (in relativity, the difference is that time has the opposite metric signature).

–Scott (on a ship off the coast of Norway)

Comment #156 August 28th, 2011 at 1:20 am

Professor, I think pure space transitions cannot be distinguished. But I suspect time transitions can be!

Comment #157 August 30th, 2011 at 9:50 pm

Regarding comment #144, I actually thought that the footnote sort of goes against the point. But if you use the conjecture to create new primes that we “know” via the “first prime after …” statement, your algorithm becomes less and less efficient as you nest the statement more and more times.
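The nesting point can be sketched in a few lines. This is a toy model: plain trial division stands in for a real primality test, and the doubling rule is made up purely so that each nesting level feeds a larger number back in.

```python
import math

def next_prime(n):
    # "The first prime after n", by trial division; fine for small n,
    # but the search cost grows with the size of its input.
    c = n + 1
    while any(c % d == 0 for d in range(2, math.isqrt(c) + 1)):
        c += 1
    return c

# Nest the statement: each level applies "first prime after" to a number
# built from the previous level's answer, so each call is more expensive.
p = 2
for _ in range(5):
    p = next_prime(2 * p)  # hypothetical doubling rule, for illustration
print(p)  # 97
```

The efficiency loss here is mild because the numbers stay tiny; with numbers the size of those in the essay (hundreds of digits), each extra nesting level multiplies the work substantially.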

I was wondering though about whether people would say that we “know” a prime if instead of having a simple closed formula like the one in the essay, we had a more complicated one, that we couldn’t evaluate so efficiently. I suspect many people would be satisfied with it. In fact, I suspect many people would be more satisfied with a complicated closed formula for the answer of a problem than with a simpler algorithm to compute it.

It seems to me that the possible complexity-theoretic arguments that might be true against AI, might also be made in areas other than AI where something should be possible to build, but we cannot build it. (Of course, a more likely candidate is “engineering problems”, but if there is something deeper behind those problems, then that something might be related to computational complexity, and efficiency in general).

Also, I think there is a typo in the last paragraph of page 14, when it says “then that would certainly strong excellent empirical evidence”.

Comment #158 January 19th, 2012 at 11:54 am

[…] Scott Aaronson’s 53 page Why Philosophers Should Care About Computational Complexity makes a strong case for places where the tree of philosophy can be trimmed by the sheers of […]

Comment #159 June 7th, 2012 at 8:10 am

A philosopher who cared about complexity theory.

http://www.gazettenet.com/2011/10/28/robert-ackermann.

The exception that proves the rule!

Comment #160 June 26th, 2013 at 3:31 am

[…] Philosophical Applications of Computational Learning Theory (1997), and Scott Aaronson’s Why Philosophers Should Care About Computational Complexity (2011) and recent meditation on free-will […]