## The T vs. HT (Truth vs. Higher Truth) problem

From a predictably-interesting article by Freeman Dyson in *Notices of the AMS* (hat tip to Peter Woit):

> The mathematicians discovered the central mystery of computability, the conjecture represented by the statement P is not equal to NP. The conjecture asserts that there exist mathematical problems which can be quickly solved in individual cases but cannot be solved by a quick algorithm applicable to all cases. The most famous example of such a problem is the traveling salesman problem, which is to find the shortest route for a salesman visiting a set of cities, knowing the distance between each pair. All the experts believe that the conjecture is true, and that the traveling salesman problem is an example of a problem that is P but not NP. But nobody has even a glimmer of an idea how to prove it. This is a mystery that could not even have been formulated within the nineteenth-century mathematical universe of Hermann Weyl.

At a literal level, the above passage contains several howlers (I’ll leave it to commenters to point them out), but at a “deeper” “poetic” level, Dyson happens to be absolutely right: P versus NP is the example *par excellence* of a mathematical mystery that human beings lacked the language even to express until very recently in our history.

Speaking of P versus NP, I’m currently visiting Sasha Razborov at his new home, the University of Chicago. (Yesterday we had lunch at “Barack’s favorite pizza place”, and walked past “Barack’s favorite bookstore.” Were they *really* his favorites? At a deeper poetic level, sure.)

One of the highlights of my trip was meeting Ketan Mulmuley for the first time, and talking with him about his geometric approach to the P vs. NP problem. Ketan comes across in person as an almost mythological figure, like a man who flew too close to the sun and was driven nearly to ecstatic obsession by what he saw. This is someone who’ll explain to anyone in earshot, for as long as he or she cares to listen, that he’s glimpsed the outlines of a solution of the P vs. NP problem in the far frontiers of mathematics, and it is beautiful, and it is elegant—someone who leaps from Ramanujan graphs to quantum groups to the Riemann Hypothesis over finite fields to circuit lower bounds in the space of a single sentence, as his hapless listener struggles to hold on by a fingernail—someone whose ideas seem to remain obstinately in limbo between incoherence and profundity, making just enough sense that you keep listening to them.

Now, I get emails every few months from people claiming to have proved P≠NP (not even counting the P=NP claimants). Without exception, they turn out to be hunting polar bears in the Sahara: they don’t even grapple with natural proofs, or relativization, or algebrization, or the lower bounds/derandomization connection, or any of the other stuff we know already about why the problem is hard. Ketan, by contrast, might be searching for polar bears with a kaleidoscope and trying to hunt them with a feather, but he’s in the Arctic all right. I have no idea whether his program will succeed within my lifetime at uncovering any of the truth about the P vs. NP problem, but it at least clears the lower hurdle of reflecting some of the higher truth.

Comment #1 January 9th, 2009 at 1:06 pm

I only see the really obvious one, that traveling salesman is NP-complete (or the decision version really), so I guess he meant to say that TSP is in NP but not P?

Ketan Mulmuley sounds really interesting too.

Comment #2 January 9th, 2009 at 1:11 pm

That paragraph definitively proves, at least for me, that Dyson is a physicist and not a mathematician. Only a physicist could butcher complexity that badly 🙂

Comment #3 January 9th, 2009 at 1:36 pm

I think you put your finger on why I like reading Dyson so much. On the surface so much to disagree with (in this article: almost anything he writes about ST, and even the basic distinction of the title, as if anyone can be a bird without also being a frog). But, when Dyson says it, it somehow smells right.

Comment #4 January 9th, 2009 at 1:42 pm

> … the traveling salesman problem is an example of a problem that is P but not NP.

It seems like he reversed P and NP here. Was that really his assertion or is it a typo on Scott’s part?

Comment #5 January 9th, 2009 at 1:59 pm

Yes, I see from the article that he did transpose P and NP, but I do agree that the spirit of the article is right.

Comment #6 January 9th, 2009 at 2:03 pm

> someone whose ideas seem to remain obstinately in limbo between incoherence and profundity

That pretty well reminds me of every Paul Thomas Anderson film I’ve ever seen.

Comment #7 January 9th, 2009 at 2:28 pm

This is the first time I come across the name Hermann Weyl and the P, NP symbols, both in the same paragraph. I must have missed something.

Comment #8 January 9th, 2009 at 2:38 pm

Alright, for whatever it’s worth:

1. Yeah, P and NP are switched.

2. “The conjecture asserts that there exist mathematical problems which can be quickly solved in individual cases but cannot be solved by a quick algorithm applicable to all cases.” Only true if you interpret “which can be quickly solved in individual cases” extremely loosely, as meaning “for which a solution can be quickly verified.”

3. P vs. NP is not about computability. (OK, I don’t know if that even counts.)
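The verification/search asymmetry that point 2 turns on can be made concrete with a toy 3-SAT checker (a sketch; the encoding of clauses as lists of signed integers is just a convention I’m assuming here):

```python
from itertools import product

def verify(clauses, assignment):
    """Check a claimed satisfying assignment: one pass over the
    clauses, so the certificate is verified in linear time."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def solve(clauses, n_vars):
    """Exhaustive search: up to 2^n candidate assignments -- the gap
    between this and verify() is what P vs. NP asks about."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if verify(clauses, assignment):
            return assignment
    return None
```

`verify` runs in time linear in the formula size, while `solve` may inspect exponentially many assignments; whether that gap is intrinsic is exactly the conjecture Dyson was gesturing at.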

Comment #9 January 9th, 2009 at 3:02 pm

The Freeman Dyson article, as with so much of his writing, is deep and superb.

I submit here a shortened and apolitical version of a previous too-long comment.

You are quite right to focus on: “a mathematical mystery that human beings lacked the language even to express until very recently in our history.”

So what is a Mystery, as opposed to a mere puzzle, or routine problem solving instantiation?

Russell Cameron Thomas of Meritology mentioned the difference between puzzle thinking (looking only under the light you know) and mystery thinking (shining a light into unknown areas to see what else is out there).

I think that the distinction applies in Mathematics.

The national-security expert Gregory Treverton has famously made a distinction between puzzles and mysteries.

The Dyson paper sifts through a lifetime of his experience as a great practitioner of math and Science, and his profound biographical knowledge of key unifiers in the fields. He makes me feel as if I’ve been on a golfing trip with Weyl and Godel and Turing and Riemann.

Comment #10 January 9th, 2009 at 3:35 pm

Also, I suppose his definition of TSP is quite laconic; visiting each city/node only once is omitted.

It is wonderful to think on how we have progressed in our understanding such that we have only uncovered and formulated such problems relatively recently. How many more fundamental problems are there to uncover?

So does T = HT? 😉

Comment #11 January 9th, 2009 at 3:41 pm

> but at a “deeper” “poetic” level, Dyson happens to be absolutely right: P versus NP is the example *par excellence* of a mathematical mystery that human beings lacked the language even to express until very recently in our history.

That is true too, but to me it seems Dyson is saying something else/more: that this sort of mathematical mystery only arose because we were “exploring” like frogs and discovered ‘reducibility among combinatorial problems’ and all the rest; it is not a question we would have thought of asking if we were at a “bird” level. What do you think of that?

Comment #12 January 9th, 2009 at 9:57 pm

Does it make sense to attack P \neq PSPACE first before we even think about proving P \neq NP? BTW, I would be grateful for a brief overview of P vs PSPACE affairs. E.g., are there any circuit lower bounds for PSPACE-complete problems stronger than those for NP-complete ones?

Comment #13 January 10th, 2009 at 12:21 am

I remember reading something about Mulmuley and P vs NP before, and (unsurprisingly) not being able to understand it. Here is a pdf:

http://theorie.informatik.uni-ulm.de/Personen/toran/beatcs/column78.pdf

Comment #14 January 10th, 2009 at 3:11 am

“The conjecture asserts that there exist mathematical problems which can be quickly solved in individual cases but cannot be solved by a quick algorithm applicable to all cases.”

I wonder if Dyson actually means something like: “If we believe that NP is not contained in BPP, then heuristics for NP-hard problems will always have to be bound to specific distributions.” This is from Gutfreund et al.’s paper “If NP Languages are Hard on the Worst-Case, Then it is Easy to Find Their Hard Instances.”

Comment #15 January 10th, 2009 at 7:13 am

Wei Dai, that’s interesting. Linear programming is now known to be in P, but it took a long time to prove that, as the usual algorithm (Simplex method) can take exponential time in the worst case, which almost never happens. Does Gutfreund et al’s paper imply that we could have known right away that linear programming is in BPP, simply by observing that the simplex method almost always works?

Comment #16 January 10th, 2009 at 11:06 am

Scott says:

2. “The conjecture asserts that there exist mathematical problems which can be quickly solved in individual cases but cannot be solved by a quick algorithm applicable to all cases.” Only true if you interpret “which can be quickly solved in individual cases” extremely loosely, as meaning “for which a solution can be quickly verified.”

I don’t actually see anything wrong with this sentence. For many of these problems, there do exist individual cases which can be easily solved. I.e., for random 3-SAT with a very low ratio of clauses to variables, etc… Perhaps Scott objects because the existence of these easy cases isn’t part of the actual mathematical conjecture, but it is a true statement about the problems.

I’m surprised no one objected to the statement that “mathematicians” have discovered this. I thought you guys called yourselves “theoretical computer scientists”?

Comment #17 January 10th, 2009 at 11:08 am

Btw, I thought Mulmuley’s approach was working toward VP vs VNP? How would one then go to P vs NP? Is the idea first to do VP vs VNP over the field C, and then later do the field Z_2 (and what would knowing VP \neq VNP over Z_2 tell us about P vs NP)?

Comment #18 January 10th, 2009 at 1:28 pm

Scott, a few questions. What if P vs. NP were undecidable? I know Chaitin is a proponent of this idea. It has long been believed by a wide part of the mathematical community that undecidable propositions are irrelevant. Maybe one or two of the most long-standing and universal problems are simply undecidable.

One could still hope to have a proof of undecidability, like the one given by Goedel and Cohen for the continuum hypothesis in the Zermelo-Fraenkel set of axioms.

Is there any line of research you know about looking for a proof of undecidability of P vs. NP? Does it make sense?

Comment #19 January 10th, 2009 at 3:18 pm

As a layman, I interpret Dyson as saying something about, say, specific instances of the Traveling Salesman problem: If you have n points in a straight line, say, or forming a convex polygon, then, sure, no matter how large n is, you can find an optimal route pretty quickly – and prove that this is so. So this statement isn’t really wrong, is it?

Comment #20 January 10th, 2009 at 3:43 pm

tomate, undecidability seems like a nice out to dodge the question, but doesn’t it seem like for a given model of computation there either is a nice solution to NP problems or there isn’t? CH seems so much more abstract, because ZFC is more abstract. Computational models seem to have a more solid relationship to reality.

It is as if in mathematics we get to choose our axioms based on what is the most interesting/beautiful assumption or result, whereas physics hands us the axioms without any way to ever check if our hypothesized axioms are completely correct. Computer science appears to benefit from both the definiteness of mathematical logic and the most well confirmed axioms of physics.

I’d be curious to read Chaitin’s take on the whole situation.

Comment #21 January 10th, 2009 at 3:44 pm

For those willing to hear, Ketan Mulmuley is going to give a series of 3 talks in Princeton at IAS during February 9-11.

Comment #22 January 10th, 2009 at 4:08 pm

I know that P vs NP has been shown undecidable over some extremely weak theories (weaker than Peano), but that is nowhere near showing undecidability over set theory.

There’s a key difference between undecidability of a problem such as P vs NP and the continuum hypothesis. P v NP can be described as a problem on the natural numbers: there exists X,Y, such that X is a program that correctly solves 3SAT for all inputs Z and runs in time at most |Z|^Y + Y. Whereas a typical undecidable problem in set theory cannot be phrased as a problem over the natural numbers. This is a key difference because the model of your axioms can always be countable. In particular, you can express any natural number, but you can’t express any real number (there are too many real numbers to all be expressible).

If P = NP, then there exist numbers X and Y (as above) which we can write down; and furthermore we can check it for any Z. Perhaps the axioms lack the strength to prove it for _all_ Z, but at least we can check any individual case.
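Luke’s arithmetization can be written out explicitly (a sketch; Halts and Out are my shorthand for the standard primitive-recursive predicates “machine X halts on input Z within t steps” and “the output of that run,” and the time bound follows his phrasing):

```latex
\mathrm{P} = \mathrm{NP} \;\Longleftrightarrow\;
\exists X \,\exists Y \,\forall Z \;
\Bigl[\, \mathrm{Halts}\bigl(X, Z, |Z|^{Y} + Y\bigr)
\;\wedge\; \mathrm{Out}(X, Z) = \mathrm{3SAT}(Z) \,\Bigr]
```

Since everything inside the brackets is decidable for each fixed X, Y, Z, this is a Sigma-2 sentence of arithmetic, which is what makes the per-instance checking described above possible.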

In contrast, for a set theoretical problem such as the continuum hypothesis, you can only write down countably many examples of sets, so there’s no hope of checking it’s true for any given set. In fact, you cannot give an explicit construction of a set between aleph0 and 2^aleph0 (http://en.wikipedia.org/wiki/Constructible_set)

Mathematically, it’s known that set theories such as ZF are extremely powerful when it comes to statements on the natural numbers. See http://en.wikipedia.org/wiki/Ordinal_analysis for a glimpse of how computability and ordinals are related: Peano can only construct computable ordinals up to epsilon0, but even weak set theories produce computable ordinals so large we have no notation for them. Because of that, I feel like it’s unlikely P v NP is undecidable over ZF (but that’s just my guess :). And if it is, there are still much stronger theories: ZFC, large cardinals…

Comment #23 January 10th, 2009 at 4:42 pm

I learned what Dyson calls the “Li-Yorke theorem” in a much more general form as “Sharkovsky’s theorem”, which I thought was earlier than the date he gives….

Comment #24 January 10th, 2009 at 5:06 pm

Cody,

you can read about Chaitin’s take on new axioms in mathematics here and almost everywhere in his bibliography. He mostly refers to the RH, and has little to say about P vs. NP. I found that page because I shared his view on the RH and looked for some more authoritative opinion.

I like that idea of axioms of computation being more “physical”. But we could just as well wonder if there could ever exist a complete and coherent theory of physics: it seems to me that the measure problem in QM and similar problems in CM (local tendency to equilibrium) could play the role of undecidable questions in physics. But maybe I just read too much Nietzsche and Stirner when I was young.

Luke,

this is a convincing point. And thanks for the links. Premise: I’m no expert.

All of the problems of ZFC come from measure theory over the reals and therefore from uncountability. Moreover, only a countable subset of the reals can be described with finite information, so that we cannot check hypotheses regarding the reals. But Goedel’s incompleteness theorem holds for any theory that is strong enough to axiomatize the naturals. In a set theory one builds the power set and gets the reals, but it is the naturals that are relevant for incompleteness, so I expect some incompleteness result regarding countable models (is there any axiomatic theory powerful enough to axiomatize the naturals, and not enough for the reals?).

If I ever had to think of an undecidable proposition about the naturals, I would of course look at the primes. Because they embody the mystery of mathematics, and because they enter in a fundamental fashion into Goedel’s demonstration of incompleteness. Of course one cannot demonstrate that, for example, the RH is irrefutable, since this would be a proof. But still one could have a demonstration of unprovability, and maybe something more.

Comment #25 January 10th, 2009 at 5:22 pm

Tomate, Scott has written a good article exploring the idea that P=NP might be undecidable:

http://www.scottaaronson.com/papers/pnp.pdf

Comment #26 January 10th, 2009 at 5:26 pm

Luke G

errata: I got confused: of course there are axioms for the natural numbers not strong enough to include the reals (Peano). So now I’m wondering: if P vs. NP is a problem over the naturals, couldn’t one build a set of axioms that fits well into the theory? If one had an incompleteness result over such axioms, would ZFC, which really has nothing to do with computation, fix this situation? I think this is unreasonable. Maybe one should just add P≠NP as an axiom.

Can you link me to the works you mentioned about undecidability over Peano?

Comment #27 January 10th, 2009 at 10:19 pm

Matt and ScentOfViolets: The problem is that e.g. PSPACE and EXP problems also have individual instances that are easy to solve. So Dyson’s wording doesn’t pick out the P vs. NP problem—again, unless you interpret “solve in individual cases” as “supply succinct certificates,” which seems strange.

Comment #28 January 11th, 2009 at 12:41 am

> BTW, I would be grateful for a brief overview of P vs PSPACE affairs. E.g., are there any circuit lower bounds for PSPACE-complete problems stronger than those for NP-complete ones?

I think I read somewhere that there are no linear-size circuits for PP. Can’t find any reference though.

Comment #29 January 11th, 2009 at 1:01 am

Found it.

http://www.scottaaronson.com/democritus/lec16.html

Though the circuit lower bounds are for the entire class, not a single language.

Comment #30 January 11th, 2009 at 4:29 am

> Does Gutfreund et al’s paper imply that we could have known right away that linear programming is in BPP, simply by observing that the simplex method almost always works?

asdf, I think Gutfreund et al’s paper would only imply that linear programming is not NP-hard for a stronger version of “the simplex method almost always works” than in real life. For Gutfreund et al’s result to apply, you’d have to show that it’s impossible to find instances that are likely to be hard for the simplex method in polynomial time, but in reality such hard instances are not so difficult to find.

Comment #31 January 11th, 2009 at 6:01 am

Mulmuley is fascinating and I hope he gets the million dollars. Thanks for writing about him, Scott!

Comment #32 January 11th, 2009 at 10:10 am

Another problem is that he glosses over the difference between a problem in NP and an optimization problem. Maybe ‘glosses over’ is a bit charitable – he seems legitimately confused. Travelling salesman is only in NP for a particular threshold, and then the verifiable statement is that you beat the threshold. The question of verifying whether a solution is optimal is in coNP.
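The threshold point above can be sketched directly (a toy; brute force over all tours stands in for the absence of any known short certificate of optimality):

```python
from itertools import permutations

def tour_length(dist, tour):
    """Total length of a closed tour; dist[i][j] = distance from i to j."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def verify_decision(dist, tour, threshold):
    """NP-style check for 'is there a tour of length <= threshold?':
    the tour itself is the certificate, verified in polynomial time."""
    n = len(dist)
    return sorted(tour) == list(range(n)) and tour_length(dist, tour) <= threshold

def verify_optimal(dist, tour):
    """Certifying optimality: no obviously short witness -- here we just
    compare against every other tour, which takes (n-1)! time."""
    best = min(tour_length(dist, list(p))
               for p in permutations(range(len(dist))))
    return tour_length(dist, tour) == best
```

The decision version has a succinct yes-certificate (the tour); certifying that no shorter tour exists is the coNP-flavored question, and nothing shorter than exhaustive comparison is used here.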

Comment #33 January 11th, 2009 at 2:36 pm

Ah, that makes sense. That goes with knowing too much and reading what isn’t there into the article. Not to be confused with an overly sympathetic or apologetic reading, mind you.

Comment #34 January 11th, 2009 at 5:35 pm

An excellent and accurate evocation of Ketan, except for one minor point. Not so much searching for polar bears, as hunting the Snark…

Comment #35 January 11th, 2009 at 9:33 pm

From my limited reading of Ketan’s papers (I follow them even less than you do), it sounds like he understands the natural-proofs barrier just fine, and is taking on circuit lower bounds head-on, using fancy math to try to overcome the obstacles.

Do you know if Ketan has heard of the ‘multiplication doesn’t help with addition circuits’ conjecture? That one seems to be a clarifying special case of general circuit complexity, and I’m rather fond of my own special case of it, even though the clarification is the usual computational complexity sort of backwards progress where you feel even less likely to make future progress after making an insight than you felt before it.

Comment #36 January 12th, 2009 at 4:21 am

I remember hearing Dyson talk in Boulder about 15 years ago where he claimed to have a simple thought experiment which disproved Heisenberg’s uncertainty principle.

Jeffrey Shallit on recursivity has just nominated him for Blowhard of the Month, and I think he has a point.

(http://recursed.blogspot.com/2009/01/blowhard-of-month-freeman-dyson.html)

Comment #37 January 12th, 2009 at 10:37 am

Scott, I’d like to ask you to delete the comment above from Someone and ban such people in the future.

Comment #38 January 12th, 2009 at 4:08 pm

I couldn’t agree more with Stas. It’s not the place for that sort of thing.

Comment #39 January 12th, 2009 at 6:24 pm

Stas and Someone Else: Done. Sorry, I’m at QIP in Santa Fe right now, and haven’t had a chance to check the comments section for antisemitic trolls.

Incidentally, for those who are curious: I have nothing whatsoever to say about the Israel/Gaza situation that isn’t being said 10^500 other places on the web. I think it’s horrible that hundreds of civilians are getting killed in Gaza, and also horrible that Hamas will probably soon be firing missiles at Tel Aviv. I hope there will be peace in my lifetime, but am not too optimistic.

Comment #40 January 13th, 2009 at 4:29 am

Scott says:

> … I hope there will be peace in my lifetime, but am not too optimistic …

A fine post. But Scott, there *are* people who are optimistic, and anyone who thinks they *might* be right, qualifies as optimistic too. So you might be closer to optimism than you think.

Comment #41 January 13th, 2009 at 11:48 am

On the face of it, this is off-topic:

> Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.
>
> — Brian W. Kernighan

Comment #42 January 18th, 2009 at 10:56 pm

> At a literal level, the above passage contains several howlers (I’ll leave it to commenters to point them out), but at a “deeper” “poetic” level, Dyson happens to be absolutely right: P versus NP is the example *par excellence* of a mathematical mystery that human beings lacked the language even to express until very recently in our history.

— quoting Scott Aaronson

SH: I was first introduced to the logician John Myhill in Hofstadter’s GEB, and I’ll produce a partial quote of Myhill’s ideas (Impossibility, Barrow).

> In 1952, the American logician John Myhill produced the most interesting of these. Myhill classified possibilities by borrowing some terminology from mathematical logic and dividing ideas into three classes: ‘effective’, ‘constructive’, and ‘prospective’. The most accessible and quantifiable aspects of the world have the attribute of computability. There exists a definite mechanical procedure for deciding if anything does or does not possess a particular property. Computers and human beings can be trained to respond to the presence or absence of such attributes. Truth is not a computable property, while being a prime number is.
>
> A more elusive set of attributes are those which are merely listable. For these, we can construct a procedure which will list all the cases that possess the desired attribute (although you might need to wait for an infinite time for the listing to be completed). There is, however, no way of producing a listing of all the cases that do not possess the attribute. Many logical systems are listable, but not computable: all their theorems can be listed, but there is no mechanical procedure for deciding whether any given statement is or is not a theorem. In a mathematical world with no Gödel theorem, every statement would be listable. In a world without Turing’s uncomputable operations, every property of the world would be computable.

SH: The “Truth vs. Higher Truth” thread title reminded me that Myhill thinks truth is not computable. I think listable has more to do with P≠NP.

I’ve actually read Seth Lloyd’s description of the universe as a quantum computer, and I’ll assume it’s not that different from Scott’s, since his review of NKS was not all that favorable. Wolfram thinks the universe might be a simple CA rule unfolding in complexity over billions of years; it displays pseudo-randomness, which is a lack of predictability. Quantum randomness is conceptually different: there is no pattern, which just can’t be predicted.

Apparently a quantum computer cannot be employed to speed up the process of unfolding a CA rule. Thus we can inspect the computations from within the universe and be unable to distinguish the provenance of our finite observations of random quantum-theory origin from pseudo-random CA rule computation (if one or the other description is correct). There is no algorithmic operation to transform the “listable” output property into a computable procedure which distinguishes its origins. One can’t tell from a finite sequence of digits (or observed events) whether the source of those digits/events has a random or pseudo-random foundation.

Finishing the quote:

> The problem of deciding whether this page possesses the attribute of grammatically correct English is a computable one. But the page could still be meaningless to a non-English reader. With time, the reader could learn more and more English, so that bits of the page became intelligible, but there is no way of predicting where the meaningful parts will be located on the page. The attribute of meaningfulness is thus listable but not computable.
>
> Not every attribute of things is listable or computable. The property of being a true statement of arithmetic is an example. Attributes which are neither listable nor computable are called ‘prospective’: they can be neither recognized nor generated in a finite number of deductive steps.

SH: This reminded me of the Chinese Room Argument a bit, which evoked the Systems reply. In the comments about Penrose I noticed that the ‘meaning of meaning’ came into question. I think in AI this philosophical notion often falls under original intentionality vs. derived intentionality.

> Beauty, simplicity, ugliness, and truth are all prospective properties. Prospective properties are beyond the reach of mere technique. They are outside the grasp of any mathematical Theory of Everything. That is why no non-poetic account of reality can be complete.

SH: (Dyson → poetic level)
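Myhill’s “listable but not computable” can be illustrated with the MIU system from the Hofstadter book mentioned above (my toy sketch; the length cap is only there to keep the enumeration finite):

```python
from collections import deque

def miu_successors(s, max_len=8):
    """Apply each MIU rewrite rule to s, keeping results under the cap."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                      # rule 1: xI -> xIU
    if s.startswith("M"):
        out.add(s + s[1:])                    # rule 2: Mx -> Mxx
    for i in range(len(s) - 2):
        if s[i:i+3] == "III":
            out.add(s[:i] + "U" + s[i+3:])    # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i+2] == "UU":
            out.add(s[:i] + s[i+2:])          # rule 4: UU -> (delete)
    return {t for t in out if len(t) <= max_len}

def list_theorems(max_len=8):
    """Breadth-first enumeration from the axiom MI: every theorem
    (up to the length cap) eventually appears in the listing, but the
    listing itself never certifies that a non-theorem will NOT appear."""
    seen, frontier = {"MI"}, deque(["MI"])
    while frontier:
        s = frontier.popleft()
        yield s
        for t in miu_successors(s, max_len):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
```

Every theorem shows up after finitely many steps (listable); deciding that MU is *not* a theorem needs an outside invariant (the count of I’s is never divisible by 3), which is exactly the extra ingredient enumeration alone cannot supply.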

Comment #43 January 24th, 2009 at 1:57 pm

I took a brief peek at Ketan Mulmuley’s paper, and very early on he mentions “P versus NP in characteristic zero” and “P versus NP over C” (the field of complex numbers). What do these phrases mean?

Comment #44 January 24th, 2009 at 7:07 pm

John: There’s a notion of computation over an arbitrary ring—where instead of AND, OR, and NOT gates, the basic operations that you’re given are the ring operations together with equality testing. (If it’s an ordered ring like the real numbers, it’s also generally assumed that you have greater-than and less-than tests.) You can then formulate an analogue of the P vs. NP question over whatever ring you want—with the *standard* P vs. NP question corresponding to a *finite* field or ring. So for example, if I remember correctly, P≠NP over the complex numbers essentially asserts that there’s no circuit involving addition, subtraction, multiplication, equality testing, branching, and complex constants, with size polynomial in n, that accepts the integers from 1 to 2^n and rejects all the other complex numbers.

(*Exercise:* Why is that an “NP” question? That is, if a complex number is an integer from 1 to 2^n, why is there a polynomial-size “proof” or “witness” of that fact? You can assume the ability to recognize the constants 0 and 1. Of course, one also needs to prove that the question is “NP-complete” within this world.)

You can also formulate analogous “P versus NP” questions over the integers and the reals.

To address the obvious question, no implications among the various P versus NP questions are currently known. But the general feeling is that P vs. NP over the reals or complex numbers is likely to be easier than P vs. NP over finite fields, and might therefore be an excellent “warmup” to the finitary question that we ultimately care about. Alas, the real and complex cases seem incredibly hard in their own right.

If you want to read more about nonstandard analogues of P vs. NP, the place to go is the beautiful book Complexity and Real Computation by Blum, Cucker, Shub, and Smale. I don’t know if you’ll find the time for it, but I’m almost certain you’d like it.
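One possible answer to the exercise (my own sketch, not from the comment): take the witness for “x is an integer in [1, 2^n]” to be x’s binary expansion, which can be checked with O(n) ring operations and equality tests:

```python
def verify_integer_witness(x, bits):
    """Check that x = 1 + sum(bits[i] * 2^i) using only the ring
    operations +, -, * and equality tests; the n values in 'bits'
    are the polynomial-size witness that x is an integer in [1, 2^n]."""
    total, power = 1, 1          # only the constants 0 and 1 are assumed
    for b in bits:
        if b * (b - 1) != 0:     # ring equation forcing b to be 0 or 1
            return False
        total = total + b * power
        power = power + power    # doubling builds 2^i from additions alone
    return x == total
```

The same check runs verbatim on complex inputs, e.g. `verify_integer_witness(5 + 0j, [0, 0, 1])`—which is the point: the witness stays short even though the ambient field is uncountable.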

Comment #45 January 25th, 2009 at 1:38 am

John Baez Says:

> I took a brief peek at Ketan Mulmuley’s paper, and very early on he mentions “P versus NP in characteristic zero” and “P versus NP over C” (the field of complex numbers). What do these phrases mean?

SH: I noticed that a guy named Kevin asked the same question about characteristic zero and received a technical (to me) answer from Ketan.

http://weblog.fortnow.com/2008/04/ketan-mulmuley-responds.html

#9 ketan says to Kevin, …