## The converse of smoothed analysis

A year ago, Timothy Gowers posted the following beautiful question to MathOverflow:

Are there any interesting examples of random NP-complete problems?

Here’s an example of the kind of thing I mean. Let’s consider a random instance of 3-SAT, where you choose enough clauses for the formula to be almost certainly unsatisfiable, but not too many more than that. So now you have a smallish random formula that is unsatisfiable. Given that formula, you can then ask, for any subset of its clauses, whether that subset gives you a satisfiable formula. That is a random (because it depends on the original random collection of clauses) problem in NP. It also looks as though it ought to be pretty hard. But proving that it is usually NP-complete also seems to be hard, because you don’t have the usual freedom to simulate.

So my question is whether there are any results known that say that some randomized problem is NP-complete. (One can invent silly artificial examples like having a randomized part that has no effect on the problem — hence the word “interesting” in the question.)

On skimming this question, my first thought was: “aha, he’s obviously groping toward the well-studied notion of *average-case complexity*! Let me generously enlighten him.” But no, it turns out he *wasn’t* asking about average-case complexity, but about something different and novel. Namely, the *random* generation of computational problems consisting of exponentially many instances, for which we’re then interested in the worst-case instance. When I explained to Gil Kalai what Gowers wanted, Gil amusingly described it as the “converse of smoothed analysis.” In smoothed analysis—one of many contributions for which Dan Spielman recently won the Nevanlinna Prize—we start with a worst-case instance of a problem (such as linear programming), then perturb the instance by adding some random noise. Gowers wants to do the opposite: start from a *random* instance and then perform a “worst-case perturbation” of it. (The closest existing notions I could think of were trapdoor one-way functions and other primitives in cryptography, which involve the random generation of a computational problem that’s then supposed to be hard on average.)

Anyway, I tossed the question onto my stack of “questions that *could* develop into whole new branches of theoretical computer science, if someone felt like developing them,” and pretty much forgot about it. Then, at dinner last night, I posed the question to Allan Sly, who’s visiting MIT to talk about his exciting new FOCS paper Computational transition at the uniqueness threshold. Within an hour, Allan had emailed me a sketch of an NP-hardness proof for the “random 3SAT” problem that Gowers asked about. I repost Allan’s solution here with his kind permission.

Group the n variables arbitrarily into N = n^{ε} groups M_{1}, …, M_{N} of size n^{1-ε}. For each group M_{i}, take all the clauses with all 3 variables in M_{i} that are satisfied by both the all-1 and the all-0 assignments, i.e. clauses that have either 1 or 2 variables negated. I think that just a first moment estimate should show that with high probability the only assignments on M_{i} that satisfy all of these clauses are the all-1 assignment or the all-0 assignment – other assignments are just too unlikely. So by taking these clauses we reduce to the case where we have constant values on each of the groups.

Once you have these clauses you can then treat each group as a new variable and can construct any SAT instance on these new variables, because now you only need to find a clause with 1 variable in each of M_{i}, M_{j}, M_{k}, for each (i,j,k) ∈ [N]^{3}, that has the right negations. With high probability all of them should exist, so you should be able to make whatever SAT instance on the N variables you want.

My back-of-the-envelope calculation then suggests that as long as you have n^{1+ε} random clauses to begin with, this should be enough.
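To make Allan’s sketch concrete, here is a minimal Python rendition (the contiguous partition, the parameter choices, and all function names are my own illustrative assumptions, not part of Allan’s argument): it samples random 3-clauses, collects the “mixed” clauses internal to each group (the ones forcing a group to be all-0 or all-1), and searches for a cross-group clause with a prescribed negation pattern.

```python
import random

def random_clauses(n, m, rng):
    """Sample m distinct random 3-clauses over variables 1..n.
    A clause is a frozenset of 3 signed literals, e.g. {2, -5, 7}."""
    clauses = set()
    while len(clauses) < m:
        vs = rng.sample(range(1, n + 1), 3)
        clauses.add(frozenset(v if rng.random() < 0.5 else -v for v in vs))
    return clauses

def group_of(v, group_size):
    """Group index of variable v (here a contiguous partition, for simplicity)."""
    return (v - 1) // group_size

def mixed_internal_clauses(clauses, group_size):
    """Clauses whose 3 variables all lie in one group and that have 1 or 2
    negated literals.  Such clauses are satisfied by both the all-0 and the
    all-1 assignment on the group; w.h.p. they rule out everything else,
    so each group collapses to a single Boolean 'super-variable'."""
    by_group = {}
    for c in clauses:
        groups = {group_of(abs(l), group_size) for l in c}
        negated = sum(1 for l in c if l < 0)
        if len(groups) == 1 and negated in (1, 2):
            by_group.setdefault(groups.pop(), set()).add(c)
    return by_group

def find_cross_clause(clauses, group_size, gi, gj, gk, signs):
    """Search for a clause with one literal in each of the distinct groups
    gi, gj, gk, matching the negation pattern signs: {group: +1 or -1}."""
    want = sorted((gi, gj, gk))
    for c in clauses:
        lits = {group_of(abs(l), group_size): l for l in c}
        if len(lits) == 3 and sorted(lits) == want:
            if all(lits[g] * signs[g] > 0 for g in lits):
                return c
    return None
```

The point of the argument is that with n^{1+ε} random clauses and groups of size n^{1-ε}, each group sees enough internal mixed clauses for the first-moment estimate, and each of the 8·N^3 cross-group clause patterns exists w.h.p., which is what lets one encode an arbitrary 3-SAT formula on the N super-variables.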

It’s not hard to see that Allan’s solution generalizes to 3-COLORING and other constraint satisfaction problems (maybe even *all* NP-complete CSPs?). In retrospect, of course, the solution is embarrassingly simple, but one could easily generate other problems in the same vein for which proving NP-hardness was as nontrivial as you wanted it to be. Further development of this new branch of theoretical computer science, as well as coming up with a catchy name for it, are left as exercises for the reader.

Comment #1 October 6th, 2010 at 11:01 am

That’s very nice! I’ll try to remember why I asked the question in the first place, and see whether this answer is satisfactory or whether I need to come up with a better question.

Comment #2 October 6th, 2010 at 12:22 pm

Hi Scott

The concept of complexity of a random input perturbed in the worst-case has been explored before, I think, in a paper by Feige and Kilian (FOCS 1998). But I don’t believe they showed any hardness results.

Btw, I guess you/Timothy want to fix the “random” initial part to something small (or else I could sample n^4 random clauses of length 3 … and then the random part includes all possible clauses anyway).
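Madhu’s parenthetical is essentially a coupon-collector observation: with n^4 uniform draws from the ≈ (4/3)n^3 possible 3-clauses, the expected number of clauses that never appear is vanishing. A quick sanity check (the numerical choices here are mine):

```python
import math

def expected_missed(n, draws):
    """Expected number of the 8*C(n,3) possible 3-clauses that never
    appear among `draws` uniform samples (with replacement)."""
    total = 8 * math.comb(n, 3)
    return total * (1 - 1 / total) ** draws

# With n^4 draws, essentially every possible clause shows up:
print(expected_missed(10, 10 ** 4))   # ~0.03
```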

Comment #3 October 6th, 2010 at 1:37 pm

What I am confused about is that if you have a random (unsatisfiable) instance of SAT, and then you find a subset of the clauses that comprise a satisfiable instance, then why is this final satisfiable instance random?

It could be that the original random instance always contains a subset of clauses with a special structure, and therefore this special subset is not really “random”.

Comment #4 October 6th, 2010 at 2:01 pm

“distinctly unsmoothed [or] crinkled up analysis”

Comment #5 October 6th, 2010 at 3:40 pm

asterix:

> What I am confused about is that if you have a random (unsatisfiable) instance of SAT, and then you find a subset of the clauses that comprise a satisfiable instance, then why is this final satisfiable instance random?

It isn’t.

> It could be that the original random instance always contains a subset of clauses with a special structure, and therefore this special subset is not really “random”.

Yes, that’s completely right! Indeed, in Allan’s NP-hardness argument, he does construct a very special subset of clauses. The issue is just that it wasn’t clear a priori that such a subset exists w.h.p.—and if the original set of clauses has (say) linear size, then it still isn’t clear.

Comment #6 October 6th, 2010 at 3:47 pm

Madhu: Thanks for the Feige-Kilian reference!

> (or else I could sample n^4 random clauses of length 3 … and then the random part includes all possible clauses anyway).

Yes, that’s right! That’s exactly why it’s important that Allan’s solution only requires n^{1+ε} clauses.

Comment #7 October 6th, 2010 at 4:38 pm

“Given that formula, you can then ask, for any subset of its clauses, whether that subset gives you a satisfiable formula. That is a random (because it depends on the original random collection of clauses) problem in NP. ”

I misread “random problem” as “random subset”.

Now it makes sense. Thanks!

Comment #8 October 6th, 2010 at 11:47 pm

Interesting construction. I might not have been so cavalier about walking over and interrupting the dinner conversation if I had realized that I was also interrupting the genesis of the discipline of jagged catalysis.

Comment #9 October 7th, 2010 at 2:56 am

Chunky analysis? Chunked? Chunkified?

(I like peanut butter)

Comment #10 October 7th, 2010 at 2:18 pm

Local worst-case analysis, compared to the usual global worst-case.

Comment #11 October 7th, 2010 at 5:10 pm

I have a newbie question. I *almost* understand the foregoing, but I think some tacit standard assumptions of the field are escaping me.

Gowers asks us to pick a random set of clauses just large enough to make unsatisfiability overwhelmingly likely. Until now I didn’t know that the overwhelming majority of large 3-SAT instances turns out to be unsatisfiable, but I take it that this is a well-known result.

Then he says to pick a random subset of this preselected set of clauses, small enough (I guess) that it might well be satisfiable, and asks whether *this* is satisfiable, and then adds, confusingly for this beginner, “This is a random problem in NP.” What is an instance of this problem? What is the universe of instances? What is the parameter n of the complexity function? (I didn’t ask that last question properly; I’m too much of a beginner to know how to phrase it, but I hope my meaning is clear.)

What’s confusing for me, I guess, is: how can a problem, which is supposed to be a set of instances, nestle inside a single instance of a bigger problem?

Comment #12 October 7th, 2010 at 10:05 pm

The Gowers-Aaronson-Sly (GAS) model of random NP problems should provide fun for everyone on a visit to the Zoo. GAS-3SAT is just the first step, GAS-3COL already beckons…

Using Tracy Hall’s suggested terminology, there are starting instances that never generate an NP-hard problem, and the choice of partition is also important. So this doesn’t seem to be an NP-hardness proof in the usual sense, via a deterministic logspace reduction; the NP-hardness proof is only for “sufficiently jagged” starting instances, and only when a “catalytic” partition is chosen. One question is then: which parameter characterizes the easy to hard transition, and can one prove a phase transition? For instance, does fraction of mixed clauses work for jaggedness? And when does jaggedness fail to provide a catalyst?

Comment #13 October 8th, 2010 at 10:48 am

> What’s confusing for me, I guess, is: how can a problem, which is supposed to be a set of instances, nestle inside a single instance of a bigger problem?

The original 3SAT problem can be viewed this way also. Creating a 3CNF formula over n variables (say, for the purpose of reducing some other NP problem to 3SAT) can be viewed as starting with the single biggest possible 3CNF formula over n variables (the one with all ≈ n^3 possible clauses over n variables) and removing some of its clauses.

In this new model, when constructing the 3CNF formula, I don’t have complete freedom to choose any clause I want. Instead of having complete freedom to choose any clause over n variables, only a randomly chosen subset of n^{1.1} of the ≈ n^3 possible clauses is available to me. The question is then, do I have enough freedom to get the behavior I want from the formula with only these clauses available to choose from? There is still an exponential number of formulas I can create (2^{n^{1.1}} versus 2^{n^3} in the standard model), but it is not obvious whether those formulas can get me all the behaviors I might want (although it appears that the answer is yes).

You can of course view other problems this way. Outputting a graph with n nodes can be viewed as starting with K_n, the complete graph over n nodes, and removing some of its edges.
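Dave’s “restricted palette” picture is easy to make concrete. In this sketch (function names and the choice n = 12 are mine), the universe of clauses has size 8·C(n,3) ≈ (4/3)n^3, of which only ⌈n^1.1⌉ are made available for building formulas:

```python
import itertools
import math
import random

def all_clauses(n):
    """The full universe: choose 3 of n variables, then one of 8 negation
    patterns -- 8 * C(n,3), i.e. roughly (4/3) n^3 clauses in total."""
    return [frozenset((si * i, sj * j, sk * k))
            for i, j, k in itertools.combinations(range(1, n + 1), 3)
            for si, sj, sk in itertools.product((1, -1), repeat=3)]

def available_palette(n, rng):
    """The restricted model: only ~n^1.1 randomly chosen clauses may be
    used when constructing a formula."""
    return rng.sample(all_clauses(n), math.ceil(n ** 1.1))

universe = all_clauses(12)
print(len(universe))                                  # 8 * C(12,3) = 1760
print(len(available_palette(12, random.Random(1))))   # ceil(12^1.1) = 16
```

Any subset of the palette is a legal formula, so there are still 2^{n^{1.1}} formulas available; the non-obvious part is whether that restricted collection is expressive enough for reductions.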


Comment #15 October 8th, 2010 at 12:23 pm

Dave, your response helps, but I’m still not completely out of the woods. Suppose we pick a big “master problem” with a set S of N different clauses. Now the new problem that we’re constructing has, as inputs, subsets of S. There are exactly 2^N such subsets.

I thought that an *echt* Decision Problem had to have an infinite set of possible inputs. But 2^N, while possibly monstrously large, is not infinite. All the definitions of complexity that I know about (and remember, I’m a tyro) require unbounded instance sizes, in order to enable asymptotic reasoning. With a cap on instance size, I claim that the new problem is O(1). [Just take the very hardest instance (there must be one). Its required runtime is some (monstrously enormous) constant T. By definition, the runtime for no instance of the problem exceeds T. So the limit, as the problem size goes to infinity, of the runtime is this constant.]

As I said to start with, I’m obviously missing some subtlety.

Comment #16 October 8th, 2010 at 1:29 pm

> I thought that an *echt* Decision Problem had to have an infinite set of possible inputs. But 2^N, while possibly monstrously large, is not infinite.

You can take the union over all N, right?

Here might be a way to phrase the problem as a promise problem. Start with the infinite set C of all 3CNF clauses, whose variables are labeled x_i for some positive integer i. For each natural number n, identify the set of ≈ 2^{n^3} clauses that contain only variables x_i with i ≤ n. Randomly select 2^{n^1.1} of them to keep, and throw the rest away. Call the resulting (infinite, since we do this for all n) set of clauses C’.

Now you can define a promise problem P: Given a 3CNF formula that is promised to contain only clauses from C’, is it satisfiable or not? The question is whether this promise problem is NP-hard or not.

What I don’t understand about the argument that Scott outlines is this. Define P_Y to be the set of satisfiable 3CNF formulas with only clauses from C’, and P_N to be the set of unsatisfiable 3CNF formulas with only clauses from C’. Call S_Y the set of satisfiable 3CNF formulas and S_N the set of unsatisfiable 3CNF formulas. For this promise problem P=(P_Y,P_N) to be NP-hard, we would have to show that there is a polynomial-time computable function f such that f(φ) is in P_Y if φ is in S_Y, and f(φ) is in P_N if φ is in S_N. The argument above depends on proving the *existence* of sufficient clauses in C’ to create a formula f(φ) in P that is satisfiable if and only if φ is satisfiable. But I see no way to *find* such clauses in polynomial time, since they are defined by randomly deleting clauses from an exponential-size (2^{n^3}) set of clauses to form another exponential-size (2^{n^1.1}) set of clauses.

Perhaps the argument is not intended to form a promise problem. In this case, change P_N to be the set of all 3CNF formulas not in P_Y, and leave the definitions of P_Y, S_Y, and S_N the same. Still I don’t know how to reduce 3SAT to P.

Comment #17 October 8th, 2010 at 7:36 pm

Hi Scott, that’s nice! Indeed, when we discussed it over email (and you kindly explained to me Gowers’s question), it looked like the reverse smoothed behavior asked for by Tim will be close to the worst-case behavior, while smoothed analysis is often close in properties to the average case. (Of course, we need to worry about the scale for the perturbation.) In this direction we can ask, for LP: if we take a worst-case linear program in the neighborhood of a random problem, will the simplex algorithm be exponential, for those pivot rules where we know it is exponential in the worst case?

You mentioned random perturbations of worst-case instances and worst-case perturbations of random instances. Let me mention that there is also interest in random perturbations of random instances. How come, you may ask? The reason is that probability theory goes far beyond expectations: while this will not be terribly interesting for averages, it is still interesting. In a sense, this is what noise sensitivity/stability is about.

Let me mention another related idea by Bilu and Linial (in a recent ICS paper) that “isolated instances of hard problems are easy”. Of course, being isolated says something like the average behavior of a perturbed problem behaves like the problem itself so in some sense it is related to smoothed analysis.

Comment #18 October 11th, 2010 at 7:09 pm

The collection of NP-hard problems that you get by this method are promise problems in which even stating the promise requires non-uniformity (for each n, the promise is that the given formula is a subset of the randomly chosen one with n variables). You must be a really big fan of promise problems to accept these as “randomly generated NP-complete problems”. They do not even have a finite description!

But what I want to point out in this comment is that more or less interesting (non-artificial) randomly generated NP-complete problems (with finite descriptions!) have been known for some time. I’ll give references later, but let me state a simple special case first.

For a fixed finite graph H, let H-coloring be the following problem: given a graph G, is there a homomorphism from G into H? Here a homomorphism is just a mapping from the vertices of G to the vertices of H that takes edges of G into edges of H (the mapping may identify some vertices and may map non-edges to whatever it wants). The problem is called H-coloring because the case where H is a triangle is just 3-coloring in the usual sense. Now I claim that if you let H be a G(n,1/2) random graph, then H-coloring is NP-complete almost surely as n approaches infinity.

The one-line proof calls the Hell-Nesetril Theorem: If H is bipartite then H-coloring is in P, otherwise H-coloring is NP-complete. Since G(n,1/2) is almost surely non-bipartite, the claim follows. Of course the same works for H taken from G(n,c/n) and many other models of random graphs for H.
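The H-coloring problem itself is easy to state in code. The following brute-force checker (exponential-time, for illustration only; the function names are mine) recovers ordinary 3-coloring when H is a triangle:

```python
import itertools

def is_homomorphism(G_edges, H_edges, f):
    """Does f map every edge of G to an edge of H (graphs undirected)?"""
    H = {frozenset(e) for e in H_edges}
    return all(frozenset((f[u], f[v])) in H for u, v in G_edges)

def h_colorable(G_vertices, G_edges, H_vertices, H_edges):
    """Brute-force H-coloring: try all |H|^|G| vertex maps.  When H is a
    triangle this is exactly 3-colorability of G.  (Mapping both endpoints
    of a G-edge to one H-vertex yields a singleton set, never an edge of a
    loopless H, so such maps are correctly rejected.)"""
    for image in itertools.product(H_vertices, repeat=len(G_vertices)):
        f = dict(zip(G_vertices, image))
        if is_homomorphism(G_edges, H_edges, f):
            return True
    return False

triangle = ([0, 1, 2], [(0, 1), (1, 2), (0, 2)])
square = ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])
k4 = ([0, 1, 2, 3], list(itertools.combinations(range(4), 2)))

print(h_colorable(*square, *triangle))   # True: the 4-cycle is 3-colorable
print(h_colorable(*k4, *triangle))       # False: K4 is not 3-colorable
```

The dichotomy says the interesting line is bipartiteness of H: here the decision procedure is trivial to write down, and all the content is in how hard the question is to answer.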

A much more general result was shown by Luczak and Nesetril here for general homomorphism problems into a fixed template (so-called CSPs). The key to the argument is that a random structure is almost surely projective (it has no non-trivial homomorphisms from a power of itself into itself), from which NP-completeness follows from the known half of the so-called “algebraic dichotomy conjecture” for CSPs.

There are other dichotomy results that could give rise to a few other randomly generated NP-complete problems (e.g. directed graph homeomorphism problem).

Comment #19 October 12th, 2010 at 8:13 am

I don’t mind confessing that I have been struggling to follow the above discussion, and in particular, Tim’s (Gowers’s) motivations for raising this problem are still somewhat mysterious (to me). So I’ll simply guess at them!

To begin, over on Dick Lipton’s weblog, it appears that similar problems are being attacked from precisely the opposite direction, under the topic *Math and Theory Proof Methods: A Comparison*. In particular, Paul Beame has offered some general remarks upon the role of structure theorems in complexity theory, and then Luca Aceto followed up with some well-considered comments and specific references relating to hybrid dynamical systems and bisimulation theory.

The remainder of this post will attempt to identify some common mathematical themes that are shared by these two fine Lipton/Aaronson weblog topics.

Scott and Tim, are your random NP-complete problems being constructed with a view toward demonstrating that Beame-type structure theorems in general, and Aceto-style bisimulation theory in particular, are in some sense irrelevant to at least some members of the broader class of NP-complete problems? In other words, is the idea to construct a (structure-free?) class of Gowers/Aaronson problems that is complementary to the structure-restricted class of Beame/Aceto problems?

Thus, is one possible long-term objective—albeit a hyper-ambitious objective—to establish that all problems in P satisfy Beame-type structure theorems (in some to-be-specified sense), but some problems in NP don’t? This would be IMHO a very interesting program to pursue, and I will observe that the resulting complexity-theoretic world might well prove congenial both to complexity theorists and to quantum systems engineers.

For example, Lindbladian dynamical processes would be recognized as generically unravelling (Aceto-style) to hybrid bisimulation processes … specifically, with the classical noise/measurement datastreams being bisimulation-dual to Lindblad-compressed quantum trajectories. Thus, practical quantum simulations would be seen as generically residing in P, without risk of collapse of well-established hierarchies of classical and quantum complexity theory … which would comprise a major advance in our understanding of practical methods for quantum systems engineering.

More broadly, these Beame/Aceto structure-centric ideas would help us understand why—as Dick Lipton often observes—so many instances of problems that formally are NP-complete are, in practice, in P. This would occur because real-world problem instances (both classical and quantum) typically are generated, not by random processes and not as worst-case instances, but by dynamical processes that respect Beame-type structure theorems that naturally induce Aceto-style bisimulation structures.

Comment #20 October 12th, 2010 at 12:01 pm

@Dave, I follow you up to “What I don’t understand is …”. I stopped trying to understand there, because I’m still wrestling with fundamentals.

Two things bother me about your formulation. First, you are diving into axiom-of-choice hell as soon as you define C’, and it would worry me a lot if the question being asked only makes sense with the choice axiom. But even more worrying is the tacit assertion that the answer to the question is independent of C’. Surely it’s not, so we must be asking about *most* selections of C’. And I’m not at all sure that there is a meaningful measure on the space of these graded selected subsets that would allow one to say “most” in a meaningful way.

Comment #21 October 13th, 2010 at 2:26 am

ACW:

WordPress cut off most of my previous comment. I’ll try again, but if it doesn’t work, you can give me your email and I can email the whole response.

As you could probably tell, I realized halfway through attempting to answer your question that I didn’t totally understand Scott’s explanation. So I can’t completely answer your question because I don’t totally understand Scott’s explanation, but here is my current, very likely flawed, understanding of the idea. Also, I was not thinking straight in the previous response, because the numbers I picked are totally wrong. For instance, there are about n^3 clauses of the form (x_i or x_j or x_k) with i,j,k ≤ n, not 2^{n^3} as I mistakenly said before. Because of this I can see one way that such a problem can be considered NP-hard via nonuniform reductions. I’m still not sure how it is in NP, nor how to use uniform reductions to show it is NP-hard. No guarantees that I am thinking straight this time either. 🙂

You are correct that they are talking about most selections of C’, but it is common when doing randomness in theoretical computer science to abbreviate a statement like, “choose A at random; then the probability that A has property P is at least 99%” to “choose A at random; then A has property P”.

I think that there is a way to define a meaningful measure that could help formalize the statement. For each n, there is a finite set C_n of size |C_n| = 4/3 * n^3, consisting of all clauses with 3 variables x_i, x_j, x_k, where i,j,k ≤ n (I think that’s right; n^3 choices for the variables, times 8 choices for where to place negations, divided by 3! = 6 to ignore order since OR is commutative). We want to select a random subset of C_n of size n^1.1; there is a well-defined, finite number of subsets of C_n of size n^1.1: specifically, K(n) = (|C_n| choose n^1.1) of them. So it makes sense to talk about the uniform measure on the set of subsets of C_n of size n^1.1. Given a particular element C’_n of this set (a particular set of clauses of size n^1.1), its probability of being chosen is 1/K(n).

Now, to define how to select “uniformly at random” an infinite sequence of “allowable clauses”, use the product measure: given particular sets C’_1,C’_2,…,C’_n, define the probability that a infinite sequence C’ chosen at random starts with C’_1,C’_2,…,C’_n to be μ(C’_1,C’_2,…,C’_n) = prod_{i=1}^n 1/K(i). This is similar to how you might define uniform measure on the space {0,1}^\infty of infinite binary sequences as the product measure based on the fair coin toss measure on the set {0,1} that assigns Pr[0] = Pr[1] = 1/2, by defining for each finite string x the probability that the sequence begins with x to be 2^{-|x|}. Except now, for each n, instead of choosing between one of two bits uniformly at random, you are choose among K(n) sets of clauses uniformly at random.
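Dave’s counts are easy to check directly. Here is a sketch (the function names, and the min(...) guard for tiny n where fewer than n^1.1 clauses exist, are my additions):

```python
import math
from fractions import Fraction

def num_clauses(n):
    """|C_n| = 8 * C(n,3): all 3-clauses over the variables x_1, ..., x_n."""
    return 8 * math.comb(n, 3)

def K(n):
    """Number of equally likely choices of the allowed clause set C'_n:
    K(n) = C(|C_n|, ceil(n^1.1)).  The min(...) guards tiny n, where
    fewer than n^1.1 clauses exist at all."""
    m = min(math.ceil(n ** 1.1), num_clauses(n))
    return math.comb(num_clauses(n), m)

def cylinder_probability(n):
    """mu-measure of the event that a random sequence starts with given
    sets C'_1, ..., C'_n: the product measure assigns prod_{i=1}^n 1/K(i)."""
    return math.prod(Fraction(1, K(i)) for i in range(1, n + 1))

print(num_clauses(10))   # 8 * C(10,3) = 960
print(K(3))              # C(8, 4) = 70
```

Exact rational arithmetic via `Fraction` sidesteps the astronomically small floating-point values that the product measure produces almost immediately.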

Once this measure is defined, it makes sense to ask the following question. Pick an infinite sequence C’ = C’_1, C’_2, … of sets of clauses at random according to μ. Do not take their union; I earlier suggested this but it makes the measure messy. Define 3SAT_n to be the set of satisfiable 3CNF formulas with exactly n variables. Define the language

Rand-3SAT(C’) = union_{n=1}^\infty { \phi \in 3SAT_n | \phi contains only clauses from C’_n}.

C’ is chosen at random, but once chosen, Rand-3SAT(C’) is a unique, well-defined language, which is either NP-hard or it isn’t. The question is, what is Pr_μ[Rand-3SAT(C’) is NP-hard]? I think the argument Scott puts forward proves that this probability is 1.

This is probably not precisely what Scott meant. It is easy to show, for example, that Rand-3SAT(C’) is uncomputable with probability 1. That doesn’t prevent it from being NP-hard, but Scott used the term NP-complete so it’s clear that he has some different language in mind than the one I defined. I think possibly what he meant is that, given access to the n^1.1 allowable clauses in C’_n, one can construct a reduction from 3SAT to Rand-3SAT(C’). This is called a *nonuniform reduction*, in the sense that for each n, before the algorithm that computes the reduction can work, it must be given some information specific to n, the allowable clauses in C’_n, although not specific to the particular n-variable formula φ that is the input to the reduction.

Or possibly, define the promise problem Rand-3SAT(C’) = (P_Y,P_N) by

P_Y = union_{n=1}^\infty { \phi \in 3SAT_n | \phi contains only clauses from C’_n}, and

P_N = union_{n=1}^\infty { \phi \in complement of 3SAT_n | \phi contains only clauses from C’_n}.

Similarly, both P_Y and P_N would be uncomputable with probability 1, and nonuniform reduction would apparently be required to reduce 3SAT to it. Like I said, I’m not sure how to formalize this notion properly, but these are my best guesses.


Comment #23 October 13th, 2010 at 3:02 pm

What if someone proves P=NP in a few elementary lines that no one has ever thought of? Will all the research of the past 50 years be worthless?

Comment #24 October 13th, 2010 at 3:28 pm

AJ: More or less. But if someone discovers a method for interstellar travel using yarn and peanut butter, most of the research NASA has done for the past 50 years will be worthless as well.

Comment #25 October 13th, 2010 at 3:55 pm

There are examples in history which suggest that things like that happen: Hilbert’s basis theorem, which virtually vanquished Gordan’s program, and Gödel’s incompleteness, which vanquished Hilbert’s. Both Gordan’s and Hilbert’s visions seemed insurmountable, and Hilbert’s and Gödel’s foundational work was by every means elementary.

Comment #26 October 13th, 2010 at 6:35 pm

AJ, such things have happened even in more recent decades.

A good example is Walter Kohn and Lu Jeu Sham’s celebrated 1965 article *Self-Consistent Equations Including Exchange and Correlation Effects*. This article introduced what are today known as the Kohn-Sham equations, which efficiently solve (with sufficient accuracy for many practical purposes) a class of quantum simulation problems that once were regarded as intractable.

Although Kohn and Sham’s article received few citations during its first decade, as of today Physical Review online lists it as having been cited 10,341 times.

In comparison, CiteSeer lists Richard Feynman’s 1982 article *Simulating Physics With Computers*—which broadly reaches the opposite conclusion from Kohn and Sham—as being cited 222 times … which certainly is a very respectable citation record … but nowhere near the class of Kohn and Sham.

To paraphrase Arthur Clarke: “If a distinguished theorist says that a quantum simulation problem is formally NP-hard, they are almost certainly right. If they say that it is NP-hard in practice, they are almost certainly wrong.” 🙂

Comment #27 October 13th, 2010 at 7:38 pm

AJ, to provide some quantitative foundations to the above Clarke-style aphorism, I have extracted from an article by Redner, titled *Citation Statistics From More Than a Century of Physical Review*, a figure showing the cumulative citation statistics that are associated to the Kohn-Sham article.

These citation statistics are pretty incredible … it turns out that “KS” is the most-cited Physical Review article of all time, by a huge and ever-increasing margin (note: in the figure the competing “EPR” and “BCS” Physical Review articles are precisely the ones that a quantum physicist would guess).

This citation record is especially remarkable for an article that describes quantum simulation techniques whose mathematical underpinnings remain rather poorly understood, even to the present day.

Comment #28 October 14th, 2010 at 1:05 am

Hi John:

I do not know many of the things you are communicating here. But I do take Knuth’s argument on this: there may be at most finitely many obstructions to P = NP. But I have a gut feeling that the mathematics built over 5000 years cannot have missed anything so subtle as to be inadequate to settle the conjecture.

Comment #29 October 14th, 2010 at 8:34 am

AJ, you make a good point … it scarcely ever happens in mathematics that widely-accepted theorems are discovered to be outright wrong.

Yet century-by-century in mathematics—and physics too—widely-accepted elements of naturality are set aside. For example, in the 19th century the naturality of Euclidean geometry was set aside, and in the 20th century, the naturality of Newtonian mechanics was set aside.

So we can ask, in the 21st century, what 20th century elements of physical and mathematical naturality are likely to undergo substantial evolution?

*Shtetl Optimized* grapples with two elements that are top candidates for 21st century evolution: (1) Hilbert state-spaces, and (2) the complexity classes P and NP.

The point is that there’s nothing wrong with the many theorems that have been proved about Hilbert space dynamics, and about the complexity classes P and NP … just as there is nothing wrong with theorems about Euclidean geometry and Newtonian dynamics. Yet neither is it self-evident that these are the sole, or even the most natural, elements for building a 21st century understanding of physical dynamics and complexity.

Perhaps the safest expectation is that, in coming decades, students will learn the traditional 20th century elements of Hilbert space dynamics and P-vs-NP complexity classes, in conjunction with new dynamical state-spaces and new hierarchies of complexity classes, whose nature we are at present still struggling to grasp.

In short, among the most important discoveries in each century are new, good, natural questions to ask … because very often, once we discover good questions to ask, good answers are not too hard to discover.

An outstanding weblog that regularly grapples with this class of questions is Dick Lipton’s *Gödel’s Lost Letter and P=NP*.

Comment #30 October 15th, 2010 at 11:13 am

Just to keep this thread ventilated, the topic currently running in Dick Lipton’s weblog provides a good illustration of (my own understanding of) Tim Gowers’ motivation in posing his question about the existence of random NP-complete problems … as follows.

Lipton’s weblog is discussing a recent preprint by Venkatesan Guruswami and Adam Smith, titled *Codes for Computationally Simple Channels: Explicit Constructions with Optimal Rate*. Guruswami and Smith are making progress with the (very tough) problem of error correction against, not random errors, but adversarial errors.

Focusing on the broad strategy of the Guruswami and Smith preprint, and ignoring the (very many) interesting details, their strategy is to change the starting premises of the problem, by restricting the adversarial errors to those errors that can be generated by a bounded-resource adversary.

This strategy allows Guruswami and Smith to prove what Dick Lipton calls a “pretty theorem” whose key ideas are “extremely appealing.”

Tim Gowers’ question seemed (to me) to be motivated by the idea of adopting a similar strategy for making progress on P-versus-NP, namely, changing the starting premises of the problem by restricting the classes P and NP to problem instances that can be generated by adversaries with bounded computational resources. Here the point is that a great many problems that are infeasible to solve when posed by an adversary with access to an omniscient oracle look *much* easier when posed by dumber adversaries. Guruswami and Smith’s preprint shows how this strategy can prove theorems that are concrete, powerful, and even “pretty.”

Comment #31 October 15th, 2010 at 11:26 am

Is there a polynomial-time algorithm known for NP-complete problems where the words one has to operate on are exponentially large? Say you do quadratically many multiplications to solve an NP-complete problem, but on words of size n^n bits. If such an algorithm were found, would that be P=NP? Or at least, would it be of any interest to the community?

Comment #32 October 15th, 2010 at 1:45 pm

LifeIsChill: Great question! It’s actually been known since the 1970s that, if you allow unit-cost arithmetic operations on exponentially-long numbers, and ALSO the reading out of individual bits from the binary representations of those numbers, then you can solve not only NP-complete problems, but even PSPACE-complete problems in polynomial time. (I don’t remember the reference offhand—does anyone?)

Needless to say, most people interpret this not as a practical proposal for solving PSPACE, but as an illustration of the hazards of the unit-cost model.
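To see why the unit-cost model is hazardous (this is just a toy sketch, not a reconstruction of the 1970s result Scott mentions): with unit-cost multiplication, n repeated squarings produce a number with roughly 2^n bits, so “polynomially many operations” can quietly manipulate exponentially much data.

```python
def repeated_square(n):
    """With unit-cost arithmetic, n 'operations' build a 2^n-bit number."""
    x = 2
    for _ in range(n):
        x = x * x  # one "unit-cost" multiplication per step
    return x

# After n squarings, x = 2^(2^n), so its binary representation
# has 2^n + 1 bits: exponential data from linearly many operations.
print(repeated_square(5).bit_length())  # prints 33, i.e. 2^5 + 1
```

In a realistic cost model, each of those multiplications would itself take time polynomial in the (exponentially growing) operand length, which is exactly why the unit-cost speedups don’t translate into real algorithms.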

Comment #33 October 15th, 2010 at 2:15 pm

Actually this is great news. I was confused a lot last week. I have a half-page technique for a particular hard problem. It takes quadratically many operations, but, as expected, on exponentially large words. Do you think such a technique would add any useful knowledge for the community, other than the fact that the technique is simple (and may have been known before, though I have not found a reference)?

Comment #34 October 15th, 2010 at 2:22 pm

It *might* be interesting—from your description, I can’t really say.

Comment #35 October 15th, 2010 at 11:32 pm

Thank you.

Actually it is with regard to the factorial function.

What about polynomially growing word sizes of $n^k$ bits? Say one has a solution for computing n! (factorial) in $\log^{c}(n)$ steps, but with arithmetic operations on words of size $n^k$ bits.

Would that be of any importance? Are such relations known? Are there any references?

Comment #36 October 16th, 2010 at 12:01 am

We need at least $n\log(n)$ bits to represent n!. However, are there easy solutions to computing n! given a word size of $n^k$ bits?
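The $n\log(n)$ bound on the representation size follows from Stirling’s approximation, $\log_2(n!) \approx n\log_2(n/e)$. A quick sanity check (a toy sketch using Python’s exact big integers; the function names are mine) compares the true bit length of n! against the Stirling estimate:

```python
import math

def factorial_bits(n):
    # Exact number of bits needed to represent n!
    return math.factorial(n).bit_length()

def stirling_bits(n):
    # Stirling's approximation:
    # log2(n!) ~ n*log2(n/e) + 0.5*log2(2*pi*n)
    return n * math.log2(n / math.e) + 0.5 * math.log2(2 * math.pi * n)

for n in (10, 100, 1000):
    print(n, factorial_bits(n), round(stirling_bits(n)))
```

Since the bit length grows like $n\log_2 n$, storing n! in words of only $n^k$ bits requires polynomially many words once $n\log_2 n$ exceeds $n^k$; the question above is whether the arithmetic can still be organized cheaply under that constraint.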

Comment #37 October 16th, 2010 at 11:34 am

Professor, I found the reference. Thanks for the tip. It is Shamir’s work on factoring in $\log(n)$ steps. I could not get the paper online, though. Interestingly, I am also getting similar results to his. However, I do not know whether he is saying that n! can be computed trivially with words of size $n^k$ or $k^n$, since I do not have access to the work. I seem to be getting $n^k$ space. I do not know what his technique is either. Any comments from you would help.

Comment #38 November 4th, 2010 at 5:52 am

AJ:

There is a reason we still discuss Hilbert’s program, right? Hilbert was wrong about the answer, but his question was the right question, and it led to the discovery of lots of new areas in math, including TCS (I can’t think of TCS without Turing machines, or of Turing machines without Hilbert’s program).

So the question is: is P vs NP the right question? And the answer is: absolutely! Just take a look at the discoveries it has led to. I don’t agree with Scott’s answer; the world will be very different if P=NP, but that does not mean the amazing structures and concepts we have discovered during the last 50 years will go away.

P vs NP is the Holy Grail, but it is not the only question.