## A Woitian links, links, links post (slightly stale but still edible)

Razborov and Rudich won the Gödel Prize for “Natural Proofs”, which probably did as much as any single paper to elucidate the nature of the P vs. NP problem. (More from the Bearded One and the Pontiff.) Loosely speaking, R&R showed that any circuit lower bound satisfying certain extremely broad criteria would “bite its own tail,” and lead to efficient algorithms to distinguish random from pseudorandom functions — the very sort of thing that we wanted to prove was hard. This doesn’t by any means imply that a P≠NP proof is impossible, but it does show how the problem has a strange, self-referential character that’s not quite like anything previously encountered in mathematics, including in the work of Gödel and Turing. Technically simple but conceptually profound, the paper is also a masterpiece of clear, forceful exposition. When I first came across it as an undergrad at Cornell, I knew complexity was my subject.

Following on the heels of the *New Yorker*, the *New York Times* ran its own epic on the Large Hadron Collider. So science writers *can* do a decent job when they feel like it. Why can’t they write about P vs. NP the same way? Oh, right … them big machines …

Andy Drucker poses the following problem: suppose there are n blog posts, and for each post b_{i}, you’re told only that it was posted during the time interval [t_{i},u_{i}]. Is there an efficient algorithm to count how many orderings of the blog posts are compatible with that information? Alternatively, is the problem #P-complete? Let me stress that Andy doesn’t know the answer to this question, and neither do I.
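Andy’s problem is easy to state in code even though its complexity is open. Here is a brute-force counter (my own sketch, exponential in n, and it makes the modeling assumption that ties in posting times are allowed, which Andy’s statement doesn’t specify) — useful only for sanity-checking small cases:

```python
from itertools import permutations

def count_compatible_orderings(intervals):
    """Count orderings consistent with the posting intervals, by brute force.

    intervals[i] = (t_i, u_i): post i went up at some time in [t_i, u_i].
    An ordering p is compatible if times can be chosen with
    time[p[0]] <= time[p[1]] <= ... (ties allowed -- a modeling choice).
    """
    def feasible(p):
        t = float('-inf')
        for i in p:
            t = max(t, intervals[i][0])   # greedily take the earliest legal time
            if t > intervals[i][1]:
                return False
        return True
    return sum(1 for p in permutations(range(len(intervals))) if feasible(p))
```

For example, with intervals [0,1] and [2,3] only one ordering is compatible, while three identical intervals admit all 3! = 6 orderings.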

A certain MIT undergrad of my acquaintance sent the following letter to MIT’s DMCA enforcement office.

Dear MIT DMCA Agent,

After viewing *Scoop* and receiving your notice, I was more than happy to comply with NBC’s request to destroy it. Rest assured that I will no longer be downloading or sharing any post-Manhattan Woody Allen films.

Comment #1 May 20th, 2007 at 11:47 am

> Razborov and Rudich won the Gödel Prize for “Natural Proofs”, which probably did as much as any single paper to elucidate the nature of the P vs. NP problem.

It’s a good thing that you didn’t say “more than”, because surely Baker, Gill, Solovay is equal to this one; and what really makes the story interesting is both papers together.

> the problem has a strange, self-referential character that’s not quite like anything previously encountered in mathematics,

Well, that is a sweeping and (in my view) onerous statement. Sure, it’s one of the best TCS papers ever, but to call it completely different from the rest of mathematics…? For instance, there is a theorem in logic that it will never be possible to prove that large cardinals exist, or even that their existence is independent from the axioms of set theory, or any reasonable extension of those axioms. It’s not the same as Gödel, because it’s settled that Gödelian statements are independent from the axioms. Logicians accept large cardinals as credible, but the door will always be open to proving that they don’t exist.

As I understand it, Razborov-Rudich shows that you can’t prove that P != NP using constructive combinatorics. It’s great to actually know that rigorously. (More or less; they actually appeal to moderately stronger cryptographic assumptions.) But people have offered this advice informally for some other problems. For instance, some topologists have been pessimistic about combinatorial proofs of the Poincaré conjecture for a long time. (Whereas other topologists have attempted such proofs anyway.) Sure enough, the actual proof uses a lot of PDE analysis.

Concerning Woody Allen: Am I gauche in thinking that the Purple Rose of Cairo is a great movie? Also, we recently saw “Everything You Wanted to Know…” and I thought that it was uneven.

Comment #2 May 20th, 2007 at 6:01 pm

I started reading their paper after I saw the announcement, and it’s incredibly exciting stuff, but I didn’t understand why the largeness assumption is part of the idea of “natural proof”.

Comment #3 May 20th, 2007 at 6:15 pm

Walt: The RR argument says that, if your lower bound proof satisfies the constructivity and largeness conditions, then you can use it to distinguish random from pseudorandom functions. Basically, this is because your proof technique would show a random function f to be hard with high probability over f, whereas a pseudorandom function would *never* be shown to be hard (assuming your proof is actually correct!). You need the constructivity condition since that’s what gives you an efficient algorithm to test whether or not your lower bound proof applies to a given f. And you need the largeness condition since that’s what ensures that your proof *does* apply to a non-negligible fraction of f’s. Without largeness, you might just get an algorithm that accepts a pseudorandom function with probability 0, or a truly random function with probability 2^{-cn} (the probability being over the choice of function). And that’s not good enough to let you distinguish randomness from pseudorandomness and thereby invert one-way functions.

Comment #4 May 20th, 2007 at 6:35 pm

I agree with what Scott says, but I think Walt is just saying that constructivity by itself is constitutive of what most people would think of as a ‘natural’ approach to proving circuit lower bounds. Largeness is more of a correlate of the constructive approaches most people try.
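A toy illustration of the distinguisher described in Comments #3 and #4. This is *not* the real cryptographic setting: the “pseudorandom” family below is just the GF(2)-linear functions, a stand-in chosen because the natural property “f is nonlinear” is constructive (testable in time polynomial in 2^n) and large (a random f has it with overwhelming probability), yet no member of the toy family has it:

```python
import random

def random_function(n, rng):
    # Truth table of a uniformly random f: {0,1}^n -> {0,1}.
    return [rng.randrange(2) for _ in range(1 << n)]

def linear_function(n, rng):
    # Toy stand-in for a "pseudorandom" family: f(x) = <a, x> over GF(2).
    a = rng.randrange(1 << n)
    return [bin(a & x).count('1') & 1 for x in range(1 << n)]

def looks_nonlinear(table, trials=200, rng=None):
    # The "natural property" as a distinguisher: sample pairs and test
    # f(x) XOR f(y) == f(x XOR y).  A linear f passes every test; a random
    # f fails a sampled test with probability about 1/2.
    rng = rng or random
    m = len(table)                       # m = 2**n truth-table entries
    for _ in range(trials):
        x, y = rng.randrange(m), rng.randrange(m)
        if table[x] ^ table[y] != table[x ^ y]:
            return True
    return False
```

Running `looks_nonlinear` on a random truth table returns True essentially always, and on a member of the linear family returns False always, which is exactly the random-vs-pseudorandom gap the RR argument exploits.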

Comment #5 May 20th, 2007 at 7:52 pm

> So science writers *can* do a decent job when they feel like it. Why can’t they write about P vs. NP the same way?

Sometime between 1985 and 1987, *The Economist* carried what I thought at the time was an excellent article (four pages long!) about the P vs. NP problem. I remember at the time being amazed by how the concepts were explained so clearly using accessible everyday language, yet everything was technically accurate. I suspect that the anonymous author is no longer with that magazine …

Comment #6 May 20th, 2007 at 8:29 pm

> So science writers can do a decent job when they feel like it.

In all fairness, it’s not so much a matter of whether they “feel like it” as of their and their editors’ expertise. Dennis Overbye has a bachelor’s degree in physics, and he even started graduate school in astronomy. So sometimes he will simply know how to do a good job. Since he also has a lot of seniority at the Times, the editors let him pick his stories and keep more of his own wording, good or bad.

The real problem is that right now, no one in the Times science section has any graduate exposure to rigorous mathematics. Gina Kolata did, but she moved to the health section. They sincerely want to cover mathematics well, but there is a lack of intuition for what’s a good story, what quotes to keep, etc. Positive efforts tend to be dented by competing priorities. They had very reasonable articles about Perelman, Yau, and Tao, but even in these you could notice a low glass ceiling.

It should also be said that the science section is different from, and much better than, other sections of the New York Times, and just about any section of any other US newspaper. (A few years ago, the NYT magazine carried a credulous story about one Woody Norris, an inventor who claimed to own “a one-million-Gauss magnet strong enough to tear fabric and human flesh alike.”) It’s possible that popular presentations of mathematics just don’t combine well with traditional journalism, and that the future lies with forums like blogs and Wikipedia.

Comment #7 May 20th, 2007 at 8:41 pm

Andy D’s interpretation of me is correct. Is there some sense in which largeness is “natural”, or is that just an assumption needed to make the argument work?

Comment #8 May 20th, 2007 at 11:58 pm

Scott, I think you wrote somewhere that you have a lower bound technique that doesn’t naturalize (perhaps not a circuit lower bound, though).

Could you elaborate on that?

Comment #9 May 21st, 2007 at 7:09 am

I think “Purple Rose of Cairo” is nearly a *perfect* movie, which means that it is almost impossible to improve by tweaking in any way. It may not be a *great* movie, because it doesn’t address any difficult moral, social, or philosophical questions.

Comment #10 May 21st, 2007 at 9:03 am

Peter: That is an interesting criterion. It suggests that a movie can be great without even being good. That seems valid to me.

Even so, I don’t quite agree about the “Purple Rose of Cairo”. I think that it is a great movie.

— SPOILER warning —

To me, it argued powerfully that there are fundamental differences between movies and real life. You could say, duh, but in particular, the movie argued that real life is not about happy endings. People may have reasons to be happy or unhappy, but they generally aren’t rescued from unhappiness. Allen addresses how modernity (through movies, that is) creates the hope that life is fair, and how that hope is false.

— end SPOILERS —

Anyway, how many of Woody Allen’s movies are great in this sense? Is it possible for an opus of movies to be great even if few or none of them are great individually?

Comment #11 May 21st, 2007 at 3:19 pm

> Could you elaborate on that?

Sure, Pascal! Look at Section 10 of this paper (pages 33-37). What I prove there is an exponential lower bound on “manifestly orthogonal tree size.” Basically, this is the formula size of a Boolean function f:{0,1}^{n}→{0,1}, where you can only take the AND of two subfunctions if they’re over disjoint sets of variables, or the OR of two subfunctions if they’re over the same set of variables but have disjoint sets of 1-inputs. (It’s more natural than it sounds!) I prove this lower bound for a random “coset state” (basically, the AND of randomly-chosen affine functions over GF(2)). One of these days I ought to write a paper specifically devoted to this lower bound and why it doesn’t seem to naturalize.
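To make the definition of manifest orthogonality concrete, here is a small validator (my own tuple encoding, not from the paper) that checks the AND/OR side conditions and measures tree size; it brute-forces the 1-input sets, so it is only for tiny examples:

```python
from itertools import product

# Nested-tuple encoding (hypothetical, for illustration):
#   ('lit', i, True)  = literal x_i      ('lit', i, False) = NOT x_i
#   ('and', l, r), ('or', l, r)

def variables(node):
    if node[0] == 'lit':
        return frozenset({node[1]})
    return variables(node[1]) | variables(node[2])

def evaluate(node, asg):
    if node[0] == 'lit':
        return asg[node[1]] == (1 if node[2] else 0)
    a, b = evaluate(node[1], asg), evaluate(node[2], asg)
    return (a and b) if node[0] == 'and' else (a or b)

def ones(node):
    # Set of 1-inputs, as tuples over the sorted variable list.
    vs = sorted(variables(node))
    return {a for a in product((0, 1), repeat=len(vs))
            if evaluate(node, dict(zip(vs, a)))}

def is_manifestly_orthogonal(node):
    if node[0] == 'lit':
        return True
    l, r = node[1], node[2]
    if not (is_manifestly_orthogonal(l) and is_manifestly_orthogonal(r)):
        return False
    if node[0] == 'and':                      # AND: disjoint variable sets
        return variables(l).isdisjoint(variables(r))
    return (variables(l) == variables(r)      # OR: same variable set,
            and ones(l).isdisjoint(ones(r)))  # disjoint sets of 1-inputs

def size(node):                               # tree size = number of leaves
    return 1 if node[0] == 'lit' else size(node[1]) + size(node[2])
```

For instance, the usual 4-leaf formula for x0 XOR x1 is manifestly orthogonal: its two AND branches use disjoint variables after negation bookkeeping is pushed to the leaves, and the top OR joins subfunctions over {x0, x1} with disjoint 1-input sets.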

On the other hand, let me stress that this is not the only example we have of a non-naturalizing lower bound — other examples include the results MA_{EXP}⊄P/poly and PP⊄SIZE(n^{k}), as well as (in some sense) every diagonalization result. The problem, of course, is just that the non-naturalizing lower bounds we have are much weaker than what we think is true.

Comment #12 May 21st, 2007 at 8:12 pm

> Is there some sense in which largeness is “natural”, or is that just an assumption needed to make the argument work?

Walt: No, I don’t see largeness as just some sort of technical assumption, but rather as the crux of the issue. The “moral” I take from RR is that, if you’re going to prove NP⊄P/poly, then you’ll have to “zero in” on some special property of the NP-complete problems that makes them hard — *not* a property that NP-complete problems share with random problems. And finding such a property is much, much harder than it sounds! Since the dawn of theoretical computer science, we’ve basically had just *one* lower bound technique that violates the largeness condition — namely diagonalization — and that technique suffers from well-known limitations of its own.

A related point is that, without the largeness condition, the RR result would probably be too strong to be true. For given a function f:{0,1}^{n}→{0,1}, we could define “the property of being f.” Then provided f’s truth table was *recognizable* in time 2^{O(n)} (so that constructivity was satisfied), one could never hope to prove any lower bound for f, even by zeroing in on some specific property satisfied by f alone.

Comment #13 May 21st, 2007 at 11:46 pm

Thanks Scott for the elaboration! Short of a full paper, is it possible to explain in a few lines whether the non-naturalizing proofs that we currently know violate the largeness or the constructivity condition?

Comment #14 May 22nd, 2007 at 12:26 am

Pascal: As far as I know, all of the known non-naturalizing proofs violate largeness; not one of them violates constructivity.

To me this is not so surprising. After all, one *should* be able to peer into any lower bound proof and figure out what makes it tick; that is, what property P of a function f does it identify, by reason of which f is hard? And if P is not decidable in time 2^{O(n)}, then it’s hard to imagine how any proof could have shown that f satisfies it, without first finding some efficiently-decidable property P’ such that P’⇒P and f satisfies P’. So then the only remaining question is what fraction of functions satisfy P’.

I admit that this isn’t an airtight argument: it’s *conceivable* that there could be a proof for which the property P’ didn’t exist. In that case — if there were no change to P that would give us constructivity at the expense of largeness — I suppose we’d say that constructivity was violated.

Comment #15 May 22nd, 2007 at 5:52 am

Hi, Pascal—The algebraic idea I sketched at the bottom of Bill Gasarch’s item intuitively violates largeness as well as complexity, because for “most” p the Jacobian ideal Jac(p) (which is generated by the first partial derivatives of p) has no monomials. The permanent polynomials perm_n, however, have gazillions of distinct minimal monomials in Jac(perm_n)—whereas the determinant polynomials yield none.

Well, “gazillion” == “at least 1,873 for n = 4, while I estimated my Groebner run for n = 5 would take over 100 years”. Nor have I been able to port the “Natural Proofs” framework to the algebraic case so that e.g. notions of “genericity” would correspond to largeness in the Boolean setting. (Maurice Jansen and I did start on that, including trying to apply some of the Riemann-dependent work you originated. Do you know of more-recent efforts?) But anyway the intuition is there: having lots of monomials in associated ideals is a special property of polynomials like perm_n which are believed to be hard, not shared by “random” ones.

In the last 2 pages of my Mulmuley-Sohoni survey, I interpret the “largeness” and “complexity” criteria as being linked, at least in their case. I also now prefer not saying “constructivity” for the latter, as algebraic geometry has lots of techniques involving EXPSPACE-hard predicates and functions that are certainly constructive, and even fairly intuitive.

Comment #16 May 22nd, 2007 at 10:58 am

Although I am very far from being an expert on computational complexity … or anything else I am about to talk about … hey, it’s just a blog!

To me, P ≠ NP is a theorem about geometry, specifically, the geometry of the state space of quantum computers (because, is there any other kind of computer?).

If you can simulate a quantum computation efficiently, then obviously it is in P. To calculate that simulation efficiently in space and time, it must be projected onto a Kahler manifold.

As your simulations become larger, the Kahler manifold volume scales polynomially (as does its Ricci curvature, etc.). But the space of functions you want to compute scales exponentially.

Obviously, a polynomially large Kahler volume can never compute all the functions that are in NP, no matter how clever the algorithm that guides the trajectory. Duh! But it definitely *can* hold all the algorithms in P (because the algorithm itself tells you how to adapt the state space to fit the computation, just as in computational fluid dynamics).

From this quantum system engineering point of view, P vs NP is not fundamentally a conjecture about algorithms — which is why relativising proofs can never succeed. Instead, it is a conjecture about quantum simulation theory, Kahler geometry and model order reduction.

The above is (of course) no kind of rigorous proof! And oh yeah, this whole post is definitely *not* for the humor-impaired! The intended point is rather that, from some perspectives — geometric being one example — non-relativising proofs are the most natural kind.

Comment #17 May 22nd, 2007 at 12:11 pm

Hi Kenneth, no, I don’t know of any recent results along those lines.

Comment #18 May 22nd, 2007 at 1:32 pm

Scott, I think what you say about non-naturalizing proofs is inaccurate. The condition that diagonalization arguments violate is constructivity, not largeness. Take, for instance, the proof that there are functions in the Exponential Hierarchy that don’t have sub-exponential size circuits. The underlying property in this proof is simply that “f does not have sub-exponential size circuits”, which is true of a random function. In this case, the property is in coNP, and we know that cryptographic pseudo-random generators can be broken by non-deterministic circuits, hence there is no contradiction.

One might argue that the underlying property in this case is instead something like “f is the *lexicographically first* function that doesn’t have sub-exponential size circuits”, but this argument confuses construction of a property with explicitly finding a function that satisfies the property. Clearly, it is reasonable for the purpose of testing “naturalization” to consider the *easiest* lower bound property that can be associated with a purported proof, rather than a lower bound from which it is also clear how to derive an explicit lower bound.

Diagonalization arguments are just counting arguments in disguise. If there is an ingredient in a proof which violates the largeness condition, it is probably something other than the diagonalization ingredient.

About the significance of Razborov-Rudich, I share some of the skepticism expressed by Greg and Walt. The coinage of the term “natural proofs” was a masterstroke, precisely because the conditions required are *not* altogether natural. The natural proofs machinery does a very good job of explaining why the techniques in vogue in the 80s and early 90s would not by themselves suffice to show the lower bounds we want. Thus, they’re good at weeding out approaches, but it’s not clear how much of an *obstacle* they represent. If people can’t even agree whether “largeness” or “constructivity” is the more natural condition, then how natural could either of these be? Also, if we’re interested in arguing that proving lower bounds for NP is hard, it is reasonable to try to show implausibility of NP-natural proofs, not of P-natural proofs. One has to be careful here, because non-deterministic circuits *can* break cryptographic PRGs. Steve Rudich has a paper about this, but there doesn’t seem to be much evidence for the hardness assumption in his paper. Finally, the natural proofs machinery says nothing about proofs that make use of uniformity.

I agree Razborov-Rudich is a great paper, but for a reason that no one seems to talk much about: it gives a new paradigm for proving hardness results. It’s totally unclear how to prove a result like “Factoring reduces to deciding if a function has high circuit complexity”, or even whether such a thing is true. Razborov-Rudich, building on Goldreich-Goldwasser-Micali, gives a delightfully indirect way of doing this.

Comment #19 May 22nd, 2007 at 4:56 pm

> People can’t even agree whether “largeness” or “constructivity” is the more natural condition, then how natural could either of these be?

I haven’t seen it discussed in recent comments, but RR also show that “largeness” is inherent in the most natural of approaches to lower bounds (my use is the colloquial form of “natural”). In particular, if one has a “formal complexity measure” mu for which

mu(literals) <= 1,

mu(f OR g) <= mu(f) + mu(g) and

mu(f AND g) <= mu(f) + mu(g)

(essentially anything that will lower bound formula size in the obvious way), then use of mu to prove a lower bound automatically implies largeness. Thus, the only issue for this approach is constructivity.

Comment #20 May 22nd, 2007 at 7:04 pm

John, you’re on the wavelength of the title of the Mulmuley[-Sohoni] series of eleven(!)-and-counting papers, “Geometric Complexity Theory”. I just learned about papers 4–11 (all since March) from Aaron Sterling’s comment in Bill Gasarch’s current item.

Moreover, you prompt a witticism I cut from my previous post. My flip remark “higher cohomology is inevitable” in Bill Gasarch’s P-NP poll meant to express my belief that combinatorial properties associated with (only) the first few Betti numbers, such as those used for order n log n depth lower bounds on algebraic decision trees, won’t work on PERM vs. DET or NP-vs.-P/poly things. On reading about string theory and examples like pp. 991–992 of Penrose’s *The Road to Reality* on the non-local nature of cohomology, I wonder if the physics route might be easier and more intuitive than tackling heavy texts on (abstract) scheme theory. Now a Calabi-Yau manifold is a special case of Kähler. So you may already agree that while Scott has been teaching complexity to stringers, we P != NP hunters may need to learn some string theory :-).

Quick notes: The relevant time bound is really quasipoly-in-2^n, not poly(2^n), i.e. EXP not E. As in my comment above, I think the largeness and complexity obstacles are linked, though I haven’t formalized this. IMHO the term “natural” is apt and supported by the limitation on “formal complexity measures” shown in section 5 (see this up-to-date draft of the journal version), which is brought out in Arora-Barak’s excellent draft chapter on the paper. Rahul, do you mean “factoring in subexp time reduces to …”, where subexp = 2^{n^{o(1)}}?

Comment #21 May 22nd, 2007 at 7:52 pm

Comment #21 May 22nd, 2007 at 7:52 pm

Saw Paul’s comment after posting, and I can add that this limitation also bedevils attempts to work by induction directly on formulas or circuits/straight-line-programs for the function f being lower-bounded. In either the Boolean or algebraic case, I mean formulas (or circuits/SLPs) which are assumed for sake of contradiction to be small.

BTW, I responded to Wikipedia’s “needs attention by an expert” request for their “Natural proof” entry last January, putting my comments on the discussion page. Whoever adapted them even granted me a self-cite 8). I took a minimal approach and didn’t think Wikipedia itself needed to get technical with the main theorem statement—so long as it didn’t say a natural proof would “break RSA” as I sometimes see glibly stated, and had a pointer to the paper’s journal version itself. But non-negligible (i.e. “large” :-)) suggestions for improvement are welcome there or here or by private e-mail.

Comment #22 May 22nd, 2007 at 8:51 pm

The complexity measure approach is even more restrictive, but in general I too think constructivity is more of an issue than largeness…

Ken, thanks for correcting me. It is also true that Factoring reduces to deciding whether the circuit complexity of a function (given as its truth table) is high, but that result appears in the beautiful work of Allender et al. on “Power from Random Strings”.

Comment #23 May 23rd, 2007 at 3:18 am

Kenneth Regan says

> John, you’re on the wavelength of the title of the Mulmuley[-Sohoni] series of eleven(!)-and-counting papers, “Geometric Complexity Theory”.

Thank you very much for that pointer, Kenneth; I was not aware of Mulmuley’s and Sohoni’s work and will certainly look into it — a first glance shows that we are using many of the same mathematical tools.

On the other hand, I dunno if quantum system engineers will ever operate “on the same wavelength” as the algebraic geometers; the obstruction is that quantum model order reduction (QMOR) makes extensive use of a mathematical tool that only a few algebraic geometers have mastered: the decimal point!

Speaking only a little bit more seriously, our QSE Group is definitely studying similar mathematical objects as Mulmuley and Sohoni, but by the archetypal engineering approach of constructing numerical representations, gazing at the results, noting patterns, and explaining these patterns *ex post facto*.

A main difference is that the product-sum manifolds that are ubiquitous in model order reduction (both classical and quantum) generically have negative scalar Ricci curvature.

For example, this very evening I have been numerically computing the Ricci curvature tensor of randomly chosen points on a 10-spin, rank-5, product-sum Kahler manifold. The fifty nontrivial eigenvalues of this curvature tensor (all negative) show startling (to me) regularities that definitely would be nice to understand analytically!

From the QMOR point of view, this negative curvature is good: it ensures that the QMOR state-space is geometrically “foamy”, so that a low-dimensionality state-space manifold can fill the higher-dimension embedding Hilbert space.

From the viewpoint of algebraic geometry, however, negative curvature is “bad” for at least two (related) reasons, first because negative curvature is generically associated with singularities, and second because most of the theorems linking topology to curvature apply only to positive-curvature manifolds.

All of this is history repeating itself — mathematicians love entire functions, while engineers love special functions … the latter being loaded with poles and cuts, whose existence is necessary to their practical function.

That is not to say that QMOR’s decimal-driven explorations cannot upon occasion lead to elegant and useful closed-form results. For example, the above line of reasoning very naturally leads to the theorem that the spin-$j$ thermal density matrix has a positive $P$-representation in terms of coherent states, which is given explicitly by $P_j(\hat n) = 1/Q_{j+1}(-\hat n)$.

This relation is straightforward to check using algebraic tools similar to those of Mulmuley and Sohoni, but no method of constructing it is known to us, other than the path of QMOR by projection onto Kahler manifolds, and solving a Fokker-Planck equation.

From this emerging perspective, geometry is the “earth” upon which both quantum simulation theory and quantum complexity theory grow, and for this reason I believe that in coming years quantum system engineers and quantum complexity theorists both will be studying, and both will be helping to develop, advanced geometric methods whose nature we can at present only dimly foresee.

Comment #24 May 23rd, 2007 at 5:35 am

John, I’m glad to help make a connection! Going the other way, would the “Symbolic Model Order Reduction” paper be a good place for us P-NP’ers to start? Is there a printed version of the “QMOR in a Nutshell” talk? How about MIT’s MOR pages?

To specify my question to Rahul better: (1) Has the tradeoff between the circuit lower bound and the randomized-algorithm upper bound (also bringing complexity of the proof and maybe “largeness” into the picture) been spelled out somewhere in full detail?, and (2) (When) can de-randomization be applied to yield factoring (or etc.) in some deterministic running time? I’m also interested in whether really concrete circuit lower bounds for SAT-or-etc. along lines of Stockmeyer’s cosmological lower bounds for ESPACE might impact the “Minicrypt” vs. “Cryptomania” question, e.g. trading off into constructibility issues for collision-resistant hash functions.

Comment #25 May 23rd, 2007 at 6:55 am

Kenneth Regan says:

> John, I’m glad to help make a connection! Going the other way, would the “Symbolic Model Order Reduction” paper be a good place for us P-NP’ers to start? Is there a printed version of the “QMOR in a Nutshell” talk? How about MIT’s MOR pages?

I’m not sure of a *good* place to start, but here’s the working QMOR reading list that our QSE Group uses. There are three areas that need to be united: (1) The modern theory of quantum dynamics, noise, and measurement. For this there is no better summary IMHO than chs. 2 and 8 of “Mike and Ike” (and it also helps to know some of the basic Ito-type mathematical techniques that are discussed in, e.g., Gardiner’s *Handbook of Stochastic Processes*). (2) The modern theory of nonlinear model order reduction. Here too the literature is enormous: a very good recent summary is Rewienski’s thesis *A Trajectory Piecewise-Linear Approach to Model Order Reduction of Nonlinear Dynamical Systems*, which came out of the MIT Model Order Reduction Group. (3) The modern theory of Kahler manifold geometry. Here again there is a vast literature, and the best text depends mainly on one’s preference for concrete vs. abstract formalisms. A concrete reading list would be, e.g., Weinberg’s *Gravitation and Cosmology*, with the analytic extension to Kahler manifolds as summarized in Flaherty’s *Hermitian and Kahlerian Geometry in Relativity*. A good abstract summary is Andrei Moroianu’s *Lectures on Kahler Geometry*, for which preprint versions can be found on the web.

Although none of these authors reference each other, it turns out that their respective disciplines do mesh together quite naturally. What Rewienski’s thesis describes as model order reduction by projection onto a nonlinear state-space is recognizably, in the quantum domain, simply the Dirac-Frenkel variational equation.

The resulting formalism is summarized on what we distribute to our QSE students as “QSE Page A”, much of which will be familiar to readers of this blog.

When we numerically implement the Page A algorithms in their most efficient form, we find that everything is determined by two scalar functions, one of which is recognizably the Kahler potential (which specifies the state space geometry), and the other is a stochastic potential whose biholonomic gradient specifies the quantum dynamics. This linkage is summarized on “QSE Page B”. Again, much of Page B will be familiar to readers of this blog.

These one-page summaries were created for an in-progress review article that surveys these QIT-MOR-geometry linkages … hopefully this review will get finished in the next couple of weeks. The most enjoyable part of writing this review has been a growing appreciation of the rich and strong foundations that geometry provides for QIT and QMOR.

Comment #26 May 23rd, 2007 at 8:07 am

I think “Model Order Reduction” (minus all the fancy Differential Geometry which clouds the issues involved) is just a very simple idea–what is called the “Noisy OR” approximation in the field of Bayesian Networks. This approximation lowers the complexity of a probabilistic circuit. An added bonus of looking at “Model Order Reduction” in this way is that complexity theorists are fond of talking about circuits, and the “Noisy OR” concept is already phrased in terms of circuits (Bayesian networks). I’m a physicist, so my inclination would be to study P to NP as a phase transition. Applying renormalization group ideas to this phase transition would entail finding decimations that will vary the complexity of a circuit. “Noisy OR” or something similar could be used to design such a decimation rule.
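For complexity-theory readers who haven’t met it, the noisy-OR model mentioned above fits in a few lines; this is a generic sketch of the standard conditional probability table, with parameter names of my own choosing rather than from any particular Bayesian-network package:

```python
def noisy_or(parent_probs, parents_on, leak=0.0):
    """P(child = 1 | parents) under the standard noisy-OR model.

    parent_probs[i]: probability that parent i, when on, activates the child.
    parents_on[i]:   1 if parent i is on, else 0.
    leak:            probability of activation with all parents off.

    The child stays off only if the leak and every active parent
    independently fail to activate it.
    """
    p_off = 1.0 - leak
    for p, x in zip(parent_probs, parents_on):
        if x:
            p_off *= 1.0 - p
    return 1.0 - p_off
```

The point of the approximation is that the full CPT over k parents needs 2^k entries, while noisy-OR needs only k+1 parameters, which is one concrete sense in which it “lowers the complexity of a probabilistic circuit.”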

Comment #27 May 23rd, 2007 at 8:28 am

Ken, I’m not sure what you mean in question (2). Derandomization requires finding an explicit function of high circuit complexity; this might be easier than deciding if a function has high circuit complexity. Indeed, since any one-way function reduces to the latter problem, there is significant evidence that it is hard.

There is an interesting connection going the other way, though – the reduction from Factoring to the circuit complexity problem could conceivably help in derandomization. The known reduction is probabilistic; if we had a deterministic truth-table reduction instead, this would imply that E does not have sub-exponential size circuits, and hence that BPP=P. But it seems we would need a different technique than working via pseudo-random generators, since reductions of this kind are inherently non-uniform/probabilistic.

Comment #28 May 23rd, 2007 at 10:38 am

rtucci says:

> I’m a physicist, so my inclination would be to study P to NP as a phase transition. Applying renormalization group ideas to this phase transition would entail finding decimations that will vary the complexity of a circuit.

Hmmmm … the following Friedan article includes a clear discussion of Riemann normal coordinates (which is why it is in our QSE BibTeX database).

But since these Riemann normal coordinates arose precisely in the context of Friedan’s renormalization group analysis of phase transitions, the article may be of even greater interest to you than to us!

@article{Friedan:85, author = {D. H. Friedan}, title = {Nonlinear Models in $2+\epsilon$ Dimensions}, journal = {Annals of Physics}, year = 1985, volume = 163, pages = {318--419}, jasnote = {succinct reference for covariant power series expansion of metric tensor in Riemann normal coordinates; see eq. 4.7.7}}

Comment #29 May 23rd, 2007 at 12:23 pm

Rahul, you caught my meaning the right way around: to get factoring in some deterministic time, I meant by de-randomizing the reduction. But when you write:

:: the reduction from Factoring to the circuit complexity problem could conceivably help in derandomization. The known reduction is probabilistic; if we had a deterministic truth-table reduction instead, this would imply that E does not have sub-exponential size circuits, and hence that BPP=P.

—I don’t follow: If Factoring is in P then we get a deterministic truth-table reduction (from Factoring to anything), but I don’t see BPP = P or lower bounds on E following from that. I guess you mean a tt-reduction in general cases would have those conclusions (helped by the relation to polynomial identity testing?), but I’d be curious to know exactly which cases.

Comment #30 May 23rd, 2007 at 12:38 pm

Ken, I meant a “useful” reduction: a reduction that is both honest and actually depends on the answers made to queries. Namely, there is some sequence of answers to queries that causes the reduction to return “yes”, and some other sequence that causes it to return “no”. The proof is simple, and works as long as the reduction is from any language L that has “yes” and “no” instances of each length which are easy to generate. Simply generate the queries corresponding to such a yes-instance x_n and a no-instance y_n, and concatenate the queries together – since the reduction is useful, the resulting string corresponds to the truth table of a hard function.

My earlier response to your question (2) seems stuck in moderation limbo, so let me repeat it. You’re right that Razborov-Rudich only reduces factoring in sub-exponential time to deciding if circuit complexity is high. But it is also true that Factoring reduces to Circuit Complexity in the poly-time range – this was shown in the beautiful work of Allender et al. on “Power from Random Strings”.

Comment #31 May 23rd, 2007 at 10:28 pm

rtucci’s comment reminded me of the work of Susan Coppersmith, most notably her recent eprint “Renormalization group approach to the P versus NP problem”

As best I understand from skimming the paper, she presents a decimation scheme for Boolean functions (i.e, an iterative scheme in which at each step one associates a Boolean function of n variables with one of n-1 variables) that has markedly different characteristics (“flows”) depending on whether or not the original Boolean function on N variables is expressible as some polynomial of degree D

Comment #32 May 23rd, 2007 at 10:31 pm

[Silly me, I used a less-than sign in HTML which caused some of my comment to disappear.]

rtucci’s comment reminded me of the work of Susan Coppersmith, most notably her recent eprint “Renormalization group approach to the P versus NP problem”

As best I understand from skimming the paper, she presents a decimation scheme for Boolean functions (i.e, an iterative scheme in which at each step one associates a Boolean function of n variables with one of n-1 variables) that has markedly different characteristics (“flows”) depending on whether or not the original Boolean function on N variables is expressible as some polynomial of degree D < N except for a fraction of inputs that shrinks exponentially in D/log(N).
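To make the notion of a decimation step concrete, here is a toy sketch in Python. This is emphatically not Coppersmith's actual rule — just the general shape of such a scheme: each step merges the two cofactors f(·, 0) and f(·, 1) into a function of one fewer variable, and one watches how some statistic of the truth table "flows" under iteration:

```python
from itertools import product

def decimate(tt, combine=lambda a, b: a ^ b):
    """One decimation step: map a truth table on n variables to one on
    n-1 variables by combining the two cofactors f(x, 0) and f(x, 1).
    (The XOR combining rule is a toy choice, NOT Coppersmith's scheme.)"""
    half = len(tt) // 2
    return tuple(combine(a, b) for a, b in zip(tt[:half], tt[half:]))

def flow(tt):
    """Track a crude statistic (fraction of ones) along the decimation flow."""
    stats = []
    while len(tt) > 1:
        stats.append(sum(tt) / len(tt))
        tt = decimate(tt)
    return stats

# 3-variable parity, truth table indexed by (x1, x2, x3):
parity = tuple(x1 ^ x2 ^ x3 for x1, x2, x3 in product((0, 1), repeat=3))
print(flow(parity))   # → [0.5, 1.0, 0.0]
```

The interesting question, in this toy picture, is whether the flow of such statistics looks qualitatively different for easy versus hard function families — which is roughly the kind of distinction the paper's flows are meant to detect.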

And as best I understand, she thinks this is interesting because she conjectures that all Boolean functions in P are exponentially well approximated by low-order polynomials in the above manner.

She also presents an argument in the paper of why it doesn’t run afoul of the Razborov-Rudich Natural Proofs result.

Anybody here have thoughts on this paper?

Comment #33 May 23rd, 2007 at 11:25 pm

Is Susan Coppersmith related (daughter?) to Don Coppersmith, the legendary MIT genius who won the Putnam all four years he was in college at MIT? (Known in the quantum computing field for his discrete Fourier transform decomposition … just a small footnote in his résumé.)

Comment #34 May 24th, 2007 at 3:22 am

Bill Kaminsky Says: “Anybody here have thoughts on this paper?”

That was a very nice link to a beautiful and thought-provoking paper. Thank you very much, Bill!

But as for “thoughts”? Sorry. This is a blog, after all. There’s room here only for “thots”.

So here are some “thots” from a geometric point of view. Namely, Coppersmith’s approach would be even more beautiful if the state space were a manifold (instead of a binary-valued bit-list), endowed with a complex metric (Kähler of course … so as to encompass quantum computation processes). :)

Hmmm … what might be a geometric version of Coppersmith’s decimation? Well, one candidate might be the recursive embedding of smaller-dimension submanifolds within larger-dimension manifolds—model order reduction, in other words. And what might be the geometric obstruction to such embedding? Singularities, of course. And what is the signature of such singularities? Perhaps negative eigenvalues of the Ricci tensor (?) … which on a Kähler manifold are trivial to compute from the (scalar) Kähler potential.

This would point to something like the following geometric version of Coppersmith’s approach. (1) All processes in P can be simulated on a noisy quantum computer (duh). (2) Any noisy quantum computer evolves on a Kähler state-space of positive curvature that is recursively stable under decimation (but is this really true??). (3) Generic NP problems, at a fixed dimensionality, occupy so much state space that their manifolds necessarily have negative curvature, hence are singular, hence fail the decimation process (but is this really true??? and if it were, how would you prove it????).

To say it another way, a geometric version of Coppersmith’s argument might be this: if we attempt to make a Kähler state-space quantum simulation do more and more computational work, by adjusting its Kähler potential to have greater and greater geodesic divergence, while holding the dimensionality of the state-space fixed, then we necessarily create singularities in the state-space, which ruin the renormalization flow.

It would be typical of a really good proof of P≠NP that it could be expressed in many ways, including geometric ways; for this reason I like Susan Coppersmith’s approach very much.

Comment #35 May 24th, 2007 at 10:53 am

“Is Susan Coppersmith related (daughter?) to Don Coppersmith, the legendary MIT genius that won the Putnam all four years he was in college at MIT?”

They both graduated from MIT: Don Coppersmith in 1972 and Sue Coppersmith in 1978. They are also the only two Coppersmiths in Math Reviews. So they are possibly siblings.

Comment #36 May 24th, 2007 at 2:46 pm

John Sidles,

Here is a thought. To make the jump to geometry, decompose the Boolean function of n variables into a collection of Boolean functions of 2 variables taken from the set of n variables, and applied iteratively. (Think of a Boolean network, and how the state of a given cell or node in the network depends on the states of successively larger sets of cells at multiple time steps in the “past”.) The interlocking set of Boolean functions of 2 variables can be thought of as a simplicial complex—an approximation of a manifold—that links the “vertices”, i.e., the bits in the bit-list.

(Also compare with the discussion of the connection between renormalization and triangulations of a manifold in gr-qc/0404088.)
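As a concrete toy (illustrative only — the gate choices and representation below are made up for this sketch), such an interlocking set of 2-variable Boolean functions can be represented as a small gate network and evaluated iteratively:

```python
from itertools import product

# Each gate (op, i, j) reads two earlier wires and appends a new wire;
# the first n wires are the inputs, and the last wire is the output.
GATES = {'and': lambda a, b: a & b,
         'or':  lambda a, b: a | b,
         'xor': lambda a, b: a ^ b}

def evaluate(network, inputs):
    """Evaluate an iterated network of 2-variable Boolean functions."""
    wires = list(inputs)
    for op, i, j in network:
        wires.append(GATES[op](wires[i], wires[j]))
    return wires[-1]

# Majority of 3 bits: maj(x0,x1,x2) = (x0&x1) | (x1&x2) | (x0&x2)
majority3 = [('and', 0, 1),   # wire 3
             ('and', 1, 2),   # wire 4
             ('and', 0, 2),   # wire 5
             ('or', 3, 4),    # wire 6
             ('or', 6, 5)]    # wire 7 = output
table = [evaluate(majority3, x) for x in product((0, 1), repeat=3)]
print(table)   # → [0, 0, 0, 1, 0, 1, 1, 1]
```

The adjacency structure of which wires feed which gates is exactly the combinatorial data one would hand to the simplicial-complex picture above.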

Comment #37 May 24th, 2007 at 4:28 pm

Dear Chris — thanks for the suggestion … but that quantum gravity stuff is too hard for me!

From a more applied point of view, there is a very interesting new paper on the arxiv server (as of fifteen minutes ago) that pursues the very interesting idea of using quantum circuits to organize (what amounts to) QMOR by Dirac-Frenkel projection onto a Kähler manifold state space (although their article uses a non-geometric language):

“Unifying variational methods for simulating quantum many-body systems.” This article is interesting (IMHO) particularly in that “the key feature [of their approach] is a novel variational method over quantum circuits via infinitesimal unitary transformations, inspired by flow equation methods.”

The point, AFAICT, is to use QIT formalisms to adaptively “evolve” the QMOR state-space geometry to be the best possible match to the physics. This is a very interesting approach, and a very interesting article, IMHO.

Comment #38 May 25th, 2007 at 7:18 am

Blog discussions are like fruit flies — short-lived & attracted to sweet topics! So perhaps now is a good time to take a break from ultra-tough topics like P≠NP, phase transitions, and quantum gravity, and instead pursue the sweet topic of simple calculations.

By “sweet” I mean calculations that are reasonably accessible to students, and yet unite QIT, algebra, and geometry. Our QSE Group calls these topics “confections.”

Curvature calculations are among our QSE Group’s favorite confections. Here is an example. Let’s consider the manifold of all the (unnormalized) quantum states of two spins that are not entangled. Such states are of the general algebraic form

ψ = {c_1, c_2} ⊗ {c_3, c_4}

where {c_1, c_2, c_3, c_4} are arbitrary complex numbers.

The set of all such states {ψ} obviously comprises a three-dimensional complex submanifold of the four-dimensional Hilbert space (three-dimensional, because any one of the c_i can be scaled to unity).

From the QIT viewpoint, {ψ} is the set of unentangled quantum states—which is a very interesting quantum object! From the algebraic point of view, {ψ} is the set of values taken by an ordered set of polynomials in the variables {c_i}—which is a very interesting algebraic object! From the geometric point of view, {ψ} is a ruled Kähler manifold (which we shall call “Κ”)—which is a very interesting geometric object! So all the ingredients are at hand for a “confection.”

To begin, each point in Κ is obviously equipped with a set of rules, which are just the integral curves associated with the variation of each c_i. These integral curves obviously are geodesics in the larger Hilbert space (which we regard as the definition of a “rule”) and hence they are geodesics of Κ too. The existence of these natural geodesic rules is the geometric reason that computing, e.g., tangent spaces of Κ is algorithmically efficient.

Now we can ask: what is the scalar curvature R of Κ? It is easy to calculate the following simple result:

⟨ψ|ψ⟩ R = -8

and it is not much harder to generalize this result to the case of n spin-j product states, for which we find

⟨ψ|ψ⟩ R = -16 n(n-1) j^2.

Students who have had a course in GR or in differential geometry will find that verifying the above results is not particularly hard. Machine help is recommended, because the curvature tensors have lots of indices. There are many Mathematica packages for use in general relativity that will grind through the algebra automatically.

But don’t waste too much time using these GR packages … they should serve mainly to motivate you to explore the theory of Kähler potentials, which makes the above curvature calculations far easier.
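For readers who want the machine check, here is a sketch of the Kähler-potential route in Python/SymPy for the two-spin case (the coordinate chart and normalization below are choices made for this sketch). With c_1 scaled to unity, the product states are ψ(z) = (1, z1) ⊗ (z2, z3), and the Kähler potential of the induced metric is K = ⟨ψ|ψ⟩. It verifies that det g = ⟨ψ|ψ⟩ in these coordinates, and that the Ricci contraction g^{ab̄} R_{ab̄} is a negative constant divided by ⟨ψ|ψ⟩; the particular value -8 quoted above then depends on how the real Riemannian metric is normalized relative to the complex one, since the scalar curvature is a convention-dependent multiple of this contraction:

```python
import sympy as sp

# Holomorphic coordinates z_a and antiholomorphic w_a = conj(z_a) are
# treated as independent symbols (the usual trick for Kähler computations).
z1, z2, z3 = sp.symbols('z1 z2 z3')
w1, w2, w3 = sp.symbols('w1 w2 w3')
zs, ws = (z1, z2, z3), (w1, w2, w3)

# Kähler potential = <psi|psi> for psi = (1, z1) ⊗ (z2, z3)
K = (1 + z1*w1) * (z2*w2 + z3*w3)

# Kähler metric g_{a b̄} = ∂²K / ∂z_a ∂w_b
g = sp.Matrix(3, 3, lambda a, b: sp.diff(K, zs[a], ws[b]))

detg = sp.factor(g.det())
assert sp.expand(detg - K) == 0       # det g = <psi|psi> in this chart

# Ricci tensor of a Kähler metric: R_{a b̄} = -∂² log(det g) / ∂z_a ∂w_b
Ric = sp.Matrix(3, 3, lambda a, b: -sp.diff(sp.log(detg), zs[a], ws[b]))

# Contraction g^{a b̄} R_{a b̄}: a negative constant over <psi|psi>.
contraction = sp.simplify((g.inv() * Ric).trace())
print(sp.simplify(contraction * K))   # → -2 (uniformly negative curvature)
```

The uniform negativity, independent of the point on the manifold, is the convention-independent content of the claim above.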

These results provide a very interesting geometric insight into nonentangled quantum states: the manifold of product states has strongly negative curvature everywhere. And this property persists when we take sums of product states—which are precisely the states that pretty much everybody uses in quantum simulations.

We are led to a picture in which sum-of-product states comprise a “foamy” submanifold of Hilbert space. That manifold has only polynomially many dimensions, yet it has the remarkable geometric property that geodesics upon it diverge exponentially fast (there are tons of theorems about geodesics on negatively curved manifolds), such that the set of trajectories does a surprisingly good job of filling the larger Hilbert space.

With regard to simulating quantum systems—and speaking solely from my personal point of view—the above geometric features account for both the remarkable fidelity, and the remarkable algorithmic efficiency, of product-sum Kähler manifolds in simulating complex quantum systems. The quantum fidelity arises from the “foamy” geometry created by negative Ricci curvature, and the algorithmic efficiency arises from the natural ruled structure on these manifolds.

So who says quantum simulation is generically hard (i.e., in EXP)? From the above geometric point of view, quantum simulations are generically in P! The only exceptions are systems whose trajectories are carefully engineered never to intersect the space-filling “foam” of product-sum states. These specially designed hard-to-simulate systems are known, of course, as “quantum computers”.

Comment #39 May 25th, 2007 at 8:13 am

start your own blog.

Comment #40 May 25th, 2007 at 9:00 am

Roland Says:

“start your own blog.”

Roland, that is coming (I hope) … however, the “state-space” of jobs in QIT has to expand by another factor of two or so, to make room for what would necessarily be an engineering-oriented blog.

Just to state the obvious, the continued expansion of jobs in QIT cannot be taken for granted. For example, a graph on page 66 of this month’s (June) Scientific American summarizes the dismal contraction in American physics PhDs awarded annually; a contraction which has been sustained for more than 25 years.

Just as sobering, American biology is now in the early phases of what appears to be a similar more-or-less permanent PhD contraction.

These contractions are very bad news for young people who are seeking to establish careers in science and technology.

Can QIT escape a similar contractile fate? The answer IMHO is an optimistic “yes, absolutely!”. But this will require a solid commitment to discovering practical applications that derive from the present generation of (fabulous! beautiful!) fundamental results in QIT.

Those practical applications of QIT are what my engineering colleagues and I have begun to focus upon. Obviously, it is not necessary that everyone enjoy doing this, but equally obviously, it is necessary that some people enjoy doing it.

Meanwhile, Scott provides a tremendous service to everyone—which demands a lot of work on his part—in creating his good-humored forum for discussing ideas “at the interface.”

Which, come to think of it, is precisely what shtetls are for!

Comment #41 May 25th, 2007 at 9:43 am

Correction of the link in my previous post: gr-qc/0404088

John,

This paper is as much about differential geometry as it is about a particular approach to quantum gravity. It contains some observations that I think are well worth considering in other contexts.

Comment #42 May 25th, 2007 at 11:39 am

Rahul, thanks for the reminder about Allender et al., which I actually (finally!) began really studying earlier this month.

Chris, thanks—the relativity paper has a generally helpful appendix, too.

I haven’t had time to fully deconstruct Susan Coppersmith’s paper yet, but if you’ll pardon some “thots” (as a colleague who shall stay nameless observed, they take the “ugh” out of thoughts), here goes: The crux seems to be an assertion that:

() All P/poly function families [f_N] either

(a) have low-degree approximations that are correct except for about a 1/2^{N^e} part of the domain (eqn 10 with \xi = “a fractional power” N^e), or

(b) allow f_N to be written as a function of N-1 or fewer “composite variables”.

Maybe this formalizes as saying that poly-size circuits either have good low-degree approximations or can be massaged into poly-size circuits whose underlying graphs have nontrivial N-1 node separators. I’m not sure how to get around trivialities like an output gate of fan-in 2 having a 2-node separator—in some cases one can partition the input variables X into X1 and X2 of roughly-equal size and talk about separators of X2 from X1. In general the paper needs attention from a formalist, fixing asymptotics and things like “function in P” when a concrete N is meant, and making the notions of “generic phase” and “flow” less cryptic.

The resulting hardness predicate negates “either-or” to “neither (a) nor (b)”. The paper’s Appendix B shows that not-(a) is “large”, and I imagine not-(b) is also “large”, so non-naturalizing must be based on complexity (i.e., on non-“constructivity”). Indeed, she writes at the end of section 4 (p10 top) that “the procedure discussed in Sec. 3 requires a number of operations that scales superexponentially with N”. This appears to be based mainly on there being about 2^{n^{(1-e)n^e}} candidate approximations of degree n^e to wade through, per the counting on p7 bottom. However, I’m skeptical—has anyone looked into whether good low-degree approximations (when promised to exist?) can be found in EXP-time by non-brute-force means? Or whether predicate (a) can just be decided in EXP-time by scaling up known work on low-degree testing?

My other skepticism is that all this is over the field GF(2) and hence multi-linear. The premise of my idea for Perm \notin arith-P/poly in comment #15 is to exploit non-multilinear algebra that is “latent” in hard polynomials. Indeed, it’s hopefully not a gross oversimplification to say that the key post-1950 geometric insight (Grothendieck et al.) is that the equation “x^2 = 0” has more algebraic structure than “x = 0”, and geometry needs to be extended to reflect this structure. Well, how does this play out in QFT—to what extent does multilinearity suffice? (To close with another witticism, Scott wrote in Democritus lecture 10 that Feynman’s Nobel was essentially for proving BQP is in PP. Maybe Dyson, left out in 1965, will eventually get kudos for helping prove NP != P. :-)
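For toy sizes, the brute-force search behind predicate (a) is easy to sketch directly (illustrative code, not anything from the paper): enumerate every multilinear GF(2) polynomial of degree at most d over n variables and record the best agreement with a target function. The double-exponential blowup she invokes is visible in the size of the search space, 2^(#monomials of degree ≤ d):

```python
from itertools import product, combinations

def low_degree_monomials(n, d):
    """All multilinear monomials over GF(2) in n variables, degree <= d."""
    return [s for k in range(d + 1) for s in combinations(range(n), k)]

def best_agreement(f, n, d):
    """Brute-force the best fraction of inputs on which some GF(2)
    polynomial of degree <= d agrees with f.  The outer loop runs over
    2^(#monomials) candidate polynomials -- the blowup in question."""
    mons = low_degree_monomials(n, d)
    points = list(product((0, 1), repeat=n))
    truth = [f(x) for x in points]
    best = 0
    for coeffs in product((0, 1), repeat=len(mons)):
        # evaluate sum over monomials S with coeff 1 of prod_{i in S} x_i, mod 2
        agree = sum(
            (sum(c for c, s in zip(coeffs, mons)
                 if c and all(x[i] for i in s)) % 2) == t
            for x, t in zip(points, truth))
        best = max(best, agree)
    return best / len(points)

# AND of 4 variables has degree 4; the constant-0 polynomial already
# agrees on 15 of 16 inputs, and no degree-1 polynomial does better.
print(best_agreement(lambda x: int(all(x)), n=4, d=1))   # → 0.9375
```

Whether this exhaustive search can be replaced by something cleverer (my boldfaced question above) is exactly what would threaten to “naturalize” the predicate.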

Comment #43 May 25th, 2007 at 1:29 pm

Something else of interest in the June SciAm.

Comment #44 May 25th, 2007 at 3:51 pm

Bill Kaminsky said, regarding Susan Coppersmith’s paper

“Anybody here have thoughts on this paper?”

Apparently not, Bill. Maybe because she is a physicist stealing the show from computer scientists.

Comment #45 May 25th, 2007 at 4:06 pm

:: rrtucci—Bill Kaminsky said, regarding Susan Coppersmith’s paper

“Anybody here have thoughts on this paper?”

A long post of mine doing exactly that awaits moderation—maybe having 4 HTML tags, not just 4 hyperlinks, delays it. Short version: the crux seems to be an assertion that for all P/poly function families [f_n], either (a) f_n has a good approximation by a degree-n^e polynomial over GF(2) (where you can scale e between 0 and 1 to taste), or (b) f_n depends on fewer than n “composite variables”. The hardness property “not(a or b)” is “large”, but (a) at least is adduced to require double-exp time, evidently because there are double-exp many degree-n^e polynomials to try. The Q I boldfaced, for those who know low-degree testing stuff better than I, is whether (a) is decidable in single-EXP time after all, which would “naturalize” it.

Comment #46 May 25th, 2007 at 11:25 pm

John Sidles says: Roland, that is coming (I hope) … however, the “state-space” of jobs in QIT has to expand by another factor of two or so, to make room for what would necessarily be an engineering-oriented blog.

I say: No, that’s not a reason why you can’t start your own blog now. Just start your own blog. I’d read it. Do it now. There’s no reason not to, since you obviously have things to say. You’ve been hijacking threads here for a long time. Your posts are often non sequiturs (e.g. your recommendation that we read a Borges story while listening to a particular piece of music, which you posted out of nowhere not once but twice, in December and more recently); this is the kind of thing that you could post on your own blog.

I’ll read your blog, and I’m sure Scott will add you to his blogroll if your blog is okay.

Comment #47 May 26th, 2007 at 6:33 am

I’ll defend John here. He’s put in great personal effort in both the technical and life issues that get discussed in this blog, and I find his viewpoints interesting and expressed reasonably. His Averroes’ Search Xmas item came with a nice link, and Borges on Spinoza was directly relevant to Scott’s Merneptah and Spinoza item.

In terms of my poetically-expressed theme from Scott’s April 1 item, he’s “paid his dues”—even to proselytize a (qu)bit for QIT. [BTW, I'm making a short draft paper of my thoughts on probability and multiverse theory before taking that further---basically it's like Nassim Taleb's "Black Swans" point applied to math/physics.]

Comment #48 May 26th, 2007 at 8:37 am

KWRegan, thanks, and I agree with you. John’s comments are almost always interesting, and his links are often good, although I haven’t followed all of them. I don’t think anyone’s attacked John, though. I just think it makes a lot of sense for him to have his own blog, since he already seems to have a blog within a blog, which is fine if it’s fine with Scott; it’s just puzzling. And as I said, I’d read his blog if he started one.

Comment #49 May 26th, 2007 at 9:53 am

I’m not sure even Scott knows what’s “fine” to optimize his shtetl. In the old days the rabbi/village-elders could absorb heat to help others cool down, in systems that were isolated hence pretty well closed. But in the Net age, shtetl-optimization is surely NP-hard, and whether such adiabatic methods can solve NP-complete problems is the technical question now enshrined in the header of this blog :-).

Comment #50 May 26th, 2007 at 10:12 am

Seriously, though, per my long comment #42, I’m curious to know:

() (generally if vaguely) how much of QFT/”renormalization” is multilinear, and

() (specifically) whether known results on low-degree testing may scale up to “naturalize” item (a) relating to Susan Coppersmith’s paper.

Comment #51 May 26th, 2007 at 8:02 pm

Scott asks a rhetorical question, then answers it:

“Why can’t they write about P vs. NP the same way? [as the LHC] Oh, right … them big machines …”

rrtucci Says:

(Comment #26, before he invoked Ken Wilson): “I’m a physicist, so my inclination would be to study P to NP as a phase transition.”

Combining these two, I think we have an unsolicited Theomathematics and Theophysics proposal to create high-luminosity beam collisions of polynomials and nonpolynomials in a million-Gauss magnetized Calabi-Yau manifold, with huge expensive machines looking at the jets of emitted literals, atoms, and axioms, to reconstruct the collisions and search for statistical proof of a phase transition.

If this is to be funded while the GOP is still in power, I suggest a back-up Faith-Based proposal which attempts the same within a religion’s rules of inference. When God said “Let there be light” — did he mean photons alone, or a mixture of photons and photinos? A huge expensive satellite examining the Cosmic background Radiation for signs of “Fiat” and/or “Lux” might be the answer.

I’d explain the Homeland Security version here, as they have a $10^{10} R&D budget, but this blog does not support sufficiently strong encryption.

Comment #52 May 27th, 2007 at 6:03 am

Hi,

I am no expert in CS.

It seems there is an obvious amount of evidence that P is not equal to NP. So, where lies the difficulty proving (or disproving) the Clay’s millenium problem (whether P=or !=NP)? Why is it so difficult?

Comment #53 May 27th, 2007 at 9:20 am

Yup, you aren’t an expert in CS.

Comment #54 May 27th, 2007 at 9:48 am

Not even right, I’m no expert in CS either … but I’m pretty sure the problem is difficult because no one has any idea how to prove it (or disprove it), which Scott has said before (as well as Bernard Chazelle). Now I’m starting to see that part of that is because current proof techniques seem inadequate to attack the problem (though I don’t understand these ‘natural proof’ ideas yet either, but I’ll get there in time).

As for the ‘obvious amount of evidence that P is not equal to NP’, I think that might more properly be viewed as an extreme lack of evidence that P=NP. The problem is that algorithms can be extremely tricky, and so maybe we just aren’t clever enough yet; the AKS primality test is sort of an example of this, and it was only discovered in 2002.

Additionally, we’ve learned that mathematics doesn’t really care what we think, or how often something seems to be true; if there isn’t a full proof, we don’t really know (see the Pólya conjecture).

Comment #55 May 28th, 2007 at 2:49 am

“So, where lies the difficulty proving (or disproving) the Clay’s millenium problem (whether P=or !=NP)? Why is it so difficult?”

Sure, it’s not an easy problem. But it might be feasible.

The problem is that CS people hardly invest serious time to solve it, as they need to publish many papers. So they are not effectively interested in big questions, rather in publishing hundreds of small, feasible, incremental mainstream papers with tiny clever combinatorial ideas.

Comment #56 May 28th, 2007 at 3:03 am

It’s too bad it’s becoming known as a “Clay’s Millennium Problem”.

That’s what Clay wants alright, but to my ear it’s like asking “have you read the Penguin Classic, Homer’s Odyssey?”

Comment #57 May 28th, 2007 at 6:22 am

Not even right,

In physics and biology, if you have a lot of evidence supporting a hypothesis and no evidence countering it, you can say that hypothesis is “proven”. In math and CS, evidence doesn’t cut it. You need a formal logical argument, in which each step follows directly from the ones that came before it. You need to prove that no counterevidence will ever be found.

Comment #58 May 28th, 2007 at 9:42 pm

So, it is a mathematical problem. Can it be proved by contradiction? Assume that P=NP is true, this assumption leads to something impossible or wrong.

Comment #59 May 28th, 2007 at 9:55 pm

“Can it be proved by contradiction? Assume that P=NP is true, this assumption leads to something impossible or wrong.”

Brilliant! Why didn’t anyone think of that? I’m going to try it right now…

Comment #60 May 28th, 2007 at 11:55 pm

“Brilliant! Why didn’t anyone think of that? I’m going to try it right now…”

See, if you had only read your Wittgenstein, you would arrive at these insights much faster.

Comment #61 May 29th, 2007 at 1:09 am

“See, if you had only read your Wittgenstein, you would arrive at these insights much faster.”

Oh, that is the only trick I could think of (the trick that my high school maths teachers taught us to disprove something), or perhaps some smart mathematicians can attack or disprove it “more directly”.

Comment #62 May 29th, 2007 at 4:15 pm

So, are you taking the job at MIT, or did that post start off a bidding war that has yet to be resolved?

Comment #63 May 29th, 2007 at 5:44 pm

Mu (the Zen word that “unasks” a question)

Comment #64 May 30th, 2007 at 1:01 pm

Not even right,

I think you had better revert to lurking.

Comment #65 May 31st, 2007 at 1:38 am

Resolventeer says: “Sure, it’s not an easy problem. But it might be feasible. The problem is that CS people hardly invest serious time to solve it, as they need to publish many papers. So they are not effectively interested in big questions, rather in publishing hundreds of small, feasible, incremental mainstream papers with tiny clever combinatorial ideas.”

Actually quite a few people have spent a lot of time on big questions in CS and on P=NP.

Mike Fellows comes to mind (one of the two developers of parameterized complexity).

Comment #66 May 31st, 2007 at 1:52 am

Chris W. says: “I think you had better revert to lurking.”

Sorry, why?

Comment #67 May 31st, 2007 at 3:25 am

Resolventeer Says: “Sure, it’s not an easy problem. But it might be feasible. The problem is that CS people hardly invest serious time to solve it, as they need to publish many papers. So they are not effectively interested in big questions, rather in publishing hundreds of small, feasible, incremental mainstream papers with tiny clever combinatorial ideas.”

That is also the case in physics. Physicists publish many papers which may contain only a slightly different idea in each paper. But it is also worthwhile to think about the big problems. If you can propose important theories like Special Relativity, General Relativity, or Yang-Mills gauge theory, just to name a few, your work is without any doubt a milestone in the history of physics. Their importance can’t be matched by a hundred papers which each advance physics research only by a small amount.

Comment #68 May 31st, 2007 at 5:43 am

I can answer N.E.R. from comment #52 on in some detail. There are certainly two “human factors” why P vs. NP is considered difficult:

(1) The # of bright-person-years spent without solving it rivals that of older conjectures, and P vs. NP itself turned 50 last year.

(2) We have not yet even developed tools to prove meaningful super-linear lower bounds on general computational models (or super-n log n for algebraic circuits over Q, R, C…). E.g. no language in E ∪ NP is known to be unsolvable by Boolean circuits of size 5n—and even the Lachish-Raz 4.5n lower bound (you can Google it) “cheats” by not using the full binary basis.

But what really gives people pause are two “structural factors”:

(3) The Baker-Gill-Solovay (et al.!) results giving (“oracle”) languages A, B such that P^A = NP^A but P^B != NP^B. Any PSPACE-complete A and “random” B do that. An oracle A allows machines to write down strings z and get the answer “is z in A?” immediately, “for free”.

(4) The Razborov-Rudich obstacle, which blunted 15+ years of progress on circuit lower bounds.

Item (3) “smells like” a formal independence result, while (4) comes with actual independence results. Scott himself has an excellent survey Is P versus NP Formally Independent? which covers all of these issues.

Your social observations are indeed remarked on. Jin-Yi Cai observed after Wiles solved Fermat that complexity theory has not yet developed the kind of tools that always existed for Fermat, and gave incremental progress that spurred other interesting mathematics for centuries—before Ribet (et al.) cracked the door widest open in 1986. He might say the same of Poincaré. Many of those doing “incremental progress” see themselves as helping along just such a toolkit—though IMHO Jin-Yi is right that the tools are not mature yet. My remarks above show I’ve taken at least some kind of big swing at the big questions (most specifically PERM vs. DET), but I might also be held up as an example of the dangers—it is hard to publish non-results! The Mulmuley-Sohoni program goes much further in developing tools (especially with new papers since last month), and while I haven’t surveyed my own stuff yet, at least I surveyed a non-negligible initial segment of their work. A field needs people of all kinds to be healthy…

Comment #69 June 16th, 2007 at 8:34 pm

Re: #58, #59:

Deconstruction of a dumb, wrong argument about P vs NP:

http://arxiv.org/PS_cache/arxiv/pdf/0706/0706.2035v1.pdf

Title: Critique of Feinstein’s Proof that P is not Equal to NP

Authors: Kyle Sabo, Ryan Schmitt, Michael Silverman

Comments: 5 pages, 2 definitions

We examine a proof by Craig Alan Feinstein that P is not equal to NP. We present counterexamples to claims made in his paper and expose a flaw in the methodology he uses to make his assertions. The fault in his argument is the incorrect use of reduction. Feinstein makes incorrect assumptions about the complexity of a problem based on the fact that there is a more complex problem that can be used to solve it. His paper introduces the terminology “imaginary processor” to describe how it is possible to beat the brute force reduction he offers to solve the Subset-Sum problem. The claims made in the paper would not be validly established even were imaginary processors to exist.

Comment #70 June 17th, 2007 at 10:07 am

Someone actually took the time to write up a formal response to something Craig Feinstein wrote???

For Not even right’s comment #58, if you wanted to prove directly that P ≠ NP, the most obvious approach would be to try to construct a language L that is not in P (but hopefully is still in NP). Traditionally, the way one would go about doing that is through some sort of diagonalization process. What Baker-Gill-Solovay showed was that this approach was doomed to failure. You could also try to work in the other direction, starting with a problem you know is in NP and is believed to be hard (no shortage of candidates for that), and then show that the problem is not in P. Razborov-Rudich has knocked down one line of attack in this direction.

Since attempts at a direct proof of P ≠ NP haven’t been having too much success, I think it’s safe to say that people have been looking at ways to prove this by contradiction (and more rigorously than Craig Feinstein evidently has been). I’m all with you on that point. I just hope that the board of the Clay Mathematics Institute isn’t stacked with a bunch of constructivists.