## Barriers to snarky blogging

I’m writing from the Barriers in Computational Complexity workshop in Princeton, where too many real things are happening for me to blog about nothing. I understand that streaming video of all the talks will be up eventually; for now, a few highlights:

- On Tuesday I hosted a panel discussion on “Barrier Problems in Boolean Complexity.” The panelists were Steve Cook, Avi Wigderson, Russell Impagliazzo, and Sasha Razborov. We got lots of questions from the floor, about everything from whether P≠NP, to whether P vs. NP is independent of set theory, to whether the laws of physics can be understood as computer programs. Alas, there were few if any serious disagreements among the panelists (indeed, you can probably guess their answers to all three questions).

- I gave a talk entitled Has There Been Progress on the P vs. NP Question? (The link goes to the PowerPoint slides.)

- Ketan Mulmuley spoke about Geometric Complexity Theory (GCT), his approach to P vs. NP and related problems based on algebraic geometry and group representation theory. For months I’ve been planning a blog post about GCT. Spurred on by people at the workshop, I might actually finish it soon. In the meantime, those of you who can’t wait for your daily helping of plethysms, Weyl modules, and G-varieties might want to check out Mulmuley’s new complexity-theoretic overview and complementary mathematical overview of GCT.

- Ben Rossman spoke about recent lower bounds in circuit complexity: an ~n^{k/4} lower bound on the size of AC^0 circuits computing the k-clique function, and (a brand-new result) an ~n^{k/4} lower bound on the size of *monotone* circuits computing the k-clique function, even on average.

- Ran Raz gave an awesome talk on “How to Fool People to Work on Circuit Lower Bounds.” (Answer: by giving them completely innocuous-looking mathematical problems, without telling them that the answers would imply breakthroughs in complexity theory. Alas, presumably no one who attended Ran’s talk—or for that matter who’s reading this entry—can be fooled, since we’re in on the secret.) In particular, Ran spoke about his STOC’08 paper on elusive functions, as well as some brand-new work on how lower-bounding the rank of explicit tensors would lead to circuit and formula size lower bounds.
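To make the k-clique function from Rossman’s lower bounds concrete, here’s a minimal brute-force sketch (my own illustration, not anything from Rossman’s work): it decides k-clique in roughly n^k time by checking every k-subset of vertices, which is the trivial upper bound against which an ~n^{k/4} circuit lower bound should be compared.

```python
from itertools import combinations

def has_k_clique(n, edges, k):
    """Decide the k-clique function by brute force: try all C(n, k) vertex
    subsets, so the running time is roughly n^k for constant k."""
    adj = set(frozenset(e) for e in edges)
    for subset in combinations(range(n), k):
        # A subset is a clique iff every pair inside it is an edge.
        if all(frozenset((u, v)) in adj for u, v in combinations(subset, 2)):
            return True
    return False

# A triangle contains a 3-clique; a 4-cycle does not.
triangle = [(0, 1), (1, 2), (0, 2)]
four_cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
```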

Meanwhile, Lance has a superb survey article in *Communications of the ACM* about the status of the P vs. NP problem.

(An earlier version of this post discussed a preprint by Gus Gutoski on quantum multi-prover interactive proof systems. That preprint has since been retracted.)

And now I bid adieu, as the next talk is starting and my laptop is running out of batteries.

Comment #1 August 27th, 2009 at 3:18 pm

For some reason I can’t get the presentation file to open. Wondering if something is wrong with it?

Comment #2 August 27th, 2009 at 5:06 pm

It works fine for me.

Comment #3 August 28th, 2009 at 1:08 am

Suppose Frank Wilczek gets to ask the super-aliens whether P=NP and they answer that P!=NP. What has anyone really learned?

Comment #4 August 28th, 2009 at 1:11 am

Re the presentation, I notice in Open Office that a bunch of your ppt’s have issues where parts of the slides overlap and occlude each other. Also the slides are locked against editing so I can’t easily move the parts around to be able to read the obstructed text. Being interested in cryptography, I just click on one of the boxes so that the text appears OR’d with text from the other box, then squint and try to make out what it says. But it’s a nuisance.

Comment #5 August 28th, 2009 at 2:17 am

I was hoping you’d write a post about Geometric Complexity Theory (GCT).

The recent paper arXiv:0907.2850 notes that “This program is beginning to gain the attention of the mathematical community” and has a very elegant description of some geometric aspects of the approach.

Comment #6 August 28th, 2009 at 7:54 am

asdf: Indeed, I don’t think we’d learn much from that, or from pretty much any other single bit of information.

To be honest, I was so excited that P vs. NP had permeated the culture to the point where even a Nobel physicist had heard of it, that I decided not to raise my hand to quibble with his statement. 🙂

Comment #7 August 28th, 2009 at 7:55 am

> a bunch of your ppt’s have issues where parts of the slides overlap and occlude each other.

asdf: The reason is that I make heavy use of PowerPoint animations. Maybe try the PowerPoint viewer if OpenOffice can’t handle that?

Comment #8 August 28th, 2009 at 9:47 am

How is Rossman’s result on monotone circuits different from Razborov’s? The Alon-Boppana improvement of Razborov’s argument also gets a lower bound of around n^Omega(k).

Comment #9 August 28th, 2009 at 7:44 pm

Scott, OpenOffice only shows the final version of each slide in your presentations, without letting you step through them (or at least I don’t know how to make it do that). Since you’re fond of having steps which cover other steps, sometimes the text gets obscured.

Comment #10 August 29th, 2009 at 2:05 am

I liked your PowerPoint presentation — I have long thought that the Permanent-Determinant issue was the key to the P-NP question.

Is there any reasonable approach to proving P!=PSPACE that wouldn’t also show P!=NP? It seems that P!=PSPACE ought to be much easier to prove, but all the discussion seems to be about ways to tackle P!=NP.

In terms of philosophical importance, I would recommend NP?=CoNP as even more interesting than P!=NP (of course P=NP would be way more interesting than any previous mathematical result ever proven). Informally, NP=CoNP says “there are no accidents, everything that is true is true for an intelligible reason”.

Comment #11 August 29th, 2009 at 1:14 pm

Scott, the following is a quote from your paper with Wigderson on algebrization.

The last few words there strike me as a nice slogan to bear in mind if one is thinking about P versus NP and wants to avoid accidentally attempting to give a naturalizable proof. Can you give a similar slogan that says what an algebrizable proof is trying to do that it in fact cannot do? I sort of 70% understand the formal definition from the introduction to your paper but have absolutely no feel for what kinds of proof attempts are likely to be algebrizable. Maybe the only way of getting that feel is to read your paper much more carefully, but if a short informal answer exists then it would be very nice. (I write this because I have my own complexity theory barrier, which is that I can never hold in my head the definitions of enough of the jargon to be able to understand what people write, even when they are trying to be friendly, and this despite the fact that I know more about the subject than the average non-TCS mathematician.)

Comment #12 August 29th, 2009 at 2:11 pm

Bram: Yes, I know that’s the problem; sorry about that. Maybe try ClosedOffice? 🙂

Comment #13 August 29th, 2009 at 3:41 pm

> I sort of 70% understand the formal definition from the introduction to your paper but have absolutely no feel for what kinds of proof attempts are likely to be algebrizable.

Hi Tim,

As it happens, some of us were discussing that exact question at the Barriers workshop. My own intuition is that “algebrizing techniques” roughly coincides with “relativizing techniques” plus “IP=PSPACE-like techniques” (that is, techniques based on “lifting” Boolean formulas to low-degree polynomials over some larger field).
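To give a concrete (and heavily simplified) picture of that “lifting” step, here’s a sketch of the standard arithmetization trick: each Boolean connective is replaced by a polynomial that agrees with it on 0/1 inputs, after which the formula can also be evaluated at non-Boolean points of a larger field. The example formula and field choice below are my own toy conventions, not anything from the algebrization paper.

```python
# Arithmetization: lift a Boolean formula to a low-degree polynomial that
# agrees with it on 0/1 inputs, then evaluate over a larger prime field.
#   NOT x   ->  1 - x
#   x AND y ->  x * y
#   x OR y  ->  x + y - x*y
P = 101  # any prime larger than 2 works for this toy demo

def arith_not(x): return (1 - x) % P
def arith_and(x, y): return (x * y) % P
def arith_or(x, y): return (x + y - x * y) % P

def lifted(x, y, z):
    """The lifted polynomial for the Boolean formula (x OR y) AND (NOT z)."""
    return arith_and(arith_or(x, y), arith_not(z))

def boolean(x, y, z):
    return (x or y) and (not z)

# On 0/1 inputs the polynomial agrees with the Boolean formula, but it is
# also defined at non-Boolean points like x=5 -- exactly the extra structure
# that interactive-proof techniques such as IP=PSPACE exploit.
```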

However, it must be admitted that some other results, which don’t fall into that framework, turn out to algebrize as well (one example being the Goldreich-Micali-Wigderson theorem on zero-knowledge proofs). And also, the PCP Theorem and the MIP=NEXP Theorem (about multi-prover interactive proof systems) are *partly* based on lifting Boolean formulas to low-degree polynomials, but there’s a natural sense in which they fail to algebrize. (Whether they algebrize or not depends on the oracle access mechanism: Avi and I show that MIP=NEXP algebrizes if the NEXP machine is restricted to making polynomially-long queries to the oracle, but it trivially fails to algebrize if the NEXP machine is allowed to make exponentially-long queries.)

You might get some more insight from the followup paper An Axiomatic Approach to Algebrization, by Impagliazzo, Kabanets, and Kolokolova.

Regarding the “jargon barrier”—sorry about that! 🙂 If it makes you feel better, the same barrier applies in the other direction (e.g., whenever I pick up a yellow book). I’ll be happy to answer any other questions.

(Your comment reminded me of one of the slides from Ran Raz’s talk, which showed “the ultimate barrier to circuit lower bounds”: an anatomical diagram of the human brain.)

Comment #14 August 29th, 2009 at 3:43 pm

Aha, I didn’t even realize that those slides were animations and I was only seeing the final image in each sequence. That explains things.

Scott, I came across a pretty cool dissertation, “Predicative Recursion and Computational Complexity” (1992) by Stephen J. Bellantoni, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.8024

It is inspired partly by Ed Nelson’s concept of predicative arithmetic, which is interesting in its own right. Bellantoni defines some classes of functions through a very restricted form of recursion on the formulas defining individual functions: nothing to do with Turing machines or running time. Then he shows that the functions defined by a particular restriction, “predicative recursion”, turn out to be exactly the P-time functions. Some other restrictions yield other familiar complexity classes.
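As a toy illustration of the flavor of that discipline (my own example, not one from the thesis): in predicative recursion on notation, you may only recurse by stripping bits off a “normal” argument, and the recursive result may appear only in “safe” positions, where it is used but never recursed on again. Counting the one-bits of an integer fits the discipline, and its recursion depth is the bit length of the input, hence polynomial time:

```python
def ones(x):
    """Count the 1-bits of x by recursion on binary notation.

    The recursion is on x >> 1 (one bit shorter each step), and the
    recursive value ones(x >> 1) appears only in a "safe" position: it is
    added to, but never itself recursed on. Depth = bit length of x, so
    this runs in polynomial time. By contrast, defining exponential growth
    (e.g. x -> 2^x) would require feeding a recursive result back into a
    recursion position, which the predicative discipline forbids.
    """
    if x == 0:
        return 0
    return ones(x >> 1) + (x & 1)
```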

Do you know about this thesis? It seems to me to sort of explain what you describe as the “robustness of P”. It also leads to a wild thought: is membership in the class of predicative recursive functions decidable, say with something like quantifier elimination? There is a famous algorithm by Leonid Levin that solves SAT by diagonalizing over Turing machines and running them in parallel until one of them has generated a proof witness for the given SAT instance. This algorithm obviously terminates with the right answer. The cute observation is that its running time is polynomial if and only if P=NP. So if there is a decision procedure for predicative recursion, then running Levin’s algorithm through it would immediately answer P vs. NP. (Well, of course decision procedures can be pretty slow; the one for Presburger arithmetic, for example, is doubly exponential. So actually running the procedure and getting the answer would be computationally infeasible, but at least you’d know that there IS an answer.)
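Since Levin’s universal search enumerates all programs, a faithful implementation is unwieldy, but its dovetailing structure can be sketched with a fixed list of candidate “solvers” standing in for the program enumeration (the solver list, time budgets, and CNF encoding below are my own illustrative choices, not Levin’s construction):

```python
from itertools import count, islice

def check(clauses, assignment):
    """Verify a candidate assignment against a CNF formula (literals are
    +v / -v for variable v). Verification is fast even when search is slow."""
    return all(any(assignment.get(abs(l), False) == (l > 0) for l in clause)
               for clause in clauses)

def levin_style_search(clauses, num_vars, solvers):
    """Dovetail over candidate solvers: in phase t, run solver i on about
    2^(t-i) more candidates, returning the first assignment that verifies.
    Like Levin's algorithm, this halts only if a satisfying assignment exists."""
    gens = []
    for t in count(1):
        if len(gens) < len(solvers) and len(gens) < t:
            gens.append(solvers[len(gens)](num_vars))
        for i, g in enumerate(gens):
            for assignment in islice(g, 2 ** (t - i)):
                if check(clauses, assignment):
                    return assignment

def exhaustive(num_vars):
    """A stand-in 'program': enumerate all assignments in counting order."""
    for bits in range(2 ** num_vars):
        yield {v: bool((bits >> (v - 1)) & 1) for v in range(1, num_vars + 1)}

# (x1 or x2) and (not x1 or x2) is satisfied by any assignment with x2=True.
clauses = [[1, 2], [-1, 2]]
result = levin_style_search(clauses, 2, [exhaustive])
```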

Comment #15 August 29th, 2009 at 3:50 pm

The answer to the “wild thought” is obviously that there is no decision procedure (it would solve the halting problem). Oops.

Comment #16 August 29th, 2009 at 6:29 pm

Scott, a really welcome topic for a *Shtetl Optimized* discussion would be: “Pretty Good Yellow-Cover Math Books”. Because as Fermi expressed it, “For me there are two kinds of [yellow-covered] mathematics books: the ones where I don’t understand the first page, and the ones where I don’t understand the first sentence.”

A good-hearted syllabus of “pretty good” yellow-cover math books would be the math community’s version of Garrison Keillor’s *Prairie Home Companion* good-hearted joke-themed shows (which are anthologized in the book series *Pretty Good Jokes*).

Right now I am struggling through Arnol’d’s pretty good and (literally) yellow-covered *Classical Mechanics* … also pretty good and (literally) yellow-covered is Joe Harris’ *Algebraic Geometry*. These yellow books offer plenty of tough-but-fun mathematical insights into the workings of QM/QIT/QIS; if more students were encouraged to read them, the long-term results would be good for everyone (IMHO).

As Huck Finn said of *Pilgrim’s Progress*: “The statements was interesting, but tough” … yeah, that’s what reading yellow math books is all about.

Comment #17 August 29th, 2009 at 9:04 pm

A little late, but what version of Open Office are you using? I opened the slides in OO 3.1 on a Mac, ran the slide show and everything looked fine except I didn’t have one of the fonts. But the substitute was readable. All the animation I saw worked fine.

Comment #18 August 31st, 2009 at 12:35 am

I own a few dozen Yellow Books. Offhand, I would rate “Perturbation Theory for Linear Operators” by Tosio Kato to be the best/most useful. I have the short version limited to the finite dimensional case (Matrix Theory). Kato is an example of the rare author who can organize deep stuff so well it all seems obvious. Anyone working with matrices, eigenvalues, or eigenvectors will discover lots of good info.

I just checked Amazon, and the full version has been re-released, so it only costs half as much as the short version. Wow! I can make $50 by upgrading.

Comment #19 September 1st, 2009 at 1:56 am

I don’t know how you’re going to talk about GCT, but Fortnow’s description, breaking the program into those 3 steps, is bizarre. If you’re going that way, here are some questions I’d suggest answering:

Why would you prove an infinite collection of statements by showing that they’re susceptible to a particular algorithm, and showing what that algorithm does to them? Are there other examples where people have done this? Why an efficient algorithm? Who cares if they’re in P, when you’re finally done and know the answer in constant time?

It sounds to me like there is supposed to be some extra property that leads both to the algorithm applying and to being able to understand the infinite family. From this point of view, the algorithm is a detour. It might be a useful detour; for example, if you can prove the algorithm works, you can run it in a finite number of cases. Even if you can’t prove it works, if you want the answer to be Yes and the problem is in NP, as in the example, you can run the algorithm anyhow. Has this been done? I suspect not, and I suspect the reason is that the algorithm is supposed to be polynomial in the inputs, but the inputs are big in terms of n. (Also, another obvious question: why do they propose two algorithms? It does seem plausible that it’s useful to prove that a standard heuristic like LP relaxation applies; but a special-purpose algorithm?)

Comment #20 September 1st, 2009 at 1:57 pm

My wife Connie gave me some illuminating insights into the publishing phenomenon of “Yellow Books” this morning … in the context of designing the cover of Connie’s forthcoming book *In My Nature*.

Connie pointed out that yellow book covers are a wonderful example of *biophilia* (a term coined by E. O. Wilson to describe the innate human liking for everything that our evolutionary history has conditioned us to like).

It works like this. You go to a library or a bookstore … you pick a book off the shelf … a book that somehow looks enticing … a book with a green or brown background … and a foreground that is banana-yellow … cherry-red … orange-orange … lime-green … blueberry-blue.

Hmmm … when you think about it … what’s more enticing to a primate … than a lusciously ripe, brightly covered fruit … framed by darker green/brown foliage … within easy arm’s reach?

Isn’t this “yellow book” bookbinding aesthetic partly the reason why libraries and bookstores are such comfortable places? They are paradises in which the fruits of intellect are within easy reach!

Connie and I laughed together at these silly notions … but then, hey … we too designed the book cover to accord with the traditional “yellow-book” biophilia-driven aesthetic of publishing … and we shipped those book files this morning off to the printer. Because even when you understand their evolutionary origins, “yellow book” biophilic traditions demand and deserve respect.

All hail the mighty memoir class of LaTeX, which makes it easy to respect “yellow book” traditions! 🙂

Comment #21 September 1st, 2009 at 6:39 pm

John,

The book looks Birkhauser Green to me! It appears to have a cool publisher, too.

Comment #22 September 1st, 2009 at 7:57 pm

“Steve Cook, Leonid Levin, and Richard Karp [10, 24, 27] developed the initial theory of NP-completeness that generated multiple ACM Turing Awards.”

Ah, why mention the awards? Tacky.

Comment #23 September 2nd, 2009 at 4:42 am

Hmmmm … Scott’s account of this conference was lively, timely, well written, and valuable to me … as was Scott’s talk *Has There Been Progress on the P vs. NP Question?* that he was gracious enough to post … ditto for Rahul Santhanam’s in-depth and very interesting guest-post on the Fortnow/GASARCH blog … and so both have earned our gratitude.

Comment #24 September 3rd, 2009 at 8:58 pm

Apropos of nothing in particular… (except that Scott inflicted upon my mind the pernicious notion of “Yellow Books”) … here is a fun page of Yellowest Books Ever … I had no idea there were so many!

On this list, the yellow Swiss book *Die Physiker* looked particularly interesting. Its Wikipedia summary reminds me strongly of Carter Scholz’ excellent and mathematically erudite book *Radiance* … both are dark comedies about science … both are available (in limited preview) on Google Books.

And yes, *Radiance* too has a yellow-themed cover … what’s up with that?

Comment #25 September 3rd, 2009 at 9:52 pm

I have a general question about the Lance paper. He writes:

Transportation of all forms will be scheduled optimally to move people and goods around quicker and cheaper.

Then later on, he writes:

…we can now solve, in practice, traveling salespeople problems with more than 10,000 cities…

Unless we have really determined salesmen or truck drivers or something, I don’t get it: if we can already solve these optimization problems for 10,000 nodes, what practical good would a P=NP solution be?

As a computational linguist, I also doubt his claim that “Near perfect… language comprehension and translation and all other learning tasks become trivial.” It’s true that generating a machine translation system from a large corpus takes a long time, but that’s not what’s holding us back: we don’t have sufficiently large corpora even for the best language pairs, and we have far from sufficiently large corpora for 99.9% of the pairs. And while there are doubtless better approaches to MT, it’s not at all clear that the problem would be solved by a faster algorithm.

Of course, there might be computational linguists out there who would disagree with me. So: does anyone know where I could find discussion of claims like these? (If P=NP, then X would become trivially easy.)

(BTW, if language comprehension–or better, learning a natural language well enough to comprehend it–is NP-hard, then maybe children are an existence proof that P=NP, since they learn languages so fast!)

Comment #26 September 3rd, 2009 at 10:24 pm

Michael, a “yellow book” answer to your question is that we have pretty good representation theories for ergodic systems; this class includes all symplectic dynamical systems (i.e., all time-dependent Hamiltonian systems, both classical and quantum), and also all systems whose state-space is an exhaustive catalog (graphs, for example). And we can generically construct, for this broad class of dynamical systems and state-spaces, optimization problems that are NP-complete.

But we do *not* (presently) possess similarly good representation theories for litotic systems, that is, the broad class of systems that are dynamically concentrated onto small-dimension state-spaces. This class generically includes symplectic systems that are in contact with noisy and/or low-temperature reservoirs (such systems being ubiquitous in the real world). Empirically these systems can be simulated efficiently because the accessible regions of state-space are algorithmically compressible, but we do not presently have any very deep/broad understanding of how this concentration occurs (although we do have *some* concentration theorems, e.g., positive P-representations for arbitrary-j thermalized spin systems).

The bottom line: we humans are hot, wet, noisy, algorithmically compressible creatures whose languages are “learnable” precisely because these languages evolved via hot, wet, noisy, algorithmically compressible physical processes.

The above is a quantum systems engineering view, at any rate … many alternative views are equally reasonable … for the very good reason that we don’t at present really know all that much about this class of problem.

Dick Lipton’s blog has a fine essay on this problem, titled *SAT Solvers: Is SAT Hard or Easy?*

Comment #27 September 4th, 2009 at 3:52 pm

Michael Maxwell: I think the claim w.r.t. NLP is that if P = NP, we would “easily” be able to find a model of language that explains all the corpora we currently have, and that is maximally generalizable while making the simplest possible set of assumptions (or something along those lines). It might not truly solve NLP, but it would come pretty close. In other words, it’s not about a faster algorithm; it’s about being able to search through the vast space of possible models efficiently.

It seems to me that the “children learning language” example shows nature coming up with very good heuristic solutions through its use of selection. But the facts that languages change over time, that there are different dialects of the same language, and that children don’t talk exactly like their parents suggest to me that however nature does it, it really is just a heuristic. Nature hasn’t yet proven P = NP.

Comment #28 September 7th, 2009 at 11:20 pm

A couple of pretty decent reviews of GCT are posted here:

http://www.cse.buffalo.edu/~regan/papers/pdf/Reg02MSFD.pdf

and here:

http://arxiv.org/abs/0907.2850