Barriers to snarky blogging

I’m writing from the Barriers in Computational Complexity workshop in Princeton, where too many real things are happening for me to blog about nothing.  I understand that streaming video of all the talks will be up eventually; for now, a few highlights:

  • On Tuesday I hosted a panel discussion on “Barrier Problems in Boolean Complexity.”  The panelists were Steve Cook, Avi Wigderson, Russell Impagliazzo, and Sasha Razborov.  We got lots of questions from the floor, about everything from whether P≠NP, to whether P vs. NP is independent of set theory, to whether the laws of physics can be understood as computer programs.  Alas, there were few to no serious disagreements among the panelists (indeed, you can probably guess their answers to all three questions).
  • Ketan Mulmuley spoke about Geometric Complexity Theory (GCT), his approach to P vs. NP and related problems based on algebraic geometry and group representation theory.  For months I’ve been planning a blog post about GCT. Spurred on by people at the workshop, I might actually finish it soon.  In the meantime, those of you who can’t wait for your daily helping of plethysms, Weyl modules, and G-varieties might want to check out Mulmuley’s new complexity-theoretic overview and complementary mathematical overview of GCT.
  • Ben Rossman spoke about recent lower bounds in circuit complexity: an ~n^(k/4) lower bound on the size of AC^0 circuits computing the k-clique function, and (a brand-new result) an ~n^(k/4) lower bound on the size of monotone circuits computing the k-clique function, even on average.
  • Ran Raz gave an awesome talk on “How to Fool People to Work on Circuit Lower Bounds.”  (Answer: by giving them completely innocuous-looking mathematical problems, without telling them that the answers would imply breakthroughs in complexity theory.  Alas, presumably no one who attended Ran’s talk—or for that matter who’s reading this entry—can be fooled, since we’re in on the secret.)  In particular, Ran spoke about his STOC’08 paper on elusive functions, as well as some brand-new work on how lower-bounding the rank of explicit tensors would lead to circuit and formula size lower bounds.

Meanwhile, Lance has a superb survey article in Communications of the ACM about the status of the P vs. NP problem.

(An earlier version of this post discussed a preprint by Gus Gutoski on quantum multi-prover interactive proof systems.  That preprint has since been retracted.)

And now I bid adieu, as the next talk is starting and my laptop is running out of batteries.


28 Responses to “Barriers to snarky blogging”

  1. Ghaith Says:

    For some reason I can’t get the presentation file to open. Wondering if something is wrong with it?

  2. Anthony Says:

    It works fine for me.

  3. asdf Says:

    Suppose Frank Wilczek gets to ask the super-aliens whether P=NP and they answer that P!=NP. What has anyone really learned?

  4. asdf Says:

    Re the presentation, I notice in Open Office that a bunch of your ppt’s have issues where parts of the slides overlap and occlude each other. Also the slides are locked against editing so I can’t easily move the parts around to be able to read the obstructed text. Being interested in cryptography, I just click on one of the boxes so that the text appears OR’d with text from the other box, then squint and try to make out what it says. But it’s a nuisance.

  5. Richard Says:

    I was hoping you’d write a post about Geometric Complexity Theory (GCT).

    The recent paper arXiv:0907.2850 notes that “This program is beginning to gain the attention of the mathematical community” and has a very elegant description of some geometric aspects of the approach.

  6. Scott Says:

    asdf: Indeed, I don’t think we’d learn much from that, or from pretty much any other single bit of information.

    To be honest, I was so excited that P vs. NP had permeated the culture to the point where even a Nobel physicist had heard of it, that I decided not to raise my hand to quibble with his statement. 🙂

  7. Scott Says:

    a bunch of your ppt’s have issues where parts of the slides overlap and occlude each other.

    asdf: The reason is that I make heavy use of PowerPoint animations. Maybe try the PowerPoint viewer if OpenOffice can’t handle that?

  8. Anonymous Says:

    How is Rossman’s result on monotone circuits different from Razborov’s? The Alon-Boppana improvement of Razborov’s argument also gets a lower bound of around n^Ω(k).

  9. Bram Cohen Says:

    Scott, OpenOffice only shows the final version of each slide in your presentations, without letting you step through them (or at least I don’t know how to make it do that). Since you’re fond of having steps which cover other steps, sometimes the text gets obscured.

  10. Joe Shipman Says:

    I liked your PowerPoint presentation — I have long thought that the Permanent-Determinant issue was the key to the P-NP question.

    Is there any reasonable approach to proving P!=PSPACE that wouldn’t also show P!=NP? It seems that P!=PSPACE ought to be much easier to prove, but all the discussion seems to be about ways to tackle P!=NP.

    In terms of philosophical importance, I would recommend NP vs. coNP as even more interesting than P!=NP (of course P=NP would be way more interesting than any previous mathematical result ever proven). Informally, NP=coNP says “there are no accidents, everything that is true is true for an intelligible reason”.

  11. gowers Says:

    Scott, the following is a quote from your paper with Wigderson on algebrization.

    For diagonalization zeroes in on a specific property of the function f being lower-bounded—namely, the ability of f to simulate a whole class of machines—and thereby avoids the trap of arguing that “f is hard because it looks like a random function.”

    The last few words there strike me as a nice slogan to bear in mind if one is thinking about P versus NP and wants to avoid accidentally attempting to give a naturalizable proof. Can you give a similar slogan that says what an algebrizable proof is trying to do that it in fact cannot do? I sort of 70% understand the formal definition from the introduction to your paper but have absolutely no feel for what kinds of proof attempts are likely to be algebrizable. Maybe the only way of getting that feel is to read your paper much more carefully, but if a short informal answer exists then it would be very nice. (I write this because I have my own complexity theory barrier, which is that I can never hold in my head the definitions of enough of the jargon to be able to understand what people write, even when they are trying to be friendly, and this despite the fact that I know more about the subject than the average non-TCS mathematician.)

  12. Scott Says:

    Bram: Yes, I know that’s the problem; sorry about that. Maybe try ClosedOffice? 🙂

  13. Scott Says:

    I sort of 70% understand the formal definition from the introduction to your paper but have absolutely no feel for what kinds of proof attempts are likely to be algebrizable.

    Hi Tim,

    As it happens, some of us were discussing that exact question at the Barriers workshop. My own intuition is that “algebrizing techniques” roughly coincides with “relativizing techniques” plus “IP=PSPACE-like techniques” (that is, techniques based on “lifting” Boolean formulas to low-degree polynomials over some larger field).
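
    To make “lifting” concrete, here is a toy Python illustration (my own example, not anything from the paper; the formula and the field are made up for the demo). The standard arithmetization rules replace NOT x by 1-x, x AND y by x*y, and x OR y by 1-(1-x)(1-y), giving a low-degree polynomial that agrees with the formula on {0,1} inputs but can be evaluated anywhere in the field:

        P = 101  # a small prime field, chosen arbitrarily for the demo

        def lift_not(x):    return (1 - x) % P
        def lift_and(x, y): return (x * y) % P
        def lift_or(x, y):  return (1 - (1 - x) * (1 - y)) % P

        # Lift the (made-up) formula (x AND y) OR (NOT z):
        def phi(x, y, z):   return lift_or(lift_and(x, y), lift_not(z))

        # On Boolean inputs the polynomial matches the formula...
        for x in (0, 1):
            for y in (0, 1):
                for z in (0, 1):
                    assert phi(x, y, z) == int((x and y) or (not z))

        # ...but it is also defined at non-Boolean field points, which is
        # exactly the extra room the interactive protocols exploit:
        print(phi(5, 7, 3))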

    However, it must be admitted that some other results, which don’t fall into that framework, turn out to algebrize as well (one example being the Goldreich-Micali-Wigderson theorem on zero-knowledge proofs). And also, the PCP Theorem and the MIP=NEXP Theorem (about multi-prover interactive proof systems) are partly based on lifting Boolean formulas to low-degree polynomials, but there’s a natural sense in which they fail to algebrize. (Whether they algebrize or not depends on the oracle access mechanism — Avi and I show that MIP=NEXP algebrizes if the NEXP machine is restricted to making polynomially-long queries to the oracle, but it trivially fails to algebrize if the NEXP machine is allowed to make exponentially-long queries.)

    You might get some more insight from the followup paper An Axiomatic Approach to Algebrization, by Impagliazzo, Kabanets, and Kolokolova.

    Regarding the “jargon barrier”—sorry about that! 🙂 If it makes you feel better, the same barrier applies in the other direction (e.g., whenever I pick up a yellow book). I’ll be happy to answer any other questions.

    (Your comment reminded me of one of the slides from Ran Raz’s talk, which showed “the ultimate barrier to circuit lower bounds”: an anatomical diagram of the human brain.)

  14. asdf Says:

    Aha, I didn’t even realize that those slides were animations and I was only seeing the final image in each sequence. That explains things.

    Scott, I came across a pretty cool dissertation, “Predicative Recursion and Computational Complexity” (1992) by Stephen J. Bellantoni, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.8024

    It is inspired partly by Ed Nelson’s concept of predicative arithmetic, which is interesting in its own right. Bellantoni defines some classes of functions through a very restricted form of recursion on the formulas defining individual functions: nothing to do with Turing machines or running time. Then he shows that the functions defined by a particular restriction, “predicative recursion”, turn out to be exactly the P-time functions. Some other restrictions yield other familiar complexity classes.
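
    Roughly (this is my own gloss, in the safe/normal-argument style of the closely related Bellantoni-Cook characterization, not the thesis’s exact syntax), the restriction is that recursion may run only on “normal” arguments, while recursively computed values may appear only in “safe” positions, so they can never drive a further recursion. The Python sketch below shows the kind of definition this discipline rules out:

        def concat_self(y):
            # One polytime step: |output| = 2|y|, harmless on its own.
            return (y << y.bit_length()) | y

        def iterate_concat(n, x):
            # F(0, x) = x;  F(n, x) = concat_self(F(n', x)), recursing on
            # the binary notation of n. Predicative recursion forbids
            # this: the recursive value F(n', x) would be fed back into a
            # recursion-driving (normal) position. And indeed the output
            # length is |x| * 2^|n|, exponential in the input length.
            result = x
            while n > 0:
                result = concat_self(result)
                n >>= 1
            return result

        print(iterate_concat(0b1111, 0b101).bit_length())  # 3 * 2^4 = 48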

    Do you know about this thesis? It seems to me to sort of explain what you describe as the “robustness of P”. It also leads to a wild thought: is membership in the class of predicative recursive functions decidable, say with something like quantifier elimination? There is a famous algorithm by Leonid Levin that solves SAT by diagonalizing over Turing machines and running them in parallel until one of them has generated a proof witness for the given SAT instance. This algorithm obviously terminates with the right answer. The cute observation is that its running time is polynomial if and only if P=NP. So if there is a decision procedure for predicative recursion, then running Levin’s algorithm through it would immediately answer P vs. NP. (Well, of course decision procedures can be pretty slow; the one for Presburger arithmetic is doubly exponential. So actually running the procedure and getting the answer would be computationally infeasible, but at least you’d know that there IS an answer.)
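
    Here is a rough Python sketch of Levin’s idea (heavily simplified, and entirely my own rendering: a fixed list of candidate solvers stands in for the enumeration of all machines, and the exponentially decaying time-sharing schedule is omitted; each candidate just runs to completion in turn):

        from itertools import product

        def verify(clauses, assignment):
            # Fast check of a CNF instance: each clause is a list of
            # literals, where k > 0 means variable k and k < 0 means NOT
            # variable k; `assignment` maps variables to booleans.
            return all(
                any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                for clause in clauses
            )

        def levin_search(clauses, num_vars, programs):
            # Run each candidate "program" and accept only verified
            # witnesses, so correctness never depends on trusting the
            # programs; only the running time does.
            for prog in programs:
                candidate = prog(clauses, num_vars)
                if candidate is not None and verify(clauses, candidate):
                    return candidate
            return None

        # One stand-in "program" that brute-forces all assignments,
        # playing the role of some machine in the enumeration.
        def brute_force(clauses, num_vars):
            for bits in product([False, True], repeat=num_vars):
                assignment = dict(enumerate(bits, start=1))
                if verify(clauses, assignment):
                    return assignment
            return None

        # (x1 OR x2) AND (NOT x1 OR x2): satisfiable with x2 = True.
        print(levin_search([[1, 2], [-1, 2]], 2, [brute_force]))

    The point being: if P=NP, some fixed program in the enumeration solves SAT in polynomial time, so the whole dovetailed search runs in polynomial time too (with an astronomical constant).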

  15. asdf Says:

    The answer to the “wild thought” is obviously that there is no decision procedure (it would solve the halting problem). Oops.

  16. John Sidles Says:

    Scott, a really welcome topic for a Shtetl Optimized discussion would be: “Pretty Good Yellow-Cover Math Books”.

    Because as Fermi expressed it “For me there are two kinds of [yellow-covered] mathematics books: the ones where I don’t understand the first page, and the ones where I don’t understand the first sentence.”

    A syllabus of “pretty good” yellow-cover math books would be the math community’s version of Garrison Keillor’s good-hearted, joke-themed Prairie Home Companion shows (which are anthologized in the book series Pretty Good Jokes).

    Right now I am struggling through Arnol’d’s pretty good and (literally) yellow-covered Classical Mechanics … also pretty good and (literally) yellow-covered is Joe Harris’ Algebraic Geometry.

    These yellow books offer plenty of tough-but-fun mathematical insights into the workings of QM/QIT/QIS; if more students were encouraged to read them, the long-term results would be good for everyone (IMHO).

    As Huck Finn said of Pilgrim’s Progress: “The statements was interesting, but tough” … yeah, that’s what reading yellow math books is all about.

  17. Markk Says:

    A little late, but what version of Open Office are you using? I opened the slides in OO 3.1 on a Mac, ran the slide show and everything looked fine except I didn’t have one of the fonts. But the substitute was readable. All the animation I saw worked fine.

  18. Raoul Ohio Says:

    I own a few dozen Yellow Books. Offhand, I would rate “Perturbation Theory for Linear Operators” by Tosio Kato to be the best/most useful. I have the short version limited to the finite dimensional case (Matrix Theory). Kato is an example of the rare author who can organize deep stuff so well it all seems obvious. Anyone working with matrices, eigenvalues, or eigenvectors will discover lots of good info.
    I just checked Amazon, and the full version has been re-released, so it only costs half as much as the short version. Wow! I can make $50 by upgrading.

  19. Douglas Knight Says:

    I don’t know how you’re going to talk about GCT, but Fortnow’s description, breaking the program into those 3 steps, is bizarre. If you’re going that way, here are some questions I’d suggest answering:

    Why would you prove an infinite collection of statements by showing that they’re susceptible to a particular algorithm, and showing what that algorithm does to them? Are there other examples where people have done this? Why an efficient algorithm? Who cares if they’re in P, when you’re finally done and know the answer in constant time?

    It sounds to me like there is supposed to be some extra property that leads both to the algorithm applying and to being able to understand the infinite family. From this point of view, the algorithm is a detour. It might be a useful detour; for example, if you can prove the algorithm works, you can run it in a finite number of cases. Even if you can’t prove it works, if you want the answer to be Yes and the problem is in NP, as in the example, you can run the algorithm anyhow. Has this been done? I suspect not, and I suspect the reason is that the algorithm is supposed to be polynomial in the inputs, but the inputs are big in terms of n. (Also, another obvious question: why do they propose two algorithms? It does seem plausible that it’s useful to prove that a standard heuristic like LP relaxation applies; but a special-purpose algorithm?)

  20. John Sidles Says:

    My wife Connie gave me some illuminating insights into the publishing phenomenon of “Yellow Books” this morning … in the context of designing the cover of Connie’s forthcoming book In My Nature.

    Connie pointed out that yellow book covers are a wonderful example of biophilia (a term coined by E. O. Wilson to describe the innate human liking for everything that our evolutionary history has conditioned us to like).

    It works like this. You go to a library or a bookstore … you pick a book off the shelf … a book that somehow looks enticing … a book with a green or brown background … and a foreground that is banana-yellow … cherry-red … orange-orange … lime-green … blueberry-blue.

    Hmmm … when you think about it … what’s more enticing to a primate … than a lusciously ripe, brightly colored fruit … framed by darker green/brown foliage … within easy arm’s reach?

    Isn’t this “yellow book” bookbinding aesthetic partly the reason why libraries and bookstores are such comfortable places? They are paradises in which the fruits of intellect are within easy reach!

    Connie and I laughed together at these silly notions … but then, hey … we too designed the book cover to accord with the traditional “yellow-book” biophilia-driven aesthetic of publishing … and we shipped those book files this morning off to the printer. Because even when you understand their evolutionary origins, “yellow-book” biophilic traditions demand and deserve respect.

    All hail the mighty memoir class of LaTeX, which makes it easy to respect “yellow book” traditions! 🙂

  21. Raoul Ohio Says:

    John,

    The book looks Birkhauser Green to me! It appears to have a cool publisher, too.

  22. johnny Says:

    “Steve Cook, Leonid Levin, and Richard Karp [10, 24, 27] developed the initial theory of NP-completeness that generated multiple ACM Turing Awards.”

    Ah, why mention the awards? Tacky.

  23. John Sidles Says:

    Hmmmm … Scott’s account of this conference was lively, timely, well written, and valuable to me … as was Scott’s talk Has There Been Progress on the P vs. NP Question? that he was gracious enough to post … ditto for Rahul Santhanam’s in-depth and very interesting guest-post on the Fortnow/GASARCH blog … and so both have earned our gratitude.

  24. John Sidles Says:

    Apropos of nothing in particular… (except that Scott inflicted upon my mind the pernicious notion of “Yellow Books”) … here is a fun page of Yellowest Books Ever … I had no idea there were so many!

    On this list, the yellow Swiss book Die Physiker looked particularly interesting. Its Wikipedia summary reminds me strongly of Carter Scholz’ excellent and mathematically erudite book Radiance … both are dark comedies about science … both are available (in limited preview) on Google Books.

    And yes, Radiance too has a yellow-themed cover … what’s up with that?

  25. Michael Maxwell Says:

    I have a general question about the Lance paper. He writes:

    Transportation of all forms will be scheduled optimally to move people and goods around quicker and cheaper.

    Then later on, he writes:

    …we can now solve, in practice, traveling salespeople problems with more than 10,000 cities…

    Unless we have really determined salesmen or truck drivers or something, I don’t get it: if we can already solve these optimization problems for 10,000 nodes, what practical good would a P=NP solution be?

    As a computational linguist, I also doubt his claim that “Near perfect… language comprehension and translation and all other learning tasks become trivial.” It’s true that generating a machine translation system from a large corpus takes a long time, but that’s not what’s holding us back: we don’t have sufficiently large corpora even for the best language pairs, and we have far from sufficiently large corpora for 99.9% of the pairs. And while there are doubtless better approaches to MT, it’s not at all clear that the problem would be solved by a faster algorithm.

    Of course, there might be computational linguists out there who would disagree with me. So: does anyone know where I could find discussion of claims like these? (If P=NP, then X would become trivially easy.)

    (BTW, if language comprehension–or better, learning a natural language well enough to comprehend it–is NP-hard, then maybe children are an existence proof that P=NP, since they learn languages so fast!)

  26. John Sidles Says:

    Michael, a “yellow book” answer to your question is that we have pretty good representation theories for ergodic systems; this class includes all symplectic dynamical systems (i.e., all time-dependent Hamiltonian systems, both classical and quantum), and also all systems whose state-space is an exhaustive catalog (graphs, for example). And we can generically construct, for this broad class of dynamical systems and state-spaces, optimization problems that are NP-complete.

    But we do not (presently) possess similarly good representation theories for litotic systems, that is, the broad class of systems that are dynamically concentrated onto small-dimension state-spaces. This class generically includes symplectic systems that are in contact with noisy and/or low-temperature reservoirs (such systems being ubiquitous in the real world). Empirically these systems can be simulated efficiently because the accessible regions of state-space are algorithmically compressible, but we do not presently have any very deep/broad understanding of how this concentration occurs (although we do have some concentration theorems, e.g., positive P-representations for arbitrary-j thermalized spin systems).

    The bottom line: we humans are hot, wet, noisy, algorithmically compressible creatures whose languages are “learnable” precisely because these languages evolved via hot, wet, noisy, algorithmically compressible physical processes.

    The above is a quantum systems engineering view, at any rate … many alternative views are equally reasonable … for the very good reason that we don’t at present really know all that much about this class of problem.

    Dick Lipton’s blog has a fine essay on this problem, titled SAT Solvers: Is SAT Hard or Easy?

  27. lylebot Says:

    Michael Maxwell: I think the claim w.r.t. NLP is that if P = NP, we would “easily” be able to find a model of language that explains all the corpora we currently have, and that is maximally generalizable while making the simplest possible set of assumptions (or something along those lines). It might not truly solve NLP, but it would come pretty close. In other words, it’s not about a faster algorithm; it’s about being able to search through the vast space of possible models efficiently.
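
    A toy way to see it in Python (everything here, the candidate “models” and their description lengths, is invented purely for illustration): checking a model against a corpus is the fast, verifier side; the search for the smallest consistent model is the NP-type problem that P = NP would make feasible.

        def consistent(model, corpus):
            # The easy, polynomial-time side: does `model` reproduce
            # every (input, output) pair in the corpus?
            return all(model(x) == y for x, y in corpus)

        def smallest_consistent(candidates, corpus):
            # Brute-force stand-in for the NP search: scan candidate
            # (size, model) pairs in order of increasing description
            # length. If P = NP, this exponential scan could be replaced
            # by a polynomial-time procedure.
            for size, model in sorted(candidates, key=lambda c: c[0]):
                if consistent(model, corpus):
                    return size, model
            return None

        corpus = [(1, 2), (2, 4), (3, 6)]
        candidates = [(5, lambda x: x + 1),
                      (3, lambda x: 2 * x),
                      (9, lambda x: x * x)]
        print(smallest_consistent(candidates, corpus)[0])  # -> 3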

    It seems to me that “children learning language” is an example of nature coming up with very good heuristic solutions through its use of selection. But the facts that languages change over time, that there are different dialects of the same language, and that children don’t talk exactly like their parents suggest to me that however nature does it, it really is just a heuristic. Nature hasn’t yet proven P = NP.

  28. Alex Mikunov Says:

    A couple of pretty decent reviews of GCT are posted here:

    http://www.cse.buffalo.edu/~regan/papers/pdf/Reg02MSFD.pdf

    and here:

    http://arxiv.org/abs/arxiv:0907.2850