Razborov and Rudich won the Gödel Prize for “Natural Proofs”, which probably did as much as any single paper to elucidate the nature of the P vs. NP problem. (More from the Bearded One and the Pontiff.) Loosely speaking, R&R showed that any circuit lower bound satisfying certain extremely broad criteria would “bite its own tail,” and lead to efficient algorithms to distinguish random from pseudorandom functions — the very sort of thing that we wanted to prove was hard. This doesn’t by any means imply that a P≠NP proof is impossible, but it does show how the problem has a strange, self-referential character that’s not quite like anything previously encountered in mathematics, including in the work of Gödel and Turing. Technically simple but conceptually profound, the paper is also a masterpiece of clear, forceful exposition. When I first came across it as an undergrad at Cornell, I knew complexity was my subject.
Following on the heels of the New Yorker, the New York Times ran its own epic on the Large Hadron Collider. So science writers can do a decent job when they feel like it. Why can’t they write about P vs. NP the same way? Oh, right … them big machines …
Andy Drucker poses the following problem: suppose there are n blog posts, and for each post b_i, you're told only that it was posted during the time interval [t_i, u_i]. Is there an efficient algorithm to count how many orderings of the blog posts are compatible with that information? Alternatively, is the problem #P-complete? Let me stress that Andy doesn't know the answer to this question, and neither do I.
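For concreteness, here's a brute-force sketch of what "counting compatible orderings" means (a hypothetical helper, not from Andy's writeup; it assumes an ordering is compatible iff the posts can be assigned nondecreasing posting times within their respective intervals, checked greedily):

```python
from itertools import permutations

def count_compatible_orderings(intervals):
    """Count permutations of the posts that admit nondecreasing
    posting times x_i in [t_i, u_i].  Brute force: O(n! * n)."""
    n = len(intervals)
    count = 0
    for perm in permutations(range(n)):
        cur = float("-inf")  # latest posting time committed so far
        feasible = True
        for i in perm:
            t, u = intervals[i]
            cur = max(cur, t)  # earliest time post i can go up in this order
            if cur > u:        # its window has already closed
                feasible = False
                break
        count += feasible
    return count

# Disjoint windows force a unique order:
print(count_compatible_orderings([(0, 1), (2, 3)]))  # 1
# Overlapping windows allow either order:
print(count_compatible_orderings([(0, 2), (1, 3)]))  # 2
```

The question, of course, is whether anything fundamentally better than enumerating all n! permutations is possible.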
A certain MIT undergrad of my acquaintance sent the following letter to MIT’s DMCA enforcement office.
Dear MIT DMCA Agent,
After viewing Scoop and receiving your notice, I was more than happy to comply with NBC’s request to destroy it. Rest assured that I will no longer be downloading or sharing any post-Manhattan Woody Allen films.