I’ll ask Carl Feynman and Danny Hillis some time if they can shed any light on “Feynman apparently couldn’t be convinced that P vs. NP was an open problem.” I don’t recall ever discussing the matter with him. Your Deutsch anecdote is amazing, but there’s no polite way to slip it into a paper, footnoted “personal conversation” or the like.

Do let the Caltech Math secretary know, so that a notice of your talk can be posted in the Math department, too. CS was, historically, in the Engineering Division, not in Physics, Astronomy, and Mathematics.

(1) Feynman apparently couldn’t be convinced that P vs. NP was an open problem, and somehow got through his entire *Lectures on Computation* without a single mention of complexity issues, and

(2) Deutsch, the founder of quantum computing, had to be told what BQP was when I visited him in Oxford. 🙂

My lecture at Caltech will be either April 18 or 19.

Charles Goodwin has noted that if Deutsch is correct about the multiverse, he will win the Nobel prize. And also not win it.

When is your CS lecture at Caltech again?

On a different note, the “is math tautological” question has spawned this post.

John Sidles nailed it. The squirrel (and we) are learning—or trying to learn—by trial and error. If there is any general problem of learning, it is that of understanding what kind of a universe allows trial and error to have some hope of success, and what constraints must ultimately be assumed by trial-and-error strategies.

(Suppose, for example, that on any given day the sun could decide not to come up, and no attempt at explaining these exceptions was effective.)

I am sitting here looking at a squirrel who is desperately trying to figure out how to get to the hanging bird feeder.

Somehow, I don’t have the impression that squirrel is trying to “crack the problem of induction.”

But I *do* think he’s going to get to the feeder … those squirrels are much smarter than us humans. 🙂

All mathematical propositions are tautologies, but they *do* convey effectively new information. They are computational output from wetware.
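A throwaway sketch of that “computational output” point (my own illustration, not the commenter’s): whether 2^p − 1 is prime is fixed by the axioms, a “tautology,” yet you learn which ones are prime only by doing the computation.

```python
def is_prime(n):
    """Trial division -- fine for the small numbers used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Mersenne numbers 2**p - 1 for prime p: some are prime, some are not,
# and nothing in the definition tells you which in advance.
for p in [2, 3, 5, 7, 11, 13]:
    m = 2 ** p - 1
    print(p, m, is_prime(m))
```

For instance, 2^11 − 1 = 2047 = 23 × 89 turns out to be composite while its neighbors in the list are prime; logically nothing new, informationally something you didn’t have before the wetware (or hardware) ran.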

(One example is the surprisingly common view that “all mathematical propositions are tautologies,” and therefore can’t convey any new information. This view is much easier to hold if the only mathematical propositions you *know* are tautologies!)

Now, much as it pains me to say so, not every question can be reduced entirely to complexity theory — other considerations are occasionally relevant. 🙂 So the point is not to view the whole world through this particular lens; the point is just to *have the lens in your camera-bag* for those occasions when it gives you a better picture. Most linguists and philosophers clearly don’t!

In the immortal words of the Prophet Turing (P.B.U.H.):

Re: approximate language learning. The devil is in the details. How would you evaluate the accuracy of translation? Grammaticality? Idiomaticity? Semantic accuracy? Even ranking different candidates is non-trivial. I’m pretty sure entire PhD theses are devoted just to the problem of finding an appropriate metric. I know you want to view everything through a complexity-theoretic lens, but I honestly question how meaningful PAC-type formalizations are when applied to natural language learning.

This isn’t to say that the problem can’t be meaningfully formalized. I just think the right formalization to shoot for is statistical, not complexity-theoretic.
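A toy illustration of why even ranking candidates is non-trivial (my own sketch, not a metric anyone above proposed): a crude n-gram precision already shows that different notions of “accuracy” can order the same candidates differently.

```python
# Sketch only: n-gram precision against a single reference translation.
from collections import Counter

def ngram_precision(candidate, reference, n):
    """Fraction of the candidate's n-grams that also occur in the reference."""
    cand = candidate.split()
    ref = reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    if not cand_ngrams:
        return 0.0
    overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    return overlap / sum(cand_ngrams.values())

reference = "the squirrel reached the bird feeder"
a = "the squirrel reached the feeder"       # fluent, slightly lossy
b = "squirrel the reached feeder bird the"  # right words, scrambled order

# Unigram precision can't tell the two apart; bigram precision can.
print(ngram_precision(a, reference, 1), ngram_precision(b, reference, 1))
print(ngram_precision(a, reference, 2), ngram_precision(b, reference, 2))
```

Both candidates score a perfect 1.0 on unigrams, but on bigrams the fluent one scores 0.75 and the scrambled one 0.0; which ranking is “right” depends on whether you care about word choice or word order, which is exactly the metric-selection problem.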
