Paul Cohen, who proved that one can imagine infinite sets larger than the set of integers but smaller than the set of real numbers without leading to contradiction, and who won (to date) the only Fields Medal ever awarded for work in logic, died yesterday. He was 72. You can read more about his achievements here or in Lecture 3.

I only saw Cohen once, when he participated in a panel discussion at Berkeley about Hilbert’s problems from 1900. He came across as funny and likable — which was good since, to my mind, it might as well have been Euclid or Aristotle sitting there trading quips with the other panelists. (As Rebecca Goldstein wrote about Gödel, the man whose work paved the way for Cohen’s: “I once found the philosopher Richard Rorty standing in a bit of a daze in Davidson’s food market. He told me in hushed tones that he’d just seen Gödel in the frozen food aisle.”)

Like Cantor himself, Cohen started out not as a logician but as an analyst. The famous story is that Cohen was once teasing a logician friend about how there were no meaty, nontrivial open problems in logic — *certainly* nothing that an analyst couldn’t swoop in and solve. “Oh yeah?” the friend shot back. “I’d like to see you prove the independence of the Continuum Hypothesis!” So that’s what he did. I’d love to know whether the story has any grain of truth to it.

Comment #1 March 24th, 2007 at 8:22 pm

“Almost everywhere Paul Cohen did not exist.” That’s a theorem. Hence mathematics needs irony.

Speaking of irony, Gödel shopping for food…

These are painful things. Then again mathematics is the last refuge of the eternal.

Comment #2 March 24th, 2007 at 10:37 pm

Walter Rudin’s memoir *The Way I Remember It*, aside from being a fascinating historical and mathematical account, recounts a few episodes of Cohen’s genius.

Comment #3 March 24th, 2007 at 10:49 pm

Thanks! Just ordered it from Amazon. Looks like excellent plane reading.

Comment #4 March 24th, 2007 at 11:10 pm

The most authoritative version of the story I’ve heard (from people who knew Cohen) was that he did indeed joke/brag about his plans to prove the independence of the continuum hypothesis, but he had long been interested in logic and had spent a lot of time thinking about it before then. It’s not clear whether his comments preceded formulating a serious plan of attack, but he definitely didn’t start working on it on a whim.

Comment #5 March 24th, 2007 at 11:42 pm

Yeah, it’s hard to understand how Cohen could have done what he did, had he not been obsessed with foundational issues for a long time. I just thought it’d be great if he went in hoping to prove once and for all why “real” mathematicians shouldn’t worry about foundations, and emerged years later having proved the opposite!

Comment #6 March 25th, 2007 at 11:34 am

> I just thought it’d be great if he went in hoping to prove once and for all why “real” mathematicians shouldn’t worry about foundations, and emerged years later having proved the opposite!

It doesn’t take much wisdom to declare that foundations are important. But *which* foundations? Shelah is another brilliant logician; have you looked at anything that he has done lately? Closer to home, I happen to think that Kadison and Ringrose is fantastic foundational material in (infinite-state) quantum probability; but do you agree?

Comment #7 March 25th, 2007 at 2:23 pm

The stuff Shelah does seems fascinating to me, if only I could understand it better.

Regarding Kadison and Ringrose: for some reason, seeing the symbol “C” followed by “*” followed by “algebra” always causes me to … YAWN … oh, sorry, what were we talking about?

I’m willing to be convinced that I’m completely in the wrong here, but I’ll need a concrete surprising result or open problem that I can understand.

Comment #8 March 25th, 2007 at 2:42 pm

> for some reason, seeing the symbol “C” followed by “*” followed by “algebra” always causes me to … YAWN … oh, sorry, what were we talking about?

This is fantastic coming from the maintainer of the Complexity Zoo.

Comment #9 March 25th, 2007 at 2:48 pm

If it were possible to define a complexity class based on C*-algebras (C*P?), maybe I *could* get interested…

Comment #10 March 25th, 2007 at 4:06 pm

> Regarding Kadison and Ringrose: for some reason, seeing the symbol “C” followed by “*” followed by “algebra” always causes me to … YAWN … oh, sorry, what were we talking about?

Scott, you (perhaps) have no idea how frustrating it is to hear such disparagement of one’s favorite areas of mathematics from an individual whose writings one otherwise enjoys so much. (I get the same feeling of exasperation every time you talk about infinite-dimensional Hilbert spaces as if they were something bad, to be avoided.) Not that you’re not entitled to your own idiosyncratic tastes and preferences, of course, but I would like to try to come to some understanding of this phenomenon!

I would feel somewhat better if you could give an example of a scientist you admire (a physicist perhaps?) who is/was nevertheless vocal about his or her contempt for complexity theory.

> I’m willing to be convinced that I’m completely in the wrong here, but I’ll need a concrete surprising result or open problem that I can understand.

By “understand”, do you mean “understand without having to learn any new definitions”? (After all, presumably you *could* understand anything you like, so long as you were willing to put in the work!)

Comment #11 March 25th, 2007 at 4:31 pm

C*-algebra-loving Jameses: You know what? You’re right.

After thinking it over, I concluded that I really *was* out of line. I shouldn’t dismiss a whole field of math that I know next to nothing about. The talks I’ve attended *within quantum information and foundations* that invoked C*-algebras have almost invariably been boring, and from that, I seem to have made an unfair generalization. It’s possible that the speakers were simply taking deep, beautiful, and important mathematical structures, and applying them to questions for which those structures were manifestly irrelevant.

My request for the C*istas to argue with me and convince me that I’m wrong was a genuine one. But James C., you seem to have grossly overestimated my intelligence: as a (highly) computationally-bounded agent, I *can’t* understand anything I like so long as I put in the work. In particular, if we adopt the standard definition that “a mathematician is a person who reads yellow books,” then I’m clearly not a mathematician. I need far more motivation and examples than one can typically find between yellow covers.

Comment #12 March 25th, 2007 at 4:38 pm

Incidentally, James C.:

> I would feel somewhat better if you could give an example of a scientist you admire (a physicist perhaps?) who is/was nevertheless vocal about his or her contempt for complexity theory.

Richard Feynman

David Deutsch

Ed Farhi

Dave Bacon

(Now watch the flamewar start about whether these esteemed individuals are *really* contemptuous of complexity… )

Oh, and computer scientists I admire who have been openly contemptuous of quantum computing:

Leonid Levin

Oded Goldreich

Steve Cook

Lance Fortnow

Comment #13 March 25th, 2007 at 4:39 pm

> If it were possible to define a complexity class based on C*-algebras (C*P?), maybe I could get interested…

Of course you can and do. You use finite-dimensional C^*-algebras, and the resulting complexity class is BQP. The role of infinite-dimensional C^*-algebras (or more properly, von Neumann algebras) is more subtle. It is analogous to the role of infinite-state probability in CS theory. There is no question that it has a role, but a subtle one. If you are dogmatically combinatorial, you can choose to ignore both infinite-state probability and infinite-dimensional operator algebras when doing complexity theory.

I won’t waste your time arguing that operator algebras is must-know material for someone doing QCQI. I would almost have called it that, but the material is just too hard to be called must-know. Even so, it’s not quite consistent to praise Paul Cohen but dismiss the operator algebraists, unless you’re tipping your hat to him largely because he died. Operator algebras are a beautiful topic; they are philosophically appealing for the same reason as quantum computing.

Comment #14 March 25th, 2007 at 4:57 pm

Greg, I think people who are immersed in a given area of math tend to underestimate the importance of spectacular results and open problems for explaining to outsiders what the area is all about. From an outsider’s perspective, set theory is the field that contains the amazing discoveries of Cantor, Gödel, and Cohen, *as well as* undoubtedly various other things. Elliptic curve theory is the field that contains Wiles’s proof of Fermat’s Last Theorem. Complexity theory is the field that contains the P vs. NP problem. Quantum computing is the field that contains Shor’s algorithm.

C*-algebras, quantum logic, category theory, programming language semantics: these are the fields that contain, respectively, … what? Once I know where the peak is, maybe I can summon enough motivation to start the climb.

Comment #15 March 25th, 2007 at 7:04 pm

“…the flamewar start about whether these esteemed individuals are really contemptuous of complexity…”

Not to mention the flamewar about the “esteemed” part

Comment #16 March 25th, 2007 at 7:36 pm

> Not to mention the flamewar about the “esteemed” part

Indeed: “one of these kids is not like the other.” BTW, I would describe my relationship to complexity theory not as contemptuous but as very very very confused. Except for that PSPACE class, which is my new favorite complexity class.

Comment #17 March 25th, 2007 at 7:44 pm

Well, outsiders are the people who like to learn about things without actually learning them. That has its place, but a lot of interesting things are not going to be much fun for outsiders. One ironic example is learning theory. I can’t think of any “top 40” result to motivate outsiders to learn learning theory. Outsiders learn without any theory of how to learn :-). Another, less ironic example is operator algebras.

But I know that you don’t always wear the hat of an outsider. You are an insider of quantum information theory. So I think that it could be motivation enough that most of the theory came from operator algebras: POVMs, quantum entropy, complete positivity, etc. When Stinespring defined complete positivity in 1955, I can assure you that he was *not* thinking about quantum communication channels.

Maybe a better sales pitch, although again it won’t work for a complete outsider, is an extended analogy: classical mechanics is to quantum mechanics, as complexity is to quantum complexity, as topology is to ______. How do you fill in the blank? With C^*-algebras!

Still, knowing your personality, I suspect that you won’t be satisfied with either the Moliere argument (you’ve been speaking operator algebras without knowing it), or with persuasion by analogy. So here is an example that might work, if you’re patient enough to combine it with the other two arguments.

Consider a group — any group — which as you know has multiplication but not addition. Throw in addition formally, to make its group ring. It is a fundamental conjecture in group theory that no group ring has a nontrivial solution to the equation x^2 = x. I think that it is due to Kaplansky. Now, much of the progress on this question and its generalizations has involved C^*-algebras, the same field that is quantum topology, and that led to the definition of complete positivity.

The point is that the group ring is a subset of the group algebra (over the complex numbers), the group algebra is a subset of the group C^*-algebra, and even there the conjecture is that there are no new solutions to x^2 = x. Moreover, that condition would follow from a vastly stronger property asserted by the Baum-Connes conjecture. It is too technical to state here, but I suspect that it was an also-ran to the Clay problems.

Comment #18 March 25th, 2007 at 7:58 pm

Thanks, Greg! That’s a really cool problem, and exactly the sort of example I was looking for. Not to re-anger anyone, but is the answer at least known for *finite* groups? 😉

Comment #19 March 25th, 2007 at 8:04 pm

Sorry, I forgot one condition in the Kaplansky conjecture. It’s not quite any group, it has to be torsion-free, i.e., without any elements of finite order.

The Baum-Connes conjecture does not require the torsion-free condition, but it says something different and much more complicated. I am just guessing here, but I suspect that the Baum-Connes conjecture for finite groups is simply Burnside’s theorem, as you may already have encountered it in the quantum Fourier transform business.
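The torsion-free condition is doing real work here. As a hedged illustration (a toy example of my own, not from the thread): in the group ring Q[Z/2], where the generator g has order 2, the element e = (1 + g)/2 is a nontrivial idempotent, exactly the kind of solution to x^2 = x that Kaplansky’s conjecture rules out for torsion-free groups.

```python
from fractions import Fraction

# Group ring Q[Z/2]: elements a*1 + b*g with g^2 = 1, stored as pairs (a, b).
def mul(x, y):
    (a, b), (c, d) = x, y
    # (a + b*g)(c + d*g) = (ac + bd) + (ad + bc)*g, using g^2 = 1
    return (a * c + b * d, a * d + b * c)

one = (Fraction(1), Fraction(0))
g = (Fraction(0), Fraction(1))
assert mul(g, g) == one  # g has order 2, so the group has torsion

# e = (1 + g)/2 squares to itself but is neither 0 nor 1
e = (Fraction(1, 2), Fraction(1, 2))
assert mul(e, e) == e
assert e != (Fraction(0), Fraction(0)) and e != one
```

More generally, averaging over any finite subgroup produces idempotents like this, which is why the conjecture must exclude torsion.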

Comment #20 March 25th, 2007 at 8:11 pm

> I can’t think of any “top 40” result to motivate outsiders to learn learning theory.

That’s funny — I can think of two!

(1) The Occam’s Razor Theorem, that “compression equals prediction” — i.e. that the learnable concept classes are precisely those with finite VC dimension. Technically not too hard, conceptually one of the greatest insights we have about David Hume’s problem of induction.

(2) The Kearns-Valiant theorem, that PAC-learning an arbitrary regular language is as hard as breaking RSA. This is really what Chomsky meant to say but didn’t when he proposed his famous Poverty of the Stimulus argument about language learning in infants.
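The VC dimension in (1) can be made concrete with a tiny brute-force computation (my own illustration, not part of the thread): intervals [a, b] on the real line shatter some 2-point set but no 3-point set, so their VC dimension is exactly 2. For intervals, it is no loss of generality to restrict candidate endpoints to the sample points themselves.

```python
from itertools import product

def shattered(points):
    """Check whether intervals [a, b] on the line shatter this point set."""
    pts = sorted(points)
    # Candidate endpoints drawn from the sample, plus one empty interval (a > b)
    candidates = [(a, b) for a in pts for b in pts if a <= b] + [(1, 0)]
    for labels in product([False, True], repeat=len(pts)):
        if not any(all((a <= p <= b) == want for p, want in zip(pts, labels))
                   for a, b in candidates):
            return False  # some labeling is not realized by any interval
    return True

# Some 2-point set is shattered, but no 3-point set is
# (the labeling +,-,+ would need an interval with a hole), so VC dimension = 2.
assert shattered([0.0, 1.0])
assert not shattered([0.0, 1.0, 2.0])
```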

Comment #21 March 25th, 2007 at 8:22 pm

I’m confused about (2). In PAC-learning, you get negative examples too; as far as I know, the “poverty of stimulus” argument needs the assumption that learners only get positive evidence.

Comment #22 March 25th, 2007 at 8:26 pm

> The Occam’s Razor Theorem, that “compression equals prediction”

Sorry, that one is too tautological (and too technical to state). If it weren’t true, we would have to replace Valiant’s learning theory with another theory where it is true. It’s like the theorems in physics that say that you cannot have non-zero momentum trapped in a stationary box. I appreciate that people prove these theorems, but duh, if they weren’t true, something would be wrong with the formalism.

> The Kearns-Valiant theorem, that PAC-learning an arbitrary regular language is as hard as breaking RSA.

Now *that* should actually count. But what is a regular language? English certainly isn’t one. If you are insufferably anti-intellectual, Kearns-Valiant would only half-qualify. But okay, it works for me.

Comment #23 March 25th, 2007 at 8:42 pm

Some more remarks on the (reduced) C^*-algebra of a group, of Baum-Connes fame.

If you study HSP and QFT and all that, you should be interested in the Fourier transform operator for any group. For finite groups it is just Burnside’s theorem: A finite group algebra is isomorphic to a direct sum of matrix algebras, or what I call a “hybrid quantum memory” in my paper. For infinite groups, even for Z, it is more subtle. The correct Fourier dual of the integers is a circle, viewed not just as a group but as a topological space. The Hilbert space of a circle is the space of square-integrable functions; you cannot integrate on a circle unless you know its topology.

So what is the correct Fourier dual of a non-commutative infinite group, for example SL(2,Z)? It is a lot like a topological space — it has to have some topological structure when the group is infinite. But the “functions” on this mystery space do not commute with each other, because the group is non-commutative. The correct answer is the group C^*-algebra of the group, interpreted by definition as a quantum topological space.

It may look like this definition didn’t really go anywhere, but actually it is very clever. For example, the group C^*-algebra of the integers is isomorphic to the algebra of continuous functions on a circle, with pointwise multiplication. A quantum topological space doesn’t have points in the same sense as a classical topological space does; rather it has states, like a qubit has states. A qubit is of course the first non-trivial quantum topological space, a fellow traveller of the topological space with two points, with the discrete topology.
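A minimal numerical sketch of the finite case Greg describes (my own illustration, assuming only numpy): the group algebra of Z/n acts by circulant matrices, and the discrete Fourier transform diagonalizes all of them at once, the finite analogue of the statement that the group C^*-algebra of the integers is the continuous functions on a circle.

```python
import numpy as np

# An element c of the group algebra C[Z/n], listed by group element
n = 8
rng = np.random.default_rng(0)
c = rng.standard_normal(n)

# Left-multiplication by c in C[Z/n] is the circulant matrix C[i, j] = c[(i-j) % n]
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

F = np.fft.fft(np.eye(n))       # DFT matrix: the characters of Z/n
D = F @ C @ np.linalg.inv(F)    # conjugate into the Fourier basis

# D is diagonal, with diagonal entries given by the DFT of c
assert np.allclose(D, np.diag(np.fft.fft(c)), atol=1e-8)
```

Every circulant is diagonalized by the same F, which is exactly the sense in which the Fourier transform turns convolution on the group into pointwise multiplication on the dual.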

Comment #24 March 25th, 2007 at 8:50 pm

Cheshire Cat: Nope! The Kearns-Valiant result works for learners that get negative examples as well; see the paper for details.

It seems to me that when the linguists and philosophers say “clearly you can’t learn a language from positive examples only,” they’re (a) using a fallacious argument to (b) conclude something weaker than what’s true! Such are the pitfalls of asking a complexity-theoretic question and then refusing to look for a complexity-theoretic answer.

Comment #25 March 25th, 2007 at 9:03 pm

Greg: It’s clear that we have different tastes. Yes, the Occam’s Razor Theorem is pretty tautological — and countless forests have been destroyed over the last 200 years to publish papers by philosophers of science who didn’t understand the tautology!

From a modern perspective, Turing’s universality theorem, Shannon’s coding theorems, and the Cook-Levin theorem also border on tautological. For me, the Occam’s Razor Theorem belongs in more-or-less the same class with them.

Comment #26 March 25th, 2007 at 9:09 pm

I see PAC-learnability being equated to having finite VC dimension yet also being applied to regular languages, so I must chime in. The regular languages, as a concept class, have infinite VC dimension, so we can forget about PAC-learning them. What about constructing the smallest consistent DFA for a given labeled sample and applying Occam’s razor? The catch is that your error bound will depend on the size of the target DFA, which you don’t know. I don’t know if there’s a name for this sort of learning; I call it “local PAC” to distinguish it from (standard) uniform PAC.
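The infinite-VC-dimension claim has a one-line reason: any finite set of strings can be labeled arbitrarily, because every finite language is regular. A hedged sketch of my own, with a toy prefix-trie acceptor standing in for the DFA:

```python
from itertools import combinations

def trie_dfa(accept_set):
    """A DFA-like acceptor for the finite (hence regular) language accept_set."""
    # States are the prefixes of accepted strings; any other prefix is dead.
    states = {w[:i] for w in accept_set for i in range(len(w) + 1)} | {""}
    def run(w):
        state = ""
        for ch in w:
            state += ch
            if state not in states:
                return False  # fell off the trie: dead state
        return state in accept_set
    return run

# Regular languages shatter these 4 strings (and likewise any n strings),
# since each of the 2^4 subsets is itself a regular language.
points = ["a", "ab", "abb", "abbb"]
for r in range(len(points) + 1):
    for subset in map(set, combinations(points, r)):
        dfa = trie_dfa(subset)
        assert all(dfa(w) == (w in subset) for w in points)
```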

Now, building the smallest consistent DFA is NP-hard (see papers by Gold or Angluin); it’s even hard to approximate (see Pitt and Warmuth — or email me for references). This motivates much of my research with Corinna Cortes and Mehryar Mohri on embedding languages in Hilbert spaces (y’know… those boring useless abstract mathematical thingamabobs whose operator algebras are C* 😉)

Here is a learning-DFA-related problem we put on a midterm; it’s instructive to work out if you haven’t seen this sort of thing.

Comment #27 March 25th, 2007 at 9:17 pm

> From a modern perspective, Turing’s universality theorem, Shannon’s coding theorems, and the Cook-Levin theorem also border on tautological. For me, the Occam’s Razor Theorem belongs in more-or-less the same class with them.

Those other examples can only be called tautological in hindsight. For instance, Shannon’s coding theorems say that entropy is the one and only asymptotic currency of information and information capacity. Phrased that way, the result is *hopelessly false* for quantum information.

How much hindsight is there in the view that Kearns-Valiant is tautological? I will grant you some; obviously Valiant cut through a thicket with his papers. (It is presumptuous for me to even praise Valiant; everyone knows that he created several fundamental topics in computer science.) But not as much as in the other examples. A more comparable result might be universality for quantum Turing machines.

Comment #28 March 25th, 2007 at 9:37 pm

Sorry, Leo — I meant regular languages specified by a small finite automaton. Incidentally, do you know if anything is known about PAC-learning regular languages specified by a small regular expression?

Comment #29 March 25th, 2007 at 9:51 pm

Yes, I know the Kearns-Valiant result holds when the learner gets negative examples too; the point was that the impossibility argument may be much simpler when only positive examples are given. It was unclear how to avoid overgeneralization, but I realize the distribution on examples may tell you something.

The PAC model has some very nice features, but how relevant is it to language learning in the real world? We should probably allow membership queries, and we certainly don’t care about learning with respect to all distributions…

Comment #30 March 25th, 2007 at 9:51 pm

Scott,

Angluin’s paper, “On the Complexity of Minimum Inference of Regular Sets” shows that constructing the smallest consistent regular expression for a labeled sample is hard (pretty much in the same sense as constructing the smallest consistent DFA is hard). Insofar as learning and compression are equivalent, that’s the answer; beyond that, I don’t know…

As a side note, it seems that learning automata is almost a dirty notion in American CS; someone referred to my work on this as having a “European flavor”. Perhaps it was the early slew of negative results that discouraged people from looking at this — I am hoping that the new kernel methods will lead to a renewed interest.

Comment #31 March 25th, 2007 at 10:24 pm

> The PAC model has some very nice features, but how relevant is it to language learning in the real world? We should probably allow membership queries, and we certainly don’t care about learning with respect to all distributions…

Cheshire Cat: These are all excellent questions; the nice thing about computational learning theory is that it gives you the framework to address them (as well as lots of other questions)!

For many concept classes (though not DFA’s), Angluin and Kharitonov extended the Kearns-Valiant hardness results to the case where membership queries are available.

In the case where both membership *and* equivalence queries are available, Angluin gave a beautiful algorithm to learn DFA’s in polynomial time.

I can’t figure out what’s known about learning DFA’s in the case where membership queries are available but not equivalence queries. Can anyone enlighten us? I’d also love to find out what’s known about these problems for specific example distributions like the uniform distribution.

Of course, once you go to a big enough concept class (like constant-depth threshold circuits), you can construct pseudorandom functions, and that immediately gives you a hardness result with membership queries and under any example distribution you want.

Comment #32 March 26th, 2007 at 12:22 am

Scott,

Exercise 8.1 in Kearns & Vazirani (p. 185) shows that any class exactly learnable from membership and equivalence queries is PAC-learnable with membership queries.

Cheshire Cat: I don’t know how meaningful it is to apply the PAC learning model to human language, decoupled from general intelligence. You could talk about formal grammar induction — and there’s a large body of work on that. But human language, at least in my view, is not a passive binary concept, or even a probabilistic one. Semantics plays a key role, as does real-world knowledge. You can PAC-learn various aspects of language, but I’m not sure how to even *pose* the problem of learning a language.

Consider Quine’s famous gavagai example to get a sense of the kind of built-in assumptions humans employ in learning language, and draw your own conclusion on the appropriateness of the PAC model here.

Comment #33 March 26th, 2007 at 12:28 am

> Exercise 8.1 in Kearns & Vazirani (p. 185) shows that any class exactly learnable from membership and equivalence queries is PAC-learnable with membership queries.

Duhhhh, thanks!

Comment #34 March 26th, 2007 at 12:37 am

Leo, I completely agree that humans use many built-in assumptions to learn a language — but far from being undermined by that fact, I think that computational hardness results are an important piece of *evidence* for it! (There are, of course, other pieces of evidence one can use to reach the same conclusion.)

For much, much more about these issues, see Ronald de Wolf’s master’s thesis, “Philosophical applications of computational learning theory.”

Comment #35 March 26th, 2007 at 12:52 am

Ack, you’re not allowed to point me to fascinating 119-page-long documents when I’m supposed to submit my thesis in a month!

I’m sure de Wolf is going to touch upon this (or maybe even beat it to death), but here’s my $.02. There are at least two sources of hardness in learning language. The one we’ve been discussing is computational — if you model language as a finite-state or context-free process, you get the classical results. But the other problem is even more basic — it’s the problem of induction captured by the gavagai example.

I don’t think infants are solving NP-hard problems when they learn language. I think they are setting the (relatively) few Chomskyan parameters that span the (huge but rather constrained) space of all possible human languages.

To recap: I don’t think the computational hardness aspect is all that relevant in human language learning. The problem of induction is the one to crack if we’re going to have a learning robot. It’s all about those constraints, of course — somewhat paradoxically, they are precisely what makes learning possible at all. Constraints correspond to general learning principles. One emerging principle is that high-dimensional real-world data tends to lie on low-dimensional manifolds. Another one is that functions of interest are typically smooth, so when optimizing, penalize jumpiness. I believe it’s general principles of this sort that will pave the way for the next generation of intelligent algorithms.

Comment #36 March 26th, 2007 at 1:02 am

Leo, you still have a *whole month* before your thesis is due? Then what are you worried about? You still have 3 1/2 more weeks to procrastinate!

“Gavagai” always struck me as yet another example of philosophers gone wild, their imaginations unconstrained by complexity upper bounds. If we say that translations between languages should incur at most a polynomial blowup, and that we only care about getting the translations approximately right, then aren’t we back to the usual PAC-learning scenarios?

Comment #37 March 26th, 2007 at 1:58 am

“Gavagai” is philosophers gone wild only if you take common sense for granted. If you take the problem of explaining it seriously, you’ve got to deal with it somehow.

Re: approximate language learning. The devil is in the details. How would you evaluate the accuracy of translation? Grammaticality? Idiomaticity? Semantic accuracy? Even ranking different candidates is non-trivial. I’m pretty sure entire PhD theses are devoted just to the problem of finding an appropriate metric. I know you want to view everything through a complexity-theoretic lens, but I honestly question how meaningful PAC-type formalizations are when applied to natural language learning.

This isn’t to say that the problem can’t be meaningfully formalized. I just think the right formalization to shoot for is statistical, not complexity-theoretic.

Comment #38 March 26th, 2007 at 5:34 am

No, I’m not taking common sense for granted, I’m taking *the existence of limits to information-processing* for granted. This is a very specific blind spot that philosophers have, and it leads them into all sorts of trouble.

(One example is the surprisingly common view that “all mathematical propositions are tautologies,” and therefore can’t convey any new information. This view is much easier to hold if the only mathematical propositions you *know* are tautologies!)

Now, much as it pains me to say so, not every question can be reduced entirely to complexity theory — other considerations are occasionally relevant. So the point is not to view the whole world through this particular lens; the point is just to *have the lens in your camera-bag* for those occasions when it gives you a better picture. Most linguists and philosophers clearly don’t!

In the immortal words of the Prophet Turing (P.B.U.H.):

> The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false. A natural consequence of doing so is that one then assumes that there is no virtue in the mere working out of consequences from data and general principles.

Comment #39 March 26th, 2007 at 7:50 am

> One example is the surprisingly common view that “all mathematical propositions are tautologies,” and therefore can’t convey any new information.

All mathematical propositions are tautologies, but they *do* convey effectively new information. They are computational output from wetware.

Comment #40 March 26th, 2007 at 9:25 am

Leo says:

> The problem of induction is the one to crack if we’re going to have a learning robot.

I am sitting here looking at a squirrel who is desperately trying to figure out how to get to the hanging bird feeder.

Somehow, I don’t have the impression that the squirrel is trying to “crack the problem of induction.”

But I *do* think he’s going to get to the feeder … those squirrels are much smarter than us humans.

Comment #41 March 26th, 2007 at 1:50 pm

The *problem of induction*? Are we still talking about that in 2007?

John Sidles nailed it. The squirrel (and we) are learning—or trying to learn—by trial and error. If there is any general problem of learning, it is that of understanding what kind of a universe allows trial and error to have some hope of success, and what constraints must ultimately be assumed by trial-and-error strategies.

(Suppose, for example, that on any given day the sun could decide not to come up, and no attempt at explaining these exceptions was effective.)

Comment #42 March 26th, 2007 at 3:12 pm

Chris W.: We’re still talking about the problem of induction and we’ll be talking about it until we have solved the strong AI problem. Squirrels and humans are wired to make inferences about the world by trial and error. Of all the possible trials and all the possible valid inferences, we choose a rather small set, adapted to our world. If we had a good description of this “good” trial/inference set, we could build a learner that learns on par with humans or their superior cousins, the Sciurids.

On a different note, the “is math tautological” question has spawned this post.

Comment #43 March 30th, 2007 at 12:19 am

i think what scott is saying, basically is: it’s important to be able to write math engagingly. period.

Comment #44 March 31st, 2007 at 12:49 pm

Scott: are you aware of the irony of writing: “…vocal about his or her contempt for complexity theory…Richard Feynman…David Deutsch…” given that my mentor Feynman was the grandfather of Quantum Computing? And David Deutsch one of the fathers of Quantum Computing?

Charles Goodwin has noted that if Deutsch is correct about the multiverse, he will win the Nobel prize. And also not win it.

When is your CS lecture at Caltech again?

Comment #45 March 31st, 2007 at 8:46 pm

Jonathan: My comments were based on the facts that

(1) Feynman apparently couldn’t be convinced that P vs. NP was an open problem, and somehow got through his entire *Lectures on Computation* without a single mention of complexity issues, and

(2) Deutsch, the founder of quantum computing, had to be told what BQP was when I visited him in Oxford.

My lecture at Caltech will be either April 18 or 19.

Comment #46 March 31st, 2007 at 9:37 pm

Scott shoots, he scores!

I’ll ask Carl Feynman and Danny Hillis some time if they can shed any light on “Feynman apparently couldn’t be convinced that P vs. NP was an open problem.” I don’t recall ever discussing the matter with him. Your Deutsch anecdote is amazing, but there’s no polite way to slip it into a paper, footnoted “personal conversation” or the like.

Do let the Caltech Math secretary know, so that a notice of your talk can be posted in the Math department, too. CS was, historically, in the Engineering Division, not in Physics, Astronomy, and Mathematics.