Certainly peer review and highlighting the important stuff is valuable, but having randomly selected reviewers anonymously make such decisions is a ridiculous approach, with inherently arbitrary results that haven't been shown to bear much, if any, correlation to future importance. Much more effective would be to simply publish everything, and have notable people link to the notable things from their blogs.

Some people suggest otherwise: that ordinarily (that is, in “everyday” mathematics, as opposed to “really big” results) referees tend to view their function as determining, for the benefit of nonspecialist editors, whether the results of a paper are “important” enough to merit publication in the journal concerned. (Some mathematicians, such as Paul Halmos, have been known to become irate at the suggestion that referees are “proof checkers”.) Ask yourself what percentage of rejections are actually handed down solely on the grounds that the results of a paper are incorrect.

But let’s assume you’re right, and that computer verification of proofs would eliminate the formal institution of peer review in mathematics. I claim that the nature of the pursuit known as “mathematics” is such that, in order for it to continue at this stage, papers would still have to be read (or at least their results verified) by actual humans. In other words, mathematicians would still want to be “convinced” of the results, not just of the fact that they could become convinced. Consequently, an informal institution of peer review would inevitably develop, as mathematicians learn to “sort out” the interesting/important papers from among the mass of correct ones.

It’s possible that in the future we’ll have journals (although undoubtedly online ones) entirely devoted to math commentary, philosophy, and interpretation. If so, that would be a hilarious 180 for math papers: from antiseptic dryness to stoner pontification.

> there is still a need for a human-readable version

If the formal language is well designed, it is also human-readable, and there is no need for a separate informal proof description. Mizar is a good step in this direction, but I think Isabelle's Isar is much more readable, especially in the ZF logical framework (HOL is a bit different, since standard math is based on ZF rather than HOL).
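To give a flavor of what "readable formal proof" means, here is a minimal sketch in Lean 4 (the definition `Even` and theorem name `even_add` are illustrative choices, not anything from the comment above; Isar proofs read even more like prose, but the structural idea is similar):

```lean
-- A small formal proof whose structure a human can follow directly:
-- "even" means divisible by two, witnessed by an explicit k.
def Even (n : Nat) : Prop := ∃ k, n = 2 * k

-- The sum of two even numbers is even: combine the two witnesses.
theorem even_add : ∀ m n : Nat, Even m → Even n → Even (m + n)
  | _, _, ⟨a, ha⟩, ⟨b, hb⟩ =>
    -- m + n = 2*a + 2*b = 2*(a + b), so a + b is the new witness.
    ⟨a + b, by rw [ha, hb, Nat.mul_add]⟩
```

The proof text itself records the mathematical idea (extract the witnesses, add them), which is the sense in which a well-designed formal language can double as the human-readable account.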

Clearly the field already has its Ted Nelson (in fact, lots of them) but is missing its Jimmy Wales. It’s also missing its Guido van Rossum, which might be a bigger problem.

The purpose of a proof is to convince other people of a result: to communicate (a portion of) the prover’s psychology to the minds of the audience. A proof is rigorous exactly to the degree that it is convincing: if we criticize an argument as unrigorous, it is because we are not totally convinced by some aspect of it. As I see it, the principal goal of the activity we call “mathematics” is this psychological one of “understanding”, or becoming “convinced” of things in this sense.

In other words, mathematics is really “about” the semantics of the formal symbols, rather than their syntax. (This applies even to parts of mathematics whose object of study is syntax itself, for in that case what is of interest is the syntax of the object language, which is described via the semantics of a suitable metalanguage.)

Thus, for instance, I wouldn’t be “convinced” of the Poincaré conjecture simply by hearing that it had been formally verified by a computer, or for that matter by hearing that experts are convinced by Perelman’s proof. At most, what these things convince me of is my ability to become convinced of the conjecture after a sufficient amount of work on my part.

So why would we bother to formalize a proof in such a way that it could be checked by a computer? The reason is not so that we can be sure it is “true”. The reason is that doing so requires us to reduce our concepts to their philosophical essences, so that we gain a deep introspective understanding of whatever it is we are talking about. (You’ve got to be _very_ clear about what you mean if you want to convince a nonhuman machine!) It seems obvious to me that this was the goal of pre-computer-age logicians such as Russell and Whitehead (and even Frege, despite the fact that he would have taken exception to my view of mathematics).

So the reason we should formalize proofs is so that philosophically-oriented humans can read them!

- a version for a conference
- another version for a journal
- another version after referees

Diff style files and mandatory formats.

I also complained about people who say "blog" when they mean "entry". You were one of my main inspirations. 🙂

*If we ALSO insist that the proofs be verifiable, that will only ADD to what you need to do in order to get an article published.*

No, I’m imagining that you’d code a formal proof, and then everything else would be intuitive handwaving, of the sort we all find easy and enjoyable when arguing in front of a blackboard.

I agree that it’s an interesting article, and I also don’t buy the conclusion. Although several of its predictions about the future of formal verification of programs have turned out to be incorrect or misplaced (it’s a 1979 CACM paper), its point of view is still stimulating and insightful in other respects.

Drew: I agree that if NP-complete problems were solvable efficiently in the physical world, then mathematics as a human activity would end, except in an “ironic” sense. That, to me, is why P vs. NP is one of the most profound questions that humans have ever asked.

(Indeed, Gödel’s letter to von Neumann shows he fully understood that *this*, and not the questions he and Turing were able to answer, is really the key question about automating mathematics.)

But as a practical matter, I wouldn’t worry too much about it. The smart money is on NP-complete problems *not* being efficiently solvable in the physical universe. 🙂