I can imagine she’d be tremendously smug and satisfied that all of your successes after Gr. 10 were due to her inducement; the fact that your impetus to achieve was solely to be anathema to her would be a wholly inconsequential detail. Deliciously ironic.

To this, we may apply the populist solution: if you want to see math in the Times rather than proto-scientific reruns, take it up with the Times and encourage others to do so also.

The most basic mathematics is quite obviously devised with the purpose of talking about the real world. But is all mathematics equally suitable for talking about every aspect of the world? For instance, no-one worth listening to argues that 4 + 3 != 7. But in counting days of the week, asserting 7 == 0 is useful, whereas in counting apples it isn’t. So is 7 “really” equal to 0, or not?
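The weekday case is just arithmetic modulo 7; a minimal sketch of the two structures side by side (the variable names are mine, purely illustrative):

```python
# Counting apples: ordinary integer arithmetic, where 7 is not 0.
apples = 4 + 3
assert apples == 7 and apples != 0

# Counting weekdays: arithmetic mod 7, where adding 7 days adds nothing.
MONDAY = 0
same_day = (MONDAY + 7) % 7
assert same_day == MONDAY  # here 7 "is" 0
```

Neither structure is more “real” than the other; each set of axioms is chosen to match what is being counted.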

My current answer is that 7 and 0 are both ideas used to understand the world, including apples and the time-measuring standards of humans. What you might like to assert as axioms about them depends entirely on what structures you are interested in (or, if one is unrepentantly a pure mathematician, on one’s sense of aesthetics).

Math seems to me a tool for naming and describing structures: analogies stripped from the objects which are the analogs, or without requiring that analogs even exist in the world. The fact that this discipline is useful for the sciences indicates that the world is somewhat orderly, or at least that humans aren’t entirely delusional. Conversely, if the world is structured, should it be so surprising that physics inspires structures, as well as the intuition necessary to describe them?

Yes, mathematics is “merely a convention” — in the same sense that money is “merely a convention”, and in the same sense that gravity is “merely a theory”. Just because something is a convention does not mean that it cannot “bite back”.

Something “bites back” precisely when it is a useful idea. But is every useful idea necessarily scientific?

I would say that Kant was on to something when he described mathematics as being part of the strange case of “synthetic a priori” judgments. The failure of Russell and Whitehead seems to point to mathematics not being “analytic a priori” judgments. The “synthetic a priori” explanation is nice because then mathematics provides us with new information that is necessarily true, and thus explains the predictive power of the natural sciences, which would otherwise be purely “synthetic a posteriori” judgments.

– Homin

The sentence “Mathematical propositions express no thoughts” is dumb in the same way that “approximating the Jones polynomial of a braid” is dumb if you assume that the sentence is about hair. Wittgenstein uses the word “thoughts” as a technical term much in the same way that mathematicians use the word “braid” to mean something quite different from its common usage. Using common words as technical terms is nice because they are often evocative of their actual meaning, but they have the unfortunate effect of misleading people outside the field.

As to whether or not Wittgenstein is “dumb” in the larger sense, I guess one is left to one’s own “reasons of the heart”. Any such debate, whether debating the status of the Christian Trinity, the morality of engaging in duties prescribed to a different caste, or the ontological status of mathematical entities, only matters if you care about Christianity, caste systems, or ontology and epistemology. If I don’t care a whit about Christianity, then I couldn’t care less about the status of the Trinity.

Now I’m sure that philosophers get a lot wrong about mathematics, but it’s pointless to criticize their writing unless one learns their language first. I would also say that it’s rather rash to assume that philosophers are talking nonsense if you can’t understand their language. I don’t understand a word when people talk about “BRST cohomologies” and “ghost numbers”, but I’m in no position to say whether those people are making sense or not. I also know that I have little interest in understanding the topic, not for any defensible reason, but once again, for reasons of the heart. I would say the same to philosophers who talk about mathematics, of course.

– Homin

Basically, if you play around with Wolfram’s automata for quite a while (as I have), it becomes fairly obvious that certain of them correspond to strictly weaker models of computation than general Turing machines (I’m guessing Presburger arithmetic). One of them happens to tie for grungiest-looking on random inputs with the one which an employee of Wolfram’s (most definitely not Wolfram himself, despite the impression ANKOS may give you) demonstrated to be ‘NP-complete’ (Wolfram doesn’t seem to understand encoding issues, but we can gloss over that as an uninteresting detail, despite the existence of a different interesting-looking automaton for which the encoding issues are fairly serious, because it can’t carry information to the left).

In ANKOS, Wolfram decides to single out this automaton and talk, in his usual chest-thumping way, about how you can build a computer in it. He then gives an example run of it with a small set of living cells in a dead arena, which becomes patterned fairly quickly in an interesting way. He goes on to note that he’s done this for two million different starting configurations, but none of them did anything interesting. It doesn’t seem to have occurred to him that maybe none of them did anything ‘interesting’ because such ‘interesting’ initial patterns just plain don’t exist. Oh the irony…
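For anyone who wants to play around with these as described above, Wolfram’s elementary automata are easy to implement; a minimal sketch in Python (the rule numbering is the standard Wolfram convention; the wrap-around boundary is my own choice, not anything from ANKOS):

```python
def step(cells, rule):
    """Advance an elementary cellular automaton one step.

    `rule` is a Wolfram rule number (0-255): bit k of `rule` gives the
    next state of a cell whose (left, center, right) neighborhood reads
    as the binary number k. Boundaries wrap around.
    """
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Rule 110 -- the automaton shown capable of universal computation --
# run from a single living cell in a dead arena.
row = [0] * 20 + [1] + [0] * 20
for _ in range(10):
    row = step(row, 110)
```

Running this for more steps (and on random rather than single-cell inputs) is enough to see the “grungy” behavior the comment refers to.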

Now we just have to get a proof that the given automata can’t build general computers. We unfortunately don’t have such a proof, but it’s well worth looking for – such a result would be far more surprising and interesting than anything in ANKOS.

Tell me about it. Yet there’s clearly an audience ready to wolf down gibberish with mustard: Wittgenstein is far more respected these days than Russell, who repeatedly made the mistake of being clear.

It was. The problem, in my experience, is that people who advocate these claims tend to do so in a slippery, unfalsifiable way. Every nontrivial theorem that goes against the claims is met by an excuse, rather than by a counter-theorem supporting them.

As automated theorem provers get better and acquire more powerful primitives, making a proof computer-verifiable won’t be any more painful than writing the paper in LaTeX to begin with.

In fact it might turn out to be easier, as some rather cumbersome proofs could then be skipped, along the lines of: “the theorem prover agrees, and the only proof I have is entirely pedestrian and unenlightening, so let’s not waste each other’s time and skip it”.

There’s a nascent movement for mathematicians to do all their work formally, as in, verifiable by computer. The mathematicians say this is a waste of time. The programmers think the mathematicians should stop whining. I personally think doing everything computer-verifiably would be an enormous boon, because it (1) would shorten the review process to almost nothing, or allow it to be skipped completely, (2) would get rid of the mistaken results which still crop up from time to time, and most importantly (3) would allow one to write explanatory papers which, completely unburdened by the needs of formality, could provide some intuition, context, and commentary. These days the standard way to write a math paper is to take the proof, remove all the insight, and write up what’s left, and that’s a bad thing (not my witticism).
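As a taste of what “verifiable by computer” means in practice, here is a small sketch in Lean (Lean 4 syntax; the point is that the kernel, not a referee, checks each step):

```lean
-- The kernel verifies 4 + 3 = 7 by computation; no referee required.
example : 4 + 3 = 7 := rfl

-- A "pedestrian" lemma a paper could now omit: the checker has already
-- confirmed it, here by citing the library proof Nat.add_comm.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

Once a statement checks, the accompanying prose paper is free to spend all its space on intuition and context rather than on mechanical verification.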

Both of the claims you mention, “quantum computers are really just analog computers,” and “all cellular automata that aren’t obviously simple are Turing-complete,” happen to be false. I’m not sure if that was your point about them.
