Joe, have you ever read *White Light*? It’s one of the very few quasi-comedic novels ever written about the CH … which is what earned it a place in our UW QSE Group’s library of subversive literature.

Perhaps I’ll transcribe some excerpts from *White Light* in the next few days … it includes quite a few quotations from Cantor’s own writings on the physical and meta-physical implications of CH.

In my thesis (“Cardinal Conditions for Strong Fubini Theorems”, October 1990 Transactions of the AMS) I showed that some such trans-ZFC assumption was necessary, because it is consistent with ZFC that iterated integrals (for non-negative functions, to avoid trivial counterexamples) can both exist and yet be unequal.

My favorite candidate for an axiom that settles CH is the existence of a countably additive measure on ALL (not just the measurable) subsets of the reals. This axiom (known as RVM for “real-valued measurable cardinal”) is equiconsistent with a measurable cardinal, entails the continuum being very large (having many “weakly inaccessible cardinals” below it), and has a certain intuitive plausibility. (It also implies the strong Fubini theorems I alluded to above — iterated integrals must agree when they exist.)

The Banach-Tarski phenomenon shows that such a measure could not be rotationally invariant. But a possible alternative history for physics could have had Riemann not die young and discover General Relativity quite early, before the quantum theory destroyed the intuition of continuous space.

In this alternate timeline, Banach-Tarski might have been discovered shortly thereafter, and the assumption sacrificed as unphysical could have been the isotropy of space rather than its continuity, since General Relativity already requires anisotropy and assumes continuity; RVM could then have come to be accepted as an axiom. By the time quantum mechanics had been discovered, RVM would have been shown so fruitful (it proves Con(ZFC), for example!) that it would be retained as a mathematical axiom and CH would be considered settled.

I’m not saying that it contradicts your logic or anything, but still, I’m stunned, and it does change my thinking about set theory and stuff a bit.

However, with all such results you need to be careful in defining the model. Regarding your sorting example, there are several issues:

(1) Even verifying a sort will require n*log(n) time, if we measure time by the number of bit operations. This is because, if each number in the array has at least n possible values (as is needed for the n*log(n) lower bound to hold), then the numbers will take log(n) bits each to specify, hence even reading them all will take n*log(n) time.

(2) If we instead adopt the comparison model (where the only allowed operation is to compare two numbers, but each comparison takes unit time), then if we’re going to be consistent and apply the same rules to the witness, it again will take n*log(n) time to verify a sort (by a generalization of the standard proof that sorting takes n*log(n) time in the comparison model).

(3) If we consider more powerful models — which involve “unit-cost comparisons” but also bit operations — then the n*log(n) lower bound for sorting breaks down (in many such models it’s actually known to be false).
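As a concrete illustration of the distinction points (1) and (2) are drawing: checking that an output array is monotonically non-decreasing takes only n−1 unit-cost comparisons; it is checking that the output is also a *permutation* of the input that the n*log(n) comparison lower bound targets. A minimal Python sketch of the cheap half (the function name and the comparison counter are my own illustrative choices):

```python
def is_sorted(a):
    """Check monotonicity with exactly len(a) - 1 comparisons.

    Returns (verdict, comparisons_used). This does NOT verify that
    the array is a permutation of the original input -- that is the
    part subject to the n*log(n) comparison lower bound.
    """
    comparisons = 0
    for i in range(len(a) - 1):
        comparisons += 1
        if a[i] > a[i + 1]:
            return False, comparisons
    return True, comparisons
```

For example, `is_sorted([1, 2, 3])` returns `(True, 2)`: two comparisons suffice for three elements, linear in n rather than n*log(n).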

So, I don’t know whether there’s any reasonable model for which sorting yields a separation between DTIME(n) and NTIME(n).

Is P “exactly” equal to NP?

In other words, given that a solution to problem P can be verified in exactly n^k time and g(n) space, can it also be solved in exactly n^k time and g(n) space?

Do we know the answer to this already? I was thinking about that question, and the following problem doesn’t seem to be solvable as quickly as it is verifiable:

Given an array of n numbers, identify a sequence of n or fewer steps that sorts the array.

It requires at least n*log(n) operations to solve but can be verified in linear time, isn’t that right? But this isn’t a yes/no problem anyway. Do we know that P isn’t “exactly” equal to NP?
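For what it’s worth, the linear-time verification being described can be sketched as follows, assuming the “steps” are transpositions and that swaps and comparisons each cost unit time (the function name and the (i, j) swap encoding are illustrative assumptions, not anything from the thread):

```python
def verify_sort_witness(a, swaps):
    """Apply a witness list of (i, j) transpositions to a copy of the
    array, then check the result is non-decreasing.

    With unit-cost swaps and comparisons this runs in
    O(n + len(swaps)) operations -- linear when len(swaps) <= n.
    """
    b = list(a)                      # work on a copy, leave input intact
    for i, j in swaps:
        b[i], b[j] = b[j], b[i]      # each transposition costs O(1)
    return all(b[k] <= b[k + 1] for k in range(len(b) - 1))
```

For example, `verify_sort_witness([3, 1, 2], [(0, 1), (1, 2)])` applies the two swaps to get `[1, 2, 3]` and returns `True`. Note this checks only that the witness sorts the array, at unit cost per step; the earlier comment’s caveats about counting bit operations apply here too.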

Things like the Continuum Hypothesis have always struck me as artefacts of the language of set theory. Sets are such a basic concept that they approach the status of undefined primitives, and they can easily lead to self-contradiction if their scope isn’t limited. However, if you limit them too much, they no longer correspond to our intuitive understanding of a set. Hence we end up with axiom systems like ZFC that are purposefully vague about exactly when something is a set. That leads to a question like CH having no answer, since it asks how many subsets of the naturals there are — something you certainly can’t answer if you haven’t said exactly what a set is.

Hence CH seems to be a language thing.

Contrast this with P=NP or Cauchy’s integral theorem: very definite statements about well-defined objects.

Can one weaken the assumption in the above result from “ZF+Con(ZF) is consistent” to “ZF is consistent”?

Yes, it suffices to assume that ZF is consistent.

The truth value of the formula “M halts” depends only on the set of natural numbers, \omega, in the model. If two models have the same natural numbers then either M halts in both models, or M halts in neither of them (that is, “M halts” is an absolute formula). In particular, “M halts” relativized to the constructible universe L is true if and only if “M halts” is true in V.

Now choose a model T of ZF in which AC is false. Since AC always holds in the constructible universe, we get

T |= “AC is false” and T |= “AC holds in L”.

If ZF proved “AC holds iff M halts”, then from the first fact we would get

T |= “M doesn’t halt”,

while relativizing the equivalence to L and using the second fact would give

T |= “(M halts)^L”.

Since “M halts” is absolute between T and its constructible universe (they have the same natural numbers), these two statements contradict each other.

In general, when we prove independence results by forcing or by considering L, we don’t change truth values of arithmetic formulas.
