## The Complete Idiot’s Guide to the Independence of the Continuum Hypothesis: Part 1 of <=Aleph_0

A global pandemic, apocalyptic fires, and the possible descent of the US into violent anarchy three days from now can do strange things to the soul.

Bertrand Russell—and if he’d done nothing else in his long life, I’d love him forever for it—once wrote that “in adolescence, I hated life and was continually on the verge of suicide, from which, however, I was restrained by the desire to know more mathematics.” This summer, unable to bear the bleakness of 2020, I obsessively read up on the celebrated proof of the unsolvability of the Continuum Hypothesis (CH) from the standard foundation of mathematics, the Zermelo-Fraenkel axioms of set theory. (In this post, I’ll typically refer to “ZFC,” which means Zermelo-Fraenkel plus the famous Axiom of Choice.)

For those tuning in from home, the Continuum Hypothesis was formulated by Georg Cantor, shortly after his epochal discovery that there are different orders of infinity: so for example, the infinity of real numbers (denoted C for continuum, or \( 2^{\aleph_0} \)) is strictly greater than the infinity of integers (denoted ℵ_{0}, or “Aleph-zero”). CH is simply the statement that there’s no infinity *intermediate* between ℵ_{0} and C: that anything greater than the first is at least the second. Cantor tried in vain for decades to prove or disprove CH; the quest is believed to have contributed to his mental breakdown. When David Hilbert presented his famous list of 23 unsolved math problems in 1900, CH was at the very top.

Halfway between Hilbert’s speech and today, the question of CH was finally “answered,” with the solution earning the only Fields Medal that’s ever been awarded for work in set theory and logic. But unlike with any previous yes-or-no question in the history of mathematics, the answer was that there provably *is* no answer from the accepted axioms of set theory! You can either have intermediate infinities or not; neither possibility can create a contradiction. And if you *do* have intermediate infinities, it’s up to you how many: 1, 5, 17, ∞, etc.

The easier half, the consistency of CH with set theory, was proved by incompleteness dude Kurt Gödel in 1940; the harder half, the consistency of not(CH), by Paul Cohen in 1963. Cohen’s work introduced the method of forcing, which was so fruitful in proving set-theoretic questions unsolvable that it quickly took over the whole subject of set theory. Learning Gödel and Cohen’s proofs had been a dream of mine since teenagerhood, but one I constantly put off.

This time around I started with Cohen’s retrospective essay, as well as Timothy Chow’s Forcing for Dummies and A Beginner’s Guide to Forcing. I worked through Cohen’s own Set Theory and the Continuum Hypothesis, and Ken Kunen’s Set Theory: An Introduction to Independence Proofs, and Dana Scott’s 1967 paper reformulating Cohen’s proof. I emailed questions to Timothy Chow, who was ridiculously generous with his time. When Tim and I couldn’t answer something, we tried Bob Solovay (one of the world’s great set theorists, who later worked in computational complexity and quantum computing), or Andreas Blass or Asaf Karagila. At some point mathematician and friend-of-the-blog Greg Kuperberg joined my quest for understanding. I thank all of them, but needless to say take sole responsibility for all the errors that surely remain in these posts.

On the one hand, the proof of the independence of CH would seem to stand with general relativity, the wheel, and the chocolate bar as a triumph of the human intellect. It represents a culmination of Cantor’s quest to know the basic rules of infinity—all the more amazing if the answer turns out to be that, in some sense, we *can’t* know them.

On the other hand, perhaps no other scientific discovery of equally broad interest remains so sparsely popularized, not even (say) quantum field theory or the proof of Fermat’s Last Theorem. I found barely any attempts to explain how forcing works to non-set-theorists, let alone to non-mathematicians. One notable exception was Timothy Chow’s Beginner’s Guide to Forcing, mentioned earlier—but Chow himself, near the beginning of his essay, calls forcing an “open exposition problem,” and admits that he hasn’t solved it. My modest goal, in this post and the following ones, is to make a further advance on the exposition problem.

OK, but why a doofus computer scientist like me? Why not, y’know, an actual expert? I won’t put forward my ignorance as a qualification, although I *have* often found that the better I learn a topic, the more completely I forget what initially confused me, and so the less able I become to explain things to beginners.

Still, there is *one* thing I know well that turns out to be intimately related to Cohen’s forcing method, and that made me feel like I had a small “in” for this subject. This is the construction of oracles in computational complexity theory. In CS, we like to construct hypothetical universes where P=NP or P≠NP, or P≠BQP, or the polynomial hierarchy is infinite, etc. To do so, we, by fiat, insert a new function—an *oracle*—into the universe of computational problems, carefully chosen to make the desired statement hold. Often the oracle needs to satisfy an infinite list of conditions, so we handle them one by one, taking care that when we satisfy a new condition we don’t invalidate the previous conditions.
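The condition-by-condition construction just described can be sketched in toy form. Everything below is a hypothetical stand-in of my own devising, not any actual oracle construction: requirement i asks that the binary digits of i appear consecutively somewhere in the infinite sequence we're building. These requirements are deliberately chosen to be monotone, so that (as in the easiest oracle constructions) satisfying a new one can never injure an earlier one.

```python
# Toy finite-extension construction, loosely analogous to building an
# oracle by satisfying an infinite list of requirements one at a time.
# Requirement i (a hypothetical stand-in): the binary digits of i must
# appear consecutively somewhere in the sequence. Once a finite prefix
# satisfies such a requirement, no later extension can injure it.

def bits(n):
    """Binary digits of n, as a list of 0/1 ints."""
    return [int(b) for b in format(n, "b")]

def build_prefix(num_requirements):
    """Finite prefix of the infinite sequence, with the first
    num_requirements requirements already satisfied."""
    prefix = []
    for i in range(num_requirements):
        prefix.extend(bits(i))  # satisfy requirement i by extending
    return prefix

def contains(seq, pattern):
    """Does pattern occur as a contiguous block of seq?"""
    return any(seq[k:k + len(pattern)] == pattern
               for k in range(len(seq) - len(pattern) + 1))

prefix = build_prefix(10)
print(all(contains(prefix, bits(i)) for i in range(10)))  # -> True
```

In real oracle constructions the requirements typically *can* conflict, which is where all the care comes in; the toy version only illustrates the "handle conditions one by one, by finite extensions" shape of the argument.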

All this, I kept reading, is *profoundly* analogous to what the set theorists do when they create a mathematical universe where the Axiom of Choice is true but CH is false, or vice versa, or any of a thousand more exotic possibilities. They insert new sets into their models of set theory, sets that are carefully constructed to “force” infinite lists of conditions to hold. In fact, some of the exact same people—such as Solovay—who helped pioneer forcing in the 1960s, later went on to pioneer oracles in computational complexity. We’ll say more about this connection in a future post.

**How Could It Be?**

How do you study a well-defined math problem, and return the answer that, as far as the accepted axioms of math can say, there *is* no answer? I mean: even supposing it’s *true* that there’s no answer, how do you *prove* such a thing?

Arguably, not even Gödel’s Incompleteness Theorem achieved such a feat. Recall, the Incompleteness Theorem says loosely that, for every formal system F that could possibly serve as a useful foundation for mathematics, there exist statements even of elementary arithmetic that are true but unprovable in F—and Con(F), a statement that encodes F’s own consistency, is an example of one. But the very statement that Con(F) is unprovable is equivalent to Con(F)’s being *true* (since an inconsistent system could prove anything, including Con(F)). In other words, if the Incompleteness Theorem as applied to F holds any interest, then that’s only because F *is, in fact, consistent*; it’s just that resources beyond F are needed to prove this.

Yes, there’s a “self-hating theory,” F+Not(Con(F)), which believes in its own inconsistency. And yes, by Gödel, this self-hating theory is *consistent* if F itself is. This means that it has a **model**—involving “nonstandard integers,” formal artifacts that effectively promise a proof of F’s inconsistency without ever actually delivering it. We’ll have much, *much* more to say about models later on, but for now, they’re just collections of objects, along with relationships between the objects, that satisfy all the axioms of a theory (thus, a model of the axioms of group theory is simply … any group!).

In any case, though, the self-hating theory F+Not(Con(F)) can’t be *arithmetically sound*: I mean, just look at it! It’s either unsound because F is consistent, or else it’s unsound because F is inconsistent. In general, this is one of the most fundamental points in logic: **consistency does not imply soundness**. If I believe that the moon is made of cheese, that might be *consistent* with all my other beliefs about the moon (for example, that Neil Armstrong ate delicious chunks of it), but that doesn’t mean my belief is *true*. Like the classic conspiracy theorist, who thinks that any apparent evidence against their hypothesis was planted by George Soros or the CIA, I might simply believe a self-consistent collection of absurdities. Consistency is purely a syntactic condition—it just means that I can never prove both a statement and its opposite—but soundness goes further, asserting that whatever I can prove is actually the case, a *relationship* between what’s inside my head and what’s outside it.

So again, assuming we had any business using F in the first place, the Incompleteness Theorem gives us two *consistent* ways to extend F (by adding Con(F) or by adding Not(Con(F))), but only one *sound* way (by adding Con(F)). But the independence of CH from the ZFC axioms of set theory is of a fundamentally different kind. It will give us models of ZFC+CH, and models of ZFC+Not(CH), that are *both* at least somewhat plausible as “sketches of mathematical reality”—and that both even have defenders. The question of which is right, or whether it’s possible to decide at all, will be punted to the future: to the discovery (or not) of some intuitively compelling foundation for mathematics that, as Gödel hoped, answers the question by going beyond ZFC.

**Four Levels to Unpack**

While experts might consider this too obvious to spell out, Gödel’s and Cohen’s analyses of CH aren’t so much about infinity, as they are about our ability to *reason* about infinity using finite sequences of symbols. The game is about building self-contained mathematical universes to order—universes where all the accepted axioms about infinite sets hold true, and yet that, in some cases, seem to mock what those axioms were supposed to *mean*, by containing vastly fewer objects than the mathematical universe was “meant” to have.

In understanding these proofs, the central hurdle, I think, is that there are at least four different “levels of description” that need to be kept in mind simultaneously.

At the first level, Gödel’s and Cohen’s proofs, like all mathematical proofs, are finite sequences of symbols. Not only that, they’re proofs that can be formalized in elementary arithmetic (!). In other words, even though they’re *about* the axioms of set theory, they don’t themselves *require* those axioms. Again, this is possible because, at the end of the day, Gödel’s and Cohen’s proofs won’t be talking about infinite sets, but “only” about finite sequences of symbols that make statements about infinite sets.

At the second level, the proofs are making an “unbounded” but perfectly clear claim. They’re claiming that, if someone showed you a proof of either CH or Not(CH), from the ZFC axioms of set theory, then no matter how long the proof or what its details, you could convert it into a proof that *ZFC itself* was inconsistent. In symbols, they’re proving the “relative consistency statements”

Con(ZFC) ⇒ Con(ZFC+CH),

Con(ZFC) ⇒ Con(ZFC+Not(CH)),

and they’re proving these as theorems of *elementary arithmetic*. (Note that there’s no hope of proving Con(ZFC+CH) or Con(ZFC+Not(CH)) *outright* within ZFC, since by Gödel, ZFC can’t even prove its own consistency.)

This translation is completely explicit; the independence proofs even yield *algorithms* to convert proofs of inconsistencies in ZFC+CH or ZFC+Not(CH), supposing that they existed, into proofs of inconsistencies in ZFC itself.

Having said that, as Cohen himself often pointed out, thinking about the independence proofs in terms of algorithms to manipulate sequences of symbols is hopeless: to have any chance of understanding these proofs, let alone coming up with them, at some point you need to think about what the symbols *refer* to.

This brings us to the third level: the symbols refer to *models of set theory*, which could also be called “mathematical universes.” Crucially, we always can and often will take these models to be only *countably* infinite: that is, to contain an infinity of sets, but “merely” ℵ_{0} of them, the infinity of integers or of finite strings, and no more.

The fourth level of description is from within the models themselves: each model imagines itself to have an *uncountable* infinity of sets. As far as the model’s concerned, it comprises the entire mathematical universe, even though “looking in from outside,” we can see that that’s not true. In particular, each model of ZFC *thinks* it has uncountably many sets, many themselves of uncountable cardinality, even if “from the outside” the model is countable.

Say what? The models are *mistaken* about something as basic as their own size, about how many sets they have? Yes. The models will be like *The Matrix* (the movie, not the mathematical object), or *The Truman Show*. They’re self-contained little universes whose inhabitants can never discover that they’re living a lie—that they’re missing sets that we, from the outside, know to exist. The poor denizens of the Matrix will never even be able to learn that their universe—what they mistakenly think of as *the* universe—is secretly countable! And no Morpheus will ever arrive to enlighten them, although—and this is crucial to Cohen’s proof in particular—the inhabitants will be able to reason more-or-less intelligibly about what would happen if a Morpheus *did* arrive.

The Löwenheim-Skolem Theorem, from the early 1920s, says that *any* countable list of first-order axioms that has any model at all (i.e., that’s consistent), must have a model with at most countably many elements. And ZFC is a countable list of first-order axioms, so Löwenheim-Skolem applies to it—even though ZFC implies the existence of an uncountable infinity of sets! Before taking the plunge, we’ll need to not merely grudgingly accept but love and internalize this “paradox,” because pretty much the entire proof of the independence of CH is built on top of it.
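In symbols, the version of the theorem used here (the downward direction) can be stated as follows; the proof sketch via Skolem functions is the standard one.

```latex
% Downward Löwenheim–Skolem, in the form needed here: a countable
% first-order theory with any model has one of at most countable size.
\[
  T \text{ countable and consistent}
  \;\Longrightarrow\;
  \exists\, M' \models T \text{ with } |M'| \leq \aleph_0 .
\]
% Proof idea: start from any model M of T, take a countable subset,
% and close it under Skolem functions (countably many functions, each
% applied countably many times). The result is a countable elementary
% substructure M' \preceq M, which in particular still satisfies T.
```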

Incidentally, once we realize that it’s possible to build self-consistent yet “fake” mathematical universes, we can ask the question that, incredibly, the *Matrix* movies never ask. Namely, how do we know that our own, larger universe isn’t similarly a lie? The answer is that we don’t! As an example—I hope you’re sitting down for this—even though Cantor proved that there are uncountably many real numbers, that only means there are uncountably many reals *for us*. We can’t rule out the possibility that God, looking down on our universe, would see countably many reals.

**Cantor’s Proof Revisited**

To back up: the whole story of CH starts, of course, with Cantor’s epochal discovery of the different orders of infinity, that for example, there are more subsets of positive integers (or equivalently real numbers, or equivalently infinite binary sequences) than there are positive integers. The devout Cantor thought his discovery illuminated the nature of God; it’s never been entirely obvious to me that he was wrong.

Recall how Cantor’s proof works: we suppose by contradiction that we have an enumeration of all infinite binary sequences: for example,

s(0) = **0**0000000…

s(1) = 0**1**010101…

s(2) = 11**0**01010….

s(3) = 100**0**0000….

We then produce a new infinite binary sequence that’s not on the list, by going down the diagonal and flipping each bit, which in the example above would produce 1011…
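The diagonal construction can be written out directly. Here an “enumeration” is modeled as a function s(i, j) giving the j-th bit of the i-th sequence; the diagonal sequence differs from every s(i, ·) at position i by construction.

```python
# Cantor's diagonal construction, written out directly. An enumeration
# is modeled as s: (i, j) -> j-th bit of the i-th sequence; diagonal(s)
# returns a function computing a sequence that differs from every
# s(i, .) at position i, so it can't appear anywhere in the list.

def diagonal(s):
    """Return d with d(i) != s(i, i) for every i."""
    return lambda j: 1 - s(j, j)

# The four example sequences from the text (truncated):
rows = [
    [0, 0, 0, 0],
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
]
d = diagonal(lambda i, j: rows[i][j])
print([d(j) for j in range(4)])  # -> [1, 0, 1, 1], the "1011..." above
```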

But look more carefully. What Cantor really shows is only that, *within our mathematical universe*, there can’t be an enumeration of all the reals of our universe. For if there were, we could use it to define a new real that was in the universe but not in the enumeration. The proof doesn’t rule out the possibility that *God* could enumerate the reals of our universe! It only shows that, if so, there would need to be additional, heavenly reals that were missing from even God’s enumeration (for example, the one produced by diagonalizing against *that* enumeration).

Which reals could possibly be “missing” from our universe? Every real you can name—42, π, √e, even uncomputable reals like Chaitin’s Ω—has to be there, right? Yes, and there’s the rub: *every real you can name*. Each name is a finite string of symbols, so whatever your naming system, you can only ever name countably many reals, leaving 100% of the reals nameless.

Or did you think of only the *rationals* or *algebraic numbers* as forming a countable dust of discrete points, with numbers like π and e filling in the solid “continuum” between them? If so, then I hope you’re sitting down for this: *every real number you’ve ever heard of* belongs to the countable dust! The entire concept of “the continuum” is only needed for reals that don’t have names and never will.
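The “countably many names” point can be made concrete: whatever the alphabet, the finite strings over it can all be listed in a single sequence, shortest first (so-called shortlex order). Here’s a minimal sketch, using a two-letter alphabet as a stand-in for any finite naming alphabet.

```python
from itertools import count, product

# Every "name" is a finite string over some finite alphabet, so all
# possible names can be listed in one sequence: first the strings of
# length 1, then length 2, and so on (shortlex order). This is why any
# naming system whatsoever reaches only countably many reals.

def all_names(alphabet="ab"):
    """Yield every finite string over the alphabet, exactly once."""
    for length in count(1):
        for tup in product(alphabet, repeat=length):
            yield "".join(tup)

gen = all_names()
print([next(gen) for _ in range(6)])  # -> ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Any particular name appears at some finite position in this list, which is exactly what it means for the set of names to be countable.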

**From ℵ_{0} Feet**

Gödel and Cohen’s achievement was to show that, without creating any contradictions in set theory, we can adjust the size of this elusive “continuum,” putting more reals into it or fewer. How does one even begin to prove such a statement?

From a distance of ℵ_{0} feet, Gödel proves the consistency of CH by building minimalist mathematical universes: one where “the only sets that exist, are the ones *required* to exist by the ZFC axioms.” (These universes can, however, differ from each other in how “tall” they are: that is, in how many ordinals they have, and hence how many sets overall. More about that in a future post!) Gödel proves that, *if* the axioms of set theory are consistent—that is, if they describe any universes at all—then they also describe these minimalist universes. He then proves that, in any of these minimalist universes, from the standpoint of someone *within* that universe, there are exactly ℵ_{1} real numbers, and hence CH holds.

At an equally stratospheric level, Cohen proves the consistency of not(CH) by building … well, *non*-minimalist mathematical universes! A simple way is to start with Gödel’s minimalist universe—or rather, an even more minimalist universe than his, one that’s been cut down to have only countably many sets—and then to stick in a bunch of *new real numbers* that weren’t in that universe before. We choose the new real numbers to ensure two things: first, that we still have a model of ZFC, and second, that CH is false in it. The details of how to do that will, of course, concern us later.

**My Biggest Confusion**

In subsequent posts, I’ll say more about the character of the ZFC axioms and how one builds models of them to order. Just as a teaser, though, to conclude this post I’d like to clear up a fundamental misconception I had about this subject, from roughly the age of 16 until a couple months ago.

I thought: the way Gödel proves the consistency of CH, must be by examining all the sets in his minimalist universe, and checking that each one has either at most ℵ_{0} elements or else at least C of them. Likewise, the way Cohen proves the consistency of not(CH), must be by “forcing in” some extra sets, which have more than ℵ_{0} elements but fewer than C elements.

Except, it turns out that’s not how it works. Firstly, to prove CH in his universe, Gödel is not going to check each set to make sure it doesn’t have intermediate cardinality; instead, he’s simply going to count all the reals to make sure that there are only ℵ_{1} of them—where ℵ_{1} is the next infinite cardinality after ℵ_{0}. This will imply that C=ℵ_{1}, which is another way to state CH.
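In symbols, the restatement just described: since ℵ_{1} is by definition the least cardinal above ℵ_{0}, Cantor’s theorem already forces ℵ_{1} ≤ C, and CH is exactly the assertion of equality.

```latex
% CH restated via the Aleph hierarchy. By Cantor, 2^{\aleph_0} >
% \aleph_0, and \aleph_1 is the least cardinal above \aleph_0, so
% \aleph_1 \leq 2^{\aleph_0} always holds; CH asserts equality:
\[
  \mathrm{CH} \;\Longleftrightarrow\; 2^{\aleph_0} = \aleph_1 .
\]
% Counting "only \aleph_1 reals" in Gödel's universe therefore
% verifies CH there, with no need to inspect sets one by one.
```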

More importantly, to build a universe where CH is false, Cohen is going to start with a universe where C=ℵ_{1}, like Gödel’s universe, and then *add in more reals:* say, ℵ_{2} of them. The ℵ_{1} “original” reals will then supply our set of intermediate cardinality between the ℵ_{0} integers and the ℵ_{2} “new” reals.

Looking back, the core of my confusion was this. I had thought: I can visualize what ℵ_{0} means; that’s just the infinity of integers. I can *also* visualize what \( C=2^{\aleph_0} \) means; that’s the infinity of points on a line. Those, therefore, are the two bedrocks of clarity in this discussion. By contrast, I *can’t* visualize a set of intermediate cardinality between ℵ_{0} and C. The intermediate infinity, being weird and ghostlike, is the one that shouldn’t exist unless we deliberately “force” it to.

Turns out I had things backwards. For starters, I *can’t* visualize the uncountable infinity of real numbers. I might *think* I’m visualizing the real line—it’s solid, it’s black, it’s got little points everywhere—but how can I be sure that I’m not merely visualizing the ℵ_{0} rationals, or (say) the computable or definable reals, which include all the ones that arise in ordinary math?

The continuum C is *not at all* the bedrock of clarity that I’d thought it was. Unlike its junior partner ℵ_{0}, the continuum is adjustable, changeable—and we *will* change it when we build different models of ZFC. What’s (relatively) more “fixed” in this game is something that I, like many non-experts, had always given short shrift to: Cantor’s sequence of Alephs ℵ_{0}, ℵ_{1}, ℵ_{2}, etc.

Cantor, who was a very great man, didn’t merely discover that C>ℵ_{0}; he also discovered that the infinite cardinalities form a well-ordered sequence, with no infinite descending chains. Thus, after ℵ_{0}, there’s a next greater infinity that we call ℵ_{1}; after ℵ_{1} comes ℵ_{2}; after the entire infinite sequence ℵ_{0},ℵ_{1},ℵ_{2},ℵ_{3},… comes ℵ_{ω}; after ℵ_{ω} comes ℵ_{ω+1}; and so on. These infinities will always be there in any universe of set theory, and always in the same order.

Our job, as engineers of the mathematical universe, will include pegging the continuum C to one of the Alephs. If we stick in a bare minimum of reals, we’ll get C=ℵ_{1}; if we stick in more, we can get C=ℵ_{2} or C=ℵ_{3}, etc. We can’t make C equal to ℵ_{0}—that’s Cantor’s Theorem—and we *also* can’t make C equal to ℵ_{ω}, by an important theorem of König that we’ll discuss later (yes, this is an umlaut-heavy field). But it will turn out that we can make C equal to just about any other Aleph: in particular, to any infinity other than ℵ_{0} that’s not the supremum of a countable list of smaller infinities.
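The constraint from König’s theorem can be stated precisely in terms of cofinality:

```latex
% König's theorem, in the form relevant here: the continuum cannot
% have countable cofinality.
\[
  \operatorname{cf}\!\bigl(2^{\aleph_0}\bigr) > \aleph_0 .
\]
% Since \aleph_\omega = \sup\{\aleph_0, \aleph_1, \aleph_2, \ldots\}
% is a supremum of countably many smaller cardinals, it has cofinality
% \aleph_0, and hence 2^{\aleph_0} \neq \aleph_\omega. The same
% argument rules out exactly the cardinals that are suprema of
% countably many smaller infinities, matching the characterization
% in the text.
```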

In some sense, this is the whole journey that we need to undertake in this subject: from seeing the cardinality of the continuum as a metaphysical mystery, which we might contemplate by staring really hard at a black line on white paper, to seeing the cardinality of the continuum *as an engineering problem*.

Stay tuned! Next installment coming after the civilizational Singularity in three days, assuming there’s still power and Internet and food and so forth.

Oh, and happy Halloween. Ghostly sets of intermediate cardinality … spoooooky!

Comment #1 October 31st, 2020 at 3:37 pm

“can’t even its own consistency”

Unless this was meant humorously, I’m guessing the word “prove” is missing from this sentence.

Comment #2 October 31st, 2020 at 4:03 pm

Why can’t God use second order logic? Looking forward to the rest of this sequence.

Comment #3 October 31st, 2020 at 4:06 pm

Joshua #1: Thanks!! Fixed.

Comment #4 October 31st, 2020 at 4:10 pm

I #2: I don’t believe in second-order logic. 🙂

But if God did use it, that would just make the point about Her being able to see outside our model of ZFC even clearer.

Comment #5 October 31st, 2020 at 4:29 pm

Scott, have you seen “Years of living dangerously” and “knowing”(2009)? epidemy because of those aliens?

Comment #6 October 31st, 2020 at 5:33 pm

Lovely post! Things are explained in such a way that I am *just about* able to understand some of it, although the subject itself is well beyond me. Like many Shtetl Optimized posts before it, which is how I came to love the blog. Some very interesting links, too.

All in all, a very welcome distraction.

Thank you, Scott!

Comment #7 October 31st, 2020 at 6:04 pm

Chapter 6 of the book Sheaves in Geometry and Logic by MacLane and Moerdijk (available at libgen or the library nearest you) is also an exposition of the independence of CH, so I believe. They take a topos-theoretic point of view.

Any chance of an exposition of their exposition?

Comment #8 October 31st, 2020 at 6:15 pm

nosy snoopy #5: Huh? What are you talking about, and what does it have to do with anything? 🙂

Comment #9 October 31st, 2020 at 6:17 pm

uhoh #7: Topos-theoretic? Sheaves? Sorry, I’ll pass. There are enough difficult concepts here as it is, without needing to add extras!

Comment #10 October 31st, 2020 at 6:37 pm

Really looking forward to this series! I’ve tried a couple times to understand this ‘models’, ‘large cardinals’, etc. business but never really cracked it. Even reading part one clarified my understanding a lot.

Comment #11 October 31st, 2020 at 6:54 pm

I #2, Scott #4: Let me defend the “standard semantics” or “full semantics” of second-order logic. It can define the natural numbers in a suitable sense, as I hope to explain below. (One reason why you cannot define set theory in a similar suitable sense is simply that you don’t know what exactly you want set theory to be, or worse, the thing that we (or rather Cantor) wanted set theory to be is inconsistent.)

I want to tell you my “understanding” of those subtle issues with second-order logic. My “understanding” was shaped in the context of Stewart Shapiro’s “Foundations without Foundationalism: A Case for Second-order Logic”. He defends the “standard semantics” or “full semantics” of second-order logic as valid semantics. I agree that it is a valid semantics, but I forgot Shapiro’s arguments (read it around 2014), I only remember my own thoughts:

The “full semantics” allows “last word” specifications, and the second-order induction axiom is a prime example of a “last word” specification: The language might initially only contain successor and neither addition nor multiplication. But as soon as you define addition, multiplication, exponentiation, or any other operation, the induction axiom will immediately apply to any property you can define using such newly defined operations. And if you define a powerful set theory with unbelievably strong inaccessible cardinal axioms, the induction axiom will also immediately apply to any property you can define using such a powerful set theory.

On the other hand, the danger with “last word” specifications is that there can be only one “last word”. As soon as there happen to be two (or more) “last words” contradicting each other, there will be a problem (if it is unclear which “last word” should “win”).

Comment #12 October 31st, 2020 at 7:14 pm

What is a “model” metaphysically? Is it completely in the sky? If it’s a countable one, is there maybe a way to think of it in terms of programs? Speaking as a member of your target audience here 😉

Comment #13 October 31st, 2020 at 7:23 pm

Rehashing an old joke from here, by myself and Joseph Hertzlinger:

Kurt Gödel died in 1978, and since he was such a great mathematician, he of course went immediately to Heaven.

When he got there, a welcoming committee met him and as with all new arrivals, gave him wings and a harp, showed him to a nice cloud, and asked him if he had any questions.

The first thing he wanted to know was whether CH was true.

He was told: “Yes, it’s true. The proof is infinitely long, but that’s not a problem up here in Heaven. It was proved by Cantor himself, a while after he got here. Would you like to see the proof?”.

Of course Gödel said yes. He was taken to a room containing a gigantic scroll whose paper got thinner and thinner as it unrolled, to accommodate its infinite length. Gödel began to read the proof, and the other heavenly set theorists gathered around him to hear what he had to say.

Eternity passed, another eternity, an eternity of eternities, an eternity of eternity of eternities …

Finally, Gödel sat up, pointed to a spot in the proof, and said “there’s a mistake at line \(\varepsilon_0 \)!”

Comment #14 October 31st, 2020 at 7:42 pm

I’ve always felt like |Pow(ℕ)| was pretty concrete, whereas ℵ1=|the set of countable ordinals| seems much weirder. I guess the “power set” operation must be more mysterious than I’d been giving it credit for – maybe that’s the same mistake you’re talking about.

Thanks for writing this! I miss this subject. Really looking forward to the rest.

Comment #15 October 31st, 2020 at 8:02 pm

So, just to confirm. There is NO set with cardinality b/w the naturals and the set of all subsets of naturals? And that is NOT the statement independent of ZFC. That statement is just true? The real statement is that the reals (continuum) can have a cardinality greater than the set of all subset of the naturals?

Comment #16 October 31st, 2020 at 8:03 pm

Scott, can you give an equally clear explanation of why you “don’t believe in second order logic”? I don’t quite understand why first order logic is the mainstream in logic. My confusion is that when I read that second order logic allows quantification over predicates, that sounds like a very common mathematical technique. Certainly quantifying over functions is standard: “blah blah, we prove that for any continuous function f:A->B there is a blah blah….” But at the same time, I understand that basically all “normal” mathematics (meaning, the kind of stuff that non-set-theorists care about) can be expressed in first order logic.

Comment #17 October 31st, 2020 at 8:24 pm

Jacob #15: No, that’s not right. There are exactly as many real numbers as subsets of the naturals. When e.g. Cohen shoves aleph-2 new real numbers into his model, you could instead say that he’s making new subsets of the natural numbers.

DavidC #14: I think you are exactly correct that the power-set operation is more mysterious than we are inclined to think.

Comment #18 October 31st, 2020 at 8:25 pm

Ah, wow, I think I get it! I believe I had the same misconception you did.

Here’s what I think I understand now: ZFC gives us some “window” onto the reals, based on what’s reachable through construction, but models the rest of the reals even though it can’t actually reach all of them. So the CH question then is how much of the reals that is, and how many times you would have to “boost” ZFC with additional reals until that window encompassed the continuum. Does that sound right?

Comment #19 October 31st, 2020 at 8:45 pm

Mg #12:

What is a “model” metaphysically?

Good question! I’ll say more about it later, but for now, a “model” of some axioms is just a collection of objects, and relationships between the objects, that satisfies the axioms.

So for example, a model for the axioms of a group (identity, inverses, associativity) is just …

a group, with different groups providing different models. A model for the axioms of a field is a field, and a model for the axioms of set theory is … well, it ought to have a cool name, like “mathematical universe,” but for some reason it’s most often just called a “model of set theory.” Unlike with groups or fields, even the tiniest models of ZFC are extraordinarily complicated and countably infinite objects. But “metaphysically” they’re very much the same kind of thing as an infinite group or infinite field.

No, I wouldn’t think of a countable model of ZFC as a computer program—indeed, under a reasonable encoding, there’s not even a computer program to enumerate all the ordinals in such a model (since otherwise the model could have no uncountable ordinals)—although note that there’s certainly a program to enumerate all the statements that are true in all such models (which are just the theorems of ZFC).
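To make “a model of the group axioms is just a group” concrete, here’s a minimal sketch (the helper `check_group` is my own, not any standard library function): a finite structure, given as a carrier set plus an operation, models the group axioms exactly when a brute-force check of those axioms passes.

```python
from itertools import product

# A "model" of the group axioms, concretely: a carrier set plus an
# operation table. check_group verifies closure, associativity,
# identity, and inverses by brute force; a finite structure passes
# exactly when it is a group, i.e. a model of those axioms.

def check_group(elements, op):
    elements = list(elements)
    if any(op(a, b) not in elements
           for a, b in product(elements, repeat=2)):
        return False  # not closed under op
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(elements, repeat=3)):
        return False  # op is not associative
    identities = [e for e in elements
                  if all(op(e, a) == a == op(a, e) for a in elements)]
    if not identities:
        return False  # no identity element
    e = identities[0]
    # every element must have an inverse
    return all(any(op(a, b) == e for b in elements) for a in elements)

# Z/3 under addition mod 3 is a model of the group axioms;
# {0,1,2} under subtraction mod 3 is not (subtraction isn't associative).
print(check_group(range(3), lambda a, b: (a + b) % 3))  # -> True
print(check_group(range(3), lambda a, b: (a - b) % 3))  # -> False
```

Models of ZFC are the same kind of object in principle, except that the carrier set is infinite and the “operation” is the membership relation, so no brute-force check is possible.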

Comment #20 October 31st, 2020 at 8:50 pm

asdf #13: Haha!

DavidC #14: Yes, exactly! Experts will rarely say it explicitly—it’s too obvious to them—but “2^{ℵ_{0}} is natural, ℵ_{1} is weird” is one of the biggest misconceptions one needs to get over in this subject.

Jacob #15: No. Exactly as g #17 said, CH is *precisely* the statement that there’s no set with cardinality between the naturals and the subsets of the naturals, and CH is independent of ZFC. On the other hand, it’s a (trivial) theorem of ZFC that there’s no set with cardinality between ℵ_{0} and ℵ_{1}. The point is that ℵ_{1} might be a lot smaller than the cardinality of the set of subsets of the naturals.

Comment #21 October 31st, 2020 at 8:56 pm

Tim McCormack #18: Yes, that’s about right! Every model of ZFC will contain what are called the “constructible” reals, which (at least according to that model) ~~there are exactly ℵ_{1} of~~ (if the models contain non-constructible sets, then there can also be only countably many constructible reals—please see the clarifications below!). The models differ from each other in whether they contain “non-constructible” reals and if so, how many.

The main complication here is that even the “constructible” reals, themselves, can differ from one model of ZFC to the next … as can pretty much anything with an uncountable cardinality!

If you want something that’s absolute—that is, the same in all models of ZFC—you could look further inside the constructible reals for, e.g., the computable reals (of which there are only countably many). But of course, given any countable set of reals, you can always diagonalize against it to produce a new real not in the set.

Comment #22 October 31st, 2020 at 9:04 pm

“OK, but why a doofus computer scientist like me? Why not, y’know, an actual expert?”

Don’t sell yourself short! Although you’re not an expert in set theory, “merely” the closely related field of theoretical computer science, you are an expert in writing expositions from all of your time blogging here. And as you point out, there isn’t a lot of prior work in this area; in spite of all of the other experts having their chance for so long, this blog series will be one of the first expositions of forcing aimed at such a large audience. If set theory experts don’t like it, challenge them to make something better!

Comment #23 October 31st, 2020 at 9:27 pm

Scott #20: I’m still trying to get some intuition for \(2^{\aleph_0}\) being weirder. It’s not just that most elements are unnameable, of course — that’s true of anything uncountable.

I guess it’s maybe pretty structureless — (aside from a small countable portion) just a big mess of stuff?

Whereas those countable ordinals that make up ℵ1 are pretty structured.

I imagine your future posts will help too.

Comment #24 October 31st, 2020 at 9:32 pm

Are you aware of the Flypitch project? They recently completed a formal proof of the independence of CH using the Lean theorem prover, which I thought was pretty cool.

Comment #25 October 31st, 2020 at 9:37 pm

I have been wondering about this. Most reals can never be named or exhibited in any way. In what sense do the unnameable reals really exist? I think they only “exist” in the sense that the ZFC axioms can be used to prove statements like “the unnameable reals exist”?

Does this mean that our axioms are *too strong*? There are these objects that the axioms assert to “exist” but which we can never actually encounter. We can prove theorems about them but never actually see any examples. Why not retreat to some weaker set of axioms that only assert the existence of the nameable reals? When we get into these terrible difficulties over which infinite sets actually exist, when we can’t actually produce any of them, are we just dealing with the unfortunate side effects of choosing overly-powerful axioms?
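One way to make “most reals can never be named” precise: in any fixed naming system, a name is a finite string over some finite alphabet, and finite strings can be enumerated one by one—so at most countably many reals ever receive names, while the reals are uncountable. A minimal Python sketch of that enumeration (the two-letter alphabet is just for illustration):

```python
from itertools import count, product

def all_finite_strings(alphabet):
    """Enumerate every finite string over a finite alphabet: all strings
    of length 0, then length 1, and so on. Every possible "name"
    (definition, formula, program text) appears at some finite position,
    so the set of names is countable."""
    for n in count(0):
        for chars in product(alphabet, repeat=n):
            yield ''.join(chars)

gen = all_finite_strings('ab')
first = [next(gen) for _ in range(7)]
assert first == ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Cantor’s diagonal argument then shows that any such countable list of named reals misses some real, which is why “only the nameable reals exist” is hard to turn into a stable axiom system.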

Comment #26 October 31st, 2020 at 9:52 pm

I second Matt’s question: were you (Scott) just joking about not believing in 2nd order logic?

Comment #27 October 31st, 2020 at 10:20 pm

DavidC #14:

IIRC, Roger Penrose remarks in a footnote somewhere in The Emperor’s New Mind that Cohen, in one of his papers (I think the year was 1966), proposes a “reflection principle” based on the Power Set concept that made the Continuum Hypothesis “obviously false”. I think the idea was something along the lines of: the Power Set operation is “obviously” so much more powerful than any of the operations involved in constructing the sequence of infinite ordinals and infinite cardinals, that the Power Set operation, when applied to a particular set (in this case aleph-0) must give us a set that is “larger” than any of the sets we can reach by applying the other operations.

Comment #28 October 31st, 2020 at 10:21 pm

matt #16:

Scott, can you give an equally clear explanation of why you “don’t believe in second order logic”? I don’t quite understand why first order logic is the mainstream in logic.

Truthfully, every time I read about second-order logic I get confused (just like with category theory, lambda calculus, type theory, and other “higher-level languages”), whereas everything I read based on first-order logic seems 1000x clearer to me. So I’m not the best person to field your question!

FWIW, though, the core of my difficulty is: OK, if you want to quantify over functions or relations, that’s fine, but then why not just promote whatever it is you want to quantify over into primitive objects of your theory, and thereafter use first-order logic? Why mix together the question of which objects to quantify over with the seemingly more basic question of the rules of logic; why not just enforce a clean separation between the two? And then I read from experts that, yeah, you can enforce such a clean separation, you can simulate second-order theories using first-order theories, and I think, “then that’s the route for me, case closed,” and my only remaining confusion is about why anyone chooses the other route!

But crucially, even if I’m missing some important benefits of second-order logic, virtually the whole discussion of forcing and the continuum hypothesis takes place in the context of ZFC, which is a first-order theory. So the restriction to first-order theories would still make sense for this sequence of posts.

Comment #29 October 31st, 2020 at 10:26 pm

[…] https://www.scottaaronson.com/blog/?p=4974 […]

Comment #30 October 31st, 2020 at 10:34 pm

DavidC #23: Again I think your comment is on the right track! With ℵ_{1}, the point is that you build up to it in a “rigid” way, by defining larger and larger countable ordinals and then taking the supremum over all of them. Whereas you can only reach the continuum by applying the power set operation, something that could “leap transcendently” through the hierarchy of ordinals in an uncontrolled way and land who knows where. And the fact that, when we’re learning math, we meet the real numbers, the infinite binary sequences, and other examples of sets of cardinality continuum so much earlier and more often than we meet the hierarchy of ordinals, obscures the fact that it’s really the latter that ends up being the skeleton of the universe of set theory.

Comment #31 October 31st, 2020 at 11:00 pm

The_Duck #25:

I have been wondering about this. Most reals can never be named or exhibited in any way. In what sense do the unnameable reals really exist? I think they only “exist” in the sense that the ZFC axioms can be used to prove statements like “the unnameable reals exist”?

Does this mean that our axioms are *too strong*? There are these objects that the axioms assert to “exist” but which we can never actually encounter…

These are excellent questions. I think the key to answering them is to remember that the whole concept of a real number’s being “nameable” is always relative to your naming system. I.e., whenever you realize that some particular real of interest to you lacks a name, you can simply give it one (“Rita the Real”), and that process can continue as long as you like (provided you always take care to leave ℵ_{0} unused names). This implies that the “nameable reals” will never form a well-delineated collection.

It’s reminiscent of the old paradox of “the least natural number N that’s so uninteresting that it doesn’t deserve its own Wikipedia page.” As soon as someone notices that N, it becomes interesting enough to deserve a Wikipedia page, and then we’re back where we started! (Apparently Wikipedia actually needed to create a policy to avoid this issue.)

Now suppose someone said, “the axioms of the natural numbers are too strong, because they imply the existence of an infinity of natural numbers that no one will ever care about or use! We ought to find better axioms: ones that only assert the existence of finitely many natural numbers, including all the interesting ones.” The trouble is, the second after you’ve done that, I can’t help myself from getting mighty interested in the smallest natural number that doesn’t exist according to your axioms… 😀

Comment #32 October 31st, 2020 at 11:02 pm

I just got past your description of the four layers of abstraction, and I think that is a great starting point. But I’m not sure that it’s really accurate to talk about any given model as being “mistaken” about the number of sets it has, because each model is incapable of talking about how many sets it has. While some axiomatizations of set theory may allow you to have a universe object, ZFC doesn’t, which means that the logic has to be “turtles all the way down” for reasons even beyond Gödel.

Comment #33 October 31st, 2020 at 11:03 pm

Matt #24:

Are you aware of the Flypitch project? They recently completed a formal proof of the independence of CH using the Lean theorem prover, which I thought was pretty cool.

Yes, I came across that in my reading on the subject, and was duly impressed!

Comment #34 October 31st, 2020 at 11:13 pm

scott #28: “and my only remaining confusion is about why anyone chooses the other route… even if I’m missing some important benefits of second-order logic…”

Isn’t one advantage of 2nd order logic that it allows for categorical axiomatizations of theories for which 1st order logic doesn’t? For instance, 2nd-order Peano arithmetic has a unique model up to isomorphism.

Also, 2nd-order ZFC is almost categorical (i.e. for any two models, one must be isomorphic to an initial sequence of the other), suggesting that in a 2nd-order context the continuum hypothesis has a definite answer.

Comment #35 October 31st, 2020 at 11:13 pm

Alexis Hunt #32:

But I’m not sure that it’s really accurate to talk about any given model as being “mistaken” about the number of sets it has, because each model is incapable of talking about how many sets it has.

Yeah, I noticed that issue, but then I decided it was OK, because while ZFC can’t assign a cardinality to its entire universe, it can clearly prove a theorem saying informally that “there exist at least 2^{ℵ0} sets” (or replace 2^{ℵ0} by any other cardinal of your choice). And that’s enough to create the tension with the fact that, from the outside, the universe might well have only ℵ_{0} sets. This is precisely the famous Skolem paradox. The resolution, of course, is that when you prove that the universe contains at least 2^{ℵ0} sets, it only means 2^{ℵ0} sets as viewed from the inside.

Comment #36 October 31st, 2020 at 11:26 pm

grue #34:

Also, 2nd-order ZFC is almost categorical (i.e. for any two models, one must be isomorphic to an initial sequence of the other), suggesting that in a 2nd-order context the continuum hypothesis has a definite answer.

I mean that sounds nice, but if we get zero additional insight about which “definite answer” it is, or even what the answer depends on, then haven’t we just swept the question under the rug? In what sense have we even made the answer more “definite,” as opposed to playing word-games? I gotta say, it reminds me of all the people who think you can dispense with Bell’s Theorem by simply redefining the concept of “locality” so that it includes what everyone previously called nonlocal effects… 😀

Comment #37 October 31st, 2020 at 11:39 pm

I #2, matt #16, Scott #28:

Echoing Scott here as one of the second-order skeptics, but I’d like to make some additional points about why. (Although, some of these will just be expanding on what Scott already said.)

I think the first thing to note here is that the word “logic” seems to actually refer to two different things? Well, sort of — you’ll see. Let’s call them “descriptive logic” and “deductive logic”. Descriptive logic is about what structures (models) satisfy what statements. So, you define a language, you define how to interpret the connectives and such of that language, and then if you have a would-be model with meanings assigned to the primitives, you can say which statements it makes true and which statements it makes false. Whereas by deductive logic I mean, what are the rules of inference, and what sets of statements prove what other sets of statements under these rules.

It seems to me that this distinction is often not well-appreciated? I mean yes, you have model theory vs proof theory, but putting it that way frames it as two ways of studying the same thing, when really there appear to be two different things. Now by Gödel’s completeness theorem, for first-order logic these are actually the same, but… well, it’s not quite that simple.

So here’s the confusing thing. I said above that the word “logic” refers to two things. But as best I can tell, in technical mathematical usage, the word “logic” refers purely to what I called “descriptive logic”. Which is confusing, because going by the ordinary usage of the word, you would expect it to mean “deductive logic”. But actually in technical usage that seems to be called “proof calculus”?

But, as mentioned above, in the case of first-order logic, Gödel’s completeness theorem lets us elide the distinction. The term “first-order logic” properly refers to the descriptive logic, not the deductive logic! And indeed there are various proof calculi for first-order logic, but the distinction ends up not being important to anyone other than proof theorists, because they’re all equivalent. So, whatever; you can refer to any of these proof calculi as simply “first-order logic” and people will get the point. They won’t know which one in particular you’re referring to, but, again, it won’t matter.

But for second-order logic we don’t have that. So what do we even mean by “second-order logic”? As far as I can tell, the term “second-order logic” refers purely to descriptive logic! You don’t use second-order logic to prove things, you only use it to describe things. This is not my area, but as far as I’m aware, there is neither any proof calculus called “second-order logic”, nor is there one that people could conventionally infer you mean if you just say “second-order logic”. I gather there are various second-order proof calculi, but none of them are complete (because that’s impossible) and there are real differences between them.

So when someone says “use second-order logic”, my first question is, which second-order logic? On the descriptive side that’s specific enough, sure, but on the deductive side it isn’t! And when you’re trying to lay the foundations of mathematics, you need the deductive side. You need to, y’know, actually perform deductions from the axioms.

This, btw, is another thing that took me a long time to grasp about mathematical logic — that so much of it is done within standard mathematics! So many things about logic — indeed the entire idea of descriptive logic — only make sense from a point of view where you already have standard mathematics, and you’re now just regarding statements as ordinary mathematical objects, and reasoning about them via some metatheory. And that’s fine for what it is, but obviously that can’t serve as a foundation for mathematics; for a foundation you need that pure deductive logic, where you don’t reason about statements, you just combine them via the allowed rules of inference to prove new things!

Anyway, yeah. Second-order logic is fine for descriptive purposes, but for deductive purposes, including those discussed here, it seems to present more problems than it solves, at least until such time as people can agree on a standard second-order deductive logic (proof calculus). And, again, this isn’t my area but as far as I’m aware that doesn’t exist. Until then, better to make such ideas as sets and functions first-class citizens and use first-order logic on them. Get second-order stuff by emulating it first-order with set-theoretic axioms. (And as it turns out, those are rich enough that you don’t even end up needing any other axioms beyond those to found mathematics on!) I think it’s the standard approach for good reason.
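The descriptive/deductive distinction drawn above can be illustrated with a toy propositional system (entirely illustrative — nothing here is specific to second-order logic): the descriptive side evaluates statements in a model, while the deductive side derives statements from axioms by an inference rule, never consulting any model.

```python
def holds(formula, model):
    """Descriptive side: evaluate a formula in a model. A formula is
    either an atom (a string, true or false per the model) or a pair
    (p, q) meaning the implication p -> q."""
    if isinstance(formula, str):
        return model[formula]
    p, q = formula
    return (not holds(p, model)) or holds(q, model)

def derive(axioms, steps):
    """Deductive side: close a set of formulas under modus ponens
    (from p and p -> q, conclude q), for a fixed number of rounds."""
    known = set(axioms)
    for _ in range(steps):
        known |= {f[1] for f in known
                  if isinstance(f, tuple) and f[0] in known}
    return known

axioms = {'a', ('a', 'b'), ('b', 'c')}
theorems = derive(axioms, steps=3)
assert 'c' in theorems  # proved purely syntactically, no model in sight

model = {'a': True, 'b': True, 'c': True}
assert all(holds(f, model) for f in axioms)  # this model satisfies the axioms
```

For first-order logic, Gödel’s completeness theorem says these two sides coincide (what’s provable is exactly what holds in every model); the comment’s point is that for second-order logic no proof calculus achieves that.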

Comment #38 October 31st, 2020 at 11:50 pm

Scott, when you say this—

If there exist models of ZFC that to us are obviously “fake,” then doesn’t this suggest that ZFC is missing some self-evident axioms? Would it be possible for us to add some more axioms (possibly uncountably many more if Löwenheim-Skolem requires it), at least to rule out some of these more obviously fake universes?

Similarly, on this point—

If it’s so easy to add in new real numbers, shouldn’t this mean that something was missing from our original axioms, or that our definition of the reals was incomplete in some way?

Comment #39 October 31st, 2020 at 11:54 pm

Thanks for posting, I guess! Although the relationship of this post to Cohen’s exposition is the same as the relationship of your discussion of Turing machine computability to the parsimony repo that described how to compile Laconic. (I love analogies.)

asdf #13: This is more or less the kind of afterlife that I envision myself living, to the extent that I believe in an afterlife at all.

Scott #19: To preserve the parallelism of groups/group theory, fields/field theory, etc., I hereby propose that models of the axioms of ZFC should be called ZFCs, and the axioms of ZFC as a formal theory should be called ZFC theory (and models of PA should be called PAs, etc.).

(Note: I have many other hot takes like this, but I fear that listing them out of the blue here would put me in a generally untrustworthy reference class. So from now on my extremely hot takes on any subject ever (cached if prepared, generated on the fly if not) will be available by request only.) 🙂

Comment #40 November 1st, 2020 at 12:07 am

There’s a guy called Michiel van Lambalgen who wrote a fantastic book about the axiom of choice. He argued that we should drop it in favour of an alternative axiom stating that one can make, in a sense, completely random choices from infinite sets. One of the axioms is that if you do that twice you’ll *never* pick the same one both times. (Think of a uniform distribution on the natural numbers. Let it be!) He showed that this axiom destroyed many monstrosities in ordinary ZFC, and rehabilitated von Mises’ definition of randomness. The axiom of choice is quite artificial: we just arbitrarily project an intuition about finite things onto infinite ones. OK, so it makes quite a lot of things in maths tidy (existence proofs when you want things to exist but don’t actually have an argument). But it also creates monsters like the Banach-Tarski “construction”. And it scuppered von Mises.

And it would be so nice for measure theory if all subsets of the real line were measurable.

Comment #41 November 1st, 2020 at 12:23 am

@Scott #28:

“you can simulate second-order theories using first-order theories”

How would you “simulate” the least upper bound property of the reals (which, as I understand it, is standardly claimed to only be expressible using second-order logic) in a first-order theory?

ISTM that if you could do that, you could construct a first-order theory such that the only sets in any semantic model of that theory that satisfied all the properties of the reals would be uncountable. But if I understand what is being said in this article correctly, in particular the Lowenheim-Skolem theorem, that should be impossible.

Comment #42 November 1st, 2020 at 12:24 am

Sniffnoy #37: Thanks!! That was helpful.

Comment #43 November 1st, 2020 at 12:29 am

Too bad you hadn’t written this sooner — 50 years ago I switched from Math to Physics because this stuff was giving me a headache. (I was careful not to think too hard about what quantum mechanics really means.)

I still think that everything is easy once you know it. There is just lots of stuff I don’t know yet.

Comment #44 November 1st, 2020 at 12:39 am

Richard Gill #40, the real tragedy of the axiom of choice is that King Solomon didn’t know about it. If he did, he could have figured out the Banach-Tarski Paradox. Then instead of cutting the baby into two pieces, he could have suggested 5 pieces and made both mothers happy.

Comment #45 November 1st, 2020 at 12:39 am

Chris J. #38:

If there exist models of ZFC that to us are obviously “fake,” then doesn’t this suggest that ZFC is missing some self-evident axioms? Would it be possible for us to add some more axioms (possibly uncountably many more if Löwenheim-Skolem requires it), at least to rule out some of these more obviously fake universes?

The trouble with what you’re asking for is that every usable system of axioms for set theory will admit “fake” models—that’s the whole point of Löwenheim-Skolem. (Note that, in a practically usable logic, you can’t have uncountably many axioms, since each axiom needs to be encoded by a finite string of bits.) Apparently there were big debates about this in the early 20th century; Zermelo took the fake models as an argument for rejecting first-order logic itself. But second-order logic doesn’t materially improve the situation (see above).

More to the point, we can still prove lots of theorems that hold in all ZFC universes, fake and real! From that perspective, the purpose of the fake universes is to teach us about what doesn’t hold in all the universes, and what we therefore can’t hope to prove about the real universe without adding new axioms. And inventing new axioms and exploring their consequences has indeed been a central preoccupation of set theorists for the whole history of the subject, and by now the Axiom Zoo might put even the Complexity Zoo to shame. 🙂

Similar remarks apply to your question about the ZFC axioms failing to capture all the interesting properties of the real numbers.

Comment #46 November 1st, 2020 at 12:42 am

Peter Donis #41:

How would you “simulate” the least upper bound property of the reals (which, as I understand it, is standardly claimed to only be expressible using second-order logic) in a first-order theory?

Simply by using a first-order theory (like, say, ZFC) that’s able to talk about sets of reals.

Comment #47 November 1st, 2020 at 12:44 am

Raoul Ohio #43: I also wish I’d written this much sooner! But 50 years sooner would’ve presented some ontological difficulties for me. 🙂

Comment #48 November 1st, 2020 at 1:00 am

@Scott #46:

“Simply by using a first-order theory (like, say, ZFC) that’s able to talk about sets of reals.”

Ah, I see–we could express the least upper bound property in terms of sets of reals using ZFC; but since the theory is a first-order theory, it would have a countable model in which the set that the theory calls the “reals” is actually countable, but still has the least upper bound property in terms of the sets that exist within the model.

Assuming that is correct, I still need to wrap my brain around *how* a countable model of such a theory would work. But one thing at a time, I guess.
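For concreteness, here is roughly how the least-upper-bound property under discussion becomes a single first-order sentence once sets of reals are ordinary objects of the theory (informal sketch, not a fully formalized ZFC formula; \(\mathbb{R}\) and \(\leq\) are assumed already defined inside the theory):

```latex
\forall A \, \Big[ \big( A \subseteq \mathbb{R} \,\wedge\, A \neq \emptyset
    \,\wedge\, \exists b \, \forall x \, (x \in A \rightarrow x \leq b) \big)
  \;\rightarrow\;
  \exists s \, \big( \forall x \, (x \in A \rightarrow x \leq s)
    \,\wedge\, \forall b \, ( \forall x \, (x \in A \rightarrow x \leq b)
      \rightarrow s \leq b ) \big) \Big]
```

Here \(A\) ranges over sets, which are first-order objects of ZFC — exactly the move Scott describes. In a countable model, \(A\) ranges only over the countably many sets of “reals” that the model happens to contain, which is how the sentence can hold there without contradicting Löwenheim-Skolem.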

Comment #49 November 1st, 2020 at 1:09 am

This was great! I look forward to Part 2, assuming society is still intact.

Scott, do you think CH is an important question aside from its obvious significance in history and mathematical culture? In other words – would mathematicians from the planet Zog be as interested in CH as we are? It doesn’t seem clear to me that it is as natural of a problem as “non-meta” math questions such as, say, the Twin Primes Conjecture, since the question seems so specific to ZFC which aliens probably would not have invented. Or is there some sense in which the question naturally appears?

And regarding Morpheus – if he ever did come to enlighten those simple model denizens (or us) about the truth of what the continuum actually looks like, I know just what he would say.

Comment #50 November 1st, 2020 at 1:28 am

Jacob #14, a model is metaphorically like an answer to one of the old jokes like “What has 18 legs and catches flies?”. The usual answer (“standard model” as it were) is “a baseball team”, but you could also imagine that there is an unusual species of 18-legged aardvarks that also satisfies the description, so it is also a correct answer (an alternate model).

In logic, a “theory” is some axioms and inference rules and the resulting deductions or “theorems”, and a “model” is a set of objects that satisfies the sentences (theorems) of the theory. For example, a theory might say “1) there is an object called Z. 2) for every object X, there is another object called X+, which we’ll call the successor of X. 3) etc. etc.” The obvious model of this theory is the familiar set of natural numbers, but this is underspecified enough that lots of other models are possible (think of a set with just one member Z, which is its own successor).

In the old days (1920s), there was a belief that the familiar axioms of arithmetic, so called Peano arithmetic or PA, had just one model, the actual natural numbers N. This belief was dispelled by Gödel’s incompleteness theorem in 1931. Gödel proved that PA or any theory like it had to have multiple models. N (the natural numbers) is the “intended” or “standard” model of PA, but there are also “nonstandard” models with perplexing properties, such as numbers that are larger than any of the standard integers. Does this help?
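The underspecification in that toy successor theory is easy to check mechanically for finite candidate models. A minimal Python sketch (the two axioms are just the ones listed above; axiom 3’s “etc.” is omitted):

```python
def satisfies_toy_axioms(elems, z, succ):
    """Check the two stated axioms: (1) the object Z is in the domain;
    (2) every object's successor is also in the domain."""
    return z in elems and all(succ(x) in elems for x in elems)

# The one-element model: Z is its own successor. It satisfies both
# axioms, despite looking nothing like the natural numbers...
assert satisfies_toy_axioms({'Z'}, 'Z', lambda x: 'Z')
# ...and so does any finite cycle, e.g. the integers mod 5 under x+1.
assert satisfies_toy_axioms(set(range(5)), 0, lambda x: (x + 1) % 5)
```

Peano arithmetic adds axioms (injectivity of successor, induction, etc.) that kill off these particular unintended models, but — as the comment goes on to explain — Gödel showed that no such first-order theory ever pins down the naturals uniquely.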

Matt #15, a traditional criticism of SOL is that it is “set theory in sheep’s clothing”. Quantifying over predicates on the integers really means quantifying over the uncountably many subsets of the integers, so now you have to axiomatize those subsets, which means you need a theory of the reals. Similarly, the second-order theory of the reals (where the completeness axiom quantifies over subsets of the reals) has you needing a theory of those subsets. Since that theory has a unique model (the standard reals R), and that model either satisfies CH or else it doesn’t, if we believe in second-order logic we have to say “CH has a unique truth value but we have no way to find out what it is”. In other words second-order logic doesn’t have a useful proof theory.

First-order logic on the other hand has the completeness and compactness theorems (if you can prove some sentence S in a theory T, then S is true in every model of T; and also, you can prove S using only a finite subset of T’s axioms). This makes proof theory much more tractable. The wikipedia article about FOL is very good and explains some of this.

Comment #51 November 1st, 2020 at 1:29 am

Meh, I got some of the above incorrect. I don’t think Hilbert’s school in the 1920’s actually thought PA was complete: rather, they were only trying to prove that it was consistent, and that set theory was consistent, starting from some even weaker axioms that we now call primitive recursive arithmetic (PRA). But from a metaphorical standpoint, what I said was a usable substitute for (non-historical) purposes of the example.

Comment #52 November 1st, 2020 at 1:40 am

Jair #49:

Scott, do you think CH is an important question aside from its obvious significance in history and mathematical culture? In other words – would mathematicians from the planet Zog be as interested in CH as we are? It doesn’t seem clear to me that it is as natural of a problem as “non-meta” math questions such as, say, the Twin Primes Conjecture, since the question seems so specific to ZFC which aliens probably would not have invented.

It’s an excellent question. As I’ll discuss in the next installment, ZFC has some features that seem historically contingent; I’m not at all confident that the mathematicians from planet Zog would’ve converged on the same system. On the other hand, one thing I learned in the last few months is that questions like CH can be posed, and the forcing method can be used to build models where they have different answers, in many set theories besides just ZFC! So I’m not sure how much the non-inevitability of ZFC even matters. In any case, the question “what can we say about the next infinity after infinity?” seems so fundamental that I feel at least as confident that the Zoggians would’ve noticed it, as that they would’ve noticed the twin primes question.

Comment #53 November 1st, 2020 at 1:43 am

@Scott #46:

“Simply by using a first-order theory (like, say, ZFC) that’s able to talk about sets of reals.”

One possible answer to the question of “why use second-order logic”, in view of this example, would be: to be able to pin down the theory to just one semantic model. Obviously this cannot be done for the reals in first-order logic, since the “real” reals (the uncountable set that most people have in mind when they say “real numbers”) is *also* a valid semantic model for the first-order ZFC-based theory that uses sets of reals to express the least upper bound property.

Comment #54 November 1st, 2020 at 1:01 am

scott #36: “but if we get zero additional insight about which “definite answer” it is, or even what the answer depends on, then haven’t we just swept the question under the rug? In what sense have we even made the answer more “definite,” as opposed to playing word-games? ”

You almost seem to be implying that non-constructive proofs can’t provide useful information. Surely it’s a substantive claim that 2nd-order ZFC uniquely determines the answer to CH, even if we don’t *yet* know which answer it entails.

Sniffnoy #37: “Second-order logic is fine for descriptive purposes, but for deductive purposes, including those discussed here, it seems to present more problems than it solves”

I’m still not entirely clear how you’re using the word “deductive”. The set-theoretic question discussed here is whether CH is *entailed* by ZFC (or ZFC2); but such entailment claims are well-defined independent of any particular set of inference rules (which can only capture a subset of entailment claims in the 2nd order case, due to incompleteness). As far as the “deductive” claim as to whether there is a sequence of self-evident logical inferences going from ZFC2 to CH (or not-CH), then it’s an open question whether this exists; but this isn’t a problem caused by 2nd order logic, just a reflection of our current lack of knowledge about CH.

Comment #55 November 1st, 2020 at 1:13 am

grue #54:

You almost seem to be implying that non-constructive proofs can’t provide useful information.

No, a nonconstructive proof at least tells you that some object exists, even while it doesn’t give you explicit access to it. By contrast, knowing that second-order logic “settles” CH seems to me to provide zero information about CH that we didn’t already have. If we can’t make actual deductions about our “one true model,” beyond the ones that we could’ve made anyway in first-order logic, then what’s the point?

Sniffnoy #37 said some of this better than I did.

Comment #56 November 1st, 2020 at 1:18 am

grue #54:

See, this is where I disagree. Mathematics, as most people know it, is about proof, not semantic entailment. When people talk about what questions are answerable within mathematics, they’re talking about proof, not semantic entailment. Etc.

And mathematics has to be about proof and not semantic entailment for the reason I discussed above: semantic entailment, along with descriptive logic generally, is something that only makes sense from inside mathematics. It’s not something you can found mathematics on; that would be circular. By contrast, proof is something that can stand on its own outside mathematics and that mathematics can be founded on.

Comment #57 November 1st, 2020 at 1:50 am

scott #55: “No, a nonconstructive proof at least tells you that some object exists, even while it doesn’t give you explicit access to it. By contrast, knowing that second-order logic “settles” CH seems to me to provide zero information about CH that we didn’t already have.”

Clearly the ZFC2 entailment claim provides information about CH, but perhaps you mean the information is not interesting to you in some sense. In particular, it’s either true that “CH is entailed by ZFC2”, or “not-CH is entailed by ZFC2”. And this substantive claim is not true if we substitute ZFC1 in the above statement, nor is it true if we replace CH with other claims, e.g. an existence claim about inaccessible cardinals (since ZFC2 doesn’t specify the height of the set hierarchy).

But perhaps more importantly, the truth of this entailment tells us something about our set-theoretic axiom systems, i.e. it clarifies that the independence of CH is only true with 1st-order ZFC, and doesn’t hold once we allow quantification over predicates. Which connects back to the notion that it’s entirely possible that we will eventually find a proof of CH, e.g. in ZFC2.

Comment #58 November 1st, 2020 at 3:40 am

Excellent! I particularly enjoyed that clearing of the misconception regarding C and the alephs.

Scott #3: The typo hasn’t been corrected.

Comment #59 November 1st, 2020 at 3:57 am

Scott, what do you mean by talking about the sequence of infinite cardinalities as something fixed or absolute? Doesn’t the fact that models generally “get their own cardinality wrong” show that the cardinality of a set is often model-dependent and subjective?

Truth is, I’m not so convinced that any of this whole discussion has any fixed meaning. I’m much more comfortable in predicative systems, where all the objects are definable and all the sets are countable. Do you see any strong reason to believe in anything beyond that?

Comment #60 November 1st, 2020 at 4:06 am

Does Löwenheim-Skolem imply that there is a ‘model’ of quantum theory based on integers only? And would it help us to distinguish the different interpretations?

I assume here that using C for the amplitudes is the most comfortable way to do physics, but there must be a model using N only – no?

Comment #61 November 1st, 2020 at 5:00 am

> Gödel proves that, if the axioms of set theory are consistent—that is, if they describe any universes at all—then they also describe these minimalist universes. He then proves that, in any of these minimalist universes, there are exactly ℵ1 real numbers.

Can you clarify this last sentence? I am confused, because if a universe has ℵ1 real numbers and this universe is a model of some theory, then by the downward Löwenheim-Skolem theorem there is a universe with ℵ0 real numbers which is also a model of the same theory.

Moreover, suppose a universe has ℵ1 real numbers. Does it follow that this universe itself thinks there’s no cardinality between naturals and reals? I don’t see how.

My best guess is that I don’t understand what exactly you mean when you say that the universe has ℵ1 real numbers.

Comment #62 November 1st, 2020 at 5:39 am

Sniffnoy #37

I am not an expert at all, but as far as I understand, in *topological* (sheaf) semantics, you *do* get completeness theorems for certain systems of higher-order logic. (https://arxiv.org/abs/math/9707206)

Comment #63 November 1st, 2020 at 5:49 am

> “we can ask the question that, incredibly, the Matrix movies never ask. Namely, how do we know that our own, larger universe isn’t similarly a lie?”

But the question *is* asked. In the later Matrix movies, Neo is able to display some of his powers outside the simulation, which suggests that the world of Zion is just another simulation for people who feel a need to rebel against the first one.

Looking forward to the rest of the series.

Comment #64 November 1st, 2020 at 6:15 am

Non-experts are indeed often much better qualified to explain things than experts. The following great quote from Mikhail Gromov goes to the heart of it:

“This common and unfortunate fact of the lack of an adequate presentation of basic ideas and motivations of almost any mathematical theory is, probably, due to the binary nature of mathematical perception: either you have no inkling of an idea or, once you have understood it, this very idea appears so embarrassingly obvious that you feel reluctant to say it aloud; moreover, once your mind switches from the state of darkness to the light, all memory of the dark state is erased and it becomes impossible to conceive the existence of another mind for which the idea appears nonobvious.”

Comment #65 November 1st, 2020 at 7:03 am

asdf #50

I’m not Jacob, but that helped me IMMENSELY. Great analogy, example, and history. A slam dunk of elucidation!

Same for this post! Thanks Scott! I feel like I’ve learned a bunch already from just this post, at least for the high-level concepts.

Comment #66 November 1st, 2020 at 7:11 am

Scott #30: thanks!

Comment #67 November 1st, 2020 at 7:32 am

I’m confused, this sounds like we can prove that the Peano axioms are consistent. In what sense is the word “true” used here? (I assume we cannot prove within F that Con(F) is unprovable, right?)

Comment #68 November 1st, 2020 at 7:57 am

Andrei #58: Sorry, just tried again! Not the first time WordPress has failed to propagate one of my updates.

Comment #69 November 1st, 2020 at 8:11 am

This is just more anti-Trump fake news. In a March 2018 post you already listed Cohen’s technical monograph on the continuum hypothesis as one of your 30 favorite books of all time so it’s obvious you already knew forcing and there’s no reason to blog about learning it as some kind of strange metaphysical escape from current events weighing so heavily on our collective souls.

Comment #70 November 1st, 2020 at 9:21 am

Wait, so we actually know that ZFC, and any other system in which Con(F) can be stated, is consistent? That seems like a rather crucial thing that should be spelled out in more casual intros to Gödel’s theorems, to not leave layfolks and crackpots alike with the idea that it might be that our whole generally accepted system is actually inconsistent. Or am I the only one that was confused like that?

Comment #71 November 1st, 2020 at 9:22 am

Scott #21:

>Yes, that’s about right! Every model of ZFC will contain what are called the “constructible” reals, which (at least according to that model) there are exactly ℵ1 of.

It’s been an awfully long time since I studied this – someone correct me if I’m wrong – but actually can’t you have a model of ZFC satisfying “there are only countably many constructible reals”?

E.g. if you begin with a model of V=L and, using forcing, “collapse” aleph_1 (so that aleph_2 becomes the new aleph_1) then I don’t think you add any extra constructible reals, and since we only started with aleph_1 of them, now there are only countably many.

Comment #72 November 1st, 2020 at 9:28 am

grue #57:

But perhaps more importantly, the truth of this entailment tells us something about our set-theoretic axiom systems, i.e. it clarifies that the independence of CH is only true with 1st-order ZFC, and doesn’t hold once we allow quantification over predicates. Which connects back to the notion that it’s entirely possible that we will eventually find a proof of CH, e.g. in ZFC2.

Suppose someone said: instead of your weak, puny recursively axiomatizable formal systems, why don’t you switch to a stronger formal system—one where all the true statements of arithmetic are simply thrown in for free, as axioms? The Incompleteness Theorem no longer applies to such systems! Sure, it might be a bit harder to use those systems in practice, but the fact that they decide all questions of arithmetic (correctly, no less) raises our hopes that it’s entirely possible that we’ll eventually be able to fulfill Hilbert’s dream and solve the Entscheidungsproblem.

The problem here is so blindingly obvious that one would assume the person was trolling, or joking. It’s not merely that the suggestion “throw in every true statement as an axiom” is hard to work with in practice; rather, it’s that it doesn’t help at all. “If we assumed we had the answer, then we’d have the answer” doesn’t represent the slightest advance over not having the answer.

Now, let me hit the ball into your court as hard as I can: I remain to be convinced that the situation with CH and second-order logic is any different from the above.

Comment #73 November 1st, 2020 at 9:37 am

maline #59:

Scott, what do you mean by talking about the sequence of infinite cardinalities as something fixed or absolute? Doesn’t the fact that models generally “get their own cardinality wrong” show that the cardinality of a set is often model-dependent and subjective?

As I said upthread, the infinite cardinalities are not absolute, as already shown by Löwenheim-Skolem. I meant only that, in some sense, Cantor’s hierarchy of Alephs (like his closely-related hierarchy of ordinals) is more absolute, more fixed in its properties, than the powerset operation, which can “jump around” in an uncontrolled way—and this turns out to be a key intuition that one needs to acquire in this subject.

Truth is, I’m not so convinced that any of this whole discussion has any fixed meaning. I’m much more comfortable in predicative systems, where all the objects are definable and all the sets are countable. Do you see any strong reason to believe in anything beyond that?

Certainly, defining all uncountable sets out of existence feels way too constrained for doing lots of ordinary math—in the sense that, if you forced me to do it, I’d just constantly invent circumlocutions for talking about uncountable sets anyway.

I’ll write a bit in the next post about the question of predicativity vs. impredicativity. For now, let me say: Gödel’s constructible universe axiom V=L is one concrete implementation of the idea that set theory should be “predicative.” And of course, if V=L then CH holds. So, would you be willing to tolerate uncountable sets in the V=L / CH world?

Comment #74 November 1st, 2020 at 9:41 am

wb #60:

Does Löwenheim-Skolem imply that there is a ‘model’ of quantum theory based on integers only?

Only in the boring sense that it would give you a model of quantum theory where all the amplitudes belong to some countable subset of the complex numbers, and could therefore be coded by integers. Which we already knew without Löwenheim-Skolem: consider the subset of complex numbers that are computable! I don’t see how this helps in any way with the interpretation of QM.

Comment #75 November 1st, 2020 at 9:45 am

Phi #61:

Can you clarify this last sentence? I am confused, because if a universe has ℵ1 real numbers and this universe is a model of some theory, then by downward Skolem theorem there is a universe with ℵ0 real numbers which is also a model of the same theory.

I’ve edited the sentence, to clarify that it only means ℵ_{1} real numbers from the standpoint of someone within the constructible universe. From an outside observer’s standpoint, it could be ℵ_{1}, but it could also be ℵ_{0}. Thanks; good catch!

Comment #76 November 1st, 2020 at 9:48 am

Nuño Sempere #63:

But the question *is* asked. In the later Matrix movies, Neo is able to display some of his powers outside the simulation, which suggests that the world of Zion is just another simulation for people who feel a need to rebel against the first one.

I’d forgotten or hadn’t noticed that! It seems like an extremely roundabout way of “asking the question.” Couldn’t it also mean that Zion is the real world, and Neo is just so awesome that he has supernatural powers even there?

Comment #77 November 1st, 2020 at 10:20 am

Frank #64:

Non-experts are indeed often much better qualified to explain things than experts. The following great quote of Mikhael Gromov goes to the heart of it…

Steven Pinker, in his writing guide The Sense of Style, calls this “the curse of knowledge,” and argues that it’s not just one but the central problem of writing. Having a name for it has helped me to notice the curse afflicting myself and others.

Comment #78 November 1st, 2020 at 10:30 am

Tamas V #67 and Ivo #70: No, we don’t know for certain that ZFC is consistent (although I’d probably bet my house on it). But it’s a theorem within ZFC (in fact, within elementary arithmetic) that “ZFC can’t prove Con(ZFC), if and only if Con(ZFC) in fact holds.”

Comment #79 November 1st, 2020 at 10:33 am

Alphahel1x #71: I’d certainly trust someone who’d formally studied this subject over an ignoramus like me. But riddle me this: if there are models of ZFC that can enumerate all their constructible reals, then within those models, why can’t we just diagonalize across all the constructible reals, and thereby produce a new constructible real that’s not in the original enumeration?

(Update: This question is answered below!)

Comment #80 November 1st, 2020 at 10:46 am

Isn’t the fundamental problem that the notion of a continuum, e.g. the reals, or, equivalently, an infinite sequence of anything, is just an impossible concept from a practical point of view? It creates paradoxes.

If space were a true continuum, you could store an infinite amount of information by just making a single notch on a finite stick: if the first bit is zero, pick the upper half of the stick, if one, pick the lower half. Then repeat this subdivision in two for all the bits in the sequence.

Similarly, you can find the description of all the possible universes (finite or infinite) that could ever exist within the expansion of the number PI.

Reality puts some real bounds on all this, i.e. the entire state of the universe is a really big unique bit sequence. That sequence could be infinite, maybe, but there’s also a bound in terms of read/write access, i.e. the universe is a Turing machine with an infinite memory ribbon, but only a finite fraction of that memory ribbon will ever be used.

Would mathematics based on modulo arithmetic (i.e. at least finite in some sense) side step all those paradoxes? Or is this type of math just as problematic?
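The notch-on-a-stick scheme described above can be sketched directly (a toy Python illustration, not from the original comment; `notch_position` is a made-up name). It also shows where reality’s bounds bite: with ordinary 64-bit floats the subdivision collapses after roughly 52 bits.

```python
def notch_position(bits):
    # Encode a finite bit string as a notch position in (0, 1):
    # bit 0 picks the upper half of the stick, bit 1 the lower half,
    # then repeat the subdivision for each subsequent bit.
    lo, hi = 0.0, 1.0
    for b in bits:
        mid = (lo + hi) / 2
        if b == 0:
            lo = mid      # keep the upper half of the current interval
        else:
            hi = mid      # keep the lower half
    return (lo + hi) / 2

# With ideal real numbers this stores arbitrarily many bits in one notch;
# with machine floats the intervals stop shrinking after ~52 subdivisions.
```

The finite-precision failure is the code-level analogue of the comment’s point that physical reality imposes bounds the mathematical continuum lacks.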

Comment #81 November 1st, 2020 at 10:50 am

I find the diagonal argument absurd.

For clarity, let’s arrange those infinite bit sequences (least significant bit first), because it highlights the absurdity without confounding:

s(0) = 00000000...

s(1) = 10000000...

s(2) = 01000000...

s(3) = 11000000...

...

The diagonal is all 1’s. I don’t see how “all 1’s” implies the existence of some unenumerated bit sequence?

The diagonal argument assumes that you can make a square matrix out of writing out all bit sequences. This assumption is trivially false by inspecting the first four natural numbers (to avoid discussion about zero being included or not):

00

10

01

11

Why should a square matrix be assumed for “all bit sequences” but fail for most “not all bit sequences” examples?
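For readers following this exchange: the diagonal argument doesn’t assume a square matrix of anything; it only needs that each listed sequence has a bit at every position, so that bit n of sequence n is defined. A minimal Python sketch, using the enumeration from the comment above (the n-th row encodes n, least significant bit first; function names are made up):

```python
def s(n):
    # The comment's enumeration: bit i of the n-th listed sequence
    # is bit i of the natural number n, least significant bit first.
    return lambda i: (n >> i) & 1

def flipped_diagonal(i):
    # Cantor's construction: flip bit i of the i-th sequence.
    return 1 - s(i)(i)

# flipped_diagonal differs from s(n) at position n, for every n, so it is
# an infinite bit sequence missing from the enumeration. For this particular
# list it is all 1s: a perfectly good infinite bit sequence, but not the
# encoding of any natural number (every natural has finitely many 1 bits).
assert all(flipped_diagonal(n) != s(n)(n) for n in range(10_000))
assert all(flipped_diagonal(n) == 1 for n in range(10_000))
```

The finite 2×4 example in the comment lists finite strings indexed by naturals, which is a different (and genuinely non-square) situation from infinite sequences indexed by naturals.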

Comment #82 November 1st, 2020 at 10:53 am

Scott #79: Because the enumeration of the constructible reals is not itself constructible! Diagonalizing over this enumeration produces a new non-constructible real, just like how diagonalizing over an enumeration of all computable reals produces a non-computable real.

I can corroborate Alphahel1x: A forcing extension which collapses \(\aleph_1^L\) will not introduce any new constructible sets, so \(\aleph_1^L = \aleph_0\) in the forcing extension.

Comment #83 November 1st, 2020 at 10:58 am

>Four Levels to Unpack

Thank you! Someone mentioned that explanations from non-experts are frequently better than those from the experts. I disagree. Most explanations from non-experts just suck. Truth is: « who can name the bigger number »; « Shor I’ll do it »; … it takes some genius to write this kind of material. So glad to read one more!

*

>if someone showed you a proof of either CH or Not(CH)

I’m afraid that’s a very stupid question, but why don’t you write? Con(CH) ⇒ Not(Con(ZFC+CH))

**

>Recall how Cantor’s proof works (…) which in the example above would produce 1011…

Fun fact that I’m sure you already encountered many times: 1011… [followed only by a sequence of zeros] is equal to 1010… [followed only by a sequence of ones], which means this number may well be in Cantor’s enumeration. I know, I know, there is a simple fix through using decimal representation instead of binary numbers. But to my confused mind this still raises the question: when you say:

>What Cantor really shows is only that, within our mathematical universe, there can’t be an enumeration of all the reals of our universe.

What prevents this other interpretation?

« What Cantor really shows is only that, this particular way of creating binary sequences, that’s not enough to produce them all. But who knows we can’t come up with a better scheme? »

In a way we know we can – proof above. In another way we know we can’t – no way mathematicians wouldn’t have noticed it if that counterexample was anything but superficial. How do they know? That’s what I still don’t understand. Can you help?
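The dual-representation fact quoted above can be checked exactly with rationals (a small sketch, not from the original comments; `with_tail` is a made-up helper):

```python
from fractions import Fraction

def with_tail(prefix, tail_bit):
    # Exact value of the binary expansion 0.<prefix> followed by an
    # infinite constant tail of tail_bit. A tail of 1s after a prefix of
    # length k sums, as a geometric series, to exactly 2**-k.
    v = sum(Fraction(b, 2 ** (i + 1)) for i, b in enumerate(prefix))
    if tail_bit == 1:
        v += Fraction(1, 2 ** len(prefix))
    return v

# The comment's example: 1011 followed only by zeros equals
# 1010 followed only by ones -- both are exactly 11/16.
assert with_tail([1, 0, 1, 1], 0) == with_tail([1, 0, 1, 0], 1)
```

This is why careful presentations of the diagonal argument either avoid expansions ending in all 1s (or all 9s in decimal) or work with abstract bit sequences rather than the reals they represent.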

***

>at most ℵ0 elements or else at least C of them.

C is not defined at first occurrence, and yes, it took me an embarrassing amount of time to figure out what it meant.

****

>The devout Cantor thought his discovery illuminated the nature of God; it’s never been entirely obvious to me that he was wrong.

How I love this sentence 🙂

Comment #84 November 1st, 2020 at 11:12 am

Vanessa Kosoy #62: There can be no completeness for the “standard semantics” or “full semantics” of second-order logic. There simply is no corresponding deductive calculus. And without an explicit deductive calculus, you cannot even talk about completeness. There are two standard deductive calculi for second order logic, namely (a) the calculus you get by only adding the predicative comprehension axioms (set existence axioms), and (b) the calculus you get by also adding the impredicative comprehension axioms. As far as I can tell, the deductive calculus specified in that paper is weaker than (b). So the completeness does not even apply to the “more powerful” standard deductive calculus (b) for second order logic. And even (b) is sound but incomplete for the “standard semantics”. I guess this means that (b) will be unsound for the “topological semantics” proposed in that paper.

Sniffnoy #37: Thanks for steering this into a more productive direction. You said this clearer than I could say it. I also tried to convey that second order logic has its strength as a “descriptive logic”. And my point that we don’t even know what we want set theory to be on the descriptive level got completely lost.

grue #34:

Indeed, and the real numbers are described up to isomorphism as “the complete ordered field”. So in both cases we do know what we want to have, and it is a good thing to have a formal language that allows us to describe exactly what we want.

You just replaced the axiom schemes of ZFC by second order axioms here. But this does not necessarily describe what you really want set theory to be. For the deductive calculus (a) with the predicative comprehension axioms, you will just get back normal ZFC. And for the stronger deductive calculus (b) with the impredicative comprehension axioms, you cannot even be sure whether you will not get an inconsistent theory instead.

If you had succeeded in describing what you want set theory to be, then that inconsistency would at least tell you something valuable. But your ZFC2 is just as arbitrary as ZFC, and you would just switch to another arbitrary set theory if your ZFC2 should turn out to be inconsistent.

This is related to Scott Aaronson’s opinions about CH. This series of posts focuses on forcing and the independence of CH from ZFC. But in the past Scott has also used CH as an example of a mathematical statement that might not have a clear true-or-false status, in contrast to arithmetic statements, which Scott is convinced always have a clear true-or-false status, no matter whether they are independent of ZFC or not.

And part of this is related to the fact that we don’t even know what we want set theory to be. So when W. Hugh Woodin tries to resolve CH by his Ultimate L program, figuring out what we really want set theory to be and arguing for it is just as important as the concrete axioms and proofs that he might come up with.

Comment #85 November 1st, 2020 at 11:15 am

@Scott 79: Because the enumeration isn’t in L, your diagonalization doesn’t produce a real in L.

Comment #86 November 1st, 2020 at 11:26 am

Scott #78:

Great, thanks, that’s surprising indeed. Now I know what confused me: when I read “…and Con(F), a statement that encodes F’s own consistency, is an example of one”, I thought you meant that it has already been proved that Con(F) cannot be proved, by somehow looking at F from “outside”, and without assuming Con(F).
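Scott’s statement in #78 can be written out formally (a sketch in standard notation, not from the original comments, with EA denoting elementary arithmetic and \(\mathrm{Prov}_{\mathrm{ZFC}}\) the usual arithmetized provability predicate):

```latex
\mathrm{EA} \;\vdash\; \mathrm{Con}(\mathrm{ZFC}) \;\leftrightarrow\;
\neg\,\mathrm{Prov}_{\mathrm{ZFC}}\bigl(\ulcorner \mathrm{Con}(\mathrm{ZFC}) \urcorner\bigr)
```

The left-to-right direction is Gödel’s second incompleteness theorem; the right-to-left direction is simply that an inconsistent ZFC proves everything, Con(ZFC) included.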

Comment #87 November 1st, 2020 at 11:51 am

Nice post, Scott. Looking forward to part 2. I actually took a graduate algebra course and an undergrad analysis course from Cohen in the late ’60s.

Comment #88 November 1st, 2020 at 11:54 am

@Scott: I think you should post the next installment before the Singularity, I don’t want the world to end without having read it 🙂

Comment #89 November 1st, 2020 at 12:03 pm

I have a slightly contrarian view of R and c, as follows:

N is totally concrete.

Doing addition and subtraction on N leads to Z (exactly, and nothing else).

Doing multiplication and division on Z leads to Q (exactly, and nothing else).

Solving polynomial equations on Q leads to A (algebraic numbers, exactly, and nothing else).

Following Scott’s exposition above, I hereby suggest the notation “\mathbb{D}”, or D, for the “dust” of numbers that anyone has ever heard of. Then D contains all real algebraic numbers, and is countable.

But I think going from D to R is NOT OBVIOUS. I personally like the picture of “filling in the gaps in D”, but do we need this or any other picture?

I think the continuum defined by either Dedekind cuts or Cauchy sequences is purely for the convenience of mathematicians wanting to prove results in analysis.

So, the RO view is that N, Z, Q, and A have a reality forced on us by simple math. The set D is a bit slippery, but is countable. But who knows what R is, or do we really need it?

Here is another way of looking at it: We can compute as much of the binary expansion of any d in D as we please. So just define R as the set of ALL possible countable binary expansions. What’s not to like? You get a cool set with a built-in Cantor construction to prove it is not countable.

But, what good is it?
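The claim above, that we can compute as much of the binary expansion of any d in D as we please, can be illustrated for √2 with exact rational bisection (a toy sketch, not from the original comment; `sqrt2_bits` is a made-up name):

```python
from fractions import Fraction

def sqrt2_bits(n):
    # Compute the first n binary digits after the point of sqrt(2)
    # by exact bisection on the predicate x*x < 2, using rationals
    # so no floating-point error can creep in.
    lo, hi = Fraction(1), Fraction(2)
    bits = []
    for _ in range(n):
        mid = (lo + hi) / 2
        if mid * mid < 2:
            bits.append(1)   # sqrt(2) lies in the upper half
            lo = mid
        else:
            bits.append(0)   # sqrt(2) lies in the lower half
            hi = mid
    return bits

# sqrt(2) = 1.0110101... in binary, so the fractional bits begin 0110101.
assert sqrt2_bits(7) == [0, 1, 1, 0, 1, 0, 1]
```

The same bisection works for any real algebraic number, since membership of a rational in the cut is decidable by evaluating the defining polynomial.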

Comment #90 November 1st, 2020 at 12:08 pm

Scott #47,

I didn’t actually think you could present clear math before you were born. Probably Terence Tao is the only person that could do that.

Comment #91 November 1st, 2020 at 12:08 pm

scott #72: “instead of your weak, puny recursively axiomatizable formal systems, why don’t you switch to a stronger formal system—one where all the true statements of arithmetic are simply thrown in for free, as axioms? …” “If we assumed we had the answer, then we’d have the answer”

First, merely allowing quantification over predicates is not the same as simply “assuming” the answer. Also, 2nd order axiomatizations *are* recursively enumerable, *and* each individual axiom is generally self-evidently true. This achieves one of the goals of axiomatizing math, namely finding a set of (enumerable) self-evident axioms which correctly entail all (and only) true claims in the domain. In 2nd order logic this is often possible (e.g. 2nd order peano arithmetic), in 1st order logic it usually isn’t. In your counter-example we don’t even know what the specific axioms are, let alone whether they are self-evident (actually we know most of them aren’t self-evident).

To be clear, no one is suggesting a wholesale “switch” to 2nd order logic, just that it provides some additional useful insight (e.g. that many of the 1st-order impossibility results no longer apply if you allow quantification over predicates). And I take this to be the standard mathematical view, i.e. that the various 2nd order results are an accepted and substantive part of the mathematical landscape.

Comment #92 November 1st, 2020 at 12:48 pm

gentzen #84

I know there can be no completeness theorem for “standard” (set) semantics, so what? I’m not sure why you felt the need to point it out when everyone in the debate already knows it.

Comment #93 November 1st, 2020 at 12:53 pm

Hey, Scott. Thank you as always for the writeup!

I’m noticing that the black-and-white “Follow” button (presumably meaning “subscribe to RSS by email”) doesn’t seem to work. It redirects me to https://follow.it/info/please-place-link-on-website

I was able to make it work by going to follow.it and subscribing to https://www.scottaaronson.com/blog/?feed=rss2. Simply https://www.scottaaronson.com/blog didn’t work, it said “feed deactivated”.

Comment #94 November 1st, 2020 at 1:19 pm

About 90+% of this discussion is going over my head.

Can anyone recommend a good resource for getting an overview of the current state of mathematical logic that would allow one to see exactly how concepts like first-order logic, 2nd-order logic, higher-order logics, model theory, type theory and the systems that are actually implemented in proof assistants, such as the Calculus of Constructions, fit together?

Comment #95 November 1st, 2020 at 1:38 pm

Here is my understanding of the situation regarding the “truth” of axioms. Please tell me if any of this is wrong.

We start with an intuitive, pre-formal understanding of the natural numbers. There are lots of squishy questions about what exactly the natural numbers “are” and whether and in what sense they “exist”, but nonetheless there are a great many facts about the natural numbers that are universally acknowledged (for instance, that for any prime there is a greater prime). It’s difficult to imagine what it would mean for the natural numbers themselves to be “inconsistent”.

Peano arithmetic (PA) is a set of obviously true statements of arithmetic that, together with the rules of logic, entail almost all of the true statements about the natural numbers that anyone has ever thought of. If PA entailed all of the true statements, then it would be possible to write a simple computer program that would determine the truth of any statement. But this isn’t possible, because there are true statements about the natural numbers that are not entailed by PA.

The natural numbers constitute a model of PA. There are other models of Peano arithmetic, but they aren’t of much interest. The natural numbers are not just “a” model, they’re “the” model. A statement like Goldbach’s Conjecture is said to be “true if independent”. What this means is that if it can’t be proved or disproved in Peano arithmetic, then it’s obvious that it’s true and not false, and any model of PA in which it’s false is deviant.

There is a sense in which statements of arithmetic are “true”, but this is not the case for all theories. There are models of group theory that are commutative and there are models that are not. But the non-commutative models are not considered deviant; they are all legitimate.

We might say that arithmetic is model-driven, while algebra is axiom-driven. In algebra the question is: which objects constitute models for such-and-such axioms? In arithmetic, the question is: what axioms adequately describe the natural numbers? Peano arithmetic is one attempt to answer that question, and there could be better ones. (Actually, there are infinitely many theories that are “better” in terms of completeness, but they aren’t all as elegant as PA.)

Is set theory model-driven or axiom-driven? I’m not sure there’s broad consensus about this. Even worse, those who think that some model of set theory is “the” model don’t agree about which one it is. Cantor and Gödel were both quasi-mystical weirdos who thought that the Continuum Hypothesis was surely true or false, but Cantor thought it was true and Gödel thought it was false.

Arguments for or against certain models, like Freiling’s dartboard argument, take place at the “intuitive” level. This kind of thing tends to look like “philosophy”, and none of it seems to have been very convincing to anyone. A big part of the problem is that statements like the Continuum Hypothesis don’t, as far as anyone knows, have any bearing on any statements about the natural numbers. If they did, then that would be considered decisive evidence one way or the other. But they don’t, so who cares? Or at least that’s Scott’s attitude.
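Nick’s example of a universally acknowledged fact, that for any prime there is a greater prime, comes with Euclid’s classical construction, sketched here (toy code, not from the original comment; helper names are made up). Applying it to the list of all primes up to p yields a prime greater than p.

```python
from math import prod

def smallest_prime_factor(n):
    # Trial division; assumes n >= 2.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def new_prime(primes):
    # Euclid: the product of the listed primes, plus one, leaves
    # remainder 1 on division by each of them, so any prime factor
    # of it is a prime missing from the list.
    return smallest_prime_factor(prod(primes) + 1)

# new_prime([2, 3, 5]) finds 31 (since 31 is prime);
# new_prime([2, 3, 5, 7, 11, 13]) finds 59 (since 30031 = 59 * 509).
```

Note the new prime need not exceed the largest listed prime in general; the “greater prime” conclusion comes from feeding in *all* primes up to p, so that any prime not in the list is automatically bigger than p.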

Comment #96 November 1st, 2020 at 1:49 pm

The_Duck #25, usually mathematicians other than set theorists conceive the reals as a complete ordered field, an idea that predates set theory. The idea is simply that if (say) two curves in the plane cross each other, then they actually intersect (there is a point that is in both curves). Much elementary math (you might remember Rolle’s theorem from calculus) depends on this. There’s a topic called “constructive mathematics” or “constructive analysis” which does get rid of the non-constructible reals, but at the expense of eliminating the law of the excluded middle (LEM). So a lot of traditional proofs break, though maybe not as many as one might expect. (Hmm, why does the LEM imply non-constructibles? I’ll think about that one.)

Peter Donis #41, #48: countability is not an absolute or intrinsic property of a set, but rather, it is relative to the set-theoretic universe that the set sits in. What does it mean for an infinite set S to be countable? It means there is a bijection between S and the integers. What is a bijection? A function with certain properties. What is a function? From high school, “a function is a set of ordered pairs such that bla bla”. Aha, a set of ordered pairs. So “S is countable” means the universe contains S, the integers, and a third set B which is the bijection. What happens if you pass to a smaller universe (a smaller model of the theory, that sits inside the bigger one)? You get rid of some sets: in particular, maybe you got rid of B. So S can be countable in the bigger universe (since the big universe has the bijection) but not the smaller one (no bijection).

That’s pretty much what downward Löwenheim-Skolem means. You can go from a big model to a smaller one that the big one thinks is countable. The big model can match the small model’s reals into the big model’s integers. But the small model can’t match the small model’s reals to the small model’s own integers, because the bijection needed to do that only exists in the big model. This slightly confusing fact is called Skolem’s paradox but it is not actually paradoxical and is less confusing with a bit of practice. In any case, it’s usually considered easier to understand than forcing ;-).
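The “bijection B lives in the universe” picture above can be caricatured with a finite toy (obviously nothing like a model of ZFC; all names here are made up): whether S counts as “countable” depends only on whether the universe happens to contain a witnessing bijection.

```python
def is_bijection(f, S, T):
    # f is a dict; check it maps S one-to-one onto T.
    return (set(f) == S and set(f.values()) == T
            and len(set(f.values())) == len(f))

S = frozenset({10, 20, 30})
T = frozenset({0, 1, 2})     # stand-in for (an initial segment of) the integers
B = {10: 0, 20: 1, 30: 2}    # the witnessing bijection, as a set of pairs

# Two "universes" containing the same sets S and T, but only one of
# which also contains the witness B:
big_universe = [S, T, B]
small_universe = [S, T]      # passing to a smaller universe removed B

def countable_in(universe, S, T):
    # "S is countable in U" = U itself contains a bijection S -> T.
    return any(isinstance(f, dict) and is_bijection(f, S, T)
               for f in universe)

assert countable_in(big_universe, S, T)
assert not countable_in(small_universe, S, T)
```

The same S is “countable” in one universe and not the other, which is the whole mechanism behind Skolem’s paradox: the property is about what witnesses the universe contains, not about S alone.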

Comment #97 November 1st, 2020 at 1:50 pm

@Scott #72

I was thinking about the ‘many worlds interpretation’ and the question of how many worlds there actually are. Consider a Geiger counter or some other source of quantum clicks 011010111…

The many worlds interpretation suggests that all possible sequences exist, but Cantor’s diagonal argument would then suggest that the number of worlds is not countable (if the universe does not end after a finite time).

But L-S suggests that there is a countable model of QT …

Comment #98 November 1st, 2020 at 1:56 pm

Scott #73: I am talking about predicative systems where all the sets are countable by enumerations within the system! Obviously this does not hold for L, or for any model that satisfies the Powerset Axiom.

And tiny though these worlds may be, they really are not terribly limiting! As you said, all of the real numbers and functions we will ever actually meet live in the “countable dust”. There is no problem with defining a particular real or function predicatively, and “being a real” is a well-defined property. You just can’t use the “completed” set of “all” the reals. They are basically a proper class rather than a set.

Comment #99 November 1st, 2020 at 2:03 pm

if there is 4th extra dimension, then priestesses in “Red Sonya” must have 2 quantum talismans to forecast weather? and Sauron 2 all-seeing eyes, and 4D aliens 3 normal eyes!

but quantum weather computer is only one nomatter how many dimensions?

Comment #100 November 1st, 2020 at 2:11 pm

Nick #95: I think that’s a pretty good summary!

Comment #101 November 1st, 2020 at 2:37 pm

Scott #20

I have never really gotten over the broader feeling that “cardinals are natural, ordinals are weird”. The latter always felt artificial while the former appeal to a natural mathematical intuition. Presumably a real set theorist unlearns this.

Comment #102 November 1st, 2020 at 3:27 pm

Nick #95: “those who think that some model of set theory is “the” model don’t agree about which one it is. Cantor and Gödel were both quasi-mystical weirdos who thought that the Continuum Hypothesis was surely true or false, but Cantor thought it was true and Gödel thought it was false.”

Just because Cantor and Gödel had differing intuitions about whether CH would likely turn out to be true doesn’t mean they had different views about what “the” underlying model is; as an analogy, no doubt some mathematicians had differing intuitions about whether Fermat’s Last Theorem would likely turn out to be true, but that doesn’t mean they disagreed about what the natural numbers are. Agreeing on exactly what some concept means doesn’t necessarily imply that you know everything that is true/false about that concept (or even have accurate intuitions about what is likely to be true/false about it).

Comment #103 November 1st, 2020 at 3:44 pm

A fun anecdote: the connection between the Löwenheim-Skolem theorem and The Matrix was already made in the 80s, when it was noted by Putnam in his famous “Brains in a vat” paper.

Comment #104 November 1st, 2020 at 4:04 pm

Nick #95

> But this isn’t possible, because there are true statements about the natural numbers that are not entailed by PA.

What does it mean to say that some statement is “true” if you can’t prove it formally?

In the case of your example:

> nonetheless there are a great many facts about the natural numbers that are universally acknowledged (for instance, that for any prime there is a greater prime)

Isn’t that actually a theorem? Otherwise it doesn’t appear to be that intuitively obvious (i.e., not obvious enough to be taken as an axiom).
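It is indeed a theorem (Euclid’s), and the proof is concrete enough to run. A minimal sketch, with `prime_above` and `smallest_prime_factor` as made-up helper names: for any prime p, every prime factor of p! + 1 exceeds p, since p! + 1 leaves remainder 1 when divided by any of 2, …, p.

```python
import math

def smallest_prime_factor(n):
    """Trial division; fine for the small numbers used here."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def prime_above(p):
    """Euclid's argument: any prime factor of p! + 1 is greater than p,
    because p! + 1 is congruent to 1 mod every number from 2 to p."""
    return smallest_prime_factor(math.factorial(p) + 1)

for p in [2, 3, 5, 7, 11]:
    q = prime_above(p)
    assert q > p
    print(p, "->", q)
```

So the “fact” in question really is a theorem of PA, not an extra axiom.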

Comment #105 November 1st, 2020 at 4:37 pm

Many thanks for this, I always wanted to understand this proof, and clear up my misconceptions about these various levels of metamathematics. The reputation of forcing as something only specialists of set theory can deal with was frightening me, but now you’ve got me embarked 🙂

Comment #106 November 1st, 2020 at 4:37 pm

Tristan #81

The natural numbers are countable. And infinity is not a natural number, but you don’t need the diagonal argument to tell you that. And it takes only log(N) bits to encode the number N, and again you don’t need the diagonal argument for that.

You should think of the diagonal argument as telling you the following simple, even “practical” thing:

*** No matter what strategy you try to enumerate all possible infinite binary sequences, YOU WILL FAIL. ***

You can enumerate all natural numbers (in an infinite list, of course) — and you can similarly enumerate all integers, rational numbers, algebraic numbers, and *finite* binary strings.

What the diagonal argument tells you is that real numbers and INFINITE binary sequences behave differently from those examples: no matter how you try to enumerate them, you’ll necessarily leave some real numbers or some infinite binary sequences off of your list, no matter that your list is (countably) infinitely long.

For that reason, we say that the reals and infinite binary sequences have a “higher order of infinity” than the natural numbers, or are “uncountably” infinite.
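The starred claim can even be sketched in code. A minimal illustration (the enumeration `enum` below is a made-up example; the point is that `diagonal` works the same against any enumeration you hand it):

```python
def diagonal(enum):
    """Cantor's diagonal: given any enumeration enum, where enum(i) is
    itself a function j -> {0, 1} (the i-th listed infinite bit sequence),
    return a sequence that differs from enum(i) in position i, for every i."""
    return lambda j: 1 - enum(j)(j)

# A toy enumeration: the i-th listed sequence has a single 1, in position i.
enum = lambda i: (lambda j: 1 if j == i else 0)

d = diagonal(enum)

# d disagrees with the i-th listed sequence at position i, so it was never
# enumerated (here d happens to be the all-zeros sequence, which indeed
# doesn't appear in the list).
assert all(d(i) != enum(i)(i) for i in range(10_000))
```

No matter what `enum` you substitute, the same one-line flip defeats it; that’s the sense in which every strategy fails.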

Comment #107 November 1st, 2020 at 5:27 pm

An interpretation of all this is that mathematical truth is growing with time. That is to say, there’s a sort of ‘mathematical time’ associated with the notion of ‘completion’. Not all truths have been decided yet. The growth of knowledge in minds could itself be described mathematically. We could call this ‘mathematical causality’, the rules governing how more complex concepts are built up from simpler ones.

Then my conjecture is that all the most abstract (logic-related) parts of math are describing mathematical causality, i.e., group theory is ‘algebraic time’, topology is ‘geometric time’, and category theory is ‘logical time’. Mathematical logic, then, doesn’t stand on its own. It’s a way of describing how more complex math objects are built up from simple ones.

The space of all concepts is a topological space called Chu space. Your mind is a ‘sphere of knowledge’. The sphere does not expand outwards, but rather it grows inwards! As you ‘learn subjects in more depth’, you’re moving towards the interior of the sphere in Chu space, and ‘mathematical causality’ is precisely the rules describing the connectivity of Chu space (groups, topologies and categories).

Comment #108 November 1st, 2020 at 5:44 pm

Scott, I’m happy to see you embark on this project and I look forward to the upcoming installments.

I agree with what you say about the “curse of knowledge.” In my opinion, the curse is particularly potent in the area of logic and set theory, because there are so many basic confusions for the beginner that don’t come up in other areas of math, and that are easy to forget about once one unconfuses oneself. When I wrote my expositions, I made sure to refer back frequently to my own confused email correspondences with people like Matthew Wiener and Andreas Blass, in order to remind myself of things that weren’t obvious to my former self (and hence would likely be non-obvious to many readers). To any aspiring expositors out there, I recommend this process (saving your own dumb questions and re-reading them later) as a useful general technique for crafting pedagogically helpful explanations.

What you have labeled “My biggest confusion” is an excellent example of something that is going to be helpful to many beginners but that experts might not realize is a potential stumbling block. I’m a huge fan of Martin Gardner, but I believe that one of his rare missteps was his decision, in his article about Aleph-Null and Aleph-One, to assume the continuum hypothesis at one point, and then to use Aleph-One to mean the continuum, thus reinforcing the misconception that the continuum is (by definition?) the first infinity after Aleph-Null. There is no doubt that the cardinality of the real numbers is more intuitive than “the first cardinality after \(\aleph_0\)” but it doesn’t make any expository sense to use \(\aleph_1\) instead of \(C\) or \(2^{\aleph_0}\) to refer to the cardinality of the continuum. I wonder how many people can trace their misconception to Gardner’s article (I know you said you aren’t one of them).

Comment #109 November 1st, 2020 at 5:47 pm

Vanessa Kosoy #92:

The question is whether the “topological semantics” proposed in that paper applies to the (a) predicative or the (b) impredicative deductive calculus. Based on the phrase “the second system is a predicative fragment thereof” in the abstract, the expectation would be that the first system is indeed the impredicative calculus. Now I wrote: “As far as I can tell, the deductive calculus specified in that paper is weaker than (b).” This would be bad. Probably I am wrong, given the phrase from the abstract I quoted above.

I felt the need to point out that the paper cannot talk about the “standard” (set) semantics here, because I felt that I was forced to browse the paper just to guess which semantics/calculus they had in mind. But thinking about it now, it probably was a mistake to assume that they could have in mind anything else than the stronger (impredicative) of the standard calculi, especially since it seemed (to me) that their calculus was stronger than the weaker (predicative) of the standard calculi.

Comment #110 November 1st, 2020 at 5:54 pm

Tristan #81: If you reject the validity of the diagonal argument, you won’t get much out of this sequence of posts. 🙂

But in case it helps: we’re talking about infinite bit sequences, not finite ones, hence the “square.” And in the example you gave, with all 0’s along the diagonal, the sequence 1111… indeed hadn’t been enumerated, so Cantor’s argument did exactly what it was supposed to. Note that, if 1111… were in the enumeration, then the diagonal couldn’t possibly be all 0’s.

Comment #111 November 1st, 2020 at 6:01 pm

Scott, I am curious if you ever read “Naming Infinity” by Graham and Kantor, and what you thought of it if you did.

Comment #112 November 1st, 2020 at 6:04 pm

Itai Bar-Natan #82 and Kameryn W #85: Thanks so much for setting me straight! I just corrected my comment #21 to clarify. I’d been thinking only about constructible universes with different numbers of ordinals—but if we’re talking about the constructible reals within a non-constructible universe, then I indeed don’t see any bar to having only ℵ_{0} constructible reals. Oops!

Comment #113 November 1st, 2020 at 6:08 pm

J #83:

I’m afraid that’s a very stupid question, but why don’t you write: Con(CH) ⇒ Not(Con(ZFC+CH))?

Sorry, I don’t understand you at all. CH is obviously consistent as an isolated proposition! The interesting question is its consistency with the rest of ZFC.

What prevents this other interpretation?

« What Cantor really shows is only that, this particular way of creating binary sequences, that’s not enough to produce them all. But who knows we can’t come up with a better scheme? »

Because it’s not just about a particular way of creating binary sequences, but about any possible way—or at least, any way available to you within a given model of set theory.

Comment #114 November 1st, 2020 at 6:12 pm

Raoul Ohio #89: Your comment looks a lot to me like the motivation for areas such as computable analysis.

Comment #115 November 1st, 2020 at 6:15 pm

grue #91: I don’t really have a problem with second-order logic as a subject of philosophical or metamathematical inquiry. I was just questioning how it could tell us anything new about CH in particular (or any other concrete set-theoretic question of that kind).

Comment #115 November 1st, 2020 at 6:15 pm

Justin Lebar #93:

I’m noticing that the black-and-white “Follow” button (presumably meaning “subscribe to RSS by email”) doesn’t seem to work.

Sorry about that!! I just went to the settings for the relevant WordPress plugin, removed the “Follow” button, and while I was at it, added a Facebook share plugin.

Comment #117 November 1st, 2020 at 6:31 pm

Nick #95: I’m in violent agreement with nearly everything you write! 😀

Except:

– According to this summary, while Gödel suspected CH was false for much of his career, he later came around to thinking that maybe either CH holds or C=ℵ_{2} (which eerily presages Woodin’s program decades later).

– Hopefully, the time that I plan to spend on this sequence of posts is sufficient proof that my attitude toward CH is a little more complicated than “who cares?” 😀 Yes, I do lean toward the view that the choice between CH and not(CH) is like that between Euclidean and non-Euclidean geometry—i.e., not one that admits a Platonic answer. But for me, the very existence of mathematical universes where CH holds and of others where it fails is itself a profound (and a-priori non-obvious) fact about Platonic reality.

Comment #118 November 1st, 2020 at 6:35 pm

wb #97: To my mind, the QM measurement problem bears whatever teeth it’s got even when we consider finite sequences of measurement outcomes and finite-dimensional Hilbert spaces! So I’m extremely skeptical that considerations of set theory could shed any light on it, except possibly at the level of analogies.

Comment #119 November 1st, 2020 at 6:37 pm

maline #98: In that case, I’d have to study the predicative world in question more, before deciding whether it was too confining of a place to live. Do you have a recommended reference?

Comment #120 November 1st, 2020 at 6:37 pm

nosy snoopy #99:

if there is 4th extra dimension, then priestesses in “Red Sonya” must have 2 quantum talismans to forecast weather? and Sauron 2 all-seeing eyes, and 4D aliens 3 normal eyes!

but quantum weather computer is only one nomatter how many dimensions?

Sorry, banned from this comment section.

Comment #121 November 1st, 2020 at 6:42 pm

Gerard #104:

What does it mean to say that some statement is “true” if you can’t prove it formally?

Hoo boy. 🙂

That Libertarian candidate Jo Jorgensen will not win the election on Tuesday is a true statement, albeit not one for which I have a formal proof.

Likewise the normality of π: no proof, but I’d bet my life on its truth.
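For the curious, the claim can at least be sanity-checked empirically. A minimal sketch (an illustration only, of course not evidence of normality) that computes digits of π via Machin’s formula and tallies their frequencies; `pi_digits` is a made-up helper name:

```python
from decimal import Decimal, getcontext
from collections import Counter

def pi_digits(n):
    """First n decimal digits of pi after the decimal point, via
    Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    getcontext().prec = n + 15  # working precision, with guard digits

    def arctan_inv(x):
        # Taylor series for arctan(1/x), x an integer > 1
        x = Decimal(x)
        power = total = 1 / x
        x2, k, sign = x * x, 3, -1
        eps = Decimal(10) ** -(n + 10)
        while power > eps:
            power /= x2
            total += sign * power / k
            k += 2
            sign = -sign
        return total

    pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    return str(pi)[2:n + 2]  # strip the leading "3."

digits = pi_digits(2000)
print(Counter(digits))  # each of 0..9 shows up roughly 200 times
```

A normal number would have every digit (and every block of digits) appear with its expected limiting frequency; the first couple thousand digits of π are at least consistent with that.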

The Gödel sentence for a given formal system F can’t be proved within F, but we can confirm its truth by passing to a slightly larger system.

Comment #122 November 1st, 2020 at 6:45 pm

Timothy Chow #108: Thanks so much for your comment, but more than that, for your generous help spanning a period of months! As it happens, rereading my emails with you was the very next step on my list before I write the following posts. 😀

Comment #123 November 1st, 2020 at 6:46 pm

matt #111:

Scott, I am curious if you ever read “Naming Infinity” by Graham and Kantor, and what you thought of it if you did.

Sorry, haven’t read it! Should I?

Comment #124 November 1st, 2020 at 8:03 pm

Zzzkrz #69:

This is just more anti-Trump fake news. In a March 2018 post you already listed Cohen’s technical monograph on the continuum hypothesis as one of your 30 favorite books of all time so it’s obvious you already knew forcing and there’s no reason to blog about learning it as some kind of strange metaphysical escape from current events weighing so heavily on our collective souls.

ROTFL. While Set Theory and the Continuum Hypothesis is only 150 pages long, it (understatement of the century?) benefits from multiple readings. When I read it as an ~18-year-old student, I made it only to the foothills, got a vague impression of the peaks, and fell in love with what I saw. So this time around I decided to go all the way up.

Comment #125 November 1st, 2020 at 8:05 pm

@asdf #96:

“countability is not an absolute or intrinsic property of a set, but rather, it is relative to the set-theoretic universe that the set sits in.”

Thanks, this and your exposition following it was helpful. Thinking of countability as a relative, rather than an absolute, property of a set would seem to be one of the basic insights one has to have to make progress in this area.

Comment #126 November 1st, 2020 at 8:38 pm

I haven’t quite got the patience to read through all the new comments since I was here just yesterday, but I wanted to add some points that I think would be helpful for people new to this sort of stuff. It’s what I would have appreciated knowing when I was new, and had to find out the long, hard way.

1. The “next biggest infinity” after aleph_0 is not just some random thing, there’s an explicit set (omega_1) one can construct that is a) bigger than aleph_0: every function N –> omega_1 fails to be onto, and b) any subset of omega_1 is either countable or in bijection with omega_1. The way omega_1 is constructed is entirely different from how P(N) or R is constructed, so it’s not clear how they are related.

1a. Well-ordered subsets of R, where they inherit the ordering from the real numbers, are just the countable ordinals, so trying to link the reals with ordinals doesn’t get very far, without completely throwing out the ordering on R. R is just too dense, though not massively long (compared to, say, the long line, which has omega_1 copies of [0,1] glued together).

1b. Alternatively, if we take P(N) instead of R, then P(N) is partially ordered (by inclusion) and very, very far from being totally ordered like an ordinal. There is a countable sequence from the bottom element of this lattice to the top element, so P(N) is wide, rather than long or dense.

2. The CH can be fruitfully stated as a top-down version that doesn’t reference generating new sizes of infinity from old ones: every infinite subset of R is either in bijection with R or in bijection with N. In this formulation there’s also not much obvious why it should be true or not.

3. When talking about smaller models of ZFC, it’s *crucial* to remember that functions between sets are also sets, and what makes the model “think” it contains an entire universe of sets is that it doesn’t contain the bijections that would otherwise indicate that many of them are the same size, viewed from outside the model. The whole mysterious notion of “going inside a smaller universe” is more mystical than it needs to be, imho. There are just (!) more functions outside the model than inside the model.

3a. This kind of thing happens when forcing in the other direction: adding a bijection between an uncountable cardinal (like |R|) and omega_1. The new model tells us that CH holds, but we don’t think of the original model as some mysterious thing that paradoxically doesn’t know that CH holds, just because the witness for CH (a bijection omega_1 –> R) is outside it.

4. The sheaves etc approach to forcing actually modularizes the construction of forced models in a conceptually useful way, though I know you (Scott) don’t want to go there. And it uses pieces of mathematics useful, and used, in other fields. I think it worth knowing this.
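The top-down restatement in point 2 can be written symbolically (a sketch; the equivalence with the usual form of CH uses the Axiom of Choice, so that every infinite subset of R has a well-defined aleph as its cardinality):

\[ 2^{\aleph_0} = \aleph_1 \quad\Longleftrightarrow\quad \text{every infinite } A \subseteq \mathbb{R} \text{ satisfies } |A| = \aleph_0 \text{ or } |A| = |\mathbb{R}|. \]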

Comment #127 November 1st, 2020 at 8:38 pm

Scott #73:

“As I said upthread, the infinite cardinalities are not absolute, as already shown by Löwenheim-Skolem. I meant only that, in some sense, Cantor’s hierarchy of Alephs (like his closely-related hierarchy of ordinals) is more absolute, more fixed in its properties, than the powerset operation, which can “jump around” in an uncontrolled way—and this turns out to be a key intuition that one needs to acquire in this subject.”

I’m not sure about that. In the same way that you can violate CH by adding \(\aleph_2\) reals while leaving the aleph sequence fixed, you can make CH true by adding a bijection between \(\aleph_1\) and the continuum without adding reals, so I don’t really see how you can say that one is more fixed than the other.

Comment #128 November 1st, 2020 at 8:57 pm

Scott #113

>Because it’s not just about a particular way of creating binary sequences, but about any possible way—or at least, any way available to you within a given model of set theory.

Cool indeed. But can you help me understand how mathematicians go from “straightforward enumeration doesn’t produce all binary sequences” (Cantor’s proof) to “and that’s forever true for any kind of enumeration one can cook up” (mainstream interpretation of Cantor’s proof)?

Maybe a specific example would help: take some numbering scheme where each digit represents the initial tape for a TM that can run for as long as the digit is far. It seems trivial that this scheme includes all straightforward numbers, plus some others. How do we know beforehand that this scheme is either not a valid model of set theory or vulnerable to diagonalisation?

Comment #129 November 1st, 2020 at 9:16 pm

J #128: Again, every enumeration scheme is “vulnerable to diagonalization.” The diagonal argument doesn’t care about the details of how you enumerated; it works regardless. That’s the entire point of Cantor’s proof, and is why we call it a proof.

It’s exactly as if someone just proved the Pythagorean Theorem, and you said “cool indeed, except how do you know it will work for all right triangles?” I’m not sure what to do in such a case except to go over the proof again, slower!

Comment #130 November 1st, 2020 at 9:27 pm

Ben Goodman #127:

I’m not sure about that. In the same way that you can violate CH by adding ℵ2 reals while leaving the aleph sequence fixed, you can make CH true by adding a bijection between ℵ1 and the continuum without adding reals, so I don’t really see how you can say that one is more fixed than the other.

I didn’t mean the idea of the ℵ sequence being “relatively fixed” as more than a rough heuristic—one that helped me to understand what Cohen was doing in building models of not(CH)—and you and others have succeeded in pointing out cracks in the heuristic!

On the one hand, the ℵ sequence is “fixed” in the sense that, whatever the “true” cardinality of each ℵ_{α} “as seen from the outside,” the ℵ’s are always there as a “scaffolding” for the powerset operation to then populate, either densely (as in GCH universes) or sparsely. On the other hand, you’re right that, while Cohen (as he writes) makes a firm decision to leave the ordinal hierarchy untouched, one could collapse cardinals if one wanted to, so the ℵ’s certainly aren’t fixed in that sense!

Comment #131 November 1st, 2020 at 10:27 pm

Scott, here is a review of Naming Infinity https://www.ams.org/notices/201401/rnoti-p62.pdf

I doubt you have much free time at all to read books now. But I can confidently say that it is the best book you will ever read on the intersection of Russian mysticism and descriptive set theory.

btw, your comment that quantum field theory remains sparsely popularized is a good one. The biggest problem is that up until Ken Wilson (or thereabouts), people didn’t really know what quantum field theory was, despite having a couple decades’ experience using it to make very accurate calculations. So, there are a lot of confusing statements in earlier popularizations that make it sound more mysterious than it actually is.

Comment #132 November 2nd, 2020 at 12:15 am

matt #131: Aha! I actually have a copy of Naming Infinity in the office where I now sit. I have no idea why—could I have been sent a free review copy?? But I had previously noticed a bizarre book on my shelf about the intersection of set theory and Russian Christian mysticism in the early 20^{th} century. So, motivated by your comment, I just read the first ~40 pages and enjoyed them. Except, what was this great mathematical insight that eluded the rationalistic French mathematicians, and that only revealed itself to heretical Russian mathematician-priests who chanted the name of God over and over? If I reach the end of the book without finding out, it’s going to be such a letdown. 😀

Comment #133 November 2nd, 2020 at 1:52 am

Thank you for a nice writeup. However it was not quite idiot enough for me because it assumes that the reader knows what a model of ZFC is. From the text I now understand that a model is some concrete construction that satisfies the ZFC axioms. What is surprising to me is that there can be multiple such constructions. I thought that there would be just one ZFC system of real numbers up to isomorphism. My new understanding is that there are somehow multiple ways to define the real numbers based on ZFC axioms, and in some of those CH holds but in some it does not. But this seems to imply that we don’t really have a solid definite definition of the real numbers, which I find unsettling. I thought I knew what the real numbers are.

Comment #134 November 2nd, 2020 at 1:56 am

Scott #121:

But we have to be careful not to pass it to F+Not(Gödel sentence)… or would that be inconsistent? (I’m not yet cursed by knowledge, but I’d love to be.)

Comment #135 November 2nd, 2020 at 3:58 am

Prof. Aaronson,

First: It’s really wonderful to see interest in these topics from people working in theoretical computer science, and trying to make forcing more available to a wider audience! I’m very much looking forward to the next instalments 🙂

Second: I think you’ve hit on something that’s really important to emphasise when looking at forcing; the idea that a lot of mathematics concerns *intensional* concepts that can vary in their extension according to what universe they’re interpreted in. Set theorists use this *a lot* (under the name `absoluteness’) and it’s the interplay between this intensionality/absoluteness and the fine control we have via forcing that facilitates the study of the zoo of set-theoretic universes we like to explore nowadays. So I think referring to the study of models of set theory as *engineering* problems is a really neat analogy, and a lot of set-theoretic advances post 1963 have involved developing new forcing technologies (i.e. posets) that allow you to engineer new universes.

BUT. This comes with a caveat. Throughout you seemed to speak as though the natural numbers can’t be engineered. But if you assume they’re non-absolute (i.e. you allow omega-non-standard models into the picture), there’s engineering techniques you can use there too (e.g. ultrapowers). At the other end of the spectrum is where we just assume that the power set of a given set is absolute, and then you can’t use forcing to engineer new universes (any universe obtained by forcing would be “power-set-non-standard” if you will). So the position here is one where we’re taking the natural numbers to be absolute but the powerset to not be absolute (and presumably should provide some motivation for this position). So why is forcing so problematic here (when we don’t have the same intuitions about natural numbers)? Absolutely key to the philosophical import of the forcing technique is that it keeps the models *standard* in that the ground model is a transitive inner submodel of the forcing extension. This means that whilst the powersets of sets can change, a *huge* amount of the natural set-theoretic properties we’d like models to have (being well-founded, being nicely stratified into an iterative hierarchy, having well-behaved natural numbers, having the same ordinals) all remain constant. So long as you assume a generic is available, these models look *standard* in some sense.

Third: I was a little puzzled by this part of the text.

“(Technical note added: even when our models of ZFC are only countably infinite, they still won’t be countably infinite sets; indeed the ability to construct such a set within ZFC would violate the Incompleteness Theorem. They’ll instead be “proper classes,” infinite onions that grow more and more sets as we allow more and more instances of ZFC’s Axiom of Replacement. More about this in a future post!)”

Countable models (of any theory) are always sets (you can’t have an unbounded countable sequence in the ordinals), whereas proper classes aren’t sets. There seems to be a mix-up here between talk of “proper classes” (which are classes unbounded in the universe of sets) and obtaining countable (transitive) models of ever larger fragments of ZFC (which you can do using the reflection theorem in ZFC) rather than assuming the existence of a countable transitive model of ZFC (which is substantially stronger than ZFC). I’m not sure what’s intended by the technical note.

Thanks for such a nice post!

Best Wishes,

Neil

Comment #136 November 2nd, 2020 at 4:00 am

So if Biden wins Texas then we have an ask you any question session? lol.

Comment #137 November 2nd, 2020 at 4:42 am

Scott #119: Predicativism is the commitment to defining things, stage by stage, in terms of what has already been defined. It rejects powersets as ill-defined (as seen, for instance, by the independence of CH), because the powerset of S consists of “all sets in the model that are subsets of S” – but we don’t yet know what sets are in the model! This limitation ends up confining us to countable sets.

A great place to start on the topic of “small” mathematical worlds is Simpson’s book, Subsystems of Second Order Arithmetic. The first chapter, available for free here, outlines the results very nicely.

The most straightforward predicative model is probably ARITH, the minimal model of the system ACA_0. This system can be considered equivalent to PA, and yet it is able to express and prove the vast majority of ordinary (non set-theoretic) mathematics. See the link for an outline of how this is done.

But I’d say it’s pretty clear that ACA_0 is not “all there is”; we can certainly assume more while still keeping everything clearly defined and predicative. The question is just how much more.

Feferman played a very central role in this; here [paywalled, sorry] is his account of how his predicative system W covers all “scientifically relevant” math.

Weaver has a readable summary of (his take on) what the limits are, based on this article where he disputes the arguments of Feferman and others about precisely which ordinals can be defined predicatively.

Comment #138 November 2nd, 2020 at 8:20 am

I’m confused about why the diagonal argument doesn’t apply to the set of definable reals between 0 and 1. It can’t work, because they are clearly countable, corresponding by definition to finite bitstrings, but I don’t see what the problem is.

By assumption, there is an enumeration of the definable numbers between 0 and 1. Let \(s(i)\) be the \(i\)th such number. Let then \(\alpha\) be the real number such that its \(i\)th bit is the negation of the \(i\)th bit of \(s(i)\). Then \(\alpha\) must be different from all \(s(i)\), but it seems perfectly well-defined.

Comment #139 November 2nd, 2020 at 8:40 am

grue #91: In #57 you speculated that we might eventually find a proof of CH in ZFC2. For that, we’d need a clear notion of proof, not just of entailment (and in the absence of a completeness theorem, the distinction between proof and entailment is crucial). Just saying “second-order logic” doesn’t magically give us the ability to prove more things than we used to.

A century of experience has taught us that anything that mathematicians would recognize as a legitimate proof of CH can be captured by a proof, using first-order logic, from the axioms of ZF, possibly augmented by some additional axioms (choice, large cardinals, projective determinacy, etc.). You’re free to dream up some exotic logic if you like, but it’s basically a foregone conclusion that at the end of the day, when you have specified exactly what your allowable rules of inference are (assuming those rules are generally recognized by mathematicians as being valid), you’ll be able to reformulate your seemingly exotic notion of provability in first-order terms. This is not a theorem per se, but a sociological fact about what mathematicians think a “valid proof” is. This is why there is so much emphasis on first-order logic.

Now, you could still ask whether someone might come up with new and universally recognized axioms to adjoin to ZFC (that would resolve CH), or whether the mathematical community could even be persuaded to change its notion of what constitutes a valid proof. It could happen, but I personally doubt it. If for example you look at Woodin’s work on \(\Omega\)-logic and the \(\Omega\)-conjecture, you’ll get a sense of the controversy that always seems to be sparked as soon as a plan for “proving” or “disproving” CH is proposed.

Comment #140 November 2nd, 2020 at 9:35 am

Thanks for sharing this very fun project with the world!

Maybe this is a good place to ask some experts about something that has nagged me for a long time.

Any two models of the 2nd order Peano axioms must be isomorphic. Similarly, any two totally ordered complete fields must be isomorphic. As a non-logician mathematician, I find these proofs completely unobjectionable (and beautiful!).

On the other hand, I think I am beginning to understand why using 2nd order logic is asking for trouble, in line with Scott #28, Grue #34, Scott #36, Sniffnoy #37. And I’m beginning to appreciate the utility of non-standard models!

Here’s the problem nagging me: I do NOT feel at all confident that all the activities we do in our mathematical lives (safely far from logic) are reducible to 1st order logic. I wouldn’t be surprised if they are, I just don’t have a good feel for it. The unobjectionable proofs above feel pretty close to proofs I might write in everyday life.

On yet another hand, this worry is in contradiction with my sense, following Scott #28, that in any particular case I can throw any 2nd order stuff I’m doing into a recursively enumerable set of 1st order axioms.

I guess my very ill-defined question is: I’d like to have some intuition of which proofs (like categoricity proofs above) are somehow “irreducibly” 2nd order, as opposed to which ones can be safely translated back to 1st order, per Scott #28. (Or, alternately, a clear idea of why this question is misguided!)

Comment #141 November 2nd, 2020 at 9:41 am

grue #102:

> Just because Cantor and Godel had differing intuitions about whether CH would likely turn out to be true, doesn’t mean they had different views about what “the” underlying models is…

It’s possible that Cantor and Gödel believed in the same model but had different intuitions about CH. It’s also possible that they believed in different models. How can these two cases be distinguished? (Of course, Cantor lived 1845-1918, so it really isn’t fair to impute to him any “beliefs” about “models”, but still.)

Comment #142 November 2nd, 2020 at 10:05 am

Jarno #133: Sorry about that! We’ll have much more to say about models later (both of formal systems in general and of ZFC in particular), but I just edited the post to add a placeholder definition of models for now.

Yes, if these posts don’t unsettle you at least a little about the real numbers, and our ability to get a handle on them using finite sequences of symbols, then they won’t have done their job!

On the one hand, the reals are the unique Dedekind-complete ordered field, up to isomorphism. On the other hand, if we try to characterize that field using first-order axioms (so in particular, without using the concept of “Dedekind-completeness,” which talks about infinite sequences of reals), then by Löwenheim-Skolem, we’ll inevitably find that there are models of our axioms with only countably many elements (and in fact, with every possible infinite cardinality).

Comment #143 November 2nd, 2020 at 10:11 am

Tamas V #134:

But we have to be careful not to pass it to F+Not(Gödel sentence)… or would that be inconsistent? (I’m not yet cursed by knowledge, but I’d love to be.)

I said we could confirm the truth of Con(F) by passing to a slightly larger system, not by passing to any slightly larger system! 🙂

Again, by the incompleteness theorem, F+Not(Con(F)) is a perfectly consistent system, assuming F itself is. But if we were using F in the first place, then presumably it’s because we believed F was consistent, in which case F+Not(Con(F)) is semantically unsound. Basically, it’s a Gödelian monstrosity, something important for metamathematical discussions that one would never want to use in practice.

Comment #144 November 2nd, 2020 at 10:22 am

Timothy Chow #139: “Just saying ‘second-order logic’ doesn’t magically give us the ability to prove more things than we used to. A century of experience has taught us that anything that mathematicians would recognize as a legitimate proof of CH can be captured by a proof, using first-order logic”

I never suggested that CH couldn’t be proven in first-order logic, or even that it would be easier to prove in 2nd-order logic. As far as proofs go (as opposed to entailment), I just said it’s possible that CH could be proven from the axioms of ZFC2, whereas this is impossible with ZFC1 (which is true). Of course, a first-order proof could also be possible (just not from ZFC), i.e. by adding new axioms.

“we’d need a clear notion of proof, not just of entailment”

There are already well-known formal proof systems for 2nd order logic; but most mathematical proofs are not developed in fully formalized systems, and I was explicitly referencing the informal notion of proof in that context, i.e. I wrote: “As far as the ‘deductive’ claim as to whether there is a sequence of self-evident logical inferences going from ZFC2 to CH (or not-CH), then it’s an open question” (though if such an informal proof were developed in ZFC2, likely it would later be formalized; and to be clear this was a conceptual point, not a practical recommendation for tackling CH).

The main advantage I cited for second-order ZFC is that it can offer categorical axiomatizations in cases where FOL can’t. And these categorical axiomatizations can effectively serve as definitions of the underlying domain (e.g. number theory, sets, etc), in a way that first order axiom systems can’t (since they don’t entail all true claims). Also, the fact that ZFC2 uniquely entails an answer to CH is an argument (though not definitive) in favor of there being an objective true answer to CH, in contrast to Scott’s arguments to the contrary.

Comment #145 November 2nd, 2020 at 10:27 am

Andrew M #140: I don’t think that the categoricity proofs are “irreducibly 2nd order” if I understand what you mean by that term.

The 2nd-order Peano axioms are 2nd order by definition, but the proof that they are categorical takes place “outside the system.” Moreover, the standard proof is set-theoretic: What is a model, after all? A model is a kind of set (it’s a tuple consisting of a ground set together with some operations; tuples can be encoded as sets using the Kuratowski construction, operations are sets, etc.). So the categoricity proof can be formalized in ZFC using first-order logic.

Even when you are doing number theory, and invoking the 2nd-order induction axiom, your argument can be formulated set-theoretically. The “property” that you’re inducting on can be viewed as a set. Hence you will again be able to formalize your argument in ZFC using first-order logic.

Basically, second-order statements about other types of objects can be regarded as first-order statements about sets. Any logical inference that we recognize as being valid can be captured as a valid first-order logical inference about sets.

Comment #146 November 2nd, 2020 at 10:34 am

Neil Barton #135: Thanks so much for your kind words, which give me inspiration to continue! To respond to your two most important points:

(1) I added that “Technical Note” in haste, in response to someone who criticized me on Hacker News for not being clear enough about the distinction between set models and proper class models. On reflection, though, I don’t think anything in this post actually depended on that distinction (which in any case I’ll get to later), and my note introduced more confusion and error than it removed! Thanks; I’ve deleted it now.

(2) Yes, I’ll certainly talk later about the importance of the fact that forcing constructs standard transitive models. Beyond that, though, were you pointing to anything in this post you thought I should revise?

FWIW, my philosophy of mathematics could be summed up by saying that I’m a fanatical absolutist about arithmetical statements, and a kindly, moderate relativist about everything else. On the one hand, I think it’s absurd to throw away set theory the way Kronecker wanted to—let a thousand Cantorian universes bloom! On the other hand, I also think that formalizing set theory is a fundamentally different enterprise from formalizing arithmetic. As others were arguing upthread, in the former case, the axioms could reasonably be taken to define what a set-theoretic universe is; but in the latter case, they’re incompletely describing something (the natural numbers) for which we had a clear conception prior to formalizing anything. And we know that because, if we lacked such a conception, then we couldn’t even manipulate finite strings of symbols, which means we couldn’t do logic, which means we couldn’t get started on the very discussion we’re having now! 🙂

Comment #147 November 2nd, 2020 at 10:37 am

Arul #136:

So if Biden wins Texas then we have an ask you any question session? lol.

Sure!

Comment #148 November 2nd, 2020 at 10:50 am

Mateus Araújo #138:

I’m confused about why the diagonal argument doesn’t apply to the set of definable reals between 0 and 1. It can’t work, because they are clearly countable, corresponding by definition to finite bitstrings, but I don’t see what the problem is.

What do you mean by a “definable real”?

There are the arithmetically definable reals, which are countable, but diagonalizing across them produces a real that’s not arithmetically definable—just like diagonalizing across the computable reals produces an uncomputable real. (One way to see the issue is that the notion of “arithmetic definability” is not, itself, arithmetically definable!)

Then there are the constructible reals in a given model of ZFC—but as we were discussing upthread, if we have constructive access to the set of all constructible reals, then by the very fact that we have such access, there must be ℵ_{1} constructible reals and not just ℵ_{0}! We know that because otherwise, we could diagonalize across all the constructible reals to produce a new constructible real not in the list.

Comment #149 November 2nd, 2020 at 10:54 am

Living in Switzerland makes me giggle at the overlapping abbreviations. I believe William Tell made major contributions to the independence of CH in 1307.

Comment #150 November 2nd, 2020 at 10:55 am

Andrew M #140: I would rephrase your question as follows.

Is there even a single example of a proof that’s “irreducibly second-order” and that I’m able to understand? Until I see such an example, I’ll continue to think, along with Timothy Chow and others in this thread, that any valid second-order reasoning whatsoever can be “compiled down” to first-order reasoning that quantifies over sets.

Comment #151 November 2nd, 2020 at 10:55 am

Mateus #138

I was swayed by this at first… but I think I have a resolution, in the form of a few snags that arise when you try to make things more concrete.

First big problem: your scheme requires you to be able to determine the i-th digit of any definable real number in order to define your number. But there are definable yet uncomputable numbers, so this fails, and the scheme can’t define your diagonal counterexample properly.

So, suppose we restrict to computable reals, which we’ll define as “a real which possesses a corresponding TM that will run forever and emit its digits in order as it goes”, and enumerate these reals by listing all such programs. Now, we write a program that implements the diagonalization scheme you suggest, and let it run! But there’s a rub: this program itself appears somewhere in our list. When it reaches itself and tries to compute its own i-th digit, it either runs forever (failing our definition) or fails its prime directive and outputs the same digit… and indeed, the same computable real… as itself, thus making it part of the enumeration.
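G’s snag can even be demonstrated mechanically. Here is a toy sketch (the names `diag_digit` and `reals` are mine, purely illustrative): represent each “computable real” by a function returning its digits, put the diagonalizer into its own enumeration, and watch the self-referential call blow up.

```python
def diag_digit(i, enumeration):
    """i-th digit of the diagonal real: flip the i-th digit
    of the i-th real in the enumeration."""
    d = enumeration[i](i, enumeration)
    return 5 if d != 5 else 6

# A finite stand-in for the enumeration of computable reals,
# each given as a function digit(i, enumeration).
reals = [
    lambda i, e: 3,        # 0.333...
    lambda i, e: i % 2,    # digits 0, 1, 0, 1, ...
    diag_digit,            # the diagonalizer itself, as G predicts
]

print(diag_digit(0, reals))  # 5: happily flips the 3
print(diag_digit(1, reals))  # 5: happily flips the 1
try:
    diag_digit(2, reals)     # asks itself for its own digit...
except RecursionError:
    print("infinite regress, exactly the snag described above")
```

Of course, a computable list of all computable reals doesn’t exist in the first place; the sketch only dramatizes the self-reference.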

Comment #152 November 2nd, 2020 at 11:16 am

Scott #143: you say F+Not(Con(F)) is perfectly consistent. But then it’s not obvious to me why we can’t prove, somehow non-constructively, within this weird system that there must necessarily exist a statement whose truth and falsity are both provable, since we assume F’s inconsistency as an axiom. So this system could prove its own inconsistency! (Take this as a challenge that your post is not fully idiot-proof. 😊)

Somebody told me once Bertrand Russell’s definition of mathematics:

“Mathematics is the subject in which we don’t know what we are talking about and whether what we say is true or false, and furthermore we don’t give a damn.” 😀

(Not sure if Russell ever wrote it down exactly this way though.)

Comment #153 November 2nd, 2020 at 11:22 am

Nick #141: “It’s possible that Cantor and Gödel believed in the same model but had different intuitions about CH. It’s also possible that they believed in different models. How can these two cases be distinguished?”

Well, my comments on this were in response to Nick, who argued that their different intuitions about CH demonstrated that they had different underlying set models in mind (similar to Scott’s view that there are different equally legit models of set theory); so I was just saying that it doesn’t provide meaningful evidence for this, with which it sounds like you agree.

As far as how you could tell whether they had different models in mind, one possibility is that you could show them both the axioms of second-order ZFC, and if they each agreed that the axioms were self-evidently true about *their* conception of sets, then that would be strong evidence that they have the same models in mind (at least modulo the height of the hierarchy, which ZFC2 doesn’t specify; but that isn’t relevant to CH). Note that it would not work to just show them the axioms of ZFC1.

Timothy Chow #139: In my response to your objections I wrote about the advantage of “second-order ZFC” but that was a typo; I meant advantage of “second-order logic” (of which ZFC2 is just a special case).

Comment #154 November 2nd, 2020 at 11:30 am

Scott #148: What I had in mind was a number whose digits are uniquely defined by a finite text in English. But I see that this is not good enough. I feel there’s a joke in there somewhere, with the set of definable numbers not being well-defined…

G #151: You’re simply pointing out how the diagonal argument fails for the computable numbers. That’s fine, but there are plenty of definable numbers that are not computable, for example the Busy Beaver numbers.
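For concreteness, the basic diagonal construction that keeps coming up in this thread can be sketched mechanically. Given any claimed enumeration of reals, presented as a digit oracle `digits(i, j)` (a stand-in of my own devising), we produce a number differing from the i-th real in its i-th decimal digit:

```python
def diagonal(digits, n_digits):
    """First n_digits decimal digits of a real that differs from the
    i-th enumerated real in its i-th digit. Only digits 5 and 6 are
    used, so the result has a unique decimal representation
    (no trailing runs of 0s or 9s)."""
    return [5 if digits(i, i) != 5 else 6 for i in range(n_digits)]

# Toy "enumeration": the i-th real has every digit equal to i mod 10.
enum = lambda i, j: i % 10
print(diagonal(enum, 5))  # [5, 5, 5, 5, 5]
```

By construction the output disagrees with the i-th listed real in position i, which is the whole content of Cantor’s argument; restricting to digits 5 and 6 sidesteps the dual-representation worry raised elsewhere in the thread.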

Comment #155 November 2nd, 2020 at 11:40 am

Scott #150 “Until I see such an example, I’ll continue to think, along with Timothy Chow and others in this thread, that any valid second-order reasoning whatsoever can be “compiled down” to first-order reasoning that quantifies over sets.”

For what it’s worth, I’ve never disputed this claim. And to be clear, it doesn’t contradict the fact that (for example) 2nd-order Peano arithmetic is categorical and 1st-order isn’t. Sure, any specific reasoning process can be translated into FOL by adding new axioms to your formal system; but the second-order systems can already entail all of these results with a fixed set of axioms, which is a conceptual advantage. You may not find this conceptual advantage interesting/valuable, but the translation claims are not particularly relevant to it.

Comment #156 November 2nd, 2020 at 11:41 am

Tamas V #152: I looked up the actual Russell quote, which is from way back in 1901:

Pure mathematics consists entirely of assertions to the effect that, if such and such a proposition is true of anything, then such and such another proposition is true of that thing. It is essential not to discuss whether the first proposition is really true, and not to mention what the anything is, of which it is supposed to be true. Both these points would belong to applied mathematics. We start, in pure mathematics, from certain rules of inference, by which we can infer that if one proposition is true, then so is some other proposition. These rules of inference constitute the major part of the principles of formal logic. We then take any hypothesis that seems amusing, and deduce its consequences. If our hypothesis is about anything, and not about some one or more particular things, then our deductions constitute mathematics. Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true. People who have been puzzled by the beginnings of mathematics will, I hope, find comfort in this definition, and will probably agree that it is accurate.

In context, then, he’s just saying that in pure math you take certain hypotheses as given and then work out their consequences. I wonder if Russell ever came to regret this quote, as I’ve come to regret my now zombielike quote about how “if P=NP then everyone who could appreciate a symphony would be Mozart”? 🙂 In any case, the part about “not giving a damn” didn’t sound like Russell at all.

Regarding F+Not(Con(F)): yes, it can prove its own inconsistency—because it’s got its own inconsistency right there as an axiom! The point you need to understand is that this is not, itself, an inconsistency in F+Not(Con(F)). It would only become one if F+Not(Con(F)) could also prove its own consistency. But if F itself is consistent, then it can’t.

Comment #157 November 2nd, 2020 at 11:45 am

Mateus Araújo #154:

I feel there’s a joke in there somewhere, with the set of definable numbers not being well-defined…

The indefinability of definability is one of the cornerstones of 20th-century logic, but yes, it’s also a pretty good basis for jokes 🙂

Comment #158 November 2nd, 2020 at 12:11 pm

Scott #156:

I think I throw a RuntimeException here… 😀

(And I still like the unofficial Russell quote, I can believe he said that, nobody can behave like a perfect gentleman all the time.)

Comment #159 November 2nd, 2020 at 12:25 pm

Andrew M #140, I think you may be asking for Gödel’s completeness theorem (not to be confused with his incompleteness theorem), which says that, for a given first-order theory T, a sentence S is true in every model of T exactly when S is provable in T.

Comment #160 November 2nd, 2020 at 12:48 pm

Scott,

Thank you for taking the time to write this series of posts. I think I speak for many of us when I say that we share your curiosity and enthusiasm for this subject, but benefit from having our fearless leader (you) blaze a trail forward through the jungle with his machete.

Posts like these are the reason that S-O is the best blog on the internet.

Tu

Comment #161 November 2nd, 2020 at 1:14 pm

grue #144 and #153: Let me try to put it this way. I claim that, if our goal is to determine whether there could be a proof of CH, then second-order logic doesn’t buy us anything that we can’t get with first-order logic.

For example, the “definiteness” that second-order logic seems to buy us is not proof-theoretic. We still have no guarantee that there will be either a proof or a disproof of CH in any sense of the word “proof” that corresponds to what mathematicians recognize as a proof. So this “definiteness” is not any kind of advantage if our interest is in proof/disproof. If you say, well, at least we know that the axioms either entail CH or entail \(\neg\)CH, then my response is, so what? If entailment is divorced from proof then I don’t think mathematicians are going to be terribly interested in this kind of “definiteness.” It’s just going to seem like playing word games. There is no material advantage to be had.

Similarly, if we someday find a “proof” of CH in ZFC2, then I claim that it’s only going to be accepted as a proof by the mathematical community if it can be translated into a proof from ZFC + (additional axioms) using first-order logic. So again, second-order logic isn’t buying us anything that first-order logic can’t.

This isn’t to say that second-order logic isn’t worth thinking about or studying, but it does mean that if (say) Scott decides to ignore second-order logic, then he isn’t committing the error of fundamentally overlooking some possible avenues to a proof or disproof of CH.

Comment #162 November 2nd, 2020 at 1:23 pm

This is my favorite form of escapism.

Can’t wait for the sequel.

Comment #163 November 2nd, 2020 at 1:24 pm

> “there can’t be an enumeration of all the reals of our universe.”

How can one move one’s hand through space without enumerating the reals?

Comment #164 November 2nd, 2020 at 1:52 pm

fred #163: How do you know space is a continuum, rather than (say) a discretuum at the Planck scale? No one disputes that you can enumerate the integers up to 10^{35}… 🙂

Comment #165 November 2nd, 2020 at 2:58 pm

Tamas #152: I think your question is: if we take Not(Cons(F)) as an axiom, doesn’t that give us a contradiction, so that F + Not(Cons(F)) is inconsistent?

I don’t know very much about this area, but let me take a whack at it. I think the problem is the difference between consistency and soundness. Consistency just means that the system can never prove a contradiction. Soundness means that all of the statements that the theory proves are *actually* correct when viewed from the outside. “From the outside” here means that we apply some semantics: e.g., we might say that \(\exists\) really means “there exists” and is not just a symbol. An unsound theory may still be consistent; that would just mean that the rules of inference are not powerful enough to derive any contradiction. F + Not(Cons(F)) is one such theory. From the outside, we know that F + Not(Cons(F)) is unsound: if F is consistent then Cons(F) is true, so Not(Cons(F)) is a false axiom and F + Not(Cons(F)) is unsound; and if F is inconsistent then F is unsound, and so F + Not(Cons(F)) is unsound as well.

The thing that really used to confuse me is: what is this weird Cons(F)? How on earth would you say “F is consistent” as a statement within F? But Gödel came up with a kind of hack so that F can make a statement about integers that is *equivalent* to the consistency of F. This equivalence can only be seen by going outside of the system and “translating” various statements about integers into statements about F itself. Cons(F) is some complicated statement about integers that encodes the statement that the system is consistent, and Gödel’s second incompleteness theorem tells us F cannot prove Cons(F). Viewed from within F, Cons(F) is some weird, seemingly unmotivated number-theoretic proposition; apparently, it says that some particular polynomial in a zillion variables can never be 0 when evaluated at integers. Only by stepping outside the system do we see that it is in fact equivalent to the system’s consistency. So Not(Cons(F)) is not a statement like “A and not A” or anything that can be used in any way to derive a contradiction, at least within F.
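Jair’s point that talk about a formal system can be mirrored by talk about integers can be made concrete with a toy encoding. (This is only the flavor of Gödel numbering, not Gödel’s actual construction; `godel_number` and `decode` are illustrative names of mine.)

```python
def godel_number(s: str) -> int:
    """Toy arithmetization: pack a formula or proof, viewed as a
    string of symbols, into a single integer (base-256 encoding)."""
    return int.from_bytes(s.encode("utf-8"), "big")

def decode(n: int) -> str:
    """Recover the original string from its code number."""
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")

proof_text = "axiom; modus ponens; conclusion"
g = godel_number(proof_text)
print(g > 0 and decode(g) == proof_text)  # True: syntax round-trips through arithmetic
```

Once proofs are integers, “F is consistent” becomes a (hugely complicated) purely arithmetic claim: no integer codes a proof of a contradiction.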

I like to imagine mathematicians “living in” F; if they ever encounter Cons(F), and decide it’s important for some reason, they’ll keep getting their Cons(F) proof attempt papers rejected over and over for various mistakes, never knowing that Cons(F) is in fact true but unprovable. Mathematicians living inside F + Not(Cons(F)), on the other hand, will go on happily proving various untrue statements and never deriving any contradictions, remaining blissfully ignorant of the fact that all their research is nonsense.

I have to admit I never quite understood the importance of the second incompleteness theorem. The first seems quite startling – no matter how powerful your system is, there will always be statements in the theory that are true, but that you can’t prove. But the fact that a theory can’t prove its own consistency… well, that’s a very weird thing to ever expect a theory to do, so it’s not surprising it can’t. Of course the second implies the first, but that’s not obvious at all until you know that consistency of F can be expressed in F.

Comment #166 November 2nd, 2020 at 3:27 pm

Timothy Chow #161:

“If entailment is divorced from proof then I don’t think mathematicians are going to be terribly interested in this kind of “definiteness.” It’s just going to seem like playing word games.”

The fact that various 2nd-order axiomatizations are categorical, or that ZFC2 uniquely *entails* the answer to CH, are rigorous and provable claims, so I’m not sure why you imply that these results are word games. Perhaps you are tacitly making formalist assumptions about the centrality of proof, but plenty of mathematicians take a more realist/platonist vantage point.

“This isn’t to say that second-order logic isn’t worth thinking about or studying, but it does mean that if (say) Scott decides to ignore second-order logic, then he isn’t committing the error of fundamentally overlooking some possible avenues to a proof or disproof of CH.”

I wasn’t objecting to Scott’s provability claims, but rather to his comments that he doesn’t believe in 2nd-order logic, or understand why anyone chooses to use it, or that 2nd-order categoricity merely “sounds nice”, etc. Also, I think it can be misleading to the general public to convey the various first-order impossibility results without acknowledging that the situation is often less clear-cut if you allow quantification over predicates (e.g. that ZFC1 doesn’t uniquely determine the truth of CH, but ZFC2 does, etc). Scott also said he leans towards thinking that there is no single correct answer to CH, but this claim has to be reconciled in some way with the fact that ZFC2 does uniquely fix the answer.

“if we someday find a “proof” of CH in ZFC2, then I claim that it’s only going to be accepted as a proof by the mathematical community if it can be translated into a proof from ZFC + (additional axioms) using first-order logic.”

If someone finds a proof of CH in ZFC2, then of course mathematicians will accept it. Presumably if there is a proof, then there is some sequence of inferential steps from ZFC2 to CH, in which each individual step is self-evidently correct; such a proof would convince mathematicians that CH is true, so they would and should accept it. Of course, I assume they would in fact be able to translate it to FOL with additional axioms, but then this doesn’t say anything about mathematical skepticism towards 2nd-order proofs or justify scare-quotes around 2nd order “proof”.

Comment #167 November 2nd, 2020 at 4:08 pm

Scott #164,

“How do you know space is a continuum, rather than (say) a discretuum at the Planck scale? ”

I hope that the level of this question is not so far below the level of the discussion hitherto that it does not warrant a response:

To what degree are you sympathetic to the idea that the disagreement between GR and QM arises from the reliance of GR on the notion of a continuum? That is, do you think that GR built on top of a mathematical framework that is fundamentally discrete and finite would be a closer approximation to the truth?

Put another way, are you sympathetic to the idea that fundamental theories should not be reliant on a mathematics that employs the Axiom of Choice (differential geometry, for example)?

Comment #168 November 2nd, 2020 at 4:09 pm

I very much admire this post. I set myself the goal of trying to learn General Relativity “properly” beyond being able to answer the not very hard exam questions from my University course years earlier.

I don’t think I managed to learn much more than answer some more difficult exam questions, though I at least understood Penrose’s contribution quite well for his recent Nobel Prize.

Comment #169 November 2nd, 2020 at 4:20 pm

Jair #165:

Thanks a lot, this helped! Sorry for the late reply, was busy devising my S&P 500 shorting strategy for tomorrow 🙂

Yes, my question was as you wrote, and yes, I really had this idea in my mind that Not(Cons(F)) was actually a non-constructive statement like “there exists a proof that concludes A and not A”.

So you say that if we want to talk about Gödel’s incompleteness theorems, we *have to* believe that arithmetic is *really* true? That is, our interpretation based on the assumed *truth* of arithmetic is a crucial ingredient here.

Comment #170 November 2nd, 2020 at 4:54 pm

J #83:

> Fun fact that I’m sure you already encountered many times: 1011… [followed only by a sequence of zeros] is equal to 1010… [followed only by a sequence of ones], which means this number may well be in Cantor’s enumeration. I know, I know, there is a simple fix through using decimal representation instead of binary numbers. …

I think the question of how Cantor’s proof handles the fact that some reals have two binary representations is a legitimate one. It’s well known, and I’m sure there’s a standard answer (presumably in his original paper), but I don’t know it, so I’ll try to answer it below. But first: passing to a decimal representation doesn’t remove this issue — it just turns it into a question about repeating 9s vs repeating 0s, rather than about repeating 1s vs repeating 0s.

To restate your question: since binary .101100000… == binary .101011111…, how do we know that the diagonally constructed binary sequence, which we proved is not in the hypothetical enumeration, isn’t just a different representation of some real number already represented in the list, rather than a representation of an actual missing number?

As far as I know, it could be! This is why the question needs a real answer. Fortunately there’s a simple one, based on the fact that every real number has at most two binary representations. (Of course, to understand this, you first need to fully understand how the original proof works for “binary sequences”.)

Revised proof: suppose we could enumerate the reals in [0,1) (possibly with repetition). Do so, expressing each one as a binary sequence. But for a real that has two valid forms, instead of listing it once (using whichever form you like), list it both ways! Surely this new requirement doesn’t make your job harder, since each real has at most two valid forms. But now, all its forms appear in your list, so the diagonally created exceptional binary sequence can’t agree with any of them (that is, it will differ in at least one bit position from every valid binary form of every real you included in your list). But it will still represent some real number. Therefore, that number is not anywhere in your list.

(This proof would not be valid if any real had uncountably many valid forms! And if a real could have a countable number of valid forms, this proof would have to be modified to not try to list all those forms at once. But since each real has at most finitely many valid forms (in fact, at most 2), it works.)
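The bookkeeping in the revised proof can be illustrated on a finite toy prefix (the row values below are mine, purely illustrative): list a dyadic rational in both of its binary forms, then check that the diagonal sequence still differs from every row.

```python
def diagonal_bits(rows):
    """The classic diagonal sequence: flip the i-th bit of the i-th row."""
    return "".join("1" if rows[i][i] == "0" else "0" for i in range(len(rows)))

# 4-bit prefixes of a toy enumeration. The dyadic rational with
# 0.1011(000...) == 0.1010(111...) is listed in BOTH its forms, per
# the revised proof, so the diagonal can't reappear as a hidden alias.
rows = ["1011", "1010", "0000", "1111"]
d = diagonal_bits(rows)
print(d)                           # 0110
print(all(d != r for r in rows))   # True: differs from every listed form
```

In the infinite version, each real contributes at most two rows, so listing both forms keeps the enumeration countable while guaranteeing the diagonal sequence misses every representation of every listed real.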

Comment #171 November 2nd, 2020 at 4:57 pm

Timothy Chow #145, Scott #150: Thanks!! Timothy, you got to the heart of my misunderstanding: I was imagining that the proof of categoricity for Peano was somehow 2nd-order logic, but I couldn’t see how that was so; it seemed to all be reducible to 1st order. The distinction between 2nd-order *axioms* and 2nd-order *proofs* was what I wasn’t seeing. Which I realize now is what Sniffnoy #37 was getting at.

And then Scott #150 asked and answered my follow-up question: there don’t seem to be any *proofs* which are “irreducibly” 2nd order.

My picture of the mathematical universe is suddenly much more settled, thank you!

Comment #172 November 2nd, 2020 at 5:36 pm

Scott – Many thanks for taking this on! I look forward greatly to further installments.

I took a year of graduate model theory as an undergraduate math major, and found myself bewildered by forcing. So having an explanation that will be infused with a theoretical computer science perspective is delightful. And this post got me to stop thinking about the election for a few minutes, so yay.

Comment #173 November 2nd, 2020 at 5:37 pm

Typo: “ability to reason about about infinity”

Comment #174 November 2nd, 2020 at 6:28 pm

grue #166: We’re skirting dangerously close to the realm of pure opinion, so I hesitate to pursue the discussion much further, but I do firmly believe that your claim that “it can be misleading to the general public to convey the various first-order impossibility results, without acknowledging that the situation is often less clear-cut if you allow quantification over predicates” is exactly backwards. What I think is misleading to the general public is the suggestion that somehow second-order logic opens up genuinely new possibilities (regarding the provability or disprovability of CH) that are not covered if one restricts oneself to first-order logic.

This has nothing to do with platonism versus formalism; I am rather sympathetic to platonism myself. Rather, it has to do with the way you make it sound like the concepts of categoricity and entailment (in the context of second-order logic) might be relevant to the question of whether CH is provable, when they’re not. Platonists and formalists do not differ when it comes to the question of determining whether or not a given argument constitutes a valid proof. They’ll apply exactly the same standards when refereeing papers. This is why mathematicians are able to achieve such a high level of communal agreement as to what constitutes correct mathematics, even though mathematicians may disagree wildly on philosophical questions such as platonism and formalism.

I think you probably understand the points I’m making, but the way you’ve chosen to express yourself strikes me as potentially very confusing to other readers of these comments.

Comment #175 November 2nd, 2020 at 7:53 pm

@Scott #156:

“Regarding F+Not(Con(F)): yes, it can prove its own inconsistency—because it’s got its own inconsistency right there as an axiom!”

I think it’s a bit subtler than that.

The formula you are referring to as Not(Con(F))–the one we adjoin to F as a new axiom–does not quite say “F is inconsistent”. What it says is “there exists some number that is the Gödel number of a proof that F is inconsistent”.

However, because F itself is consistent, this number cannot be a standard natural number (if it were, F would be able to prove itself inconsistent, and hence could not be consistent); it must be a non-standard number. And, as you noted in a much earlier post, a “proof” that has a non-standard number as its Godel number is really not a proof, so much as a promise that a proof will be forthcoming, which is never actually delivered. (As I understand it, this is because the “proof” has an infinite number of steps and so never actually terminates.)

So it seems to me that what is really happening when we add Not(Con(F)) to F as a new axiom is that we are forced to reinterpret what the sentences of our new formal system are saying. The things that we are now forced to call “proofs”, because the symbols that appear in the relevant sentences are the same ones that led us to call things “proofs” when it was just formal system F we were looking at, now are not all actually “proofs” in the same sense–they don’t all have just a finite number of steps. So it’s not so much that F+Not(Con(F)) “proves” its own inconsistency, because the new axiom we added actually doesn’t say that F is inconsistent in the old sense; it only says it in the new sense where we count as a “proof” a thing that wasn’t a “proof” in the old sense.

Comment #176 November 2nd, 2020 at 7:56 pm

@Scott #150:

“I would rephrase your question as follows. Is there even a single example of a proof that’s “irreducibly second-order” and that I’m able to understand?”

Would your proof in #148 that the cardinality of the set of all constructible reals must be aleph-1 count as an example of this? That proof says “if we have constructive access to the set of all constructible reals”; but this notion of “the set of all constructible reals” looks to me like one of those second-order notions (like power set) that can’t be “compiled” down to first-order notions.

Comment #177 November 2nd, 2020 at 7:57 pm

Jair #165: Insofar as the question makes sense at all, a “person living within F” perfectly well *can* realize that Con(F) is equivalent to the consistency of F. (Sure, we need to code reasoning about F in the language of F, but the same would be true for any other area of math!) What they can’t do is just *prove* Con(F). Even if you don’t care about its philosophical import, the second incompleteness theorem has loads of applications—for example, sometimes you show that some other statement S is unprovable in F by showing that, if F proved S, then it would also prove Con(F).

Comment #178 November 2nd, 2020 at 8:01 pm

Bruce #170: Thanks for sharing these thoughts!

> every real number has at most two binary representations

Well yes, that’s exactly what I don’t understand. Isn’t this property due to the specific details of our usual systems of numeration? What if we were used to a non-integer base of numeration? We usually say that 3.14… means Pi, but in truth 1.00… could also mean Pi, and actually any sequence means Pi in some well chosen base of numeration. So, I guess what you meant is *once you set the base of numeration*, every real number has at most two binary representations. But how do we prove that for all possible bases of numeration? Or why we don’t need to prove that, if we don’t need to prove that?

> passing to a decimal representation doesn’t remove this issue

The idea I’ve seen was: construct the diagonal using two numerals only, say 2 and 3 (replace all diagonal digits by 2s, except the 2s that are replaced by some 3s). The resulting sequence can’t have two distinct representations (nor be in the list), so hurray.

[of course this doesn’t change anything for me, still confused by the idea that maybe this sequence of 2s and 3s might mean Pi, or any number really, so how do I know it doesn’t have two distinct representations?]

Comment #179 November 2nd, 2020 at 8:12 pm

David Roberts #126:

I can’t resist pointing out a surprising fact related to your claim 1b, about the fact that the powerset \(\mathcal{P}(\mathbb{N})\), ordered by inclusion, contains the maximal chain \(\emptyset, \{0\}, \{0,1\}, \{0,1,2\}, …\) of size only \(\aleph_0\), and therefore should be considered to be wide, rather than long. (For those wondering what a “chain” is, it is a subset of a partially ordered set which has the property that every pair of things in it can be compared.)

Actually, it turns out that \(\mathcal{P}(\mathbb{N})\) also contains a *different* maximal chain, whose size is exactly \(2^{\aleph_0}\)! (The proof of this is a wonderful exercise which I don’t want to give away the solution to – it’s just so satisfying if you manage to solve it yourself!) So the partially ordered set \(\mathcal{P}(\mathbb{N})\) isn’t “ranked”, and its height is actually equal to its size. So \(\mathcal{P}(\mathbb{N})\) actually is quite long, as well as being quite wide.

Actually, there is a more general fact, which can be proved in ZFC: for every set \(X\), the powerset \(\mathcal{P}(X)\) (ordered by inclusion) always contains a chain \(C \subseteq \mathcal{P}(X)\) such that \(C\) has a *strictly larger* cardinality than \(X\), that is, \(|C| > |X|\). This is a much trickier version of the previous exercise, which is itself quite tricky on its own – again, I’ll refrain from spoiling the solution!

Comment #180 November 2nd, 2020 at 8:14 pm

I’ll try to follow your popular exposition on CH Scott, but I must admit, I’m not a mathematician, I only do *philosophy* of math 😉

Extracted from ‘The Sphere Of Knowledge’ is my wiki-book on ‘Mathematical Logic’: my alphabetic ordering of all the key concepts in logic, linked to Wikipedia articles (currently 660 entries). This should come in handy for readers who want to look up terms while following along with the posts:

https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Mathematical_Logic

It’s interesting that, according to my ordering of fields of knowledge by complexity, pure math should actually be, in some sense, the ‘simplest’ subject, much simpler than the most complex subjects like biology, social sciences, and psychology. Yet humans have much more difficulty reasoning about pure math, even though we can navigate the social world OK! So it’s not actually the complexity that’s the problem; it’s the fact that pure math is so alien and removed from everyday experience. But we should all keep in mind that pure math is actually simple! (Interestingly, I believe that artificial general intelligence is at the exact mid-point between the most simple and most complex subjects – so even AGI is in some sense ‘much simpler’ than biology and social sciences!)

Comment #181 November 2nd, 2020 at 9:48 pm

grue #166: So if I understand you correctly, your claim is that, while it’s true that second-order proofs can always be “compiled down” to first-order ones, second-order logic can be a useful “high-level programming language” for coming up with those proofs in the first place? If so, do you have any examples of where that’s happened that would provide any reason for optimism about CH?

Comment #182 November 2nd, 2020 at 10:32 pm

Tu #167: I’m *very* sympathetic to the idea that the spacetime continuum is only an approximation to something deeper, plausibly involving holography and/or discrete structures at the Planck scale. But, y’know, so are most of the actual experts in quantum gravity and the black hole information problem these days, and my moral support for their quest won’t earn me a Nobel Prize!

But even if it *were* the continuum all the way down, I’d still be astonished if the Axiom of Choice were ever needed for physics. More likely, it seems to me, just a small fragment of ZF would suffice for all the analytic and geometric questions that are physically relevant.

Comment #183 November 2nd, 2020 at 10:36 pm

Dave Lewis #173: Thanks!! Fixed.

Comment #184 November 2nd, 2020 at 10:40 pm

Peter Donis #175: Yes, I understand all that you say. It’s just that I have no problem describing the situation informally by saying that the theory F+Not(Con(F))

(1) *asserts that* F is inconsistent, and

(2) *is wrong* about it.

Comment #185 November 2nd, 2020 at 10:46 pm

Scott #177: Well, I suppose I was using “living in F” to really just mean “someone doing ordinary math in F” without knowing about Godel encoding etc.

Regarding the second incompleteness theorems, it’s not that I don’t care about the implications so much as I don’t know exactly what they are. Just revealing my general ignorance here 🙂 Certainly didn’t mean to denigrate it! Thanks for explaining.

Comment #186 November 2nd, 2020 at 10:48 pm

Zeb #179: Oh, that is *so* weird! (I think I solved your puzzle. I won’t give it away. Highly recommended!)

I thought I’d seen weird things before, like not only countability, but finiteness, being relative to point of view… but this one, I still don’t yet “really believe”, even though it’s pretty definite!

Comment #187 November 2nd, 2020 at 10:59 pm

Peter Donis #176: No, the concepts I was talking about can all be formalized in ZFC.

Comment #188 November 2nd, 2020 at 11:33 pm

J #178:

> … Isn’t this property due to the specific details of our usual systems of numeration? What if we were used to a non-integer base of numeration? … I guess what you meant is *once you set the base of numeration*, every real number has at most two binary representations. But how do we prove that for all possible bases of numeration? Or why we don’t need to prove that, if we don’t need to prove that?

I was only talking about “standard base 2”. And in fact, in “base pi”, it might no longer be true that “every real number has at most two representations” (I’m not sure, but if you permit 4 digits so you can represent all numbers, then I guess not). At least if you use base 2 but permit 3 digits (0 or 1 or 2), some numbers have uncountably many representations.
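For what it’s worth, the familiar base-2 ambiguity (a terminating expansion versus its tail of 1s) is easy to check numerically. Here is a small Python sketch, purely as illustration of Bruce’s “standard base 2” point, nothing deeper:

```python
# Two binary expansions of the same real number 1/2:
#   0.1000...  and  0.0111...
def value(bits, n):
    """Value of the first n digits after the binary point."""
    return sum(b / 2 ** (i + 1) for i, b in enumerate(bits(n)))

rep_a = lambda n: [1] + [0] * (n - 1)   # 0.1000...
rep_b = lambda n: [0] + [1] * (n - 1)   # 0.0111...

for n in (5, 20, 50):
    assert value(rep_a, n) == 0.5                    # exactly 1/2
    assert abs(value(rep_b, n) - 0.5) == 2.0 ** -n   # off by exactly 2^-n, -> 0
```

The two prefixes agree with 1/2 up to an error of 2^−n, so both infinite expansions denote the same real, which is exactly the “at most two representations” phenomenon.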

But none of this is a problem. Given *any* single well-defined way of representing real numbers, you can try using it to prove real numbers are uncountable (using this method or any other method). Maybe the proof has to be different for different systems of representation, and maybe for some of them you can’t find a proof that works at all. But as soon as you find a proof using any one system, you’re done! Since what you are proving is really about the real numbers themselves, and about your ability to make lists of things. The representation of numbers as strings of digits is just something you *use* to make the proof work.

> … construct the diagonal using two numerals only, say 2 and 3 (replace all diagonal digits by 2s, except the 2s that are replaced by some 3s). The resulting sequence can’t have two distinct representations (nor be in the list), so hurray.

I think that is a valid proof, so that is another correct way of doing it — perhaps simpler, since you only have to list each number once (and you can list it in any valid form).
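Here is a minimal Python sketch of that two-digit diagonal, just to make the construction concrete. The toy `listing` below is my own stand-in for a purported enumeration, not anything from the thread:

```python
def diagonal(listing, n):
    """First n digits of a sequence that differs from row i of `listing`
    at position i, using only the digits 2 and 3 (so the result cannot
    collide with an alternative representation of a listed number)."""
    return [3 if listing(i)[i] == 2 else 2 for i in range(n)]

# Toy stand-in for a purported listing: row i = binary digits of i, padded.
def listing(i):
    return [int(b) for b in reversed(format(i, "b"))] + [0] * 1000

d = diagonal(listing, 8)
assert all(d[i] != listing(i)[i] for i in range(8))  # differs from every row
assert set(d) <= {2, 3}                              # only digits 2 and 3
```

Whatever listing you plug in, the output disagrees with row i at position i, and since it uses only the digits 2 and 3 it has a unique decimal representation.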

Comment #189 November 3rd, 2020 at 12:19 am

Scott #181:

“So if I understand you correctly, your claim is that… second-order logic can be a useful “high-level programming language” for coming up with those proofs in the first place?”

No. As I wrote above: “I never suggest that CH… would be easier to prove in 2nd order logic.” and “…to be clear this was a conceptual point, not a practical recommendation for tackling CH”.

My main claims are the same as in my initial comment in this thread, namely that 2nd-order logic offers some conceptual advantages that can clarify our understanding of set theory, including categoricity, and entailing a definite answer to CH.

Timothy Chow #174

“What I think is misleading to the general public is the suggestion that somehow second-order logic opens up genuinely new possibilities (regarding the provability or disprovability of CH)”

I never suggested this, and already told you that this is not my view. I did mention that it’s an open question whether CH could be proven from the axioms of ZFC2, which I don’t think you dispute.

“This has nothing to do with platonism versus formalism… Platonists and formalists do not differ when it comes to the question of determining whether or not a given argument constitutes a valid proof.”

Sure, but that isn’t the context in which I raised the issue of Platonism; rather, you had suggested it was “word games” for me to mention the *fact* that ZFC2 entails a definite answer to CH, and I had simply wondered if this objection was driven by formalism.

Comment #190 November 3rd, 2020 at 1:11 am

Scott #177:

Ok, I assume this also means, due to rules of classical logic, that a person living in F knows that Not(Con(F)) is equivalent to “F is inconsistent”, and he/she understands what (in)consistency means. Thus a person living in F+Not(Con(F)) should know the same. But then, the same person in F+Not(Con(F)) must also know (understand) that F+Not(Con(F)) is inconsistent (most likely he/she then would commit suicide). So why doesn’t this knowledge qualify as a proof in F+Not(Con(F))? That’s my confusion.

It’s like when a person knows beyond doubt that the Goldbach conjecture is true, and everybody in the world agrees with the argument, but cannot publish it because mathematicians say it doesn’t qualify as a proof.

Comment #191 November 3rd, 2020 at 2:11 am

Scott #4

Is there a deep psychological connection between today’s election and your current focus on decidability? Will you prepare a friend of the court brief on Godel’s theorems for the Supreme Court in the event of their involvement? 🙂

I remember when I went through Cantor’s diagonalization and Godel’s theorems, they were such tasty intellectual morsels. I never used them as an engineer 🙂 so the luster faded. If I thought I could regain that sense of wonder I would dedicate the time to seriously go through Cohen and test if my mind has been permanently and irretrievably altered by practicality.

Comment #192 November 3rd, 2020 at 5:12 am

Scott, given that the real numbers that have a name and “can be talked about” are countable, does this not make reasoning about the cardinality of the other reals kind of weird? How can numbers that cannot even be described be relevant, other than just knowing that they exist?

Comment #193 November 3rd, 2020 at 5:53 am

Many commenters here thanked Scott for this post. But it is also the comments and Scott’s engagement in those comment discussions (including moderation) which makes this place a very unique resource.

I wonder how to thank Timothy Chow, or better how to describe why one should thank him. The FOM mailing list is still alive after many years, and can tolerate long periods of silence. Part of this is that Tim will answer to the occasional mail trying to start a new topic, and try to engage with it on a sympathetic and meaningful level.

There is another thing about Tim: He is interested in communication and in the “mental image” of other persons (about a subject), not just in the subject itself. This became important to me at one point. I once wondered how it is possible that the model existence theorem (i.e. Gödel’s completeness theorem) can be proved. Basically the question for me was which ontological commitments enter into the proof. (My current answer: the ontology of “computation in the limit” is sufficient.) As a side effect, I became convinced that I can accept that every Turing machine either halts or does not halt if started on the empty tape, and still not accept that arithmetic statements always have a clear true or false status.

Because Tim wondered about the mental images of strict finitists like Edward Nelson, or the mental images of “defenders of a multiverse perspective of mathematical truth” like Joel David Hamkins, I was motivated to explicitly write down descriptions of models (mental images) of the natural numbers (basically just using the well-known mental image of real numbers as infinite sequences of digits and interpreting it in the context of natural numbers) that might help explain why I believe it is possible to not be convinced that every arithmetic statement must be either true or false.

(And I am really thankful to Tim that he responded both to my mails to FOM and also to my private mails. He gave valuable feedback, which enabled me to restructure my exposition in a way that at least I myself was much happier with, while simultaneously making it clear that the descriptions were not convincing for him.)

Comment #194 November 3rd, 2020 at 7:55 am

Ooh, Set Theory Since Democritus!

As a lowly “lay” software developer, I’m looking forward to this series. I bought and enjoyed your book on quantum computing, and although I don’t think I totally understood everything, I think given my weaker mathematical foundation I still learned more than I could have hoped. I love the way you break down and explain complicated topics, and I appreciate you putting such interesting content on your blog here for free. Thanks!

Comment #195 November 3rd, 2020 at 8:11 am

Tu #167: Reliance on the continuum and reliance on the axiom of choice are two very different things. It’s not so easy to develop general relativity without ever considering real numbers, but it is much easier to avoid the axiom of choice. For example, the book *Constructive Analysis* by Bishop and Bridges shows how to develop the usual mathematical tools used in physics, including the continuum, without the axiom of choice or other non-constructive assumptions.

In fact, it’s rather hard to use the axiom of choice in any essential way in physics. AC is nominally used in certain branches of mathematics (e.g., the Hahn–Banach theorem in functional analysis, which is used in quantum theory) but not in a way that leads directly to physically testable predictions. For example, there’s something called the Shoenfield absoluteness theorem that, among other things, implies that AC can never matter for a question of the form, “If I start this computer program running, will it ever halt?”

If you want to see examples of the “use” of AC in physics, you can ask Google Scholar about “axiom of choice svozil”. But I think most people react to these examples not by becoming convinced that AC is relevant to physics, but by questioning whether something counts as physics just because it uses physics terminology.

Comment #196 November 3rd, 2020 at 8:18 am

Scott #164

“How do you know space is a continuum, rather than (say) a discretuum at the Planck scale? No one disputes that you can enumerate the integers up to 10^35.”

I don’t, but that’s my point: is the fact that I can move my hand through space evidence that space isn’t a continuum?

In other words, are all the difficulties around counting reals evidence that continuums are flawed abstractions, i.e. they may be handy tools to do powerful symbolic manipulation, but can’t be the basis for physical reality.

“True” reals are impossible to implement on computers (except via symbolic manipulation); all we ever do is manipulate countable subsets of the reals and pretend they’re a continuum (e.g. doing rigid-body simulation on a computer using 64-bit floats), because most of the models we use are about things being linear at small scales, where interpolation between elements of countable sets is interchangeable with a continuum.

But I guess that’s not true for models that use fractal objects, which are very counter intuitive, but are also very practical tools to understand reality.

Comment #197 November 3rd, 2020 at 8:40 am

Bruce Smith #188,

>as soon as you find a proof using any one system, you’re done!

Thanks! Let me try to rephrase your point, to make sure I got it:

« Once we find a proof that we can’t enumerate all binary representations of the real numbers themselves, then we don’t need to prove it again for any other base of numeration, because all real numbers have a binary representation and what we care about are the real numbers themselves, not their representations. »

If this reformulation is correct, you might guess my follow-up question: how do we know that all these real numbers out there can be captured using our usual binary representation? Or how can we remove this assumption, if we can remove this assumption?

>At least if you use base 2 but permit 3 digits (0 or 1 or 2), some numbers have uncountably many representations.

Ouch, uncountably many? I was thinking your idea from #170 (« make sure you enumerate all possible writings each time you write a number ») would help treat these cases. Sorry if that’s obvious, but how do we know there are uncountably many?

>that is another correct way of doing it

Not my idea, I probably stole it from the French Wikipedia page about Cantor’s diagonal argument.

Comment #198 November 3rd, 2020 at 8:43 am

Scott #164

“How do you know space is a continuum, rather than (say) a discretuum at the Planck scale? No one disputes that you can enumerate the integers up to 10^35.”

One last thing: discretuums aren’t necessarily easier to understand than continuums either.

Continuums lead to Zeno paradoxes, but what does it mean for an object to move through discrete space?

What does it even mean for an object to be in only two possible states, what happens when there’s a transition from one state to the other?

A bit like the questions around teleportation: during a transition, does the object exist briefly in two states at once? Or does it disappear momentarily from reality and then reappear in the new state? How does that work with the idea of identity? What about when time itself becomes discrete?

Discrete models are just as strange as continuums, but maybe with less theoretical complications and inconsistencies?

As humans, our first instinct is that continuums seem more natural; there are really no true examples of discretuums either. When we are counting beans and moving them between jars, things are continuous as we actually move the beans between the jars. Just like when we say a computer manipulates 0s and 1s: each transistor behaves as a continuous device, until we start counting electrons, i.e. discrete units of charge, whose positions in space are nonetheless continuous… and then we get to the quantum realm, where the model is both continuous and discrete at once (*).

So it’s more precise to say that humans constantly flip-flop between the continuous and the discrete to understand the world.

(*) Talking about this: when it comes to the validity of the Many-Worlds interpretation of QM, what happens when we consider the decay of a single particle? It’s not a clear binary fork like a spin up/down situation. The decay of a single particle can happen at any point in time, no? Therefore the wavefunction of the universe would have to split into an infinite number of branches? Unless we consider time discrete, which would put a bound on the number of possible decay events?

Comment #199 November 3rd, 2020 at 8:52 am

Fred #106, Scott #110

NOTE: I am refuting only the diagonal argument; the other Cantor proofs of apparently the same thing are too dense for me at this time to fully understand.

—

I think a formal way to reject the diagonal argument is to point out that it is non-constructive (since it uses proof by contradiction; assumes the principle of excluded middle).

—

Here’s another attempt:

Claim 1: comment #81’s way of constructing all infinite bit sequences is valid. I think it is valid. Maybe add a rule about appending 0s as needed to make the growth explicit in the procedure.

Claim 2: The diagonal argument requires a square matrix. I think this is required for the diagonal argument to be true.

Claim 3: The matrix I construct is not a square matrix (width != height). We can observe the growth of the construction and conclude that the width of the matrix grows as `log n` and the height of the matrix grows as `n`. To get a square matrix, we require that `(log n)/n -> 1` and `n/(log n) -> 1` as `n -> Infinity`. But `(log n)/n -> 0` as `n -> Infinity`. And `n/(log n) -> Infinity` as `n -> Infinity`.

—

Perhaps the difference in our reasoning is that I reject non-constructive proofs. But even if I accepted proof by contradiction, I don’t see how the assumption of a square matrix holds.
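For whatever it’s worth, the two limits invoked in Claim 3 are easy to confirm numerically (this says nothing one way or the other about whether the “square matrix” requirement itself is justified). A quick Python check:

```python
import math

# The ratios invoked in Claim 3, evaluated for growing n:
for n in (10, 10**3, 10**6, 10**9):
    print(n, math.log2(n) / n, n / math.log2(n))

# (log n)/n tends to 0, while n/(log n) grows without bound:
assert math.log2(10**9) / 10**9 < 1e-7
assert 10**9 / math.log2(10**9) > 10**7
```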

Comment #200 November 3rd, 2020 at 9:24 am

Thanks for writing this. I also always wanted to learn about this proof and never got to it, so thank you!

Since this guide is aimed at complete idiots, I feel entitled to make a small request: can you clarify the following sentence a bit: “In any case, though, the self-hating theory ZF+Not(Con(ZF)) can’t be arithmetically sound: I mean, just look at it! It’s either unsound because F is consistent, or else it’s unsound because F is inconsistent.”

It is quite near the beginning, so it is disheartening to get stuck so early on a seemingly irrelevant point.

One question I have is whether ZF and F are the same thing in this context. But the more pressing question is what arithmetically sound means. How is sound different from consistent?

Some googling revealed the excellent explanation by Jair in comment 165, but for the complete idiots among us it would be nice to have some (shortened) version of that in the text itself.

Comment #201 November 3rd, 2020 at 9:28 am

Scott #181: “So if I understand you correctly, your claim is that… second-order logic can be a useful “high-level programming language” for coming up with those proofs in the first place? If so, do you have any examples of where that’s happened…”

I responded above that this wasn’t the claim *I* was defending, but then I just came across this [1] article which seems like it may actually offer some evidence for the idea that it’s easier in practice to reason in higher-order logic (and comparing set-theory to machine code); e.g. they say:

“Anecdotal evidence suggests that, for equivalent kinds of theorems, proof in higher order logic is usually easier and shorter than in set theory. Isabelle users liken set theory to machine code and type theory to a high-level language”

Anyway, curious if you see this as relevant to your question above.

[1] https://www.cl.cam.ac.uk/archive/mjcg/papers/holst/TPHOLs.pdf

Comment #202 November 3rd, 2020 at 9:35 am

fred #198,

>What does it even mean for an object to be in only two possible states, what happens when there’s a transition from one state to the other?

I once read a draft of a theory that would say: “It means these two possible states are mirror images of each other. Transition is what happens when you change the point of view.” Maybe something to do with the concept of symmetry.

Comment #203 November 3rd, 2020 at 9:41 am

That said, I really loved the ‘four levels to unpack’ section. It clarifies not only the things to come but also some older confusion I had around set theory.

Comment #204 November 3rd, 2020 at 10:17 am

If God gave you a nonstandard number, how would you ever know it was nonstandard? You could try to count up to it, but you’d never reach it, and you could never tell if this was because it was actually nonstandard, or because it was just ridiculously large. That’s my intuition for why we can’t pin down the standard numbers using our finite first-order logic.

—

I believe that when we do informal reasoning as Scott #184 does, saying that ZFC+Not(Con(ZFC)) is wrong about being inconsistent, we intuitively require ZFC to not have any nonstandard numbers. So basically, if we tried to formalize it, we’d have to say something like “assuming that ZFC is consistent, and that ZFC has the same set of natural numbers as us, then ZFC+Not(Con(ZFC)) is wrong about being inconsistent”. Even though this is not something that can normally be done in first-order logic, I believe there *is* a way to formalize this reasoning in first-order ZFC.

In ZFC, we could make a statement pointing to “the class of all inner models such that the inner models have the same set of natural numbers as our ‘outer’ ZFC”. Naturally, if the outer ZFC is consistent, then all these inner models will also be consistent, because by hypothesis they can’t have any more numbers (or any more proofs) than the outer ZFC does. In other words, with this extra condition on models, ZFC+Not(Con(ZFC)) has no model.

If we think of the outer ZFC as representing us humans, who are from the outside looking in to the inner ZFCs, then the previous paragraph, which I believe can be formalized in ZFC, says basically the same thing as our informal reasoning. As far as I know this doesn’t actually buy us much, I just think it’s interesting that the reasoning can be formalized this way[1].

—

[1] Actually, this idea is based on omega-logic, which is an infinitary extension of first-order logic which I believe effectively has the same result as what I did above, restricting what we allow as models to only those which have the same collection of natural numbers as the logic itself.

https://en.wikipedia.org/wiki/%CE%A9-consistent_theory#%CF%89-logic

I disagree with the first sentence of the section though, that it is a theory of arithmetic “whose integers are the true mathematical integers”, because the infinitary rule only makes sure that the model has the same natural numbers as the logic itself, but it can’t force the logic itself to have only the standard numbers. The section seems to mention this later, with the statement “(Other models of T may interpret these symbols nonstandardly; the domain of N need not even be countable, for example.)”.

Comment #205 November 3rd, 2020 at 10:39 am

Timothy #195

“It’s not so easy to develop general relativity without ever considering real numbers”

There are also very interesting questions around more trivial things than GR, like the Navier–Stokes equations:

https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_existence_and_smoothness

“In three space dimensions and time, given an initial velocity field, there exists a vector velocity and a scalar pressure field, which are both smooth and globally defined, that solve the Navier–Stokes equations.”

It’s again interesting to ask how well continuous functions can model reality. Since turbulence is involved, it brings up fractals (some form of self-similarity at different scales).

Comment #206 November 3rd, 2020 at 10:48 am

In #19, Scott says that there’s not even a computer program to enumerate all the ordinals in such a model (since otherwise the model could have no uncountable ordinals)—although note that there’s certainly a program to enumerate all the statements that are true in all such models (which are just the theorems of ZFC).

I have two questions. First, we can see that the program only outputs a countable number of ordinals. But how do we know that the program itself doesn’t ‘see’ an uncountable number of ordinals?

Second, how does the program determine which statements are true? How can we be sure that some diagonally created statement out of the enumerated theorem statements is not true?

Comment #207 November 3rd, 2020 at 11:09 am

Zeb #179: Thanks, that’s a mindblowing observation that was new to me!

SPOILER ALERT

To construct a chain of subsets of N of cardinality the continuum, I believe one can simply do this. First, let π be any bijection between N and the rational numbers in [0,1]. Second, given a real number p∈[0,1], let S(p)⊆N consist of all n∈N such that π(n)≤p. Now {S(p) : p∈[0,1]} is the chain we want.
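To make the nesting concrete, here is a tiny Python sketch, with a truncated stand-in for the bijection between N and the rationals in [0,1] (any enumeration would do; the stand-in is mine, just for illustration):

```python
from fractions import Fraction

# Truncated stand-in for a bijection pi : N -> rationals in [0,1]
# (here, just the fractions with denominator below 20, in a fixed order).
rationals = sorted({Fraction(a, b) for b in range(1, 20) for a in range(b + 1)})
pi = dict(enumerate(rationals))

def S(p):
    """Finite approximation of S(p) = {n : pi(n) <= p}."""
    return {n for n, q in pi.items() if q <= p}

# For p < p', we get S(p) ⊆ S(p'), with strict inclusion whenever some
# enumerated rational lands in the interval (p, p']:
assert S(0.3) < S(0.5) < S(0.9)
```

Since the rationals are dense, distinct reals p give distinct sets S(p), which is why the full chain has the cardinality of the continuum.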

What makes this counterintuitive, I think, is that when you hear the word “chain,” you think about *well-ordered* chains—but those, of course, can have at most countable cardinality.

Comment #208 November 3rd, 2020 at 11:20 am

Tamas V #190:

But then, the same person in F+Not(Con(F)) must also know (understand) that F+Not(Con(F)) is inconsistent (most likely he/she then would commit suicide). So why doesn’t this knowledge qualify as a proof in F+Not(Con(F))? That’s my confusion.

Let me say it one more time: “consistency” just means that you can’t prove both a statement and its negation. It’s purely a syntactic condition; it’s not a metaphysical claim about your beliefs “making sense” from some higher standpoint. In fact—and directly relevant here—you can perfectly well have a consistent theory that proves both

“there exists a natural number n such that P(n) is true,”

but

also, for each specific n∈N, “P(n) is false.”

The reason is that combining the above statements into a proof of inconsistency would require checking infinitely many n’s, which is not allowed. The condition that you can’t have situations like the above is called “ω-consistency” in logic, and it’s a stronger condition than mere consistency.

If F is consistent, then the self-hating theory F+Not(Con(F)) is consistent but not ω-consistent.

(Incidentally, Gödel’s Completeness Theorem does give a semantic condition that’s equivalent to consistency, but that semantic condition involves “nonstandard models,” which have their own baggage to unpack! One can understand everything I said above, I think, without yet saying anything about models.)

Comment #209 November 3rd, 2020 at 11:31 am

Roland #192:

Scott, given that the real numbers that have a name and “can be talked about” are countable, does this not make reasoning about the cardinality of the other reals kind of weird? How can numbers that cannot even be described be relevant other than just knowing that they exist?

Again, the trouble with your question is that “the set of reals that have names / can be talked about” is not a well-defined collection!

Each particular naming convention will give you a subset of only ℵ_{0} reals, but as you consider more and more powerful naming conventions, you’ll get bigger and bigger countable subsets (the rationals, the algebraic numbers, the computable reals, the arithmetically definable reals, etc.). And there can’t be any “ultimate” naming convention, since supposing there were, we could just use diagonalization to name a real beyond anything nameable using that convention! This means that we can’t have an enumeration of all the possible naming conventions either, since such an enumeration would itself constitute an ultimate naming convention, which could then be diagonalized against. So we’re led to the conclusion that, at least from our standpoint, there must be uncountably many possible naming conventions, and therefore at least ℵ_{1} reals when you take the union over all the reals they name.

Now you might object: “But that doesn’t make sense! Clearly, each naming convention can be described using a finite string of bits, and there are only countably many finite strings!” From some godlike perspective, yes. But this comes back to “the undefinability of definability”: we can’t even mathematically define what constitutes a sensible naming convention for us, in order to enumerate all the possible sensible naming conventions that we could use. If we could, then again, we could diagonalize across them to produce a new “sensible naming convention” that wasn’t in the original enumeration.

Comment #210 November 3rd, 2020 at 11:52 am

fred #196 and #198:

I don’t, but that’s my point: is the fact that I can move my hand through space evidence that space isn’t a continuum?

This reminds me of the arguments of Democritus and Zeno and the other ancient Greeks! On the one hand, they were smart guys; on the other hand, I’d like to think that math and physics have advanced at least somewhat since their time. 🙂

Calculus and real analysis do let us make sense of the idea of a continuous motion, which passes through infinitely many points in a finite time. And again, the parts of calculus and analysis that have proved relevant for physics don’t actually depend on the strange questions of Cantorian set theory that are my interest in these posts.

At the same time, I think one could also make sense of a discrete spacetime. What’s happening in a movie in between the frames, or between one pixel and a neighboring pixel? If that question doesn’t demand an answer, then why should the analogous question about the real world be different?

Comment #211 November 3rd, 2020 at 12:02 pm

Tristan #199

The diagonal argument is constructively valid. It does NOT make use of the law of the excluded middle. It isn’t even really a proof by contradiction. It’s just a straightforward proof of a negative claim: that there is no enumeration of the infinite sequences of naturals (or reals, etc). A statement of the form ¬P is equivalent to the statement P → ⊥. A statement of the form A → B is proved by assuming A and then deriving B, so in this case P is assumed and then ⊥ is derived.

Suppose that we have an enumeration E of the infinite sequences of naturals. E is a function from naturals to infinite sequences of naturals such that for any infinite sequence of naturals S, there is a natural N such that E(N) = S; in other words, every infinite sequence of naturals “shows up” in E.

E purports to be an enumeration. This can be shown to be false by explicitly constructing an infinite sequence of naturals that does not show up in E. First, define a function D from naturals to naturals as follows: D(k) = 1 if the kth element of E(k) = 0 and D(k) = 0 otherwise. Let Q be the sequence (D(0), D(1), D(2), …) (in other words, map D over ℕ). By assumption, there is q such that E(q) = Q. But Q can’t be the qth entry in E, because Q and E(q) differ at the qth digit: if the qth digit of E(q) is 0, then the qth digit of Q is 1, and if the qth digit of E(q) is not 0, then the qth digit of Q is 0. Every digit is either 0 or not, so in all cases there is a contradiction.
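
The construction runs as actual code. A minimal Python sketch (the particular E below, which lists the binary expansions of the naturals padded with zeros, is my own choice for illustration; the argument works for any E):

```python
def E(n):
    """A claimed 'enumeration' of infinite 0/1 sequences, given lazily:
    E(n) is the function k -> kth digit of the nth sequence. This
    particular E (binary digits of n, then zeros forever) is just an
    example; any purported enumeration could be plugged in here."""
    return lambda k: (n >> k) & 1

def D(k):
    """The diagonal sequence: flip the kth digit of the kth sequence."""
    return 1 if E(k)(k) == 0 else 0

# D differs from E(n) at position n, for every n, so D never shows up in
# the enumeration, even though D is computable from E:
for n in range(1000):
    assert D(n) != E(n)(n)
```

This is the constructive content of the proof: given any E, a sequence outside E’s range is mechanically produced.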

You might think that “every digit is either 0 or not” is an instance of the law of the excluded middle, but in fact it can be derived from the definition of the natural numbers without appeal to LEM. The range of proofs that require unavoidable appeal to LEM is much smaller than is often imagined.

It could be objected that we don’t know what the diagonal sequence is, so the proof isn’t constructive. But that’s because we aren’t talking about a specific claimed enumeration, and the diagonal sequence is defined on a per-enumeration basis. This is what fred #106 means when he says: “No matter what strategy you try to enumerate all possible infinite binary sequences, YOU WILL FAIL.” The proof defines a constructive method for invalidating purported enumerations of infinite sequences of naturals.

Comment #212 November 3rd, 2020 at 12:14 pm

Vincent #200:

Since this guide is aimed at complete idiots, I feel entitled to make a small request: can you clarify the following sentence a bit: “In any case, though, the self-hating theory ZF+Not(Con(ZF)) can’t be arithmetically sound: I mean, just look at it! It’s either unsound because F is consistent, or else it’s unsound because F is inconsistent.” … How is sound different from consistent?

Thanks! Since this point is absolutely crucial, I just added some new material to the post clarifying the distinction between consistency and soundness, using the example of a conspiracy theorist (who consistently but unsoundly believes that any evidence against their hypothesis must have been planted by George Soros or the CIA).

Comment #213 November 3rd, 2020 at 12:22 pm

Venky #206:

But how do we know that the program itself doesn’t ‘see’ an uncountable number of ordinals?

What does it mean for a program to “see” an ordinal? Certainly, the program runs for at most a countable number of steps.

How does the program determine which statements are true? How can we be sure that some diagonally created statement out of the enumerated theorem statements is not true?

Let’s be careful here: just by syntactically applying the rules of inference, the program can enumerate all the consequences of the ZFC axioms (i.e., all the theorems of ZFC). And by Gödel’s Completeness Theorem, this is equivalent to enumerating all the statements that are true in all models of ZFC. On the other hand, by the Incompleteness Theorem, the enumeration will necessarily miss some statements that are true only in “reasonable” models of ZFC—for example, only in the models that don’t contain nonstandard integers.

More generally, there’s no computer program to enumerate all the statements that hold in “reasonable” (say, arithmetically sound) models of ZFC. For if there were, then we could use it to solve the halting problem: given a program P, we’d just have to enumerate all the true statements, until we found either “P halts” or “P runs forever.”

Comment #214 November 3rd, 2020 at 12:35 pm

Nick #211, it would help me if you could point out which one of my claims in comment #199 is invalid.

Comment #215 November 3rd, 2020 at 12:47 pm

@Scott #207:

“What makes this counterintuitive, I think, is that when you hear the word “chain,” you think about well-ordered chains—but those, of course, can have at most countable cardinality.”

I’m confused; from the definition of “chain” given in #179 I thought a chain had to have a smallest element. Since a chain is totally ordered, that would mean *any* chain is well-ordered.

Is it not the case that a chain must have a smallest element?

Comment #216 November 3rd, 2020 at 1:06 pm

So, one can sort the reals, but the “reals” are so dense that you can’t iterate through them (i.e. you can’t generate them all).

Then in what sense do the reals form a mere *one*-dimensional space?

They’re closer to a two dimensional positive integer space (but way denser, still), which aren’t necessarily obvious to enumerate either: for example what does it mean to enumerate all the points between (1,0) and (2,0)? You may consider doing (1,0)->(1,1)->(1,2)->(1,3)->…->(1,infinity)->(2,0). This scheme/path takes an infinity of steps to go from one point to some of its neighbors.

You can define a scheme/path that’s more “compact” with (0,0)->(1,0)->(0,1)->(0,2)->(1,1)->(2,0)->(3,0)->(2,1)->(1,2)->(0,3)->(0,4)->… (it fills the space diagonally) this way it takes only a finite number of steps to go from one point to any one of its neighbors.

Comment #217 November 3rd, 2020 at 1:33 pm

Peter Donis #215: Sure, we can require the whole chain to have a smallest (and largest) element; that doesn’t matter. The question is whether every element has a next greater and next smaller element. If we want a chain inside P(N) to be uncountable, then the answer has to be “no.”

Comment #218 November 3rd, 2020 at 2:38 pm

J #197:

“Let me try to rephrase your point, to make sure I got it…”

It looks like you did!

“how do we know that all these real numbers out there can be captured using our usual binary representation?”

That is a theorem from the axioms of the real numbers, and (I would argue informally) it is also intuitively obvious, or at least plausible: imagine the real line and the process of dividing it into pieces, first with all pieces of length 1, then cutting each piece in half, then cutting each of those in half, and so on forever, while always being able to find out, about the number you’re trying to label, which piece it’s in (or whether it’s on a boundary between adjacent pieces).
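
The halving process just described is the bisection algorithm for extracting binary digits. A minimal Python sketch (exact rationals, to avoid floating-point noise; the function name is mine):

```python
from fractions import Fraction

def binary_digits(x, n):
    """First n binary digits of x in [0,1), by the halving process in the
    comment: repeatedly ask which half of the current piece x lies in.
    (On a boundary, i.e. at a dyadic rational, this tie-break picks the
    representation ending in 0s.)"""
    lo, hi = Fraction(0), Fraction(1)
    digits = []
    for _ in range(n):
        mid = (lo + hi) / 2
        if x < mid:
            digits.append(0)
            hi = mid
        else:
            digits.append(1)
            lo = mid
    return digits

assert binary_digits(Fraction(1, 3), 8) == [0, 1, 0, 1, 0, 1, 0, 1]
assert binary_digits(Fraction(1, 2), 4) == [1, 0, 0, 0]
```

Each loop iteration answers exactly one “which piece is it in?” question, so believing the theorem amounts to believing this process can always be continued.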

If you think that at some point those questions stop making sense, then you don’t believe in the usual concept of the “continuum” as conventionally understood/axiomatized. If you don’t, then in whatever system you do believe in, perhaps whatever is analogous to the real numbers *are* countable! But in practice, it seems to be much harder to formalize a system like that, even if you *do* question the meaningfulness of arbitrarily small numbers — it’s easier to formalize the reals like we’ve done, then focus on theorems which try to treat tiny numbers as having only tiny importance (for example, theorems about smooth functions with bounded slope, etc).

[paraphrasing:] “how do we know that, if you use base 2 but permit 3 digits (0 or 1 or 2), some numbers have uncountably many representations?”

Because in that system, 0.00200 == 0.01000, so for a number like 0.001001001… you can make the decision independently, about whether to represent it locally using 10 or 02, in a countable number of places.
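
The rewrite rule is easy to verify with exact arithmetic. A quick Python check (the digit lists and names are mine):

```python
from fractions import Fraction

def value(digits):
    """Value of 0.d1 d2 d3 ... in base 2, where each digit may be 0, 1,
    or 2 (the 'base 2 with 3 digits' system from the comment)."""
    return sum(Fraction(d, 2 ** (i + 1)) for i, d in enumerate(digits))

# The local rewrite: a '1 0' pair and a '0 2' pair contribute equally.
assert value([0, 1, 0]) == value([0, 0, 2]) == Fraction(1, 4)

# For 0.001001001..., each '10' block can be rewritten independently:
a = [0, 0, 1, 0, 0, 1, 0, 0, 1]   # both blocks left as '10'
b = [0, 0, 0, 2, 0, 1, 0, 0, 1]   # first block rewritten as '02'
c = [0, 0, 1, 0, 0, 0, 2, 0, 1]   # second block rewritten instead
assert value(a) == value(b) == value(c)
```

Since each of the infinitely many ‘10’ blocks can independently be left alone or rewritten, the representations of this one number correspond to infinite bit sequences, which is where the uncountability comes from.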

Comment #219 November 3rd, 2020 at 2:59 pm

@Scott #217:

“Sure, we can require the whole chain to have a smallest (and largest) element; that doesn’t matter. The question is whether every element has a next greater and next smaller element. If we want a chain inside P(N) to be uncountable, then the answer has to be “no.””

Ah, I see: if the chain is dense (I think that’s the word for the property that between any two elements we can always find another element, so there is never a next greater or a next smaller element), it won’t be well-ordered even if it has a smallest element, because a non-empty subset that doesn’t contain that smallest element could fail to have a smallest element (the same way an open interval of the reals does).

Comment #220 November 3rd, 2020 at 3:03 pm

Scott #116: “I … removed the “Follow” button…”

So what *is* the official way to follow the blog? I can’t find anything about that on the blog itself, and I long ago told wordpress.com to follow it (it still thinks I am subscribed), with no effect. Should I do what Justin Lebar #93 did?

Comment #221 November 3rd, 2020 at 3:11 pm

Bruce #218,

>It looks like you did!

Thanks for your help!

>That is a theorem from the axioms of real numbers, and (I would argue informally) is also intuitively obvious, or at least plausible

Actually I agree it’s intuitive! It’s just that, to me, it is equally intuitive both that all numbers have a binary representation and that one can mechanically enumerate them all. 😉 But we know the latter is wrong (or, as you say, plainly in contradiction with the usual concept of the “continuum” as conventionally understood/axiomatized), which is why I started worrying about the former in the first place. But if that’s a theorem, then that’s a theorem! That completely fills the “pseudo loophole” I was confused about, thank you! (If you have a reference in mind I’ll take it; otherwise I’m confident I can find/recognize this theorem now that I know it exists.)

>Because in that system, 0.00200 == 0.01000, so for a number like 0.001001001… you can make the decision independently, about whether to represent it locally using 10 or 02, in a countable number of places.

Ok, I think I got this, but does that prove that there are *un*countably many representations?

Comment #222 November 3rd, 2020 at 3:13 pm

Bruce Smith #220: Yes! I don’t think RSS is even well-supported anymore by WordPress, but if anyone still wants it, then do what Justin Lebar did!

Or does anyone else have a better suggestion?

Comment #223 November 3rd, 2020 at 3:28 pm

Timothy Chow # 195:

Thank you very much for the direction.

Comment #224 November 3rd, 2020 at 4:19 pm

Tristan #214

All that matrix talk is barking up the wrong tree. An enumeration of some set S is nothing other than a surjective function from the natural numbers onto S. It has nothing to do with matrices. An enumeration might be represented as some kind of grid, but that’s just for convenience.

Cantor’s theorem implies that the set of infinite bit sequences cannot be enumerated, which is to say that any proposed enumeration of infinite bit sequences will leave some off. It looks like your comment #81 proposes a certain enumeration, then exhibits an infinite bit sequence that is not contained in that enumeration. So it seems to me that you should agree with the theorem. Or do you think that the sequence 111… will show up eventually? If so, when? Meaning, for which k does s(k) = 111…?
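
If the proposed enumeration in #81 is the natural one, where s(k) reads off the binary digits of k followed by zeros forever (my reading of it, for illustration), then one can check directly that every listed sequence is eventually all 0s, so 111… is s(k) for no k:

```python
def s(k):
    """An assumed reading of the #81-style enumeration: s(k) is the
    infinite bit sequence given by the binary digits of k, then zeros
    forever."""
    return lambda i: (k >> i) & 1

# Every listed sequence is 0 from position k.bit_length() onward, so the
# all-ones sequence never appears anywhere in the list:
for k in range(1, 500):
    start = k.bit_length()
    assert all(s(k)(i) == 0 for i in range(start, start + 20))
```

Such an enumeration captures exactly the finitely-supported sequences, a countable set, which is why diagonalization (or just 111…, or 0101…) escapes it.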

Comment #225 November 3rd, 2020 at 4:23 pm

Thx Scott #213. I went and reread the relevant chapter from Democritus.

Comment #226 November 3rd, 2020 at 4:56 pm

J #221: glad to help! It’s fun to understand things, and even more fun to help other people do so!

As for a reference, I have no better suggestion than looking up “Cantor’s diagonal argument” — though Wikipedia’s version looks like it covers the “ambiguous representation issue”, but in a much less clear way — or just “bookmarking this blog post”.

About numbers in “base 2 with 3 digits” — you understand we can look at a number like 0.001001001… and change its representation between two forms, independently in each of a countable number of places; but you ask “does that prove there are *un*countably many representations [of that one number]?”

It does, and the argument is exactly the same diagonal argument as before! That argument applies to any objects which are each equivalent to an “infinite list of choices”, whatever form those objects come in and whatever they are “used for” or “mean”. And it tells us “if all combinations of those choices are possible (i.e. they result in actual, distinct objects of that kind), then there are uncountably many of those objects”. In this case, the objects are not different numbers, but different representations of one number, but the theorem works just as well.

Comment #227 November 3rd, 2020 at 5:01 pm

Scott #222: Just to be clear, I don’t care about RSS specifically — I just want an email alert about new posts (i.e. to “subscribe”), and that sounds like a possible way to get one. Anyway, I’ll try it and let you know whether it worked.

Comment #228 November 3rd, 2020 at 5:55 pm

Peter Donis #215:

When talking about partial orders, a “chain” just means a subset that is totally ordered. Although you do also see the terms “ascending chain” to mean a subset isomorphic to ω, and “descending chain” to mean a subset isomorphic to ω*.

Comment #229 November 3rd, 2020 at 6:08 pm

Tristan #214

If I could jump in: Claim 1 is not valid. Your enumeration only contains bit sequences that end in 0000……—all 0’s. So, for example, you don’t include the infinite bit string 01010101… — a rational number. Likewise, the infinite bit string encoding the unary halting language.

Thus, you have only exhibited a countable list, as expected (which is why it can be diagonalized — 1111…. indeed never shows up in the enumeration). I think?

Comment #230 November 3rd, 2020 at 7:17 pm

Bruce #226,

Of course! Many thanks 🙂

Comment #231 November 3rd, 2020 at 10:40 pm

The usual example of a number with two decimal representations is 0.9999… = 1.0000… .

Regarding the idea that ZFC2 fixes a truth value for CH: well ok, but that is useless for finding out what the truth value is! There’s a viewpoint these days (see Joel David Hamkins’ “Multiverse” writings at http://jdh.hamkins.org/themultiverse/) that the old-fashioned Platonist viewpoint that the matter of CH is out there waiting to be settled by new axioms is outdated and that there are many universes to choose from, with differing values of CH. In particular, any universe with CH has a forcing extension with not-CH and vice versa, making CH a “toggle” rather than a “button” if I remember his terminology.

Regarding “proofs” in ZFC2: I don’t think there is even a clear notion of what a ZFC2 proof *is*. With ZFC1 you can in principle write a program that checks a purported proof for validity, but there is no such procedure for ZFC2. You might be better off considering infinitary proofs, which are interesting (https://en.wikipedia.org/wiki/Infinitary_logic). Woodin’s Ω-logic is an example, and my joke about Gödel and the infinitely long proof of CH is related to that idea.

Some people like Nelson aren’t even Platonists about the integers, and are skeptical about the unrestricted induction axioms that quantify over integers that “haven’t been invented yet”. I’m a little bit more sympathetic to Platonic integers now than before, after considering JDH’s (and others’) work on infinite chess (http://jdh.hamkins.org/tag/infinite-chess/). Given a generalized chess position consisting of a finite number of pieces on an infinite, edgeless board where draws somehow don’t exist, consider the problem of figuring out which player (White or Black) has a winning strategy (one of them must). If I understand JDH’s article properly, it is an open question whether this problem is arithmetically complete. But even if it’s not, maybe some further modification of chess can make it so. That gives a way to turn an arbitrary arithmetic formula (quantifiers nested to any depth) into a chess adjudication problem. Should such a problem have a definite answer, even if there is no way to determine it? “Yes” is a fairly reasonable answer, and that reasonableness seems like an argument for arithmetic platonism. (Does this make any sense?)

Comment #232 November 3rd, 2020 at 10:45 pm

Note in the above, I think it is necessary that the strategies are allowed to be uncomputable.

Comment #233 November 4th, 2020 at 2:58 am

Came across a book by Smullyan that seems easily accessible: Gödel’s Incompleteness Theorems. Highly recommended for “complete idiots” 🙂

Comment #234 November 4th, 2020 at 4:21 am

Scott #212: thanks for taking this up so quickly! The extra explanation is very clear and engaging!

Comment #235 November 4th, 2020 at 11:08 am

Somehow I managed to stumble through life without learning almost anything about “pop set theory.” My first introduction to notions like \(\aleph_2\) was in a collegiate model theory course, and I strongly concur with the idea that’s been a breakthrough for David #14, Scott #20, and similar comments. To me, the aleph numbers are the thing that is understandable and \(\mathbf{c}\) is the thing that is bizarre. It had never occurred to me until I read this post that many people might form an impression the other way around, and frankly I still struggle to understand that point of view.

Our very first homework assignment emphasized the weirdness of \(\mathbf{c}\) with a challenge problem that I still break out to blow people’s minds almost a decade later: Write \(\mathbb{R}^3\) as the disjoint union of circles of radius one (hint: proof by induction). The proof of this counterintuitive theorem boils down to the fact that you can always push the previous circles around enough to make room for new ones. You set up an iterative procedure that will ensure that every point is on at least one circle and then as you carry out the procedure you make sure that no circle intersects a previous circle. The fact that \(\mathbb{R}^3\) is malleable enough to do this is quite strange.

Comment #236 November 4th, 2020 at 11:20 am

asdf #232

I think that you misunderstand JDH’s work on chess (or perhaps I misunderstand you). In a 2012 paper titled “The mate-in-n problem of infinite chess is decidable” JDH and two others prove just that. The arithmetic hierarchy theorem therefore ensures that arithmetic contains many mysteries that cannot be embedded as chess problems, at least not ones with finitely many pieces.

If the embedding of logical systems in games influences your thoughts on this matter, I’m curious whether you’ve read my articles “Magic: The Gathering is Turing Complete” and “Magic: The Gathering is as Hard as Arithmetic.” In the first paper my coauthors and I prove that being able to tell who will win a game of Magic: The Gathering is at least as hard as the halting problem (by explicitly embedding a UTM into the game), and in the second I use this along with some rather standard concepts in computability theory to prove that the mate-in-n problem for Magic is at least as hard as \(\Delta_n^0\). This appears to be the result that you desire especially as, like JDH’s work on chess, we only use a finite number of cards (both papers provide decklists).

Comment #237 November 4th, 2020 at 12:17 pm

Stella #235: Cool!! Can that filling of R^{3} be done explicitly or does it require the axiom of choice?

Comment #238 November 4th, 2020 at 12:21 pm

Dear Scott,

Many thanks for this accessible and insightful exposition!

Comment #239 November 4th, 2020 at 3:04 pm

Scott #237: The proof Stella has in mind assumes that the continuum can be well ordered. The same kind of transfinite induction argument yields many similar results; see for example Set Theory for the Working Mathematician by Krzysztof Ciesielski. Which of these types of results can be proved without AC is a good question. Joel David Hamkins asked that question on MathOverflow but as of this writing there seems to be no definitive answer. The specific question of tiling \(\mathbb{R}^3\) with unit circles has a very interesting “near-miss” in the form of the Hopf fibration: One can explicitly tile the three-sphere \(S^3\) (thought of as the subset of \(\mathbb{C}^2\) defined by \(|z_1|^2 + |z_2|^2 = 1\)) with congruent circles in a way that shows that the higher homotopy group \(\pi_3(S^2)\) is nontrivial: take the orbits of the Hopf flow \((z_1,z_2) \mapsto (e^{it}z_1, e^{it}z_2)\). But I don’t know of an explicit tiling of \(\mathbb{R}^3\) with unit circles.

Stella #236: Your work on the complexity of Magic is really cool! In this connection, I can’t resist mentioning a puzzle from the 2013 MIT Mystery Hunt. Click on one of the Magic cards and ask yourself if the position could arise in a legally played Magic game, and if so, whose turn it must be. Question: What is the computational complexity of deciding this type of question in general?

Comment #240 November 4th, 2020 at 6:29 pm

Comment #240 November 4th, 2020 at 6:29 pm

Stella #236, the question of who can win an arbitrary infinite chess position is not “mate in n” for a given fixed n (that is decidable), but whether a win exists regardless of length. The win length can be a transfinite ordinal, conjectured (and in the 3-D case, proved) to be able to be any countable ordinal at all, and thus bounded only by ω_{1}. I believe this is the problem that might be arithmetically complete (not just Turing complete). Am I confused? Does the number of pieces have to possibly be infinite? If yes, does it have to possibly be uncomputable? Thanks.

Comment #241 November 5th, 2020 at 12:49 am

@Zeb #179

OK, that’s cool.

I was thinking more in terms of cofinality, though. Going by merely containing a long chain, the reals contain a chain of size… |R|! But they have a countable cofinal subset, much as P(N) does. So in this sense, P(N) having long chains is more like going the long way around from the bottom to the top. Since omega_1 has uncountable cofinality, it’s a way in which it is different from R and P(N) with their natural order structures. I just wanted to illustrate that any (doomed!) “normal” approach to linking R and/or P(N) and omega_1 for the purposes of CH just can’t use any of the structure that’s available, like a mathematician normally would.

Comment #242 November 5th, 2020 at 5:46 am

> As soon as someone notices that N, it becomes interesting enough to deserve a Wikipedia page, and then we’re back where we started!

Right now, the smallest number not of interest to Wikipedia is 2028, while all N < 2028 have their own https://en.wikipedia.org/wiki/N page. 2028 will become interesting automatically as we approach it and people start planning events in it.

Comment #243 November 5th, 2020 at 6:29 am

Scott #78 No, we don’t know for certain that ZFC is consistent (although I’d probably bet my house on it). But it’s a theorem within ZFC (in fact, within elementary arithmetic) that “ZFC can’t prove Con(ZFC), if and only if Con(ZFC) in fact holds.”

Is this iff relation, as you wrote in the OP, a consequence of both the incompleteness theorem and the fact that an inconsistent system can prove anything? (The arithmetic part presumably requires more.)

Comment #244 November 5th, 2020 at 6:42 am

Venky #243: Yes.

Comment #245 November 5th, 2020 at 8:35 am

Scott #237: Chow #239 says pretty much everything I was going to say.

asdf #240: You are correct, I was misremembering JDH’s work. There are positions in chess that are not mate-in-n for any fixed n but which are nevertheless forced wins for white.

I do want to clarify for anyone reading that “mate-in-\(\omega\)” or a “game value of \(\omega\)” does not mean that it takes infinitely many moves to win. Rather, it means that black can forestall a loss arbitrarily long. The idea is that for sufficiently complicated games the notions of “the number of moves it takes to win” and “the complexity of the winning strategy” become uncoupled. The number of moves it takes to win is any (large) integer of black’s choice in general, while the complexity of the winning strategy is tied to white’s ability to win from any position, however large a number black chooses. The notion of “game value” generalizes the “mate-in-n” problem as follows:

We recursively define the game value of a board as follows. By convention we will always have white be the player who is playing to win and black the player who is playing to draw.

1. Games where white has already won have game value 0.

2. Games where it is white’s turn to move have game value one more than the lowest game value of a position white can legally move to.

3. Games where it’s black’s turn to move have game value equal to the supremum of the game values of the positions black can legally move to.

It is standard in game theory, as well as the handful of real-world games (such as Magic), to call a game a draw if it will never end. Note that not all boards have a game value under the definition above, those games are exactly the ones where black can force the game to go on forever (as opposed to merely arbitrarily long). If we say that an undefined game value is “greater than every ordinal” then using the above definition black seeks out draws and white seeks to avoid them in the expected way. For a thorough treatment of these ideas and accompanying diagrams, see “Transfinite Game Values in Infinite Chess” by Evans and Hamkins.
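
For finite game trees, the recursive rules above can be run directly. Here is a toy Python sketch (the game and all names are invented for illustration, not from Evans and Hamkins): black first picks the length of a countdown, capped at K; in an infinite game the cap would vanish, and the supremum in rule 3 at the start position would be \(\omega\).

```python
from functools import lru_cache

K = 10   # finite cap on black's choice; without the cap, the supremum
         # in rule 3 at the start position would be omega

def moves(state):
    """Successor boards. At 'start' black picks a countdown length k;
    afterwards each move just decrements the counter."""
    if state[0] == 'start':
        return [('count', k) for k in range(1, K + 1)]
    _, k = state
    return [('count', k - 1)] if k > 0 else []

def white_won(state):
    return state == ('count', 0)

@lru_cache(maxsize=None)
def value(state, white_to_move):
    """Game value per the three recursive rules (finite version)."""
    if white_won(state):
        return 0                              # rule 1
    vals = [value(s, not white_to_move) for s in moves(state)]
    if white_to_move:
        return min(vals) + 1                  # rule 2
    return max(vals)                          # rule 3: max stands in for sup

# White needs ceil(k/2) of its own moves to finish a countdown of k:
assert value(('count', 4), True) == 2
# At the start, black stalls as long as the cap allows:
assert value(('start',), False) == value(('count', K), True) == 5
```

Note how only white’s moves increment the value (rule 2), while black’s moves take a supremum (rule 3); replacing the cap K with an unbounded choice is exactly what pushes the start position’s value from a finite number to \(\omega\).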

Note that we can now pose questions like “mate-in-\(\omega\)” in a meaningful way. Also, it’s worth noting that \(\omega = \omega_0\), the order type of the natural numbers in their usual order. All games in chess with defined game values have countable game values, and so despite the rich structure discussed here we never even get to \(\aleph_1\)! The fact that all game values in chess are countable is closely connected to my above comment about how all chess games with a winner have a winner in finitely many moves.

Using the language of “Transfinite Game Values in Infinite Chess,” the question “what is the largest game value of a position in chess with a forced win” is “what is the \(\omega_1\) of chess.” Recall that \(\omega_1\) is the first uncountable ordinal and has size \(\aleph_1\). We can also ask this question with various restrictions on moves, such as “what is the largest game value of a position in chess with a forced win that can be forced with a computable strategy” or “… with an arithmetic strategy.” The best bounds known (AFAIK) for the general question are \(\omega\cdot\omega=\omega^2\leq\omega_1^\mathbb{Ch}\leq\omega_1^{CK}\), where \(\mathbb{Ch}\) denotes that we are talking about the omega-one of chess and \(\omega_1^{CK}\) denotes the Church-Kleene ordinal, the first non-hyperarithmetical ordinal.

Comment #246 November 5th, 2020 at 9:19 am

Scott #156 says: “Regarding F+Not(Con(F)): yes, it can prove its own inconsistency—because it’s got its own inconsistency right there as an axiom! The point you need to understand is that this is not, itself, an inconsistency in F+Not(Con(F)). It would only become one if F+Not(Con(F)) could also prove its own consistency. But if F itself is consistent, then it can’t.”

It’s helpful (and hopefully correct) for me to think of this as F + Not(Con(F)) being able to prove that F implies 0 = 1, but not being able to prove 0 = 1.

Comment #247 November 5th, 2020 at 9:27 am

venky #246: Right, except you need to be careful with “F implies 0=1.” In F+Not(Con(F)), there’s a “theorem” stating that by starting with the axioms of F and applying the rules of inference, you can eventually reach 0=1. But there’s never any actual demonstration of how, because in fact you can’t reach 0=1 by starting with the axioms of F and applying the rules of inference.

Comment #248 November 5th, 2020 at 9:31 am

Wow! I seem to have hit the comment limit for the first time. As a side note, googling “rich HTML” doesn’t seem to give me any usable info on how to improve the formatting of my comments. I would greatly appreciate any resources.

For games with infinitely many pieces, I believe a board with game value \(\omega^5\) has not yet been demonstrated. JDH conjectures that \(\omega_1^\mathbb{Ch}=\omega_1^{CK}\) and that for infinitely many pieces the omega-one of chess is the actual ordinal \(\omega_1\). In both cases this would mean that the omega-one of chess is as large as possible, with the upper bounds coming from trivial counting arguments rather than anything meaningful about chess. One final note is that there is no such thing as a chess board with uncountably many pieces. The boards we are discussing have only countably many cells.

Now for game values in Magic. I have considered the problem of game values in Magic a few times, but have yet to come up with something that seems sufficiently “complete” to write up. It’s easy to see that there are much better lower bounds on game values in Magic than in chess, with my personal record being in excess of \(\omega^\omega\). To make these things a little more concrete, let’s build up to what a game value of \(\omega^\omega\) looks like in Magic.

Suppose White has a creature that can attack and deal 1 damage each turn, and black cannot prevent this (in Magic, the main way to win is to reduce your opponent’s life points to 0). However black can use a combination of cards to gain what is colloquially referred to as “infinite life” but is really “arbitrarily large amounts of life.” If it is back’s turn and black can execute this combo once, then the game value is \(\omega+k\) where \(k\) is black’s currently life I’ll drop the \(k\) from now on but there is always going to be a \(+k\). If black is in a position where they can pick any number they like and execute this combo that many times then the game value is \(\sup_n n\cdot\omega = \omega^2\). In this example you can think of there as being a two-layer loop in programming. I would write it out explicitly but I don’t know how to format code non-atrociously.

For \(\omega^\omega\), we need to be able to let Black have a choice of moving to a game of value \(\omega^z\) for every integer \(z\). This meaning nesting the “as many times as black wishes” so that the number of layers in the aforementioned loop is as many as black pleases. In general, the exponent reflects the number of layers in the loop (though that association becomes more and more strained as we go up the ordinal hierarchy).

I don’t have a great grip on what game values like \(\omega^{\omega^{\omega}}\) look like, but I strongly believe that I would be able to easily realize that in Magic if I did. Furthermore I have the following metamathematical conjecture: for all countable ordinals there exists a level of understanding of transfinite game theory such that exhibiting a game of Magic with game value equal to that ordinal is trivially easy.”

I really ought to get to work, but after work (or after I stop pretending to work) I will return to discuss the question of whether determining the legality of a position is computable. This hinges on some very interesting computational aspects of Magic, and in particular that the usual notion of “turn” is probably the wrong notion for Magic.

Comment #249 November 5th, 2020 at 11:59 am

Scott #247:

By “theorem”, do you mean it can be

syntacticallyproved? (At least that was my understanding of what a theorem is, but after the last two days, I’m not sure about anything anymore.)Comment #250 November 5th, 2020 at 12:22 pm

Tamas #249: Yes, I mean it can be syntactically proved—though in this case, the “proof” is just a trivial restatement of the axiom.

Comment #251 November 5th, 2020 at 12:30 pm

Scott #250: Ok, assume I syntactically prove a “theorem” stating that by starting with the axioms of arithmetics and applying the rules of inference, I can eventually reach 2+2=4. Do I understand it correctly that this does

notqualify as a proof of 2+2=4, unless I give anactual demonstrationof reaching 2+2=4?Comment #252 November 5th, 2020 at 12:47 pm

Tamas #251: What you’ve described is not a proof that 2+2=4, but rather a proof that 2+2=4 is provable. A proof that 2+2=4 would be if you actually gave the derivation—though even then, it would only be a proof

from the given axioms, so its usability would depend on those axioms being consistent.Comment #253 November 5th, 2020 at 1:03 pm

Scott #252: I see, thanks, this was new to me. Until now, I was under the impression that (in non-constructive mathematics) proving the provability of statement S

isa proof of S. At least I would believe S without hesitation in such a case (given the axioms, of course). But yes, I know, belief is not a syntactic thing 🙂Comment #254 November 5th, 2020 at 3:12 pm

Stella, “I have the following metamathematical conjecture: for all countable ordinals there exists a level of understanding of transfinite game theory such that exhibiting a game of Magic with game value equal to that ordinal is trivially easy.”

I think that can’t be right since I’d hope “trivially easy” at least means “computable”, which I think automatically gives an upper bound of ω

_{1}^{CK}. But getting to there is also very difficult. A lot of head scratching has gone into creating ordinal notations that run out of gas much sooner. I think you are basically saying Magic games form an ordinal notation for arbitrary countable ordinals. That would be awesome but I don’t think it is likely.There can’t be uncountably many pieces in infinite chess of course, but if there are no restrictions on where the pieces can be, then there are continuum many possible positions.

JDH uses fancy typography for the distinction but I’ll say IC0 to denote infinite chess where the number of pieces is required to be finite, and IC1 where it can be infinite. So is it possible to encode the halting problem as an IC0 position and/or as a Magic game? Hmm.

Comment #255 November 5th, 2020 at 4:19 pm

Great post!

It only reinforces my sense that continuity is something like an eldritch horror from another dimension though.

The reals are real weird man!

Comment #256 November 5th, 2020 at 9:38 pm

Wow, I think I fall squarely in the target audience of this essay, and my mind is appropriately blown.

Does this mean that uncountable sets may not exist? That is, can one construct a model of ZFC and an uncountable set in that model such that the set is uncountable in any extension of that model? (“Extension” might not be the right word here, intuitively I’m trying to describe a bigger model that you’d make by adding sets in to your original model).

Comment #257 November 5th, 2020 at 10:09 pm

Dan Staley #256: What do you mean by uncountable sets “existing”? Within any given model of ZFC, the reals exist and are uncountable. The trouble is just that they might be countable from a godlike perspective outside the model. Countability is not an absolute notion.

Comment #258 November 5th, 2020 at 10:34 pm

Tamas V #253 (and others confused about “proving one’s own inconsistency”): Perhaps this might help: Don’t think of formal theorems of PA (first-order Peano arithmetic) as “saying” anything directly about “provability.” Instead, think of them as asserting facts about “\(\mathbb{N}\)-lookalikes” (a word I just made up); i.e., algebraic systems that look a lot like the natural numbers \(\mathbb{N}\). \(\mathbb{N}\) itself is an \(\mathbb{N}\)-lookalike but it’s not the only one.

Specifically, it is tempting to look at the notation “\(\neg\)Con(PA)” and infer that it is “saying” that PA is inconsistent. But really, it’s some kind of complicated assertion that if you take certain elements of an \(\mathbb{N}\)-lookalike and manipulate them in a certain way, then you’ll get a certain result. The crucial point is that

if your \(\mathbb{N}\)-lookalike happens to be \(\mathbb{N}\) itself, then these manipulations of natural numbers faithfully mimic syntactic manipulations of syntactic objects—specifically, they mimic a syntactic proof of 0=1 from the axioms of PA. However, if you take someother\(\mathbb{N}\)-lookalike, then there’s no reason to think that the interpretation of “Con(PA)” in that \(\mathbb{N}\)-lookalike has any relationship to the consistency of PA, or indeed to provability at all.If you think about the situation this way, then I think it ceases to be so surprising that \(\neg\)Con(PA) can be adjoined to PA without immediately causing a contradiction. Instead of saying that \(\neg\)Con(PA) is “false,” just say that \(\neg\)Con(PA) doesn’t hold in \(\mathbb{N}\) but might hold in some \(\mathbb{N}\)-lookalikes. If someone thinks that \(\neg\)Con(PA) must fail in

all\(\mathbb{N}\)-lookalikes—well, the onus is on them to explain why (i.e., to prove that Con(PA) is atheoremof PA—which of course it isn’t).Comment #259 November 5th, 2020 at 11:42 pm

Staley #256:

This is the wrong moral to draw from Scott’s story, and I think going a little deeper into what it means for a set to be uncountable will help.

Definition:a set is infinite if there is a bijection between the set and a strict subset of the set.Definition:Comment #260 November 6th, 2020 at 1:59 am

Timothy Chow #239: Let’s call a family of unit circles \( \mathcal F\) a “maximal disjoint family of unit circles” if the members of \( \mathcal F\) are pairwise disjoint and \( \mathcal F\) can’t be extended to a larger family of pairwise disjoint unit circles. Obviously, a partition of \( \mathbb{R}^3\) to unit circles is a maximal disjoint family of unit circles. Also, by the same argument that shows the existence of such a partition from AC, you can show that the cardinality of such maximal families is always uncountable. This is now becoming somewhat similar to the situation with mad families (which can’t be analytic) and other similar maximal families of reals. So the natural question is: is it known whether there is a Borel maximal disjoint family of unit circles?

It’s almost 3am, so I might be missing some trivial geometric construction due to sleep deprivation and compulsively following the news (but if there is no such trivial construction, then it might turn out to be a quite interesting question).

Comment #261 November 6th, 2020 at 6:19 am

Dear Prof. Aaronson, thanks for this. I remember reading a comment of yours on mathoverflow outlining your “philisophical” position w.r.t. set theory and CH in particular (I hope I remember correctly and do not misrepresent your position): platonist about arithmetic, no fact of the matter about CH. Has your deepened undertandig of the independence proof changed this position? KR, Gregorio

Comment #262 November 6th, 2020 at 7:43 am

Stella #259: Did something get left out from your comment?

Comment #263 November 6th, 2020 at 8:21 am

Gregorio #261: My basic position—what I described as “fanatical absolutist about arithmetical statements, kindly relativist about everything else”—was only confirmed and deepened by learning more about the subject. The biggest change is that, while I knew that I had no clear conception of P(P(N)), I’d

thoughtthat I had a pretty clear conception of P(N). Learning more—for example, about the independence from ZFC of the statement “all reals are constructible”—has made me realize that my conception of P(N) was pretty hazy as well.Comment #264 November 6th, 2020 at 8:52 am

Scott #263: By a classical result of Woodin, large cardinals (more precisely, a proper class of Woodin cardinals) imply that the theory of \(L(\mathbb R)\) is not changed by forcing (so trivially, the truth value of “there is a nonconstructible real” remains the same, though this can also be shown directly without Woodin’s theorem). Does that change your hazy conception of \( \mathbb R\)?

Comment #265 November 6th, 2020 at 9:49 am

Haim Horowitz #264: Clearly, if V=L then all reals are constructible, while if CH fails then there are non-constructible reals, no? So, if I regard both V=L and not(CH) as within the realm of intuitive plausibility, then shouldn’t this make my view of P(N) at least somewhat hazy?

Comment #266 November 6th, 2020 at 10:40 am

Scott #257: I tried to explain what I meant by an uncountable set “existing” in the sentence after my question, but I think my unfamiliarity with logic/set theory is making it hard for me to explain myself. Let me try to be a little more clear.

Another way to phrase my question would be, “Is it possible that God can count every set?” or conversely, “Can we prove the existence of a set that not even God can put in bijection with the natural numbers”?

My clumsy attempt at formalizing this was to try to define “model extensions”, where we would say B (a model of ZFC) is an extension of A (another model of ZFC) if everything in A is also contained in B and all the same relations hold (this might not be the “correct” definition to match my intuition – as I said, I’m pretty naive when it comes to logic/set theory). An example is when you talk about Cohen starting with a minimalist model of ZFC and then adding in new real numbers – that would be creating extensions of the minimalist model and then continuing to make extensions of the extensions.

If that definition (or a similar one) makes sense, then we could make my question more formal by asking whether we can exhibit a model M of ZFC with a set A, such that A is not only uncountable in M, but in any extension of M.

I *think* this question makes sense, though again it might just be that I don’t understand all the logical/set theoretical subtleties here. As an illustrative example, I *think* the answer to the question would be “yes” if we replaced the word “uncountable” with “infinite” – even God cannot find a bijection between an infinite set and a finite one (or for that matter, between two finite sets of different sizes). Or maybe I’m wrong?

As a side note, I believe that if we could prove the existence of a set that God cannot count, we would have to use the Axiom of Choice to do it, since any explicit construction lies in the minimalist, countable-by-God model of ZFC, basically by definition.

Comment #267 November 6th, 2020 at 10:51 am

Sorry to double-post, but I think I may have just rubber-ducked myself in typing out my last paragraph. Let me see if I can answer my own question:

Even though using AOC (the axiom, not the Congresswoman) isn’t constructive, it’s still a formal operation that uses a finite number of symbols. I was confused about the minimalist model of ZFC, which isn’t the

constructiblesets, it’s thedefinablesets. So any set we can exhibit within ZFC lies within the minimalist model.So maybe my question really boils down to: Can these larger models also could be seen as only having a countable number of “definable” sets (in which case God can count them), or is there some trick Cohen uses that prevents this from working?

Comment #268 November 6th, 2020 at 11:14 am

Timothy Chow #256: thank you very much, makes perfect sense. I will only be able to really understand it after I’ve seen what exactly that “complicated assertion” is… working on it! But this means it’s not possible within PA to express Not(Con(PA))

directly, such that it means the same for any\(\mathbb{N}\)-lookalike, right? What a pity 🙁Comment #269 November 6th, 2020 at 11:55 am

I accidentally submitted this comment before I finished writing it. Please ignore Stella #259

Staley #256:

This is the wrong moral to draw from Scott’s story, and I think going a little deeper into what it means for a set to be uncountable will help.

Definition:a set is infinite if there is a bijection between the set and a strict subset of the set.Definition:a set is countable if there is a bijection between the set and a subset of \(\mathbb{N}\).We are used to working with a very permissive view of the world. If you can write down a function in a “reasonable” way that function exists (most of the time…). It’s consistent with ZFC for there to be far

fewerfunctions than we typically assume though. Even if it is “really the case” that two sets, \(S,T\) have the same number of elements, it’s possible for a model of ZFC to not know this fact because it lacks the function that witnesses it.Conversely, there are models where the real numbers

are countable. In fact, there is a model of ZFC whereeverythingis countable. This (when juxtaposed with the fact that ZFC proves the existence of a countable set) is called Skolem’s paradox, and is a direct result of one of the most important theorems in first order model theory:A language is the set of non-logical symbols in the theory. For example, in ZFC we have \(\in,\subseteq\) as two symbols in our language. When talking about the theory of the natural numbers we have \(0,1,+,\times\) in our language. This means that there are models of the theory of the reals that are countable and models of the theory of the natural numbers that are uncountable.

The main thing to take away is that the

terms of conversationabout mathematics can differ between models. Someone from a non-standard model might come up to me and say “here’s my proof that ZFC is inconsistent” and when I look at it I point out that there are infinitely many steps in the proof. The person from the non-standard model looks at it and goes “no there aren’t. There are \(\gamma\) steps in the proof.”To me, \(\gamma\) is not a natural number. To the non-standard mathematician it is. And so she has no objection to proofs of length \(\gamma\) even if they are clearly infinitely long (and therefore not valid proofs) to me. In first order logic there is no way to tell who is right and who is wrong. In fact, without appeal to mappings to the external world there is no such thing as being right or wrong. For this reason, logicians typically call it “the intended model of arithmetic.” Using second-order logic we can uniquely identify the intended model of arithmetic. But using first-order logic we cannot.

Comment #270 November 6th, 2020 at 12:37 pm

Tamas V #268: I feel the urge to title my response, “How I Learned to Stop Worrying and Love Incompleteness.” I regard the existence of \(\mathbb{N}\)-lookalikes other than \(\mathbb{N}\) as a feature and not a bug. We want to understand what can and can’t be proved from certain axioms, right? When a statement P can’t be proved, it’s good to have a concrete example of something that satisfies the axioms but that doesn’t satisfy P.

People sometimes ask whether Fermat’s Last Theorem can be proved in PA. It seems very likely that it can be, but suppose it can’t. Then don’t you think it’s nice that there’s some number system out there in which \(x^n + y^n = z^n\) for some \(x,y,z\) and some \(n>2\)? It would be some amazing and unexpected mathematical identity—much more interesting IMO than a bare statement of unprovability.

Comment #271 November 6th, 2020 at 12:48 pm

Scott #265: If you maintain that all universes considered by set theorists are equally plausible, then of course, the concept of P(N) has to be hazy. You might even go further (as Solomon Feferman did) and claim that CH is not a definite problem. However, there are also set theorists who believe that some universes are more plausible than others, and if you believe that universes with many large cardinals are more plausible than L, don’t you think that an absoluteness result such as the one I mentioned makes the concept of P(N) a bit more robust?

Comment #272 November 6th, 2020 at 12:49 pm

asdf #254

My remark was intended less formally than you’re interpreting it. It was intended to be a claim about people rather than about mathematics. I believe that the hard part of such a construction is having a good enough grasp of what it would mean to have a game with game value equal to that ordinal. And if we did want to get hyper technical, my framing is non-uniform and therefore (I think) doesn’t run into that upper bound. If you give me an ordinal I can use it to specify the board state.

I think you’re making a slight category error here. It is possible to encode TMs as Magic games, so that determining which Magic games will terminate (and therefore not be draws) requires solving the halting problem. Any specific instance of the halting problem can be easily solved, it’s only when you look across infinitely many instances that things get wonky. Am I capturing what you are after, or no? I’m not certain I understand what you have in mind.

Chow #239

Thank you!

That’s a

veryinteresting question. I have only considered a very stripped down version of the question before: given a game state and a move, is the move legal? I believe that that question is computable (though likely extremely hard – not contained within EXPTIME for example), but yours is more difficult. Strictly speaking, in a tournament setting the answer is that the question doesn’t have an answer, as there are some rules corner cases that are adjudicated by judges and whose resolution is not contained within the rules themselves.Moving away from those corner cases, the first thing that pops out to me is the idea of setting up a TM that, say, searches ZFC for a proof of ~Con(ZFC) and halts when it finds one. The problem is ensuring that there is no alternative path to that board state. If you restrict the question slightly to “given two board states A and B, is there a legal sequence of moves that takes A to B” then it seems rather easy to extend the arguments in my papers to show that this is non-arithmetic.

The easiest way to make process on this (assuming that it is in fact hard) is probably to find a “narrowing point,” a sequence of board states such that the only legal way to get to the end is from the beginning. Then we would try to combine that with an encoding of a hard-to-solve problem.

Comment #273 November 6th, 2020 at 1:02 pm

Jair #165 wrote:

That’s a fun question: why is Gödel’s second incompleteness theorem important?

To me it’s obviously a surprising and beautiful result. It’s surprising – or at least it was when I first learned it – that Peano arithmetic is powerful enough to reason about itself and contain statement Con(PA) that comes close to meaning “Peano arithmetic is consistent”. And it’s very beautiful how Gödel showed PA could only prove Con(PA) if PA is

inconsistent.To me any surprising and beautiful theorem counts as “important”.

But if that’s not yet enough to make the second incompleteness theorem as “important”, though, what does?

It’s easy for arguments about “importance” to become tedious, swamped in subjective questions of taste. I want to avoid that.

For a mathematician, a theorem – even an unsurprising and somewhat ugly one – counts as important if it helps you settle a lot of questions you were wondering about. So, we could ask:

what theorems in logic follow from Gödel’s second incompleteness theorem?I’m afraid I’m blanking out on this one; I feel I knew examples but I can’t remember any now. Someone better at logic should know.One thing I hope everyone realizes is that Gödel’s second incompleteness theorem shouldn’t reduce our confidence in the consistency of PA one whit. Suppose I didn’t know Gödel’s second incompleteness theorem (or suppose, counterfactually, that it wasn’t even true.) Suppose PA

couldprove Con(PA). Would I feel this counted as evidence for the consistency of PA? No, of course not, because if PA were inconsistent thenof courseit could prove Con(PA): it could proveanything.A theory proving a statement that asserts its own consistency is kinda like someone saying “Trust me! I would never lie to you!” Is there anyone out there who is actually convinced by such remarks? I’m afraid there are. But not me.

Comment #274 November 6th, 2020 at 1:33 pm

Stella #256: Thank you for the detailed explanation! I still feel unsatisfied, though. Maybe I can explain why, and if you’re so inclined maybe you can help me understand further?

I’m with you up until the point where you say ” models of the theory of the natural numbers that are uncountable.” What does the word “uncountable” mean here?

As I am learning in this blog post, “uncountable” isn’t a hard-and-fast term about the size of a set, but instead depends on what other sets exist in our model. But when discussing the size of a model itself, I’m figuring there’s some sort of.. “encompassing model” we need to be working in? This would allow us to measure the size of a model without running into things like the set-of-all-sets paradox. If I’ve got the wrong intuition here, I’d appreciate any details you can offer to correct me.

But if I’ve got the right intuition, then isn’t “uncountable” still kind of relative, because there could be an even larger encompassing model in which the given model of the theory of the natural numbers becomes countable? That’s my understanding of what Scott wrote – that a set \(A\) that is uncountable in one model (call it \(M_1\) can become countable in a larger model (call it \(M_2\)), because there can be a bijection between \(A\) and \(\mathbb{N}\) in \(M_2\) that is not in \(M_1\).

Again, if I’m completely off-base in any of the above, I appreciate corrections or guidance to help me understand!

So this brings me to my question: Are there actually any sets \(A\) that are

intrinsically uncountable, in the sense that we can never find a model that has a bijection between \(A\) ) and \(\mathbb{N}\)? Or is it the case that, for all we know,anyset can be countable in a sufficiently larger model?It seems like some sets should have intrinsic properties, for example being finite. It seems like if I have a set with two elements, no encompassing model should be able to exhibit a bijection between that set and a strict subset of itself. But can we say the same about uncountability?

Comment #275 November 6th, 2020 at 1:46 pm

Dan Staley #267:

FWIW, “axiom of choice” is usually abbreviated “AC”, not “AOC”. 😛

Comment #276 November 6th, 2020 at 2:27 pm

Timothy Chow #270:

Yes, I agree, it’s nice, a long as the statement means something comparable in that other number system as well 🙂

Comment #277 November 6th, 2020 at 2:55 pm

Timothy Chow #270: please ignore my previous answer, I didn’t want to annoy you. Was just wondering: I expect we can express Con(PA) by more than one different statements, \(P_1\), \(P_2\), etc., in the number system \(\mathbb{N}\). So Gödel’s second incompleteness theorem basically says that whatever statement \(P_i\) we come up with, we can always find a corresponding number system \(S_i\) in which \(\neg P_i\) can be proved. Is this correct? (Scott, I promise this was my last question today!)

Comment #278 November 6th, 2020 at 2:59 pm

G #299

Javacript:

for (let i = 0; i < 200; i++) { console. log((i >>> 0).toString(2).split('').reverse().join('')) };

produces:

…

01

…

0101

…

010101

…

01010101

…

It is counting.

Counting will produce every bit sequence eventually.

Zeros must be appended with each new digit in order to maintain the matrix, but before every expansion, there will always be a 01010101… sequence in there, until it turns into 010101010… sequence with next digit, and so on…

Comment #279 November 6th, 2020 at 3:09 pm

Nick #224

I want to repeat what I said in comment #199. I am not arguing Cantor is wrong, as there are prior proofs by Cantor before using the diagonal argument.

I am arguing that the diagonal argument is unsound for three seemingly elementary reasons.

I am yet to hear something that points out the mistake in my reasoning about the

diagonal argument.As to specifically the point about my critique not generating the sequence 111… I do not understand. This program will generate a sequence 111… of all 1s eventually (it just counts up in binary with least significant bit on the left):

for (let i = 0; i < Infinity; i++) { console.log((i >>> 0).toString(2).split('').reverse().join('')); }

Comment #280 November 6th, 2020 at 3:10 pm

Nick #224

Forgot to clarify that language in comment #276 is Javascript.

Comment #281 November 6th, 2020 at 3:35 pm

John Baez #273: You know this already, but it seems worth stating explicitly for the benefit of other readers. Gödel’s 2nd is important because it showed that Hilbert’s program could not be achieved (at least not in its strongest form). Hilbert’s idea was that there are certain kinds of mathematical reasoning that are “finitistic” (roughly equivalent to what we nowadays call primitive recursive arithmetic) and that are completely trustworthy, but that reasoning using infinite sets is more doubtful. He was hoping for a finitistic proof that ZFC (or some other axioms for infinite set theory) is consistent. That way, even if we aren’t completely sure whether infinite sets “really exist” or that our statements about infinite sets are “meaningful,” we could nevertheless freely use infinitary reasoning, and never have to worry that we would run into a contradiction.

Even today, Hilbert’s program sounds plausible to many people. People ask, do we know that ZFC is consistent? Implicitly, they’re hoping for a

proofthat ZFC is consistent. And the weaker the assumptions needed for the proof, the better. Gödel’s 2nd tells us that any such hopes are doomed. Not only is primitive recursive arithmetic unable to prove that ZFC is consistent, it’s even unable to prove that primitive recursive arithmetic is consistent.As for consequences of Gödel’s 2nd, specific examples include Löb’s theorem and the fact that Henkin sentences (“this statement is provable”) are provable. But I think that more important than “quotable” consequences is the general picture that Gödel’s 2nd implies, namely that axiomatic systems are partially ordered by

consistency strength. For example if you posit the existence of a strongly inaccessible cardinal, then you can create a model of ZFC, and so you can’t hope to prove in ZFC that strongly inaccessible cardinals are consistent with ZFC. Set theorists live in a world where they deal with a hierarchy of stronger and stronger axioms and theyknowthat the consistency of these axioms is not provable in any ordinary mathematical sense. This feels pretty weird to most ordinary mathematicians, who are used to unproven conjectures but generally expect that the conjectures will eventually be “settled” one way or the other. The consistency of large cardinal axioms simplycan’tbe “settled” (except in the unlikely event that they are shown to beinconsistent).Comment #282 November 6th, 2020 at 3:41 pm

Dan Staley #274 wrote:

First: what does “model” mean here?

Models are things that live within set theory. A model of the theory of natural numbers is a set S, a bunch of operations on this set like + and ×, and so on… obeying the axioms of the theory of natural numbers.

In set theory we can talk about cardinalities of sets. In particular we have a concept of “uncountable set”. So, an “uncountable model” of the theory of natural numbers is one where the set S is uncountable.

And it turns out that if there are any models of the theory of natural numbers at all – that is, if this theory is consistent – then this theory must have uncountable models! This follows from the Löwenheim–Skolem theorem.

True, true! Instead of working within set theory as I’d just been doing, we can work in a

modelof that set theory! And sets that were uncountable in that set theory can be countable in the model! This again follows from the the Löwenheim–Skolem theorem.There’s no end to the number of times we can play this game. So it’s like one of those dreams where you “wake up” and everything is different… but then “wake up again” and everything is different yet again! As the guy said:

But the cool part is, this ability to keep going out to another model doesn’t stop us from doing math in a completely precise way. We just need to keep careful track of what we’re talking about.

Comment #283 November 6th, 2020 at 4:01 pm

Timothy Chow #276 wrote:

Okay, great – that’s the kind of consequence of Gödel’s second incompleteness theorem that I was looking for: not a “showy” consequence but a “structural” one, if you know what I mean. I actually knew this one, but I’m sleep-deprived (for some mysterious reason) so I forgot.

Are there some other good ones? Something not insanely technical?

(In my work on category theory I actually use strongly inaccessible cardinals a bit, just so I can relax and use the “category of all categories” without Russell’s paradox biting my butt. The trick is to talk about the category of all

smallcategories, meaning those where the set of objects and the set of morphisms live in a Grothendieck universe.)Comment #284 November 6th, 2020 at 4:47 pm

Dan Staley #274

These are excellent questions! Let’s start with breaking down what model theory is about.

When you read these words, you understand that they have some sort of semantic meaning. They aren’t just formal strings of symbols; instead they combine according to pre-specified rules to convey specific ideas. An equation, say \(\forall x \;\exists y\;20 + x > e^y\), is the same way. It is a formal string of symbols that you perceive as having a concrete semantic meaning, and in particular you can say “yes, it is true.”

When we talk about a mathematical theory, be it the theory of arithmetic or the theory of a linear order without endpoints, we are referring to the combination of a syntactic system for writing down formal strings of symbols, a way of assigning semantic meaning to strings of symbols to express specific ideas, and a division of those interpreted statements into two categories: “True” and “False.” Formally, we often bundle the syntax and semantics into a single unit called a language. So a theory is a language, plus a set of sentences in that language that are true.

It’s important to note that some theories are consistent, while other theories are not. The fact that something is a theory says absolutely nothing about whether it’s a sensible theory in terms of its mathematical content. That is where the model comes in.

The intuition for a model is that it is a universe in which a theory holds. A theory is a collection of contentful mathematical sentences, and a model of that theory is an actual mathematical structure that the theory describes. Concretely, the natural numbers are a model of the language of arithmetic and the real numbers are a model of the language of real ordered fields. Some theories have multiple models. You can name many models of the theory of a dense linear order without endpoints: \(\mathbb{Q},\mathbb{R},\overline{\mathbb{Q}},\mathbb{Q}[\sqrt{2}]\) are all examples. This gives rise to a natural question: which theories have models?

The Model Existence Theorem says that there is a model for a theory if and only if that theory is consistent. Note that by “consistent” I am referring to the *internal* logic and assignment of true and false.

Axioms are a convenient way to summarize a theory: they are a small set of statements such that the set of all logical consequences of the axioms equals the set of true statements in the theory. So PA is an axiomatization of the theory of arithmetic because any statement in the theory of arithmetic can be (eventually) proven from just the statements in PA. There’s a very meaningful sense in which a theory is the *closure* of a set of axioms.

So we have a theory (a set of syntactically and semantically meaningful sentences with assigned truth values), an axiomatic system (a short list of sentences such that all true sentences are logical deductions starting from the axioms), and a model (an actual mathematical object that the theory talks about). What model theory studies is the interplay between these three types of objects.

Now we can begin to answer your questions. When I talk about the size of a model I am talking about the number of objects that exist in that world. This is often called the “universe” or the “domain” of the model. Different models of the same theory can have different domains, as is shown in the dense linear order without endpoints example. So when I say that there is an uncountable model of the theory of the natural numbers, I am saying that there is a mathematical structure that satisfies every true statement in the theory of the natural numbers and which has uncountably many elements. When I say “uncountably many elements” I am speaking from outside the model looking in. I look at the model, count the elements, and find there to be \(\aleph_1\) of them.

I am a mathematician and I reason with (what I’ve been told are consequences of) ZFC. We can say that ZFC proves that the model is uncountable. However we cannot (in general) say that someone “inside the model” knows that it is uncountable. Some theories prove that their models are uncountable and others do not. So I live in a world in which the model is uncountable, but other people who live in other worlds may see it as countable.

Your intuition about finiteness is correct: any theory that is *complete* (meaning it has an opinion about the truth of every sentence) and has a finite model has a unique finite model. This is because you can write down sentences that say “there is exactly one object,” “there are exactly two objects,” “there are exactly three objects,” etc. for every natural number, and (by assumption) the theory has to have an opinion about the truth of all of them. Since theories with models are consistent, it must think exactly one of them is true.

When it comes to infinite models, however, there is no such thing as a “privileged” model the vast majority of the time. This is where your philosophy of mathematics comes into play. Some people say that there is a “correct” model for a theory; others take a “multiverse” view and simply say that different statements are true in different universes and all models are equally valid; and others say that talking about “correct” or “incorrect” models is missing the point, that mathematics is just the formal manipulation of strings of symbols and that “correct” vs. “incorrect” models is a meaningless distinction.
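The counting sentences mentioned a moment ago can be written out explicitly in first-order logic; for each \(n\), “there are exactly \(n\) objects” is:

\[
\sigma_n \;:=\; \exists x_1 \cdots \exists x_n \Big( \bigwedge_{1 \le i < j \le n} x_i \neq x_j \;\wedge\; \forall y \, \bigvee_{i=1}^{n} y = x_i \Big).
\]

A complete theory must decide every \(\sigma_n\), and consistency allows at most one of them to be true, which pins down the size of any finite model.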

It’s common to combine the first two options and say “I believe there is a correct model up to a certain point and after that there isn’t.” Scott believes that there is a correct model of PA, if I recall correctly. asdf mentions above that the fact that you can embed PA in a game like chess or Magic makes them more inclined to believe that there is a true model of PA because “is there a winning strategy in this game” should have a definite answer. I believe in some stronger logical principles, namely that I can apply algorithms to well ordered sets.

Comment #285 November 6th, 2020 at 5:11 pm

Ok, I think in #277 the statements \(P_1, P_2\), etc. must all be equivalent, so it’s enough to check only one of them. And instead of saying that “\(\neg P_i\) can be proved”, it’s more accurate to say that we engineer \(S_i\) so that we know that \(\neg P_i\) holds in it. (I’m gonna save all my dumb questions, so that 10 years later I can explain this topic to a beginner, without the curse of knowledge :-))

Comment #286 November 6th, 2020 at 5:50 pm

Tristan #278 & #279

I don’t dispute that you can make a program that will produce every finite bit sequence eventually.

But will your list include every *infinite* bit sequence?

That’s what Cantor’s argument about the reals is dealing with … the fact that every *infinite* bit sequence is a member of the real interval [0, 1]. And you can’t list them all.

You can make a sequence that lists every finite bit sequence, and still have never listed a single infinite bit sequence (well, okay, your sequence itself will be an infinite bit sequence, but you won’t have listed more than one infinite bit sequence. And the goal is to list *every possible* infinite bit sequence.)

Again, Claim 1 was that comment #81 was a way to list every possible *infinite* bit sequence. Where on this list would the infinite sequence encoding the unary halting language be? What elements would come before and after it?

Comment #287 November 6th, 2020 at 6:56 pm

Tristan,

Your program will never enumerate the number \(\sum_{i=0}^\infty 2^{-p_i}\) where \(p_i\) is the \(i^{th}\) prime. It does not matter how long you run it for; there is no step at which it writes down that number. I can replace \(p_i\) with any strictly monotonic sequence, too. So it won’t enumerate \(\sum_{i=0}^\infty 2^{-i^2}\) either.

You seem to be under the impression that if you let the program run forever it will enumerate all *infinite* sequences. That is not the case. Running your algorithm forever produces all *eventually constant* bit sequences (well, all eventually \(0\), but you can trivially modify it to produce all eventually constant ones).

If that doesn’t make sense to you, let’s be more precise. Do you agree or disagree with the following:

I can easily supply a proof of the above statement, but I want to make sure we are on the same page about what constitutes a refutation of your claims.
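For anyone following along, here is a minimal Python sketch of the diagonal construction under discussion (the function names are mine, purely illustrative): given any enumeration of infinite bit sequences, represented as functions from index to bit, it produces a sequence that differs from the \(k\)-th listed sequence at position \(k\), so it can never appear anywhere on the list.

```python
def diagonal(enumeration):
    """Given enumeration(k) = the k-th listed infinite bit sequence
    (itself a function index -> bit), return a new sequence that
    disagrees with the k-th listed sequence at position k."""
    return lambda k: 1 - enumeration(k)(k)

# The kind of list under discussion: the k-th sequence is the binary
# expansion of k, least-significant bit first -- so every listed
# sequence is eventually 0, i.e. a finite sequence in disguise.
def listed(k):
    return lambda i: (k >> i) & 1

d = diagonal(listed)
# d differs from every listed sequence at its diagonal position,
# so it is absent from the list no matter how long the list runs.
print(all(d(k) != listed(k)(k) for k in range(1000)))  # True
```

No matter what enumeration you plug in, the same one-liner manufactures a sequence missing from it; that is the whole content of Cantor’s argument.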

Comment #288 November 6th, 2020 at 7:22 pm

John Baez #283: One technique that has been used (by Harvey Friedman for example) to prove that certain combinatorial statements are unprovable in a weak logical system S is to show that the combinatorial statements imply the well-foundedness of some system of ordinal notations, which in turn implies the consistency of S. (Steve Simpson gave some references in an old post to the Foundations of Mathematics mailing list.) Maybe this doesn’t count, though, since arguably the heavy lifting is done by the ordinal analysis and not by Gödel’s 2nd.

Part of the reason that Gödel’s 2nd doesn’t have more far-reaching consequences is that in many cases one can “almost” prove consistency. For example, while ZFC doesn’t prove Con(ZFC), it proves the consistency of any finite fragment of ZFC, and a finite fragment is all that gets used in any actual proof. Technically, working strictly within ZFC, we can’t produce any models of ZFC, yet in practice we can talk about countable transitive models of ZFC and exercise detailed control over their structure. Somehow, the unprovability of the existence of a model ends up being little more than an annoying technicality that we have to dutifully acknowledge in our rigorous writeups, but that doesn’t play any important role in practice.

Perhaps if someone were to show that ZFC + \(\neg\)Con(ZFC) has some really interesting models, then the role of Gödel’s 2nd in set theory would change radically, but so far that hasn’t happened.

By the way, on the topic of Gödel’s 2nd, I can’t resist mentioning a beautiful paper by Kritchman and Raz, “The surprise examination paradox and the second incompleteness theorem.” The authors give a new proof of Gödel’s 2nd using an idea from the famous surprise examination or unexpected hanging paradox. They also turn the argument around and use Gödel’s 2nd to give a new take on the paradox.

Comment #289 November 6th, 2020 at 9:01 pm

@Timothy #258

I think that is a quite bad way of thinking about what \(\neg \mathrm{Con(PA)}\) says.

Yes, this is a formal assertion in the language of arithmetic. But if you unravel what it says, it just says that there’s a number which codes a finite sequence, where each element of that sequence codes a formula, so that this sequence of formulae ends with 0=1, following the rules of a proof calculus, blah blah. The key point is the beta lemma—you can code finite sequences with single numbers. The rest is no more controversial than using ASCII to represent English words—you pick a number to stand for each symbol, etc. So in the end it’s nothing more than what we do all the time with computers—represent things as long strings of 0s and 1s.
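As a toy illustration of the idea, here is a Python sketch of coding a finite sequence of naturals as a single number (using naive prime-power coding rather than the beta lemma; the function names are mine):

```python
def primes():
    """Yield primes 2, 3, 5, ... by trial division (fine for toy sizes)."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def encode(seq):
    """Code a finite sequence (a_0, ..., a_k) as prod_i p_i^(a_i + 1),
    where p_i is the i-th prime.  The +1 keeps zeros visible."""
    code, gen = 1, primes()
    for a in seq:
        code *= next(gen) ** (a + 1)
    return code

def decode(code):
    """Recover the sequence by reading off the prime exponents."""
    seq, gen = [], primes()
    while code > 1:
        p, e = next(gen), 0
        while code % p == 0:
            code //= p
            e += 1
        seq.append(e - 1)
    return seq

print(decode(encode([3, 0, 2])))  # [3, 0, 2]
```

The beta lemma accomplishes the same thing using only operations directly expressible in PA, which is the technically delicate part; the point here is just that one number can faithfully carry a whole finite sequence.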

More, you can instead do all of this coding in set theory, treating things like \(\neg \mathrm{Con(PA)}\) or \(\neg \mathrm{Con(ZF)}\) as statements about sets. It’s the same idea, but the coding is easier because it’s easier to code sequences as sets than as natural numbers. And the same phenomenon exists, with (necessarily omega-nonstandard) models of e.g. \(\mathrm{ZF} + \neg\mathrm{Con(ZF)}\) existing. If you object to the arithmetic statement as not really expressing that PA is inconsistent, then for the same reason you should also think that the set-theoretic statement doesn’t express that PA is inconsistent. But then you’re just saying that you don’t think set theory can adequately code mathematical statements. That’s a position one can take, but I’d be surprised if a set theorist such as yourself—or myself, for that matter—would take it. I think it’d be weird to point to nonstandard models to say that set theory cannot adequately formalize the notion of an infinite series (because in an omega-nonstandard model of set theory, what it thinks is an infinite series of reals is really some weird nonstandard thing). I don’t see why formalizing notions from logic should be treated any differently.

Instead, I think the way to understand what’s going on with models of \(\mathrm{PA} + \neg \mathrm{Con(PA)}\) is to look a bit more closely at nonstandard models. If we take such a model, then any (code for a) proof of an inconsistency from the axioms of Peano arithmetic must be a nonstandard element of the model; if it were standard, then since the relevant notions are sufficiently absolute it would code a real proof, thus implying the falsity that Peano arithmetic is inconsistent. Digging a bit deeper, because PA proves the consistency of each of its finite subtheories, we get that this proof has to use a nonstandard instance of the induction schema.

So we can state simply what’s going on: in such a model, its “proof” of an inconsistency uses an ersatz axiom, some weird nonstandard thing that isn’t a real formula and so cannot be used in a real proof. But the problem isn’t that the nonstandard model is wrong about what it means to say that a theory is inconsistent. The problem is the nonstandard model is wrong about finiteness, and thus wrong about what a formula is.

Comment #290 November 6th, 2020 at 10:15 pm

@John #273

Timothy has already made some good points, but I want to add something that hasn’t been explicitly addressed yet.

This standard is biased against negative results, and for that reason is inadequate as a standard. (To clarify, I’m not saying you’re claiming it’s a necessary condition for a theorem to be important.)

Take Weierstrass’s construction of a continuous, nowhere differentiable function. It may have some application in proving other results, but that’s not why it’s an important theorem. It’s an important theorem because it demonstrates that a certain intuitive picture of the mathematical world—a picture a lot of mathematicians before Weierstrass had—is wrong. If one were to ask “what theorems in analysis follow from Weierstrass’s theorem that there is a continuous, nowhere differentiable function?” it would miss that importance. Maybe there are good answers to this question, but if not, it wouldn’t be a mark against Weierstrass’s result.

I certainly understand why mathematicians like positive theorems that say yes we can do X. We want to prove things, so having more tools is a good thing. But negative results, which tell us not to waste our time on things we cannot do, are also important. It’s just that we cannot judge them by the same standard. And negative results can also open up new vistas of exploration, e.g. with the negative result that the parallel postulate doesn’t follow from the other axioms of Euclidean geometry, or with the second incompleteness theorem and the large cardinal hierarchy, as Timothy discussed above.

So I don’t think the second incompleteness theorem not having lots of corollaries in logic should be seen as a mark against it.

Comment #291 November 6th, 2020 at 11:33 pm

Kameryn W #289: This is largely a matter of opinion so I hesitate to dig my heels in too deeply. But from my experience trying to explain these sorts of things to students, or even arguing with philosophers like Mic Detlefsen, I think that it’s important to straighten out some very basic confusions that commonly arise, before rushing ahead to talking about proofs encoded by standard integers versus proofs encoded by nonstandard integers.

In particular, I find that students often fail to keep a clear mental distinction between *syntactic* operations and *arithmetic* operations. The reason we take such pains to spell out how to mimic syntactic operations with arithmetic operations is of course that syntax and arithmetic are conceptually distinct. In particular, I maintain that Con(PA), despite its name, is first and foremost an *arithmetic* statement, because when you expand it out, it contains arithmetic symbols such as + and \(\times\) that have an immediate and relatively intuitive meaning in any model of PA. Con(PA) does *not* directly contain symbols that represent syntactic operations such as concatenation. But because of its name, students have a strong tendency to think that Con(PA) is first and foremost a syntactic statement—and this gets very confusing, because what do syntactic operations mean in a nonstandard model of PA? In their mathematical education they have probably already encountered the idea that it’s not just numbers that can be multiplied—polynomials and matrices can be multiplied, for example. So they can readily grasp that there are different types of multiplication depending on what mathematical system you’re talking about. But different notions of “concatenation”? That’s unfamiliar.

Thus I think it’s important to emphasize the following sequence of logical steps: We first mimic standard syntax with standard arithmetic to get an arithmetic formula Con(PA). Being an arithmetic formula, Con(PA) has a clear arithmetic meaning in any model of PA. So we can “transport” syntactic concepts to a nonstandard setting, by carefully unpacking the correspondence between arithmetic and syntax in a nonstandard model to see what “nonstandard syntax” is all about. When everything is completely understood, then we can allow ourselves the luxury of talking about nonstandard models being “wrong about finiteness” and “wrong about what a formula is.” But I find that introducing that kind of language too early just causes confusion.

Comment #292 November 7th, 2020 at 3:57 am

I’d like to thank Scott, Jair, and especially Timothy for substantially reducing my initial confusion about Gödel’s 2nd. Now I feel like I understand the basics of the basics. But that said, I am worried to see comments like Kameryn W’s #289 appearing, which argue with Timothy. I’m afraid more comments like that would result in my being confused all over again. So let me try Trump’s strategy to avoid that: Scott, STOP THIS THREAD NOW, don’t allow such late comments anymore!!! 🙂

Comment #293 November 7th, 2020 at 10:19 am

Tamas V #292: Let me quote from Benson Farb’s essay, “On being Thurstonized.”

Being a Thurston student was inspiring and frustrating—often both at once. At our second meeting I told Bill that I had decided to work on understanding fundamental groups of negatively curved manifolds with cusps. In response I was introduced to the famous “Thurston squint”, whereby he would look at you, squint his eyes, give you a puzzled look, then gaze into the distance (still with the squint). After two minutes of this he turned to me and said: “Oh, I see, it’s like a froth of bubbles, and the bubbles have a bounded amount of interaction.” Being a diligent graduate student, I dutifully wrote down in my notes, “Froth of bubbles. Bounded interaction.” After our meeting I ran to the library to begin work on the problem. I looked at the notes. Froth? Bubbles? Is that what he said? What does that mean? I was stuck.

Three agonizing years of work later I solved the problem. It’s a lot to explain in detail, but if I were forced to summarize my thesis in five words or less, I’d go with: “Froth of bubbles. Bounded interaction.”

I don’t fundamentally disagree with Kameryn W, and I believe that once you understand what’s going on with models of PA + \(\neg\)Con(PA) (which will hopefully take you less than three agonizing years!), you’ll also see the wisdom of Kameryn W’s “froth of bubbles with bounded interaction.” It’s just my opinion that at this stage, it’s important for you to first see that Con(PA) has a direct *arithmetic* interpretation in any \(\mathbb{N}\)-lookalike, and that it’s this arithmetic statement that you need to unpack carefully in order to understand what’s happening.

Comment #294 November 7th, 2020 at 10:51 am

Kameryn #290 wrote:

Good! Since we’re talking about logic, I was careful not to use “if” to mean “iff” when I wrote:

Since I was trying to answer Jair’s question “why is Gödel’s second incompleteness theorem important?”, I was focused on *sufficient* conditions for importance, not necessary ones.

I also listed another condition that’s sufficient for me, if perhaps not for everyone:

Weierstrass’ construction of a continuous but nowhere differentiable function was surprising and beautiful. But it was mainly important for the reasons you mention – it corrected a misimpression people had, and led the way to a deeper understanding of functions.

At first some mathematicians thought Weierstrass’ function was “weird” and “pathological”. Later, we learned that “most” continuous functions are nowhere differentiable: the set of anywhere differentiable functions is meager in the set of continuous functions. This changed our expectations about what functions are like… sort of like how Gödel’s first incompleteness theorem changed our expectations about what axiom systems could do.

Comment #295 November 7th, 2020 at 11:57 am

@Timothy #291

I certainly agree it’s bad to tell students that Con(PA) is anything but a formal statement of arithmetic (or whatever background context you’re working in). That will lead to bad misunderstandings. But I think going too far in the other direction also leads to misunderstandings.

As I’ve sketched above, we can narrow in on what, specifically, is going wrong in models which think Con(PA) is false. And it’s a fairly short and understandable explanation. My experience as a student was that understanding this more detailed explanation was helpful in understanding what’s going on with the second incompleteness theorem and—especially due to the extra concepts we had to cover for the fuller picture—immensely helpful in understanding nonstandard models. So I think that’s the explanation we should give—to students or whomever—rather than stopping at merely telling them it’s a complicated statement about numbers that gets interpreted weirdly in nonstandard models.

Comment #296 November 7th, 2020 at 12:37 pm

Timothy Chow #293: sure, it will be less than 3 years, due to Scott’s series on CH 🙂 I don’t know why, but I have an innate resistance to studying things whose names are like “fundamental groups of negatively curved manifolds with cusps”. Maybe because I fear I’ll not see the forest for the trees if I go down that path. But Gödel’s theorems are still within my level of tolerance, I expect they are basic enough to interest me.

Comment #297 November 7th, 2020 at 1:01 pm

Stella #269 and John #274: Thank you for the detailed explanations and help! So my understanding of Cohen’s argument is now as follows:

1) We start with a huge, “stratospheric”, “distance of \(\aleph_0\) feet” model of ZFC in which we work. (This is the part I’m probably shakiest on – is this a model of ZFC we’re working in, or something else?). Let’s call this model \(L\).

2) Some of the sets in \(L\) form the “minimalist” model of ZFC. Let’s call this minimalist model \(M\).

3) Some more of the sets in \(L\) form a larger model of ZFC, let’s call it \(N\) (actually Cohen constructs many such \(N\), but at my level of understanding I’m happy to consider just one at a time).

4) Every set in \(M\) is in \(N\), and every set in \(N\) is in \(L\) (I’m avoiding calling \(M\), \(N\), and \(L\) “sets”, because I’m not really sure if they’re sets, proper classes, or something else).

5) Some ZFC constructions in \(M\), \(N\), and \(L\) yield the same result. For example, the number \(2 = \{\emptyset, \{\emptyset\}\}\) is the same in all these models.

6) Other constructions yield different sets in \(M\), \(N\), and \(L\), such as the power set of \(\mathbb{N}\), or the construction of \(\mathbb{R}\). Let’s subscript these constructions with their model so we can differentiate them as sets in \(L\); e.g., \(\mathbb{R}_M\) is the real numbers in \(M\). And \(\mathbb{R}_M \neq \mathbb{R}_N \neq \mathbb{R}_L\).

7) Every \(\mathbb{R}\) is uncountable in its own model. After all, “\(\mathbb{R}\) is uncountable” is a theorem of ZFC.

8) \(\mathbb{R}_M\) is countable in \(N\), which implies that it’s also countable in \(L\). \(\mathbb{R}_N\) is uncountable in \(N\). Scott’s summary doesn’t provide enough detail to know if \(\mathbb{R}_N\) is uncountable in \(L\), and indeed it may depend on which \(N\) we’re talking about now.

9) Finally, it’s STILL not clear to me if there’s a larger model encompassing \(L\), say \(K\), in which \(\mathbb{R}_L\) is countable, or more generally if *any* model’s real numbers (or uncountable sets) can always be made countable in a larger model. As far as I know, the answer to this could be “yes”, “no”, “open”, “undecidable”, or “the question is not well-formed”, and any of those answers might be prefixed with “obviously” 🙂

I want to give a *huge thanks to Scott* for writing this up. I’m an always-reader, seldom-commenter on this blog, but this post has made me more excited about mathematics than I have felt in many years.

Comment #298 November 7th, 2020 at 1:45 pm

Chow #288

If you can prove that arbitrarily large fragments of ZFC are consistent, why can’t you use a compactness principle to prove that ZFC is consistent?

Comment #299 November 7th, 2020 at 4:38 pm

Stella #298: I should probably have phrased my statement more precisely. For any finite list of axioms of ZFC, there is a ZFC proof that that list of axioms does not lead to a contradiction. This usually goes by the name of the Reflection Theorem. But ZFC does not prove “no finite list of axioms of ZFC leads to a contradiction” since as you say, that would mean that ZFC proves its own consistency.

The way the Reflection Theorem is proved is a little strange if you haven’t seen it before. The way most results of the form “ZFC proves X” are derived is, we use ordinary mathematical reasoning to prove X, and then we formalize our mathematical reasoning in ZFC. But for the Reflection Theorem, we have to consider ZFC more explicitly than we usually do, because the argument proceeds by *induction on formulas*. We have an *infinite family* of statements of the form “ZFC proves X(\(\phi_1,\ldots,\phi_n\))” that we want to establish; for every finite list \(\phi_1,\ldots,\phi_n\) of axioms of ZFC we have a corresponding formula that explicitly contains those axioms. We prove that all these are theorems of ZFC by giving an inductive argument “outside the system.” We can handle all formulas at once outside the system, but ZFC itself is only able to handle a specific finite list at any given time, and cannot prove the consistency of all axioms at once.

Comment #300 November 7th, 2020 at 9:51 pm

What I don’t understand and would like to get a grasp on is why many people here (as I understand, including Scott) don’t believe that CH has a definitive truth value, regardless of whether we can prove or even know it.

I believe that it’s very intuitive what the reals are – let’s say, countably infinite binary sequences (just doing \([0,1]\)). Forcing “adds” reals – but to me, there must be a “true, real” set of all such possible sequences, and some models of ZFC just lack some of them due to our inability to describe almost all of those sequences. If a model of ZFC does not contain some infinite sequence \(0111010110100101\ldots\) then it’s not complete, and if a model contains weird longer sequences that aren’t countable from the outside, then it’s also not “right”.

Similarly, while clearly harder to imagine, it still seems like the equivalent must be true for all left-total and right-unique (i.e. functional) sets of pairs of sets of such sequences, which means that a bijection from some subset of all those sequences to some other subset either exists or not.

And again, to me, it seems clear that if some model is missing a possible such set or contains one that is not possible, then it’s not a model of “reality”.

What part of this do you not think is reasonable?

Just to be clear – I’m aware that what is discussed in the post and most of the comments is much more on the formal side, and I make no claim about the argument above being possible to be formalized in ZFC or some reasonable extension of it; in fact, I would be surprised if it could.

I just would like to understand the objection to this platonist view better.

Comment #301 November 8th, 2020 at 2:09 am

Fnord #300: does platonism exclude the possibility of mathematical “parallel universes”? There might be a world of ideas in which CH is true, and another one in which CH is false. Just as quantum mechanics adjusted our view on information, it may also make us adjust what we mean by platonism. At the end of the day, I don’t think math is separable from physics.

Comment #302 November 8th, 2020 at 11:46 am

Fnord #300: Ultimately, doubts about whether CH has a definite truth value boil down to a feeling that if S is a sufficiently large infinite set, then our concept of S is intrinsically vague.

It’s perhaps easier to grasp this objection if we consider the weaker skeptical claim that V, the class of all sets, is an “intrinsically indefinite” object. Intuitively, a set is a definite totality. So if V were definite (the argument goes), then it would be the set of all sets. But “the set of all sets” leads to a well-known contradiction. And if V is indefinite, then what justification do we have for claiming that statements that quantify over all sets have definite truth values?

Now you could be skeptical about the definiteness of V but still think that \(2^{2^{\aleph_0}}\) is sufficiently clear that statements such as CH have a definite truth value. Other than an insistence that even \(2^{2^{\aleph_0}}\) is vague, the main objection to this view (I claim) goes something like this: “If something is *intrinsically unknowable*, then it’s doubtful whether it even exists.” (Here, the allegedly unknowable thing is the truth value of CH.) You might not buy this alleged connection between epistemology and ontology, but let’s suppose you do. Do we have any reason to think that the truth value of CH is intrinsically unknowable?

There have been various proposals for how we might come to know the truth value of CH, despite its independence from ZFC. The first thought, suggested already by Gödel, was that by adding some “strong axioms of infinity” (large cardinal axioms), we might be able to settle CH. While not entirely uncontroversial, some large cardinal axioms are widely accepted as being reasonable extensions of ZFC. However, a 1967 result of Levy and Solovay basically showed that large cardinal axioms are incapable of settling CH. This is a specific fact about CH in particular; it turns out that even after you lock down many features of the set-theoretical universe, you can use forcing to show that the truth value of CH still isn’t nailed down.

If large cardinals won’t do the job, might other axioms work? Hugh Woodin has made various proposals over the years. If you search around for “\(\Omega\) conjecture” or “V = Ultimate L” then you can read more. The trouble with these axioms is that they remind most mathematicians of the old joke where a professor is asked, “Is that really obvious?” and has to think for a really long time before finally declaring, “Yes, it is obvious.” If it takes years of training for a professional mathematician to even understand exactly what an axiom is saying, can we really say with a straight face that it’s self-evidently true and should be adopted as a basic axiom of mathematics?

None of the above constitutes a decisive argument that the truth value of CH is intrinsically unknowable, but the prospects of *settling* CH any time soon certainly don’t look good. Again, you could point out that even if the truth value is unknowable, it doesn’t automatically follow that the truth value doesn’t exist. Lots of people seem to think that whether a Turing machine halts is always a statement with a definite truth value even though some statements of that form are arguably unknowable. But if you’re already inclined to be skeptical about the clarity of statements about infinite sets, then the difficulties involved in settling CH will probably just increase your skepticism.

One final remark. The ability to create models of ZFC with “missing” subsets isn’t, or shouldn’t be, a direct argument that the truth value of CH doesn’t exist. As you point out, the truth value of CH could still be perfectly well defined in the “true” set-theoretical universe. As I indicated above, the argument is more indirect. Forcing shows us that even if we add more axioms to ZFC, CH almost always seems to slip out of our grasp. That could be interpreted as a hint that CH somehow isn’t a fixed feature of the set-theoretic universe.

Comment #303 November 8th, 2020 at 2:50 pm

Fnord #300: I don’t know if you read my earlier comments here, but this is precisely what’s been blowing my mind based on Scott’s post – from a Platonist point of view, maybe infinite binary sequences simply don’t exist. Sure, we can describe functions or processes that take in any natural number and spit out a binary digit, but maybe that’s all they are – descriptions of processes, but not actual infinite objects. And crucially, since language is discrete, there’s only a countable number of these (from a Platonist POV).

With this view, that “sequences” are no more than definitions of functions, we can still apply Cantor’s diagonalization argument, but it doesn’t prove that the set of binary sequences is uncountable in the way a Platonist means the word. It just proves that we can’t define a function that enumerates them.

Indeed, it seems like in whatever formalization of language you allow yourself to make these definitions, there’s a slightly broader language that allows you to define a function spitting out all the sequences you can define in the first language. (Just make your new language allow you to describe every sentence in your old language, then start enumerating the sentences.) But that gives you a bigger set of sequences you can define in the second language, so in a way you’ve just ended up back where you started.

(I think what I’m doing here is basically describing the countable minimalist model of ZFC from a Platonist perspective).
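The enumerate-and-diagonalize move described above can be made concrete. A minimal hedged sketch, where the function names and the three-item list are purely illustrative stand-ins for “all sequences definable in some fixed language”:

```python
# Toy sketch: given ANY explicit, countable list of binary sequences --
# each represented as a function from n to a bit, i.e. a "description
# of a process" -- Cantor's diagonal flip yields a sequence missing
# from the list, yet the diagonal is itself perfectly definable.

def zeros(n):        # 0, 0, 0, ...
    return 0

def ones(n):         # 1, 1, 1, ...
    return 1

def alternating(n):  # 0, 1, 0, 1, ...
    return n % 2

definable = [zeros, ones, alternating]  # any countable list works here

def diagonal(n):
    """Flip the n-th bit of the n-th sequence (defaulting to the first
    sequence past the end of this finite toy list)."""
    seq = definable[n] if n < len(definable) else definable[0]
    return 1 - seq(n)

# The diagonal differs from the k-th listed sequence at position k,
# so it cannot appear anywhere in the list:
for k, seq in enumerate(definable):
    assert diagonal(k) != seq(k)
```

Note that `diagonal` is defined in the same language (Python) plus access to the list itself – mirroring the comment’s point that the new sequence is always definable “one level up,” in a slightly broader language.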

What I’m wondering now is: maybe for any language that lets you define binary sequences, you can actually enumerate all the sequences definable in that language by using a slightly broader language? Which seems to mean that, from a Platonic viewpoint, there’s no such thing as an uncountable set(?!?!)?

Comment #304 November 8th, 2020 at 3:32 pm

Thank you for the responses!

Tamas V #301: My view on mathematical “parallel universes” regarding CH is similar to what is discussed in many other comments here regarding “true” integers and non-standard integers. The latter surely are part of the mathematical universe, but are not the “true” integers. For me, the same goes for models of ZFC – sure, lots of odd models (such as countable ones) exist, but they aren’t representing the “true” set theoretic universe. In the case of the countable models, that seems like it should be obvious to anyone, no?

And since I know where I am here, I’m (sadly?) resisting the urge to respond to the physics analogy – I’d probably be lynched for my views!

Timothy #302: I think I can relate to the general skepticism many have regarding the full V, but for me it begins later. And I think that’s not inherently a problem of whether there’s some truth to it, but more of my inability to fully grasp large cardinals in this context – I have no clear intuition on whether their implied set-theoretic universes are artifacts similar to non-standard integers, or whether (some of them) do represent the intended “real” set-theoretic universe. (Yes, it’s understood that the latter carries the implicit assumption that such a “real” set-theoretic universe exists!)

The same goes, even more so, for most of Woodin’s work. Without a truly deep understanding of forcing, it seems impossible to form an opinion on his proposals, and having read “Forcing for Dummies” doesn’t appear to be enough for that purpose!

I think you really nailed it: it boils down to not buying “this alleged connection between epistemology and ontology.”

What about your own personal view on CH?

And does Harvey Friedman’s work affect the opinion of others? Specifically, his paper about statements about natural numbers requiring large cardinals?

Comment #305 November 8th, 2020 at 3:45 pm

Dan #303: I think we have a misunderstanding regarding what the “Platonist” view is? To me, it basically means, paraphrased and simplified: “The Truth is out there”, as opposed to the formalist view “there is no Truth beyond formalism”. It does not make any claims about what that truth is.

Since I’m a Platonist in that sense, your position (as I understand it) is an example of the one I find incredibly difficult to relate to. Do you really think that just because you can’t formally define something, it doesn’t exist? Isn’t the entire CH question moot for you then (since V=L implies CH)?

Comment #306 November 8th, 2020 at 4:35 pm

Fnord #304:

Oh, I thought you meant Platonism like e.g. the “true” integers exist in a perfect “world of ideas”, and mathematicians are basically exploring that ideal world, but using tools that are limited by the imperfect physical world we live in. And also, in such an ideal world every statement has a truth value, so CH is either true or false, and it’s waiting for mathematicians to discover which. (Don’t get me wrong, this is far from what I believe.)

Feel free to respond to my comment regarding physics, nobody will lynch you here, we don’t even know your coordinates 🙂

Comment #307 November 8th, 2020 at 4:49 pm

Fnord #305:

From your perspective, is it possible that the “truth that’s out there” is simply that infinite sets do not exist? That our every attempt to define, understand, and manipulate them is simply us making finite rules about finite sets? If that were the case, then wouldn’t the Platonist truth about CH be not that it’s true or false, but that it’s simply meaningless outside of formalism, because it’s a statement about entities that don’t exist?

To be clear, I’m not

claimingthat infinite sets do not exist, nor am I claiming that something doesn’t exist just because we can’t define it. If anything, I’m desperately looking for a reasonnot tobelieve these positions! But I’m trying to explore and understand this possibility, because right now it’s the only Platonist view that’s making sense to me.What Scott’s post has made me realize is that if you want a Platonist view of CH, you first need to re-examine the Platonist meanings of “countable”, “uncountable”, “cardinal”, etc., since we now see that their definitions are not nearly as clear-cut as we imagined when we first learned Cantor’s diagonalization argument. There are all these sets that appear uncountable within one model, but are countable within a larger one. And

maybe(I don’t believe anyone has given me an answer on this yet),everyset is countable in a large-enough model. To me that would mean that, from a Platonist point-of-view, every set is countable; uncountable sets don’t exist.Comment #308 November 8th, 2020 at 5:06 pm

Tamas V #306: That actually is my view indeed! Only that the “ideal world” also contains formal logic, model theory, and everything else. So in the “ideal world” there are the “True Integers” – what we all think of when counting. And there exist PA and models of it – the latter including non-standard integers.

I’m not so sure about the part about “waiting to be discovered”, though. Being “real” in some sense does not imply our being able to know it. On the contrary, as I tried to explain before: certainly, there will be sets that are simply impossible to imagine, much less accurately describe, with what our minds are able to do. CH may or may not be on the edge of that.

My view regarding physics is similar. I believe that reality strictly follows rules, and part of those rules is “basic” logic. I accept that quantum physics is currently our best description for much of it, and I accept that it is likely inherently impossible to, say, have accurate knowledge of certain things (uncertainty principle), but I don’t believe the conclusion that our description is what happens in reality. But really, that is a rather different topic!

Comment #309 November 8th, 2020 at 5:26 pm

Fnord #304: Regarding my own view on CH, I don’t believe that CH will ever be “settled” in the sense of a consensus developing on its truth value. Even if someone comes up with some axiom that implies CH, I believe the majority view will be, “Okay, so that axiom implies CH, but that doesn’t necessarily mean that either the axiom or CH is true.”

But does CH have a definite truth value? I’m basically agnostic on this question. I don’t find the anti-Platonist arguments persuasive. I have found that those who argue against Platonism are usually Platonists about something, and I don’t see a compelling way to argue that we should be Platonists about this but not that. For example, in a post to the FOM mailing list, I mentioned an article by nominalist Mary Leng, who denies that “2+3=5” is true simpliciter, yet seems to think that a logical fact such as “Proposition P follows from Axiom A” is true simpliciter. I cannot fathom how to reconcile these two beliefs. Or take Solomon Feferman, who strenuously denies Platonism, but nevertheless believes that the natural numbers form an “absolutely clear intuitive model” of PA. My response is: fine, let’s avoid talk of Platonic realities if you don’t believe in them, but if we’re allowed to claim that the natural numbers are “absolutely clear,” then I don’t see any reason in principle why we can’t claim that \(2^{2^{\aleph_0}}\) is absolutely clear.

At the same time, I think we do need to have some humility when it comes to making claims that infinitary concepts are absolutely clear. For a strong claim like that, it’s nice to have some “checks and balances” in the form of corroborating evidence. For example, claims that a Turing machine does or does not halt can be (at least partially) corroborated by building physical computers and observing them. The trouble with CH is that there aren’t many checks and balances available, other than finding other people whose intuition agrees with yours. I do think that the majority of people who have sufficient mathematical training to understand the statement of CH (and who haven’t been “contaminated” by too much philosophical speculation) instinctively feel that it is a clear mathematical statement, so that counts for something. But I don’t think it’s decisive.

Comment #310 November 8th, 2020 at 6:42 pm

Timothy #309: Thank you very much! I can totally understand that view! In fact, it appears to be not all that different from mine. Humility in understanding the infinite – of course! Even when one gets past the point where “simple” things like Hilbert’s Hotel seem astonishing, or past Banach-Tarski (totally irrelevant tangent: I love this video) looking like an insurmountable paradox, there’s always enough strangeness in infinity to blow one’s mind.

Dan #307: My resounding and very definitive personal answer to the first question is: No! My answer to the second, directly following question about our reasoning is: Partially! For why those are not incompatible, see my previous response (which I typed simultaneously with yours, apparently). I would not go so far as to say that we can only reason about finite statements about finite sets.

The countable does not appear to me to be beyond understanding – I feel I have some sort of grasp on that. Perhaps more importantly: everyone with mathematical education (with the obligatory exception of what I’d call extremists) agrees on most of the properties of countable things! If there weren’t some independent truth behind it, separate from formalism, how could that be possible? In fact, how could it be that we developed that formalism only after using the mathematics for decades (or centuries?), in many cases independently but with the same results, if there weren’t some fundamental independent truth behind it all?

And in the same way, that goes further, following in Cantor’s steps. Which leads to your last point: don’t let those odd countable models of ZFC confuse you. While ZFC is often called the foundation of mathematics, you should imho not use it when you’re trying to build your intuitive basic understanding of mathematics and the underlying set theory.

Instead, embrace naive set theory, work that in your mind, then try to bend your mind further around Russell’s paradox, which is why we have ZFC and all the formalism in the first place. And then, after a long journey in your mind, you come back around, look at strict formalism, ZFC, and model theory, and think, “yeah, okay, that ZFC is probably modeling most of our ‘ideal mathematical world’ accurately.”

I don’t believe you can understand mathematical logic and model theory the other way around at all.

But the above might be a long journey! Most of the folks discussing here have studied math for years. It’s certainly been a long journey (not over!) for me, and if I think back on how my mind was blown by the well-ordering principle, non-measurable sets, AoC, and, among many other things, Russell’s paradox(!) the first time I heard about them, I think that journey is worth it for anyone interested in these topics. If you do take that path – well, then I cannot imagine how the existence of countable (and bigger!) infinities would not seem clear!

Comment #311 November 8th, 2020 at 7:12 pm

As I understand it, mathematical realists (or perhaps mathematical Platonists – I’m not an expert on the philosophy of math) believe that there is a unique privileged model of mathematics that is “actually correct in the real world,” regardless of what can be proven within which axiom systems. Presumably, they believe that CH is either “actually true” or “actually false” (again, putting aside any questions of provability). This question at Math Stack Exchange asks which position is more popular among mathematical realists. The only answer claims that there is a systematic difference of opinion between mathematicians in the U.S. and in Europe, but it doesn’t seem to give much evidence to support that claim.

I’d be curious whether anyone has any thoughts on whether there’s any non-rigorous reason to consider one position or the other to be “better”, or if that’s just a 100% subjective question.

Comment #312 November 9th, 2020 at 12:55 am

Fnord #308: thank you for sharing your view! (I hope you’re still fine…)

I cannot agree more. In fact, I believe there is a simple test to decide whether a theory is true: If the theory states that something exists, then it’s false 🙂

Comment #313 November 9th, 2020 at 8:36 am

There’s a very interesting article by Colin Rittberg entitled “How Woodin changed his mind: new thoughts on the Continuum Hypothesis.” For a while, Woodin was arguing that CH was false. The argument is highly technical, but basically Woodin came up with a more powerful notion of logic called “\(\Omega\)-logic” and asked if there existed a plausible axiom which (together with a suitable large cardinal axiom) could be added to ZFC and would “decide” (in the sense of \(\Omega\)-logic) all questions about \(H(\omega_2)\), the sets whose transitive closure has cardinality less than \(\aleph_2\). He proposed an axiom (*). If you buy all this, then it turns out that \(2^{\aleph_0} = \aleph_2\), so CH is false.

However, Woodin has since changed his mind in the light of new results that he and others have proved. Recall that Gödel showed that V=L implies CH. But V=L has never been a popular candidate for a “fundamental axiom,” in part because it strikes a lot of people as artificially restrictive, and also because it contradicts large cardinal axioms such as the existence of a measurable cardinal. There have been efforts (the so-called “inner model program”) to reconcile the two by constructing L-like structures that can accommodate large cardinals. There was a breakthrough by Woodin, which roughly speaking shows that if one can accommodate a supercompact cardinal, then one would also accommodate all larger cardinals “for free.” This is still highly conjectural, but it can be regarded as evidence for an axiom called “V = Ultimate L.” This axiom would have the nice feature of deciding a large number of undecidable questions in one fell swoop—just as V=L does, but with the advantage that no large cardinal axioms are violated. In particular, V = Ultimate L implies CH.

Incidentally, it might seem bizarre to imagine that \(2^{\aleph_0} = \aleph_2\) is objectively true; why \(\aleph_2\) and not \(\aleph_{13}\) or something? But actually, there does seem to be something special about the relationship between the continuum and \(\aleph_2\). See for example Justin Moore’s article, What makes the continuum \(\aleph_2\). There’s also a completely different approach from Woodin’s to settling CH, involving a generalization of Martin’s axiom known as “Martin’s Maximum.” Martin’s Maximum also implies that \(2^{\aleph_0}=\aleph_2\).

There’s a nice Quanta magazine article about all of this.

Comment #314 November 9th, 2020 at 10:14 am

Scott #146:

Sorry I’m a bit late to respond here, it’s been a busy week. To your responses:

Ad your (2): There was nothing in the post I specifically thought you should revise (about standard transitive models). I look forward to the next instalments.

Re: your philosophy of mathematics (and since the truth-value determinacy of CH has popped up elsewhere in the thread): I’m very sympathetic to the idea that we might have some sort of moderate relativism regarding CH. Even if you think that there is some `absolute' universe of sets out there, perhaps there’s more to set-theoretic truth/falsity than simply sticking a universal quantifier in front of an epsilon symbol—it’s important what concepts we’re employing with the use of that syntax. Rewind a little, and consider Mirimanoff’s thinking in his 1917 paper, where he talks about the `ordinary' sets (what we’d call well-founded) and the `extraordinary' sets (what we’d call non-well-founded). If we think about Mirimanoff in terms of our present-day iterative conception of sets, he looks very dumb—of course all sets are well-founded (Foundation is an axiom of ZFC)! This is unfair on Mirimanoff, though: he didn’t have the iterative conception of set, and it’s plausible that things could have gone differently (say we all became very interested in non-well-founded sets for applications in computer science, and came up with a graph-based conception of set). The Neil at that world is writing a comment about how we *could* have adopted Foundation, rather than some anti-foundation axiom. So it seems like it wouldn’t be right to say Foundation had a truth-value for Mirimanoff; his concept just wasn’t sharp enough to settle the question.

It’s then plausible that we’re in the same position as Mirimanoff, but with CH. Maybe there are futures in which we adopt CH (say we get very impressed by Woodin’s Ultimate-L programme), and futures in which we adopt ¬CH (say we get very impressed by forcing axioms). Maybe we should say that CH is indeterminate given our current concept of set, but might be settled by our intellectual descendants? (I don’t know the etiquette of plugging one’s own papers on the blog, so apologies if this is not allowed, but I consider these issues in a bit more detail in `Structural Relativity and Informal Rigour'.)

As to whether you need the natural numbers for logic, I agree it’s hard to see how to allow for the idea that the natural numbers might be indeterminate (and, for example, that Con(ZFC) doesn’t have a truth value) and keep any sense of what it means to even talk about specific non-finitely-axiomatisable theories. (Another ruthless plug, if anyone’s interested: I talk about this in `Multiversism and Concepts of Set: How much Relativism is acceptable?'.) However, there is one way in which this might go: Joel Hamkins (who believes that Con(ZFC) is indeterminate) suggests (in `The Set-Theoretic Multiverse') that we may discover a forcing analogue for arithmetic that will let us modify the truth-value of these sentences in a flexible way. This (he argues) would challenge our belief that the natural numbers are determinate. I’m willing to accept that such a technique would challenge our confidence in the determinateness of the natural numbers, but I’m also inclined to say that no such technique exists—our concept of natural number is just too sharp.

Comment #315 November 9th, 2020 at 11:20 am

Haim Horowitz #271:

… don’t you think that an absoluteness result such as the one I mentioned makes the concept of P(N) a bit more robust?

Well, it shows a degree of robustness against large-cardinal axioms. But we know that forcing can change the set-theoretic universe with no effect at all on large cardinals, right?

Comment #316 November 9th, 2020 at 11:20 am

Fnord #310: To be clear, I do have years of mathematical study under my belt, and I’m familiar with all the other mind-blowing results you list in your post. It’s just that I never learned model theory or much past the basics of logic and set theory during my studies of algebra and topology, which is why Scott’s post has blown my mind.

Up until reading Scott’s post, my mindset was very Platonist, and very much like yours. And I think we both saw that these results about multiple versions of \(\mathbb{R}\) force a Platonist to pause and furrow their brow. But after that our reactions were different – you’re seeking an understanding of how this can be consistent with Platonism, while I’m questioning my Platonism. I think both are worthy lines of thought!

Looking at all this from a Platonist point of view, we need to be careful about what the “true” real numbers are – I think we’d want them to be some sort of “maximal” set of reals, such that any toy model’s real numbers are contained in them. It feels wrong for the minimalist, countable real numbers to be the Platonic reals. (This should be non-controversial – I doubt you disagree with anything in this paragraph, right?)

But given this line of reasoning, I think the truth value of CH depends on the answer to the question I’ve been asking, rephrasing, and re-asking throughout my posts here:

My Question: Can a set (formally) be “intrinsically” uncountable, meaning that no encompassing model has a bijection between that set and \(\mathbb{N}\)?

If the answer is “no”, it seems to imply that (formally) all sets are countable in some containing model. In my mind, this means that, Platonically speaking, all sets are countable, and the entire notion of uncountability is an artifact of formalism, a consequence of models being unable to contain “enough” maps from \(\mathbb{N}\). This makes CH meaningless Platonically – not only is there no cardinal between the naturals and the reals, but the naturals and the reals are the same cardinal! I don’t know the answer to My Question, but I can’t seem to escape this Platonic consequence if the answer is “no”.

If the answer is “yes”, and if in particular there’s an “intrinsically” uncountable version of the real numbers in some model, then Cohen’s construction seems to prove that, Platonically speaking, CH must be false – after all, we want the Platonic real numbers to be “maximal”, so they must be as “big” as some of Cohen’s reals, which have smaller uncountable cardinals below them. (I guess this line of reasoning also requires these smaller cardinals to be “intrinsically” uncountable.)

Comment #317 November 9th, 2020 at 11:27 am

Tamas #292:

I’m afraid more comments like that would result in a reverse outcome of my being confused again. So let me try Trump’s strategy to avoid that: Scott, STOP THIS THREAD NOW, don’t allow such late comments anymore!!! 🙂

Do I have a second and a third for closing down the thread? On the one hand, I feel like the discussion has been one of the top ten in Shtetl-Optimized‘s history; on the other hand, I lost the ability to keep up even before plunging into my current post-election, post-surgery haze!

Comment #318 November 9th, 2020 at 1:35 pm

Scott #317: The best way to shut down this thread is to write your next installment and have the conversation continue there!

I’m tempted to respond to Neil Barton #314 but in the interests of trying to “move on” I’ll postpone my thoughts. I will say in response to the question in Dan Staley #316 that if \(\kappa\) is a strongly inaccessible cardinal, then \(V_\kappa\) is a model of ZFC, and in no model containing \(V_\kappa\) will there be a bijection between \(\aleph_0\) and \(2^{\aleph_0}\). So the answer to your question as stated is yes. But perhaps more importantly, even if the answer were no, I don’t think I agree with the conclusions you draw from that. But again, this discussion is perhaps best postponed for later.

Comment #319 November 9th, 2020 at 1:47 pm

Kameryn W #289:

This is very interesting and new to me. Can you, or anyone who understands it well, elaborate on exactly what this lets us conclude, and on how it’s proved?

On first reading, I thought you meant “any nonstandard PA-proof of a theorem with no standard PA-proof, must use an induction axiom with a nonstandard number of quantifiers — and that’s the source of its apparent power, not that the proof is secretly infinitely long (which by itself would provide no new power)”.

But I think that can’t be exactly correct, because if it was, then a finite fragment of PA (say, “I Sigma_n” == “induction on formulas with at most n quantifier-blocks”) would not have nonstandard models with proofs of its own inconsistency; but that is false.

So is this understanding of the power of nonstandard proofs specific to models of PA+¬Con(PA), or am I confused in some way?

Comment #320 November 9th, 2020 at 1:51 pm

Scott #317:

I haven’t been following your blog for very long, but this thread is by far the best discussion I’ve seen! I’ve learned a lot from the comments. Just wondering whether you can still focus on writing the second installment.

I try to be practical with math; I apply what Feynman said about physics: “Even those ideas which have been held for a very long time and which have been very accurately verified might be wrong.”

So, while we have to assume certain “obvious” things (about natural numbers, sets, logic, etc.) upfront to be able to come to conclusions, we also have to keep in mind that those “obvious” assumptions may turn out to be wrong one day. The bottom line is that yes, at any point in time there are things we consider “obvious”, and that’s fine – just don’t believe those are absolute truths.

Ok, you can close it now, before I get lynched 🙂

Comment #321 November 9th, 2020 at 2:03 pm

Scott #317: Do I have a second and a third for closing down the thread?

Timothy Chow #318: The best way to shut down this thread is to write your next installment and have the conversation continue there!

I second Timothy Chow — letting this discussion continue provides a valuable public service — the fact that it continues proves “we” don’t (at the present moment) have a better place to carry on a similar discussion. (And yes, I know/presume Tamas #292 was joking.)

Comment #322 November 9th, 2020 at 4:16 pm

Ted #311: Just briefly – I would consider CH true to be more beautiful, and CH false to be in a sense more interesting. Particularly the, as mentioned, oft-favored \(2^{\aleph_0}=\aleph_2\) would indeed truly blow my mind.

Tamas V #312: Completely different tangent now! We all know that, contrary to mainstream belief, scientific theories (excepting mathematics here…) are never “true” or “false” in a strict sense – classical Newtonian mechanics describes much of our day-to-day experience perfectly well, yet we know it’s not accurate, with relativity at one extreme scale and quantum mechanics at the other (and sometimes both) being more precise.

Timothy #313: Thanks again. Sadly, I have tried to get a grasp on Woodin’s work, particularly Ultimate L, but I understand just enough about model theory to realize that his concept is well beyond my current horizon. I do wonder how many people in the world truly understand it in detail – I suspect it’s a rather low number.

In contrast, while my understanding of Martin’s Maximum (and even Martin’s Axiom) is similarly limited, those seem counterintuitive to me. I can’t really pin it down accurately, but broadly speaking: forcing seems to me like a nice tool for set theory, but not so much for getting information about the “ideal true real set-theoretic universe,” since models generated by forcing seem so very artificial (and in some cases outright wrong) to me personally. Of course, I have no good explanation for that.

Dan #316: I didn’t mean to imply that you didn’t!

And our different reactions are indeed interesting. I personally still – somewhat carefully! – believe that the “true” reals are just as perfectly definitive as are the natural numbers, but I can certainly understand why someone might be more skeptical about that.

An important point: the true reals are not the “maximal” ones. As I understand it, there are models that clearly contain more reals than the intended model, just as there are models that lack some “true” reals.

Your one central question, though, I don’t quite understand. I think both the fact that in any model of ZF, for any set \(P\), \(|2^P| > |P|\), and the existence of countable models of ZFC (and models of any cardinality – Löwenheim-Skolem) are known to you. So both the statement “any model of set theory has internally uncountable sets, one of which is the reals” and the statement “there are models of set theory in which every set is externally countable” are true. I don’t understand how either affects your sense of an absolute mathematical truth.

Scott #317 and Timothy #318: Closing comments? Aw no please? Only after next installment! It’d feel silly to discuss this under your post about surgery (I do hope you feel better!)

Comment #323 November 9th, 2020 at 4:31 pm

Stella #272, sorry for the slow response but I sort of let this thread roll by. Yes I did think you meant literally every countable ordinal, so sorry for the misunderstanding. Ok it’s clarifying to hear that the halting problem can be straightforwardly encoded in Magic. I’m unfamiliar with Magic so I didn’t have a way to know that. What I’d like to know is whether the halting problem can be encoded in IC0, or maybe some variant including 3-dimensional IC0. It’s not obvious to me whether that is the case. I suspect that for 2-D IC0, the question may still be open. I should take a more careful look at JDH’s papers to see if this is addressed, but you know how that goes.

Comment #324 November 9th, 2020 at 4:42 pm

Added: of course what I really want to know is whether IC0 (or Magic) are arithmetically complete and not just \(\Pi^0_1\)-complete. That means they can encode the halting problem for any finite depth of oracle machines.

Comment #325 November 9th, 2020 at 5:14 pm

Dan Staley #310, yes, of course \(2^{\mathbb{N}}\) can’t possibly have a bijection with \(\mathbb{N}\) in any model. Regarding Platonism and CH, you might like this MO post by JDH: https://mathoverflow.net/a/25199

Scott, yeah, please close the thread. At first I was going to say keep it open until the next installment is posted, but the topic has already spread to more recent threads, so I don’t want to have to keep looking back at this one.

Comment #326 November 9th, 2020 at 6:33 pm

Timothy Chow #318 and asdf #325: Either I misstated my question, you’ve misinterpreted it, or I misunderstand your answers.

If \(M\) is one model of ZFC and \(N\) is another model containing it, my understanding is that if we construct \(2^\mathbb{N}\) in \(M\), we can view it as a set in \(N\), but crucially, if we construct \(2^\mathbb{N}\) in \(N\), we get a different set. Indeed, the former may be countable in \(N\) while the latter never is. Have I gotten this right?

If so, then I think asdf’s statement “\(2^\mathbb{N}\) can’t possibly have a bijection with \(\mathbb{N}\) in any model” doesn’t answer my question, because it’s a different set in each model. Similarly, Timothy Chow’s statement “in no model containing \(V_\kappa\) will there be a bijection between \(\aleph_0\) and \(2^{\aleph_0}\)” doesn’t answer my question, because again \(2^{\aleph_0}\) is a different set in each of those models.

What I’m trying to ask about is the following situation: We take a model where \(2^{\aleph_0}\) (or any other set we pick) is uncountable in that model. Then, we pass to a larger model, where our set is no longer \(2^{\aleph_0}\) (or whatever set we constructed from the ZFC axioms). But is our set still uncountable? Can we construct our first model and set such that it – not a different set constructed in the same way in the larger model – will always be uncountable, no matter how “Godly” a vantage point we take?

Or am I being too platonistic here? Is there no such thing as viewing a set in one model as also being a set in a containing model, when the definition used to construct it in the smaller model gives you a completely different set in the larger model?

Comment #327 November 10th, 2020 at 8:02 am

Dan Staley #326: You are right to ask for the answer to your question to be phrased more precisely, so let me start again.

The relevant buzzwords here are “absolute” and “relative.” The raw language of first-order set theory has almost no symbols in it, so when we start using symbols such as \(\aleph_0\) and \(2^{\aleph_0}\) or even \(\emptyset\), we have to be a little careful. There might be some ambiguity as to whether we’re talking about the object in \(V\) (the “ambient” universe in which we’re working) or the object in some model \(M\) of interest. When we’re being careful, we write (for example) \((\aleph_0)^M\) to mean “\(\aleph_0\) in \(M\)” (\(\aleph_0\) relativized to \(M\)) to distinguish it from \(\aleph_0\) in \(V\).

A very important fact to understand is that some (but not all!) concepts are *absolute*, meaning that they are the same in \(M\) and \(V\) (or, in the context of two different models \(M\) and \(N\) with \(N\) containing \(M\), they are the same in \(M\) and \(N\)). In such cases we can drop the superscript because there is no ambiguity.

With rare exceptions, one usually works with what are called *standard transitive* models of ZFC (which I won’t define here; see my beginner’s guide to forcing if you’re unfamiliar with these terms). It turns out that lots of basic concepts are absolute for all standard transitive models of ZFC. Membership, the empty set, whether \(x\) is a subset of \(y\), whether \(x\) is an ordered pair, whether \(f\) is a function—all these are absolute (in fact, they’re absolute even if you drop the powerset, infinity, and foundation axioms from ZF). The absoluteness of \(\aleph_0\) is a little more subtle, and you do need the foundation axiom here, but it is absolute for all standard transitive models of ZFC (in fact, of ZF – P).

On the other hand, \(2^{\aleph_0}\) is most definitely *not* absolute in general, and that’s a crucial point for understanding the whole forcing business. Indeed, if \(M\) is *countable*, then \((2^{\aleph_0})^M\) cannot be the same as \(2^{\aleph_0}\) in \(V\) for the simple reason that in \(V\), \((2^{\aleph_0})^M\) is countable while \(2^{\aleph_0}\) is uncountable.

Your question can be rephrased as asking whether there can be a model \(M\) such that \(2^{\aleph_0}\) is absolute; i.e., where \((2^{\aleph_0})^M\) coincides with \(2^{\aleph_0}\) in \(V\). The answer is yes. As I said, if \(\kappa\) is a strongly inaccessible cardinal then you can take \(M = V_\kappa\). In this model, “everything below \(\kappa\)” is absolute, and a strongly inaccessible cardinal is so large that just about everything you encounter in ordinary mathematics lies below it.
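In symbols, the definition being used here is the standard one (my paraphrase): a formula is absolute for a transitive model \(M\) when it holds in \(M\) exactly when it holds in \(V\).

```latex
% Absoluteness (sketch): for a transitive model M and all a_1,\dots,a_n \in M,
\varphi^{M}(a_1,\dots,a_n) \;\Longleftrightarrow\; \varphi(a_1,\dots,a_n)
```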

The development of forcing relies on restricting attention to countable models, for which as I said \(2^{\aleph_0}\) is never absolute, but there’s no reason one can’t consider uncountable models.

Comment #328 November 10th, 2020 at 11:16 am

Timothy Chow #327: Thanks for your answer, it’s the first answer to my question I feel I fully understand!

Well, except for one part: What exactly is \(V\)? Is it also a model of ZFC, just “bigger” in some ways? Or is it something else that transcends simply being a model?

If it is just a “big” model, then I can’t help but wonder if there could be an even bigger fish out there, some \(W\) encompassing \(V\) in which (let’s see if I get the notation right) \((2^{\aleph_0})^V\) is countable in \(W\).

Comment #329 November 10th, 2020 at 11:45 am

Timothy #327: Wait, so *all* forcing happens in countable models? Including Martin’s axiom(s)?

That certainly explains why I feel so uneasy about those results. I mean, we already know that any “true” set theoretic universe will contain uncountable sets.

Dan, to add a little point to Timothy’s explanation: One of the oddities of set theory is that no matter what you do, you will still always work in an “outer” set theory. Basically, you do “normal” mathematics as usual outside of the model in order to work inside the model. Confusing, isn’t it? That’s what makes the notion of absoluteness so important.

Comment #330 November 10th, 2020 at 1:08 pm

I found this link to be very useful (I’m not this person!):

https://risingentropy.com/on-self-hating-theories-of-arithmetic/

Comment #331 November 10th, 2020 at 4:40 pm

Fnord #329: So maybe the question I’m trying to ask is unanswerable, at least formally. If we proved that a model \(M\) was uncountable in any containing model, we would only be doing so in the context of models contained in some super-large model \(V\), but could never make conclusions about models not contained in \(V\). Or if we showed that any model \(M\) could be extended to a larger model in which \(M\) was countable, it would only hold for some class of models in \(V\), but would say nothing about \(V\) itself, since we can’t reason about \(V\)’s extensions when all our reasoning takes place within it.

Comment #332 November 10th, 2020 at 4:59 pm

Dan Staley #328: When studying a formal system such as ZFC, we have to distinguish between the theory we’re studying (ZFC in this case) and the metatheory in which we’re conducting the study. In principle, the metatheory could be very weak—just strong enough to analyze basic syntactic operations. In practice, though, that’s psychologically impossible. We want to be able to bring to bear all our usual mathematical intuition and apparatus to study the problem. So one typically uses *set theory* as the metatheory. This can be very confusing at first, but once you get used to it, it’s very convenient. It means that a model of ZFC is simply a set satisfying the axioms of ZFC, and we can use our ordinary mathematical intuition to think about it, just like we think of a group as a set satisfying the group axioms.

When I mentioned \(V\), I was implicitly taking the metatheory to be set-theoretic. \(V\) is the class of all sets in the metatheory. Now when I say that the metatheory is set-theoretic, I don’t necessarily have to take the metatheory to be ZFC; some other set-theoretic axioms could be used instead. But it’s common to take the metatheory to be ZFC. In that case, \(V\) can indeed be thought of as a “model” of ZFC, with the caveat that \(V\) is a proper class and not a set (there’s no such thing as the set of all sets) and a “model” usually has to be a set by definition.

You could worry about whether there is something “outside \(V\)” but to make sense of that you’d have to introduce a metametatheory. And why stop there? Why not consider a metametametatheory? Why not allow infinitely many meta levels? But now we’re venturing out of mathematical territory into philosophical territory. If you want to stay firmly in mathematically rigorous territory then you should just fix some metatheory that’s strong enough to support your mathematical investigations. If that metatheory is set-theoretic then \(V\) is “all there is” and in particular, every model of ZFC lives inside \(V\).

Fnord #329: I’m trying to avoid pre-empting Scott’s future posts, but the short answer is yes, forcing operates on countable transitive models. I’m not sure why this makes you feel uneasy. The goal is to figure out what’s *provable* and not what’s *true*, and Löwenheim–Skolem tells us that if we just want *some* model for a particular set of axioms, we can take a countable one. So restricting to countable models is no loss of generality in that sense.

Comment #333 November 10th, 2020 at 5:33 pm

Dan #331: No, I think Timothy did answer your question – as accurately as one can without considering “extremist” positions that deny the existence of most accepted infinite objects.

If you reread Timothy’s explanation, you’ll note that once you get to a certain size of the set theoretic universe (Timothy used inaccessible cardinals), everything below that becomes “fixed” no matter what models you consider. Specifically, being countable becomes a fixed property, as countable sets are the smallest infinite ones you can consider. So once you’re beyond that limit, moving to a still larger model will never suddenly make a previously uncountable set countable. If you move from \(M=V_{some\ strongly\ inaccessible\ cardinal}\) to \(N=V_{some\ appropriate\ larger\ cardinal}\), the fact that \((2^{\aleph_0})^M>(\aleph_0)^M\) remains true even if you replace either of the \(M\) with \(N\).

Do note that if you only replace one of them, for that statement to even be defined, you need to view that from the outside perspective. In other words, \(>\) technically speaking *should* also be \(>^M\) or \(>^N\) respectively if you’re talking about what happens inside either model.

Platonically speaking: You’re basically approaching (or have reached, depending on your view on large cardinals!) set theoretic reality with your models, where “small” sets behave as they should.

Caveat: I’m relying on me reading Timothy correctly here. I can’t claim that I have the subtleties of set theory down well enough in the necessary detail to speak confidently without standing on someone’s shoulders! To that degree, I’d love if someone more knowledgeable could confirm that what I wrote above is accurate!

Comment #334 November 10th, 2020 at 6:07 pm

Timothy Chow #332 and Fnord #333: Thank you, now that I understand a bit more about what \(V\) is, I think my question has been answered as much as it can be.

So within the world of set theory, some sets can be “intrinsically” uncountable. However, in the way I intuitively meant “intrinsic”, the question cannot be answered formally, because we cannot reason about an unbounded infinity of metatheories – this is different from trying to reason about an unbounded infinity of sets or models (which can be done).

So I think my question can’t really be formalized in a way that matches my intuition, and so is indeed philosophical and not mathematical, as comment #332 suggests.

Thanks again for helping me to better understand all this!

Comment #335 November 15th, 2020 at 6:30 am

Just wanted to note that I was promptly floored when you revealed that Cantor only proved uncountably many reals for us. And you probably meant the “possibility” in the next sentence. Thanks for the interesting read, even though I barely understood half of it.

Comment #336 November 15th, 2020 at 11:46 am

I feel like I’ve learned a lot from this post, mostly by spending an hour being very confused. In reading around, I think I’ve un-confused myself enough to ask a reasonable question (apologies if it has been answered elsewhere in the comments).

Lowenheim-Skolem tells us there is a countable model (I’ll call it L-S) of ZFC. That is, there is a model in which, say, the set \(\mathbb{R}\) of the reals is defined that is, to our eyes, countable. However, the bijection that tells US that this set is countable does not exist within L-S. Hence, consistent with ZFC, that set \(\mathbb{R}\) is uncountable in the model.

However, power sets also make sense in L-S. So you can start with a countable set in L-S, take its power set, then the power set of that set, and so on…and get sets that all look countable to us! So we have a chain of sets that are power sets of one another that are all countable? I think the resolution to this paradox is that the power set in L-S is different from the power set in “our” model of ZFC. This is exactly my question – how can the power set be a relative notion? My intuition (which has been proven wrong multiple times already today) is that the power set is a fixed thing, since “is a subset of” feels like a fixed notion.

I wonder, though, if understanding this “relative” nature of the power set is tantamount to understanding Cohen’s proof, given the statement “When e.g. Cohen shoves aleph-2 new real numbers into his model, you could instead say that he’s making new subsets of the natural numbers.” in comment #17.

Comment #337 November 15th, 2020 at 3:33 pm

Matt #336: Your understanding is absolutely correct! The power set of an infinite set is not the same within that countable model compared to what it “really” is, as we’d see it from the outside.

How can that be?

What is a power set in a model of ZFC? It is a set that satisfies an axiom, which roughly states “for any set A, there is a set B consisting of all sets C such that C is a subset of A”. The crux here is: How exactly do we make sure we capture *all* subsets of an infinite set?

And the answer is that we really can’t. No axiom of ZFC is able to say that all subsets we would expect intuitively to exist really do have to exist in its models.

This is in turn a consequence of the sheer number of possible subsets that “should exist” for infinite sets. As noted in many previous comments, the underlying reason is that we use finite language, which restricts us to being able to distinguish at most countably many objects. There simply aren’t enough possible descriptions to capture all subsets.

As Scott puts it in his post, all ZFC can capture is the “countable dust”, and so models of it are free to omit most of the uncountably many “true” subsets.
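To make the “not enough descriptions” point concrete, here is a toy diagonalization (my own illustration, not code from the thread): given any listable family of subsets of \(\mathbb N\), represented as membership functions, one can produce a subset that differs from the \(n\)-th listed subset at the point \(n\), and hence is missing from the list.

```python
def diagonal_miss(subsets):
    # Return a membership function for a subset of N that differs
    # from subsets[n] at the point n, so it cannot appear in the list.
    return lambda n: (not subsets[n](n)) if n < len(subsets) else False

# Toy list of three "subsets of N": the evens, a finite set, all of N.
listed = [lambda n: n % 2 == 0,
          lambda n: n in (2, 3, 5, 7),
          lambda n: True]
d = diagonal_miss(listed)
print([d(n) != listed[n](n) for n in range(3)])  # → [True, True, True]
```

The same trick works against any countable list, which is why no countable stock of definitions can pin down every subset of \(\mathbb N\).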

Comment #338 November 15th, 2020 at 11:42 pm

A question about proving the independence of the axiom of choice:

As I understand it, Cohen’s proof (including all the theorems that it cites) is roughly like this:

1. Assuming Con(ZFC), use Gödel’s Completeness Theorem to obtain a countable ZFC*.

2. Throw away all of the non-constructible sets, to obtain a smaller, minimalist ZFC, which satisfies CH.

3. Use forcing to add in \(\aleph_2\) new reals to obtain a larger ZFC, which satisfies not(CH).

So, assuming that ZFC theory is consistent, we can both construct ZFCs that satisfy CH and ZFCs that do not satisfy CH.
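Schematically, the two halves being imitated are the standard results (my summary of the sketch above):

```latex
\mathrm{Con}(\mathrm{ZF}) \;\Longrightarrow\; \mathrm{Con}(\mathrm{ZFC}+\mathrm{CH})
\quad \text{(G\"odel, via the constructible universe } L\text{)}
\qquad
\mathrm{Con}(\mathrm{ZFC}) \;\Longrightarrow\; \mathrm{Con}(\mathrm{ZFC}+\neg\mathrm{CH})
\quad \text{(Cohen, via forcing)}
```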

Now let’s set aside the technical difficulties of how one uses forcing to add new sets, and imagine that we want to imitate this proof to prove that AC is independent of ZF theory. So, assuming Con(ZF), we build ourselves a countable ZF, reduce it to minimalism (thusly building ourselves a ZF that satisfies AC), and then force in some more sets** to make a ZF that satisfies not(AC). However, *in order to use Gödel’s Completeness Theorem, we need the Axiom of Choice.*

Thusly, we’ll be able to prove, “If Con(ZFC), then Con(ZF) -> Con(ZFC), Con(ZF + not(AC))”, but that doesn’t help us with knowing whether or not Con(ZF) -> Con(ZFC), Con(ZF + not(AC)). I’m thinking about using, instead of AC, the axiom schema of dependent choice over integers and PA formulas (which you could abbreviate DCIPAF), which might be enough to get Gödelian completeness (encode the model-so-far as an integer, the process of extending the model as an arithmetical formula, the eventual model as a sequence of integers). Alternatively, we could just say that we’ve proved “ExistsCountableModel(ZF) -> ExistsCountableModel(ZFC), ExistsCountableModel(ZF + not(AC))”, but that doesn’t have quite the same ring to it.

*Here, I’m reusing terminology from my earlier comment (#39).

**what exactly one should force in is also mysterious, but we can also ignore this for now

Comment #339 November 15th, 2020 at 11:52 pm

Fnord #337: Ok, that’s reassuring. That does make sense – as much as I think I understand all the subsets of N, I can only describe a very small number of them, relatively speaking.

So Cohen’s proof forces more subsets of N into ZFC, much in the same way that we see more subsets of N appearing as we move from the L-S model to ZFC?

My brief wikipedia-ing suggests the L-S theorem relies on choice, so I assume we have little realistic chance of actually understanding the countable model it implies.
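For reference, the downward Löwenheim–Skolem theorem in the form used above (standard statement; its usual proof does invoke a choice principle, as you suspected):

```latex
% Downward Löwenheim–Skolem (countable language): every structure A
% has a countable elementary substructure M.
M \preceq A, \qquad |M| \le \aleph_0
```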

Comment #340 November 16th, 2020 at 6:53 am

Stella #287

It seems I have to agree with

What I don’t understand is the constraint of

I would expect a particular number \(c\) (represented by a bit sequence, my claims only refer to bit sequences, not numbers) to be the output of my device if we allow step \(i \to \infty\).

I don’t understand why the device would only enumerate *eventually constant* bit sequences if we allow step \(i \to \infty\). \(\log_2 i\) is the count of bits in the bit sequences generated by the device for step \(i\), and the device generates all \(\log_2 i\)-bit permutations. Since we have \(\lim_{i \to \infty} \log_2 i \to \infty\), I expect the device to generate all infinite bit permutations.

P.S. and thank you for the TeX demonstration. Forced me to figure it out 🙂

Comment #341 November 16th, 2020 at 7:17 am

G #286, Stella #287

I think I see where I had a nuance in my head that I did not express via the program I wrote. My intent was to output all bit sequences generated at each step \(i\). Let me try again:

Javascript:

for (let i = 0; i <= Infinity; i++)

{

for (let n = 0; n <= i; n++)

{

console.log((n >>> 0).toString(2).split('').reverse().join('').padEnd(Math.ceil(Math.log2(i+1)), '0'));

}

console.log(`—end step ${i+1}—`);

}

At every step \(i\) the intended output of the program are all bit sequences generated up to step \(i\).

0

—end step 1—

0

1

—end step 2—

00

10

01

—end step 3—

00

10

01

11

—end step 4—

000

100

010

110

001

—end step 5—

000

100

010

110

001

101

—end step 6—

000

100

010

110

001

101

011

—end step 7—

000

100

010

110

001

101

011

111

—end step 8—

0000

1000

0100

1100

0010

1010

0110

1110

0001

—end step 9—

0000

1000

0100

1100

0010

1010

0110

1110

0001

1001

—end step 10—

So, apologies for the previous program. This is the program that I claim will eventually output all infinite bit sequences.

Comment #342 November 16th, 2020 at 4:36 pm

duck_master #338: In fact, forcing-based proofs that (for example) Con(ZF) implies Con(ZFC) can be formalized in very weak systems, such as primitive recursive arithmetic. It takes some work to show this, but you can find a sketch in Kunen’s book on set theory. A key point is that any inconsistency in ZFC must arise from a *finite* subset of axioms of ZFC, and the forcing machinery is sufficiently finitistic to let you convert the ZFC inconsistency into an inconsistency from a finite subset of axioms of ZF.

Matt Davis #339: The answer to your question is basically yes, but I recommend that you get into the habit of making a clear distinction between ZFC itself (which is a set of axioms) and a model of ZFC. Forcing gives us a way to expand a countable transitive model of ZFC to a larger countable transitive model of ZFC that contains more subsets of \(\mathbb N\). We’re not “forcing anything into ZFC.” As for “understanding” countable transitive models of ZFC, the sticking point isn’t the axiom of choice; it’s more that the existence of a countable transitive model of ZFC cannot be proved in ZFC (by Gödel’s 2nd incompleteness theorem), so that puts a limit on how “explicitly” we can describe such a thing. Nevertheless, it’s possible to get a pretty clear “understanding” of, say, the smallest countable transitive model of ZFC, because it’s built up step by step by transfinite induction using pretty concrete operations. (The catch comes when you want to “explicitly” say how many transfinite steps need to be taken.)

Comment #343 November 16th, 2020 at 5:56 pm

Matt Davis #339 and Timothy Chow #342: I don’t want to preempt what Scott’s going to explain later (I hope it includes this!), but I always cringe a bit when I see the description that forcing “adds something”, so the model seems “enriched”.

If you view models of ZFC set theory from the outside, we already *know* that countable models cannot contain all the sets almost all of us intuitively (and arguably justifiably) expect there to be.

So in my view, forcing in Cohen’s proof does not so much *add* to the model; it just reduces its “failings” in representing set theoretic truth. And with regards to CH, it does so by very intentionally “reinstating some of the missing sets” *without* adding “missing functions between those and other sets”.

In other words, if you’re inclined to a platonist view, it really doesn’t say much about CH in “reality”. Timothy, if I understand your latest reply to me in #332 correctly, you make a clear distinction between what Cohen tells us about CH regarding its provability vs its “truth”, but I feel that many others do not, so it’s a bit of a pet peeve of mine to emphasize this point.

Comment #344 November 16th, 2020 at 6:44 pm

Fnord #343: I confess I don’t understand what you don’t like about the term “add.”

Suppose we have the field \(\mathbb Q\) of rational numbers and we want to talk about the process of going from \(\mathbb Q\) to \({\mathbb Q}(i)\). Would you balk at saying that what we’re doing is “adding” some numbers to \(\mathbb Q\)? Would you rather say that we’re instead “reinstating” some complex numbers while not reinstating other complex numbers?

Along the lines of my comment #270, I’m tempted to discuss “How I Learned to Stop Worrying and Love Models of ZFC.” Even if you believe in a One True Universe of All Sets, I see no reason to think of models of ZFC as impostors or poseurs. They’re legitimate mathematical structures in their own right that satisfy all the axioms you’ve written down.

Comment #345 November 16th, 2020 at 7:33 pm

Timothy #344: I agree with everything you say. My issue with the wording of “adding” and the often used term “enriching” is purely from the “One True Universe of All Sets” view.

I’m perfectly fine with set theory working as it does, but many people, including some (clearly not you!) professional mathematicians, use those terms and Cohen’s proof as arguments in the “One True Universe of All Sets” debate over CH, and for those inclined to believe in the platonist view and thus interested in that topic, I want to point out why those conclusions are not correct.

Comment #346 November 16th, 2020 at 8:21 pm

Tristan #340-341:

> … This is the program that I claim will eventually output all infinite bit sequences.

Several of us have tried to explain to you that this program will not only not output *all* infinite bit sequences, but it will never even output a *single* infinite bit sequence. This should be pretty obvious, since the instructions it uses to “output a bit sequence” are only capable of outputting finite ones, and indeed a javascript program (even if you could run it on a computer with truly unlimited memory, forever) could only ever output, or internally contain, a finite bit sequence or number.

You seem to be arguing “(1) those finite bit sequences it outputs get longer and longer, without limit; therefore, (2) they will eventually include infinite bit sequences too; (3) in fact, they will eventually include all infinite bit sequences.”

(1) is correct, but (2) does not follow from (1). (2) and (3) are not correct.

Maybe it would be easier to see the issue if we forget about bit sequences and just think about numbers. (And I hope you’ll forgive me for switching to a programming language I actually know.)

The analogous argument to yours, but for numbers, would be that this python program (with indents shown by dots, since I don’t know how to do better in the limited HTML that works here):
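A minimal sketch of the program I have in mind (reconstructed here; written as a generator so a finite prefix of its output can be shown, but the idea is just `i = 1; loop forever: print(i); i = i + 1`):

```python
from itertools import islice

def positive_integers():
    # The counting loop: reach every positive integer eventually,
    # but only ever hold (and print) a finite one at any moment.
    i = 1
    while True:
        yield i
        i += 1

# Show the first few terms of the unbounded stream.
for n in islice(positive_integers(), 5):
    print(n)
```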

would (if run with truly unlimited memory, forever) (1) eventually print every positive integer; (2) also eventually print infinite integers; in fact (3) eventually print all of those.

(1) is correct, but (2) and (3) are not correct — in fact they are nonsense, since there are no infinite integers; but even if there were, this program could never print one, since the variable i can’t hold one, and the statement print(i) can’t print one. It would reach and print larger and larger finite positive integers — in fact, all of them — but that’s all, since that’s all there are. (And at any given time, it would only have reached an initial finite subset of them, so far.)

The only difference about bit strings is that, unlike integers, infinite ones *exist* — that doesn’t mean any program can ever print one. (At any given time, the best it could do is to have started to print one and gotten partway done.)

Comment #347 November 17th, 2020 at 2:59 pm

Timothy #342. Thank you – that is quite an important distinction.

Comment #348 November 23rd, 2020 at 10:36 am

I was just watching a Roger Penrose presentation video on youtube, and he mentions Godel and this

https://en.wikipedia.org/wiki/Goodstein%27s_theorem

That’s pretty interesting (but not sure how relevant this is to the present thread).

Comment #349 November 23rd, 2020 at 10:53 am

That’s the Penrose presentation, which is one of his clearest (I’ve watched quite a few by now)

Comment #350 November 27th, 2020 at 2:05 am

The worst popularization of basic set theory was George Gamow’s *One Two Three … Infinity*. He stated that \(\aleph_1\) was the size of the continuum, that \(\aleph_2\) was the number of curves in the plane, and that it is a deep unsolved problem to come up with examples of higher alephs.

This nonsense was repeated by Robert L. Forward in his novel *Dragon’s Egg*.

Comment #351 November 27th, 2020 at 2:24 am

AC is relevant to hidden variables in quantum mechanics. The various no hidden variables theorems implicitly assume everything is Lebesgue measurable. So if you want, you can develop a hidden variable using non-measurable sets. See the work of Boos.

Another little known use of AC concerns Arrow’s paradox in economics: the assumptions going into the paradox are actually just defining an ultrafilter, which for a finite number of voters, must be a principal ultrafilter i.e., there is a “dictator”. So, thanks to AC, the paradox fails for infinitely many voters.
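For the finite-voter case, the fact that every ultrafilter on a finite set is principal (i.e., generated by a single “dictator” point) can even be checked by brute force. A small sketch using the standard definitions (my own illustration):

```python
from itertools import combinations

X = frozenset({0, 1, 2})  # a three-voter "society"
subsets = [frozenset(c) for r in range(4) for c in combinations(sorted(X), r)]

def is_ultrafilter(F):
    # Ultrafilter test: no empty set; for each A, exactly one of
    # A and its complement belongs to F; closed under intersection.
    # (Upward closure follows from these conditions.)
    if frozenset() in F:
        return False
    if any((A in F) == ((X - A) in F) for A in subsets):
        return False
    return all(A & B in F for A in F for B in F)

# Enumerate all 2^8 families of subsets of X and keep the ultrafilters.
ultrafilters = [set(c) for r in range(len(subsets) + 1)
                for c in combinations(subsets, r) if is_ultrafilter(c)]
principal = [{A for A in subsets if x in A} for x in X]

# Every ultrafilter on X is principal: one per "dictator" element.
print(len(ultrafilters), all(F in principal for F in ultrafilters))
```

On a 3-element set this finds exactly the 3 principal ultrafilters, one per element; only with infinitely many voters (and AC) do non-principal ultrafilters appear.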

Comment #352 November 27th, 2020 at 4:37 pm

william e emba #351:

AC is relevant to hidden variables in quantum mechanics. The various no hidden variables theorems implicitly assume everything is Lebesgue measurable.

I don’t understand how that could possibly be true. If we consider, for example, the Bell/CHSH proof of no local hidden variables, it clearly has no measurability assumption anywhere. Neither does the Kochen-Specker Theorem, since it involves only a finite list of bases. Could you give me a link to the work of Boos that discusses this?

Comment #353 November 28th, 2020 at 6:09 pm

Boos https://link.springer.com/article/10.1007/BF00413903

Earlier work was by Pitowsky (see his book Quantum Probability–Quantum Logic) and Gudder (see his book Quantum Probability) assuming CH. See Farah-Magidor https://arxiv.org/abs/1212.0110 who show that Pitowsky’s hidden variables are consistently non-existent (in the forcing model obtained by violating CH with random reals). Pitowsky’s model was in his thesis, see his PRD paper https://journals.aps.org/prd/abstract/10.1103/PhysRevD.27.2316

Comment #354 November 28th, 2020 at 11:04 pm

william e emba #353: I looked at the paper by Boos and found it barely comprehensible, but in any case certainly didn’t see any hint there that AC could let you violate Bell or Kochen-Specker … which wouldn’t make sense anyway since as I said, those no-go theorems have completely elementary proofs and don’t rely on any assumptions about transfinite sets.

Extraordinary claims require extraordinary clarity and directness … “drive-by linkings” don’t cut it! 🙂

Comment #355 November 29th, 2020 at 4:15 am

It’s been a long while since I looked at Boos, and I just looked at it again, and I won’t defend it now.

However, Pitowsky and Gudder are straightforward and highly comprehensible, and Pitowsky in particular explains why his work evades the no-go theorems. They both published papers on this in the 1980s, along with their books.

See also his 1982 PRL v48 short form paper (and its errata), which is just an outline of the longer 1983 PRD paper, along with the comments from Mermin and MacDonald and Pitowsky’s response to their comments (v49).

Comment #356 November 29th, 2020 at 10:45 pm

Bruce #346

Is it not true that \(\lim_{i \to \infty} i = \infty\) and \(\lim_{i \to \infty} \log_2 i = \infty\)? Does that not imply \(\infty\) rows in the matrix and \(\infty\) columns in the matrix?

Is not a matrix of bit sequences with \(\infty\) rows and \(\infty\) columns the starting point for the diagonal argument?

I don’t understand why this is being rejected. Do limits not apply somehow? Why?

What I really struggle with is the willingness to believe that an \(\infty\) by \(\infty\) matrix is a square matrix without proof, over a specific algorithm to construct one and an application of limits. This seems to be a dead-end to these exchanges.

Let me pose a question instead.

Why is the enumeration of all infinite bit sequences assumed to form a square matrix? With all the counterintuitiveness that is invoked around infinities, why should that assumption be true without proof?

Comment #357 November 29th, 2020 at 11:09 pm

I made a typo in my previous comment.

Should be:

\(\lim_{i \to \infty} i \to \infty \) and \(\lim_{i \to \infty} \log_2 i \to \infty \)

and not:

\(\lim_{i \to \infty} i = \infty \) and \(\lim_{i \to \infty} \log_2 i = \infty \)

I guess the nature of the typo is the essence of everyone’s argument that the algorithm I presented never generates \(\infty\) and instead it generates all eventually constant bit sequences. It gets arbitrarily close, but never reaches \(\infty\).

I assume the diagonal argument starts beyond that impenetrable gap, when we’re already at \(\infty\).

🤔… I am much less certain about the following statement: the existence of that gap makes it feel completely disconnected from constructive mathematics.

Comment #358 November 30th, 2020 at 1:14 am

Tristan #356: It was already explained to you multiple times that there’s no *assumption* in the diagonal argument involving “the enumeration of all infinite bit sequences forming a square matrix.” Rather, it’s simply a *proof by contradiction*—are you familiar with those? You assume, by contradiction, that someone found an enumeration of all infinite binary sequences. You then show that the assumption was wrong, by constructing a new infinite binary sequence that’s not in the enumeration.

Because, again, this is a *proof by contradiction*, it’s not your problem to construct the enumeration or ensure any nice properties of it. It’s the problem of the person you’re refuting.

Alas, I’ve decided to disallow further debate about the validity of the diagonal argument, in this comment thread and all the CH-related threads to follow. This is for essentially the same reason why a discussion of quantum field theory might disallow Aristotle vs. Newton debates.

Comment #359 November 30th, 2020 at 5:21 am

Scott #358

I appreciate you letting the comments get this far, thank you, it led me to an insight. And thank you to all those who engaged and patiently got me to where I’m at.

I also understand why it would be fruitless to continue to argue about validity of the diagonal argument, so no objection from me.

Thanks again, cheers!

Comment #360 November 30th, 2020 at 12:53 pm

I’m sorry for the late response, I thought I had posted this earlier.

It’s been 25 years since I looked at Boos, and I forgot how incomprehensible it was. At the time of my posting, it was the only example I could remember off-hand. Had I checked first I wouldn’t have mentioned it.

But Pitowsky and Gudder are very readable, and have inspired responses both supportive and negative. In particular, Pitowsky’s original PRL short form version (look also for the errata) received a comment from Mermin, to which Pitowsky replied.

There’s also a mathoverflow discussion https://mathoverflow.net/questions/279786/on-independence-and-large-cardinal-strength-of-physical-statements/280005#280005 on his work and related proposed set theory/physics connections.

Comment #361 November 30th, 2020 at 2:20 pm

william e emba #360: At least I could read and understand that MathOverflow discussion, but it said nothing about hidden variables! I’m still missing how any set-theoretic considerations could *possibly* overturn straightforward theorems proved via arithmetic, which is all the hidden-variable no-go theorems really are. This is not a question of details. You might as well have claimed to me that using the axiom of choice, you can make √2 = 17.

Can you provide links to any of this Pitowsky / Mermin discussion? If I looked it up, would it directly address the fundamental confusion above?

Comment #362 December 3rd, 2020 at 2:05 am

The long PRD paper (with full proofs) is here: https://journals.aps.org/prd/abstract/10.1103/PhysRevD.27.2316

The short PRL paper (outline, technical proofs omitted) is here:

https://journals.aps.org/prl/issues/48/19

That page includes links to the errata (necessary)

and the Mermin/Macdonald/Pitowsky exchange.

The no-go theorems proving that hidden variables do not exist make elementary arithmetical calculations and draw conclusions regarding set-ups relying on standard Kolmogorov probability. While it has been wonderful for people over the years to reduce the proofs to almost trivial arithmetic, you’ve let that fool you. In order to draw a conclusion about QM, the proofs are only relevant when talking about QM. Pitowsky sneaks past them by working with nonmeasurable sets, and inventing (using CH) a probability theory that does screwy things with them.

Also check out the recent Kellner paper https://link.springer.com/article/10.1007/s10701-016-0049-0

Kellner argues that Pitowsky’s hidden-variable model is in such deep doodoo that it is actually a form of super-determinism.