The 8000th Busy Beaver number eludes ZF set theory: new paper by Adam Yedidia and me

I’ve supervised a lot of great student projects in my nine years at MIT, but my inner nerdy teenager has never been as personally delighted by a project as it is right now.  Today, I’m proud to announce that Adam Yedidia, a PhD student at MIT (but an MEng student when he did most of this work), has explicitly constructed a one-tape, two-symbol Turing machine with 7,918 states, whose behavior (when run on a blank tape) can never be proven from the usual axioms of set theory, under reasonable consistency hypotheses.  Adam has also constructed a 4,888-state Turing machine that halts iff there’s a counterexample to Goldbach’s Conjecture, and a 5,372-state machine that halts iff there’s a counterexample to the Riemann Hypothesis.  In all three cases, this is the first time we’ve had a reasonable explicit upper bound on how many states you need in a Turing machine before you can see the behavior in question.

Here’s our research paper, on which Adam generously included me as a coauthor, even though he did the heavy lifting.  Also, here’s a github repository where you can download all the code Adam used to generate these Turing machines, and even use it to build your own small Turing machines that encode interesting mathematical statements.  Finally, here’s a YouTube video where Adam walks you through how to use his tools.

A more precise statement of our main result is this: we give a 7,918-state Turing machine, called Z (and actually explicitly listed in our paper!), such that:

  1. Z runs forever, assuming the consistency of a large-cardinal theory called SRP (Stationary Ramsey Property), but
  2. Z can’t be proved to run forever in ZFC (Zermelo-Fraenkel set theory with the Axiom of Choice, the usual foundation for mathematics), assuming that ZFC is consistent.

A bit of background: it follows, as an immediate consequence of Gödel’s Incompleteness Theorem, that there’s some computer program, of some length, that eludes the power of ordinary mathematics to prove what it does, when it’s run with an unlimited amount of memory.  So for example, such a program could simply enumerate all the possible consequences of the ZFC axioms, one after another, and halt if it ever found a contradiction (e.g., a proof of 1+1=3).  Assuming ZFC is consistent, this program must run forever.  But again assuming ZFC is consistent, ZFC can’t prove that the program runs forever, since if it did, then it would prove its own consistency, thereby violating the Second Incompleteness Theorem!
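
The Gödelian program described above can be sketched in a few lines of Python, with the genuinely hard part (the proof-checker) left as a parameter. Everything here is illustrative, not a real API: `is_proof_of_falsehood` stands in for a checker that would itself encode first-order logic and the ZFC axioms, and `limit` exists only so the sketch can be demonstrated.

```python
from itertools import count

# Hedged sketch of the Gödelian program: enumerate candidate proofs and
# halt on the first one the checker accepts.  The checker argument is a
# stand-in for a real ZFC proof-verifier (the genuinely complicated
# part); `limit` would be absent in the real thing.

def search_for_inconsistency(is_proof_of_falsehood, limit=None):
    """Enumerate candidate proofs 0, 1, 2, ... and halt on the first one
    the checker accepts.  With a genuine checker plugged in, this halts
    iff ZFC is inconsistent."""
    candidates = count() if limit is None else range(limit)
    for candidate in candidates:
        if is_proof_of_falsehood(candidate):
            return candidate
    return None

# Demo with a mock checker:
print(search_for_inconsistency(lambda p: p == 7, limit=100))  # -> 7
```

With ZFC consistent, the real version of this loop never returns—and, by the argument above, ZFC itself can never prove that.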

Alas, this argument still leaves us in the dark about where, in the space of computer programs, the “Gödelian gremlin” rears its undecidable head.  A program that searches for an inconsistency in ZFC is a fairly complicated animal: it needs to encode not only the ZFC axiom schemas, but also the language and inference rules of first-order logic.  Such a program might be thousands of lines long if written in a standard programming language like C, or millions of instructions if compiled down to bare-bones machine code.  You’d certainly never run across such a program by chance—not even if you had a computer the size of the observable universe, trying one random program after another for billions of years in a “primordial soup”!

So the question stands—a question that strikes me as obviously important, even though as far as I know, only one or two people ever asked the question before us; see here for example.  Namely: do the axioms of set theory suffice to analyze the behavior of every computer program that’s at most, let’s say, 50 machine instructions long?  Or are there super-short programs that already exhibit “Gödelian behavior”?

Theoretical computer scientists might object that this is “merely a question of constants.”  Well yes, OK, but the origin of life in our universe—a not entirely unrelated puzzle—is also “merely a question of constants”!  In more detail, we know that it’s possible with our laws of physics to build a self-replicating machine: say, DNA or RNA and their associated paraphernalia.  We also know that tiny molecules like H2O and CO2 are not self-replicating.  But we don’t know how small the smallest self-replicating molecule can be—and that’s an issue that influences whether we should expect to find ourselves alone in the universe or find it teeming with life.

Some people might also object that what we’re asking about has already been studied, in the half-century quest to design the smallest universal Turing machine (the subject of Stephen Wolfram’s $25,000 prize in 2007, to which I responded with my own $25.00 prize).  But I see that as fundamentally different, for the following reason.  A universal Turing machine—that is, a machine that simulates any other machine that’s described to it on its input tape—has the privilege of offloading almost all of its complexity onto the description format for the input machine.  So indeed, that’s exactly what all known tiny universal machines do!  But a program that checks (say) Goldbach’s Conjecture, or the Riemann Hypothesis, or the consistency of set theory, on an initially blank tape, has no such liberty.  For such machines, the number of states really does seem like an intrinsic measure of complexity, because the complexity can’t be shoehorned anywhere else.

One can also phrase what we’re asking in terms of the infamous Busy Beaver function.  Recall that BB(n), or the nth Busy Beaver number, is defined to be the maximum number of steps that any n-state Turing machine takes when run on an initially blank tape, assuming that the machine eventually halts. The Busy Beaver function was the centerpiece of my 1998 essay Who Can Name the Bigger Number?, which might still attract more readers than anything else I’ve written since. As I stressed there, if you’re in a biggest-number-naming contest, and you write “BB(10000),” you’ll destroy any opponent—however otherwise mathematically literate they are—who’s innocent of computability theory.  For BB(n) grows faster than any computable sequence of integers: indeed, if it didn’t, then one could use that fact to solve the halting problem, contradicting Turing’s theorem.

But the BB function has a second amazing property: namely, it’s a perfectly well-defined integer function, and yet once you fix the axioms of mathematics, only finitely many values of the function can ever be proved, even in principle.  To see why, consider again a Turing machine M that halts if and only if there’s a contradiction in ZF set theory.  Clearly such a machine could be built, with some finite number of states k.  But then ZF set theory can’t possibly determine the value of BB(k) (or BB(k+1), BB(k+2), etc.), unless ZF is inconsistent!  For to do so, ZF would need to prove that M ran forever, and therefore prove its own consistency, and therefore be inconsistent by Gödel’s Theorem.

OK, but we can now ask a quantitative question: how many values of the BB function is it possible for us to know?  Where exactly is the precipice at which this function “departs the realm of mortals and enters the realm of God”: is it closer to n=10 or to n=10,000,000?  In practice, four values of BB have been determined so far:

  • BB(1)=1
  • BB(2)=6
  • BB(3)=21 (Lin and Rado 1965)
  • BB(4)=107 (Brady 1975)

We also know some lower bounds:

  • BB(5) ≥ 47,176,870 (Marxen and Buntrock 1989)
  • BB(6) > 7.4 × 10^36,534 (Kropitz 2010)

See Heiner Marxen’s page or the Googology Wiki (which somehow I only learned about today) for more information.
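
The first few values above can be reproduced by brute force. Here’s a minimal Python sketch; the cutoff trick (treating any machine that survives `cutoff` steps as non-halting) is only sound when the cutoff exceeds BB(n), which is fine for the tiny n below but hopeless beyond that:

```python
from itertools import product

def bb(n, cutoff=100):
    """Brute-force the n-state, 2-symbol Busy Beaver number.

    Machines still running after `cutoff` steps are treated as
    non-halting, which is only sound when cutoff > BB(n) -- true
    here for n <= 3."""
    # A transition is (symbol_to_write, head_move, next_state);
    # next_state == -1 means halt (the halting transition counts
    # as a step, matching the usual convention).
    transitions = list(product((0, 1), (-1, +1), list(range(n)) + [-1]))
    best = 0
    for table in product(transitions, repeat=2 * n):
        tape, head, state, steps = {}, 0, 0, 0
        while state != -1 and steps < cutoff:
            write, move, state = table[2 * state + tape.get(head, 0)]
            tape[head] = write
            head += move
            steps += 1
        if state == -1:
            best = max(best, steps)
    return best

print(bb(1), bb(2))  # -> 1 6
```

Already at n=4 the enumeration and the required cutoff make this approach painful, which is part of why BB(5) remains open.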

Some Busy Beaver enthusiasts have opined that even BB(6) will never be known exactly.  On the other hand, the abstract argument from before tells us only that, if we confine ourselves to (say) ZF set theory, then there’s some k—possibly in the tens of millions or higher—such that the values of BB(k), BB(k+1), BB(k+2), and so on can never be proven.  So again: is the number of knowable values of the BB function more like 10, or more like a million?

This is the question that Adam and I (but mostly Adam) have finally addressed.

It’s hopeless to design a Turing machine by hand for all but the simplest tasks, so as a first step, Adam created a new programming language, called Laconic, specifically for writing programs that compile down to small Turing machines.  Laconic programs actually compile to an intermediary language called TMD (Turing Machine Descriptor), and from there to Turing machines.

Even then, we estimate that a direct attempt to write a Laconic program that searched for a contradiction in ZFC would lead to a Turing machine with millions of states.  There were three ideas needed to get the state count down to something reasonable.

The first was to take advantage of the work of Harvey Friedman, who’s one of the one or two people I mentioned earlier as having written about these problems before us.  In particular, Friedman has been laboring since the 1960s to find “natural” arithmetical statements that are provably independent of ZFC or other strong set theories.  (See this AMS Notices piece by Martin Davis for a discussion of Friedman’s progress as of 2006.)  Not only does Friedman’s quest continue, but some of his most important progress has come only within the last year.  His statements—typically involving objects called “order-invariant graphs”—strike me as alien, and as far removed from anything I’d personally have independent reasons to think about (but is that just a sign of my limited perspective?).  Be that as it may, Friedman’s statements still seem a lot easier to encode as short computer programs than the full apparatus of first-order logic and set theory!  So that’s what we started with; our work wouldn’t have been possible without Friedman (whom we consulted by email throughout the project).

The second idea was something we called “on-tape processing.”  Basically, instead of compiling directly from Laconic down to a Turing machine, Adam wrote an interpreter as a Turing machine (which took about 4000 states—a single, fixed cost), and then had the final Turing machine first write a higher-level program onto its tape and then interpret that program.  Instead of the compilation process producing a huge multiplicative overhead in the number of Turing machine states (and a repetitive machine), this approach gives us only an additive overhead.  We found that this one idea decreased the number of states by roughly an order of magnitude.
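
A toy model of the on-tape idea, in Python rather than Turing machines: the interpreter below is a fixed cost, independent of the program it runs, while all the problem-specific content lives in the program handed to it as data (“on the tape”). The three-instruction language is invented purely for illustration; it is not TMD.

```python
# Toy model of "on-tape processing": a fixed interpreter, with the
# problem-specific content carried entirely by the program-as-data.

def interpret(program):
    """Run a program over a single counter.
    Instructions: ('inc',), ('dec',), ('jnz', target)."""
    counter, pc = 0, 0
    while pc < len(program):
        op = program[pc]
        if op[0] == 'inc':
            counter += 1
        elif op[0] == 'dec':
            counter -= 1
        elif op[0] == 'jnz' and counter != 0:
            pc = op[1]          # jump if counter is nonzero
            continue
        pc += 1
    return counter

# Compute 2, then count back down to 0 via the jnz loop:
print(interpret([('inc',), ('inc',), ('dec',), ('jnz', 2)]))  # -> 0
```

The point of the analogy: growing the program grows only the data, not the interpreter, so the overhead is additive rather than multiplicative.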

The third idea was first suggested in 2002 by Ben-Amram and Petersen (and refined for us by Luke Schaeffer); we call it “introspective encoding.”  When we write the program to be interpreted onto the Turing machine tape, the naïve approach would use one Turing machine state per bit.  But that’s clearly wasteful, since in an n-state Turing machine, every state contains ~log(n) bits of information (because of the other states it needs to point to).  A better approach tries to exploit as many of those bits as it can; doing that gave us up to a factor-of-5 additional savings in the number of states.
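
An idealized back-of-the-envelope version of that counting argument (the numbers below are illustrative, not the paper’s actual figures, and the real savings—up to a factor of 5 for us—depend on the machine’s structure):

```python
import math

# Idealized model of "introspective encoding": each state of an n-state
# machine names successor states in its transition table, and so carries
# roughly log2(n) bits of information.  Exploiting those bits lets
# ~m/log2(n) states write an m-bit program, instead of one state per bit.

def naive_writer_states(m):
    return m  # one Turing machine state per bit written

def introspective_writer_states(m, n):
    return math.ceil(m / math.log2(n))

m = 50000  # hypothetical program length in bits
print(naive_writer_states(m))                # -> 50000
print(introspective_writer_states(m, 5000))  # -> 4070
```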

For Goldbach’s Conjecture and the Riemann Hypothesis, we paid the same 4000-state overhead for the interpreter, but then the program to be interpreted was simpler, giving a smaller overall machine.  Incidentally, it’s not intuitively obvious that the Riemann Hypothesis is equivalent to the statement that some particular computer program runs forever, but it is—that follows, for example, from work by Lagarias and by Davis, Matijasevich, and Robinson (we used the latter; an earlier version of this post incorrectly stated that we used the Lagarias result).
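
For concreteness, the Lagarias criterion alluded to above says that RH is equivalent to σ(n) ≤ H_n + exp(H_n)·ln(H_n) for all n ≥ 1, where σ(n) is the sum of divisors of n and H_n is the n-th harmonic number. A program that searches for a violation therefore halts iff RH is false; here’s a direct small-n check (again, the paper used the Davis–Matijasevich–Robinson route, not this one):

```python
import math

# Lagarias's form of RH: sigma(n) <= H_n + exp(H_n)*log(H_n) for all
# n >= 1.  A program searching for a violation halts iff RH is false.

def sigma(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

def lagarias_holds(n):
    h = sum(1.0 / k for k in range(1, n + 1))
    return sigma(n) <= h + math.exp(h) * math.log(h)

# No counterexample among small n (floating point is fine at this scale):
print(all(lagarias_holds(n) for n in range(1, 1000)))  # -> True
```

The inequality is surprisingly tight for highly composite n (try n=12), which is part of its charm.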

To preempt the inevitable question in the comments section: yes, we did run these Turing machines for a while, and no, none of them had halted after a day or so.  But before you interpret that as evidence in favor of Goldbach, Riemann, and the consistency of ZFC, you should probably know that a Turing machine to test whether all perfect squares are less than 5, produced using Laconic, needed to run for more than an hour before it found the first counterexample (namely, 3²=9) and halted.  Laconic Turing machines are optimized only for the number of states, not for speed, to put it mildly.
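
For comparison, that toy statement is a three-line loop in any conventional language; the hour of runtime is entirely Laconic-to-Turing-machine overhead:

```python
# Halt iff some perfect square is >= 5.
n = 0
while n * n < 5:
    n += 1
print(n, n * n)  # -> 3 9  (first counterexample: 3^2 = 9)
```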

Of course, three orders of magnitude still remain between the largest value of n (namely, 4) for which BB(n) is known to be knowable in ZFC-based mathematics, and the smallest value of n (namely, 7,918) for which BB(n) is known to be unknowable.  I’m optimistic that further improvements are possible to the machine Z—whether that means simplifications to Friedman’s statement, a redesigned interpreter (possibly using lambda calculus?), or a “multi-stage rocket model” where a bare-bones interpreter would be used to unpack a second, richer interpreter which would be used to unpack a third, etc., until you got to the actual program you cared about.  But I’d be shocked if anyone in my lifetime determined the value of BB(10), for example, or proved the value independent of set theory.  Even after the Singularity happens, I imagine that our robot overlords would find the determination of BB(10) quite a challenge.

In an early Shtetl-Optimized post, I described theoretical computer science as “quantitative epistemology.”  Constructing small Turing machines whose behavior eludes set theory is not conventional theoretical computer science by any stretch of the imagination: it’s closer in practice to programming languages or computer architecture, or even the recreational practice known as code-golfing.  On the other hand, I’ve never been involved with any other project that was so clearly, explicitly about pinning down the quantitative boundary between the knowable and the unknowable.

Comments on our paper are welcome.

Addendum: Some people might wonder “why Turing machines,” as opposed to a more reasonable programming language like C or Python.  Well, first of all, we needed a language that could address an unlimited amount of memory.  Also, the BB function is traditionally defined in terms of Turing machines.  But the most important issue is that we wanted there to be no suspicion whatsoever that our choice of programming language was artificially helping to make our machine small.  And hopefully everyone can agree that one-tape, two-symbol Turing machines aren’t designed for anyone’s convenience!

288 Responses to “The 8000th Busy Beaver number eludes ZF set theory: new paper by Adam Yedidia and me”

  1. Lawrence D'Anna Says:

    Wow, that’s really cool!

  2. Greg Kuperberg Says:

    If the machine did halt, that would be provable in ZFC, which would be a contradiction since you proved that it isn’t provable. So you should get cracking simulating Yedidia’s machine. If the simulation ever stops, it will be the greatest theorem in all of human history! 🙂

  3. turbo Says:

    Sorry could you tell in a few sentences is there importance of this to complexity theory (like p versus np problem or permanent in p/poly problem)?

  4. Daniel Bilar Says:

    Let me be the first to congratulate Adam for this really delightful achievement. Also you, Prof. Aaronson – I would never have learned of his thesis “A Relatively Small Turing Machine Whose Behavior Is Independent of Set Theory” had you not mentioned his winning a prize for his MEng thesis en passant in a talk last October. I followed his references up the stream into Friedman territory and new avenues opened up for me.

  5. Scott Says:

    turbo #3: This has no obvious implications for complexity theory. It’s about an issue orthogonal to what complexity theory normally asks about: time and space are free in this world; all we care about is minimizing program size.

  6. Scott Says:

    Greg #2: See this paragraph in the OP:

      Incidentally, to preempt the inevitable question in the comments section: yes, we did run these Turing machines for a while, and no, none of them had halted after a day or so.  But before you interpret that as evidence in favor of Goldbach, Riemann, and the consistency of ZFC, you should probably know that a Turing machine to test whether all perfect squares are less than 5, produced using Laconic, needed to run for more than an hour before it found the first counterexample (namely, 3²=9) and halted.  Laconic Turing machines are optimized only for the number of states, not for speed, to put it mildly.
  7. Sniffnoy Says:

    Greg Kuperberg #2: Well, careful — unless I’m missing something, actually all that it halting would imply is that SRP isn’t consistent with ZFC. That seems to be the biggest “hole” here, the reliance on large cardinal axioms.

  8. Sniffnoy Says:

    So now I’m wondering how much smaller you could get it if you just wanted it so that Peano arithmetic can’t prove that it runs forever (assuming consistency). 🙂

  9. Scott Says:

    Sniffnoy #7: Yes, that’s correct, and it represents a nontrivial hole to be filled by future research. Friedman has promised us that, when he gets around to it, he’ll try to design a new arithmetical statement that hits Con(ZFC) on the nose, rather than “overshooting the target” and going up to Con(SRP).

  10. Scott Says:

    Sniffnoy #8: Excellent question! Again: future research. 🙂

    There are actually natural arithmetical statements known to be independent of PA, like Goodstein’s Theorem and the non-losability of the hydra game. But all the ones I know are Π2-sentences rather than Π1-sentences.

  11. Qiaochu Yuan Says:

    Fantastic. Seems like a good candidate for a Polymath project (similar to the prime gaps project) to lower this number as much as possible? It would need to draw on a bigger CS crowd than previous projects though, so I’m not sure where it should be hosted.

  12. Joshua Zelinsky Says:

    Could this sort of efficient Turing-machine compiler tactic have any chance of being used to improve the sort of results about how fast the Busy Beaver function grows at nearby values, like the ones we’ve discussed earlier?

  13. anonymous Says:

    “Turing machine that halts iff there’s a counterexample to Goldbach’s Conjecture… this is the first time we’ve had an explicit upper bound on how many states you need in a Turing machine before you can see the behavior in question” – one writes a Java program that halts iff there is a counterexample to Goldbach conjecture. One diligently translates this program into a Turing machine. One gets thus an upper bound on the minimal number of states of such a Turing Machine. What am I missing here?

  14. Scott Says:

    anonymous #13: Well sure, doing that would give you some crappy upper bound, probably on the order of 100,000 states (or much more if you were actually compiling from Java). I don’t know that anyone bothered, but I agree that it wouldn’t involve any ideas, just a lot of hacking.

    Now, if you want a more respectable upper bound like 5,000 states, that takes more thought, and the on-tape processing and introspective encoding tricks.

    The question of whether you could construct (say) a 50-state Turing machine that encodes Goldbach remains open, and is, I confess, interesting to me.

    Of course, if anyone actually proves Goldbach, we’ll then have a 1-state Turing machine that’s equivalent to it, namely one that just loops forever unconditionally! 🙂 But no similar observation applies to the ZFC machine—there, the smallest n such that BB(n) is independent of ZFC really is a knowledge-independent constant that one could consider the “intrinsic complexity of ZFC.” So, our main goal was to get the first nontrivial upper bound on that constant. The Goldbach and Riemann machines were mostly just “comparison/control cases.”

  15. Scott Says:

    Joshua #12: Yes! In particular, the “introspective encoding” tactic of Ben-Amram and Petersen is directly relevant to that. I actually first learned about introspective encoding when Luke Schaeffer and I were thinking about your “Ask Me Anything” challenge, and Luke realized that introspective techniques would let you show the existence of a constant C such that, for every computable function f and every n,

    BB(n + C·n/log n + O_f(1)) > f(BB(n)).

    Luke even worked out an explicit value for C (I don’t remember what it was). But then we found out that Ben-Amram and Petersen had done it earlier.

    Later, when Adam was building the small ZFC-independent machine with on-tape processing, Luke realized that introspective encoding would be relevant there as well.

  16. D Says:

    Is it possible to convert your TM into the explicit 3×3 matrices such that RH is true iff the “mortal matrix problem” isn’t solvable for these matrices? For all we know the determinants are all nonzero, which would immediately imply RH.

    Also, can you find the explicit 3SAT formula such that RH has a proof length at most n in ZFC iff the formula is satisfiable? It would be fun trying to guess satisfying assignments!

  17. Job Says:

    Namely: do the axioms of set theory suffice to analyze the behavior of every computer program that’s at most, let’s say, 50 machine instructions long? Or are there super-short programs that already exhibit “Gödelian behavior”?

    Excluding programs with access to a random number generator, right?

    Otherwise you can still shoehorn complexity by randomly assembling a TM and interpreting it.

  18. Scott Says:

    D #16: Sure, you could run the reduction from our RH TM to the 3×3 matrix mortality problem, although there’s probably a more direct way to do it (yielding fewer matrices with smaller entries), which would go through the Post Correspondence Problem or something close to it rather than through Turing machines.

    If you do this, I’ll bet my life savings that at least one of the matrices has zero determinant (and more generally, that this doesn’t yield a proof of RH), since no actual knowledge of number theory has been used in getting to this point—you’re just starting from Lagarias’s statement, then pouring the same mathematical content from one computational vessel into another one.

    I completely agree, constructing explicit 3SAT instances whose solutions encode proofs of RH would be another fun challenge for another motivated student! 🙂

  19. Scott Says:

    Job #17: Yes, of course, we’re talking about deterministic programs here.

  20. code golf addict Says:

    The ZFC program is a neat accomplishment, but a Goldbach machine ought to be doable with a few hundred states.

  21. Ben Zinberg Says:

    I agree with:
    “If ZFC can prove that the Turing machine Z halts, then ZFC can prove that ZFC is inconsistent.”

    But I don’t see how you’re getting:
    “If ZFC can prove that the Turing machine Z halts, then ZFC is *actually* inconsistent.”

    I would agree with the second one if you changed “inconsistent” to “not $\Sigma_1$-sound”[1]. Am I missing something?


  22. David Eppstein Says:

    Since you’re actually running TMs and finding it slow: have you considered using something like hashlife to speed it up? In more detail: I assume your simulator does one TM step at a time, but that your TM steps are very repetitive. The idea of hashlife is to cache bigger-sized chunks of the time-space diagram and, when they match, to fill them in from the cached copy rather than just going through all those steps again. It was designed for cellular automata (and works amazingly well for certain Game of Life patterns, allowing simulation to numbers of steps far beyond what could be done step-by-step) but Turing Machines are similar enough that I wonder whether it might work for them as well.

  23. John Hawksley Says:

    This is so cool =)

  24. Ernie Davis Says:

    Fascinating! Two questions. 1) How much memory (i.e. squares of the tape) was used in disproving the conjecture that all squares are less than 5? 2) Somewhat along the lines of anonymous #13: Suppose instead of the Turing machine, you use the hypercompact language that Chaitin presents in “Algorithmic Information Theory” (IIRC). Do you have any sense how many bits these various programs would require?

  25. Joe Random Says:

    Even though it may be impossible to show in ZF that a particular Turing machine A will halt, does that rule out being able to show with ZF that Turing machine A halts if and only if Turing machine B does not halt?

    If it is possible to pair Turing machines (or otherwise count related Turing machines), then it might be possible with ZF to prove an infinite set of busy beaver numbers without being able to prove the haltingness of particular Turing machines.

  26. Gabriel Nivasch Says:

    Dear Scott,

    What is the smallest n for which you can prove that BB(n) is *HUGE* by constructing an n-state Turing Machine that halts only after a very very long time?

    Here’s an idea based on the “TREE sequence”: Fix a value of k (say k=3, or k=1000), and construct a Turing Machine that tries to build an infinite sequence T_1, T_2, … of k-labeled trees such that each T_i has at most i vertices and no T_i is homeomorphically embeddable into any later T_j. The machine would traverse the whole search space of this problem, and then halt. We know it will halt by Kruskal’s tree theorem.

    You probably already know about this idea since it is based on Harvey Friedman’s work…

  27. Vadim Kosoy Says:

    Scott #14: Regarding the Goldbach result becoming meaningless if Goldbach is proven. Well, you can still have results of the form “sentence A is provable in strong axiom system T2 and unprovable in weak axiom system T1, and there is a Turing machine M with only N states s.t. T1 can prove that M halts iff A.”

  28. Stefan O'Rear Says:

    Very neat. Regarding “Such a program might be thousands of lines long if written in a standard programming language like C”, I think this is an overestimate; link goes to a stripped-down Metamath kernel in Laconic (very untested). I’d be curious to know how many states it generates, but the current Laconic compiler doesn’t appear to support nullary functions.

  29. Artem Says:

    What about problems that were proven to be ZFC-independent, like the continuum hypothesis? Can those be done?

  30. John Sidles Says:

    Sincere congratulations are extended to Adam Yedidia (and to Scott) for this fine and thought-provoking work.

    Many a thesis and many a scientific article are graced by literary quotations; the following poem can be read as a meditation upon works like Adam’s. The poem is from poet/computer scientist Vincenzo Della Mea’s essay “Gödel’s childhood and other algorithms,” as collected in Imagine Math 2: Between Culture and Mathematics (2013, Michael Emmer, editor).

    Della Mea’s poem is a meditation upon Turing programs that do not stop:

    La macchina universale di Turing
    se opportunamente caricata
    con una descrizione minuziosa
    della mia vita, per definizione
    potrebbe raccontarmi in anticipo
    cosa farò da grande, se farò
    qualcosa; però se inerte raggiungo
    il limite del nastro illimitato,
    allora la macchina altro non può
    che osservarmi con le sue transizioni,
    lentamente, di stato in stato,
    mentre anch’io l’osservo. Facendo niente.

    Della Mea translates his own “unstoppable” poem as follows

    The Universal Turing Machine,
    if opportunely loaded
    with a meticolous description
    of my life, by definition
    could tell me in advance
    what I will do when grown up,
    if I will do anything; but if I idle reach
    the limit of the limitless tape,
    then the machine only will be able
    to observe me with its transitions,
    slowly, from state to state,
    while I too observe it. Doing nothing.

    Note  The essay’s in Emmer’s collection vary greatly in their math-to-poetry ratio; in particular Marco Abate’s “Exotic spheres and John Milnor” is among the most lucid (and poetry-free) discussions of “homeomorphic but not diffeomorphic” (that are known to me).

    Conclusion  It nearly certain that any given reader will be greatly pleased by at least one of the essays in Imagine Math 2, and nearly certain too that any given reader will be repelled by at least one of them. In either event, Shtetl Optimized readers who are writing theses and/or articles will find abundant raw material for reflection and quotation. 🙂

  31. andrew Says:


    > programming language like C or Python … we needed a language that could address an unlimited amount of memory.

    In Python, integers (actually long ints, which were unified with ints) have infinite precision i.e. can be arbitrarily big until your machine runs out of memory. So I don’t follow this argument. Can you expand? Maybe I’ve missed the point.

  32. casey Says:

    Isn’t there something a bit bizarre about fighting for constants in the size of the TM description due to physical analogies, while the busy beaver of even one-hundredth of that constant is so enormously big that there is literally nothing physical about it?

  33. Scott Says:

    code golf addict #20:

      The ZFC program is a neat accomplishment, but a Goldbach machine ought to be doable with a few hundred states.

    I challenge you or any other readers of this blog to demonstrate that—I don’t really think it’s wrong, but I’d be extremely interested in how to do it! I can tell you that Adam tried several different approaches for the Goldbach machine (with and without on-tape processing, introspective encoding, etc.), but they all produced machines with ~5,000 states or more.

  34. Scott Says:

    Ben Zinberg #21:

      I don’t see how you’re getting:
      “If ZFC can prove that the Turing machine Z halts, then ZFC is *actually* inconsistent.”

    Ah, but I never said that; I just reread the post to make sure! I said: assuming a reasonable consistency statement, Z runs forever, and ZFC can’t prove that Z runs forever.

    If Z halted, what that would mean is that ZFC (and forget about ZFC, even your favorite weak fragment of arithmetic) could prove that SRP was inconsistent, and also that SRP would actually be inconsistent.

    If ZFC were consistent but not arithmetically sound, then yes, it’s possible in principle that ZFC could prove that Z halts even though Z doesn’t actually halt. To fix that issue, I suppose you’d want an equivalent of the Rosser sentence (rather than the Gödel sentence)—another project for the next motivated student!

  35. Royce Peng Says:

    Gabriel Nivasch #26: You may be interested in my answer to the following MSE question:

    which gives various enormous lower bounds to the Busy Beaver function, starting at BB(23) > Graham’s Number (that has since been improved to BB(22) > Graham’s Number). The TREE(n) function lies between the Kirby-Paris Hydra function and the Buchholz Hydra function mentioned in the link. The reason TREE(n) specifically was not implemented is that it appears to be more difficult to implement than the hydras, which are deterministic sequences, whereas for the TREE function you have to search through all possible sequences to find the longest sequence.

  36. Paul Chapman Says:

    The Laconic language has many similarities to the ‘assembler’ language I designed in 2002 to describe programs for a Minsky Register Machine. An example of a program is at the supplied link. I used it to build the first finite universal computer in the game of life.

    The main difference is that I used macros where you use functions. But we both went for a finite number of nonnegative integer registers to represent the state of the computation.

    As you can imagine, then, this kind of thing is right up my alley. 🙂

  37. Scott Says:

    David Eppstein #22: Using hashlife to speed things up is an interesting idea! But the real answer to your question is that if you cared about speed, you’d make completely different design choices from the very beginning. E.g., rather than compiling TMD to Turing machine, you’d just run the TMD directly. Better yet, rather than compiling Laconic to TMD, you’d run the Laconic directly. Better yet, rather than writing the program in Laconic, you’d write it in C—and you’d use more lines of code and make it faster. 🙂

  38. Scott Says:

    Ernie Davis #24: I forwarded your questions to Adam—he can surely answer (1) and might try to field (2) (though (2) is really a subject for future research).

  39. Scott Says:

    Joe Random #25:

      Even though it may be impossible to show in ZF that a particular Turing machine A will halt, does that rule out being able to show with ZF that Turing machine A halts if and only if Turing machine B does not halt?

    First of all, if a TM halts, then ZF can always prove that it halts, by just simulating it until halting. A difficulty can only arise if the TM doesn’t halt. But let me assume you meant, “Even though it may be impossible to show in ZF that a particular Turing machine A runs forever…”

    In that case, the answer is yes, the sort of conditional result that you ask for is ruled out, assuming that ZF is arithmetically sound (i.e., that the statements it proves about Turing machines are actually true). For if you could prove that exactly one of A and B halts, then you could simply simulate A and B until one of them had halted, and then you’d have proven the “haltingnesses” of both. And ZF can formalize that.
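    The simulation argument above is just dovetailing, and can be sketched in a few lines of Python. This is a minimal illustration, not anything from the paper; the toy “machines” below are hypothetical stand-ins for real TM simulators.

```python
def first_to_halt(machines):
    """Dovetail: run each machine one step at a time, round-robin,
    until one of them halts.  A machine is modeled as a generator
    that yields once per step and is exhausted exactly when it halts."""
    iters = [iter(m) for m in machines]
    while True:
        for i, m in enumerate(iters):
            try:
                next(m)
            except StopIteration:
                return i  # machine i halted first

def halts_after(n):
    """Toy 'machine' that halts after n steps."""
    for _ in range(n):
        yield

def runs_forever():
    """Toy 'machine' that never halts."""
    while True:
        yield
```

    If exactly one of the machines halts, the loop is guaranteed to terminate and report it, which is exactly why the conditional result is ruled out.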

  40. Raunak Says:

    There’s a minor error on page 2 of the linked paper: “7-tuple M = (Q, Γ, b, Σ, δ, q0, F)” should be “7-tuple M = (Q, Γ, a, Σ, δ, q0, F)”, according to the symbol explanations given.

  41. Raoul Ohio Says:

    Try your hand at Quantum Computing! IBM online QC:

    For those of you on a budget, this is about $10M less than D-Wave.

  42. Raoul Ohio Says:

    Wired report on IBM’s online QC:

  43. David Wu Says:

    This is really cool.

    Also, if you want some expertise in Turing machine programming, there’s a chance it could be fruitful to contact LittlePeng9 from the Googology wiki. It seems he’s implemented a lot of TMs for all sorts of things – it could be possible that with collaboration, the number of states in the interpreter could be reduced.

    Here’s another user from the Googology page who has used LittlePeng9 and many other users’ work to determine bounds on the busy beaver function. Some of the strongest bounds listed there take advantage of LittlePeng9’s implementation of a machine that expands the Kirby-Paris hydra and/or the Buchholz hydra.

    (Most of the interesting bounds with links to the machines are visible if you click to expand the first header under “All Known Bounds”, “One dimension, one head, absolute movement, Sigma”)

    The approach there seems to be to implement these hydras using TMs with more than 2 colors, then apply a reduction to 2 colors. Unlike the introspective encoding approach, this gives a multiplicative blowup rather than an additive one, but it might be worth the additional ease of programming enabled by more colors; I don’t know.

  44. Ronan Lamy Says:

    Thanks for sharing the code!

    There is an easy way to speed up the simulation by a factor of ~7: just use the PyPy interpreter instead of CPython. I timed a full run of squaresaresmall (2862347231 steps) at 13 minutes.

    And there’s probably another order of magnitude to be gained by optimising the code a bit.

  45. Scott Says:

    Vadim #27:

      Regarding the Goldbach result becoming meaningless if Goldbach is proven. Well, you can still have results of the form “sentence A is provable in strong axiom system T2 and unprovable in weak axiom system T1, and there is a Turing machine M with only N states s.t. T1 can prove that M halts iff A.”

    Yes, that’s a good point! But still, if a statement S is provable in such a weak axiom system that we need those same axioms anyway just to engage in “everyday reasoning about arithmetic and Turing machines”—the sort of reasoning needed to prove, e.g., that some Turing machine does or doesn’t encode S—then it might make sense to stop worrying about the strength of axiomatic theories, and just say the damn statement is true! 🙂

  46. Scott Says:

    Stefan O’Rear #28: Wow, thanks so much for writing a metamath kernel in Laconic! I’m amazed that you were able to do that less than 12 hours after the existence of the Laconic programming language was publicly announced. 🙂

    I can ask Adam to add support for nullary functions to the Laconic compiler. In the meantime, though, on your side: how much more work would it take to get from the kernel you wrote, to a program that actually enumerates all the theorems of ZF and searches for a proof of a contradiction?

    If it turns out to be possible to get a Turing machine with 8000-10000 states (or maybe even fewer?) without using Friedman’s statements at all, just using direct enumeration of the theorems of ZF (plus tricks like on-tape processing and introspective encoding), that will be an extremely interesting result that will force us to revise our views.

  47. Scott Says:

    Artem #29:

      What about problems that were proven to be ZFC-independent, like the continuum hypothesis? Can those be done?

    Errr, how do you write a program that loops over the transfinite cardinalities, searching for one that’s intermediate between $$\aleph_0$$ and $$2^{\aleph_0}$$? 🙂 More seriously, the problem is that CH is not an arithmetical statement.

  48. Scott Says:

    andrew #31:

      In Python, integers (actually long ints, which were unified with ints) have infinite precision i.e. can be arbitrarily big until your machine runs out of memory.

    Cool! So then Python (and Lisp, I suppose) don’t have the technical problem about addressable memory, and there’s only the other worry, that the Python or Lisp interpreter is doing too much of the work for you. The memory issue would only arise for languages like C.

  49. Scott Says:

    casey #32:

      Isn’t there something a bit bizarre about fighting for constants in the size of the TM description due to physical analogies, while the busy beaver of even one-hundredth of that constant is so enormously big that there is literally nothing physical about it?

    Yes, I’ll agree, there’s something bizarre about it. 😉

    If you like, what’s “physical” about our machine Z is not that its halting or (more likely) non-halting would occur within the lifetime of our universe—but rather, that there’s a short proof that Z’s eventual behavior encodes a question about the consistency of a transfinite set theory.

  50. DavidC Says:


    Maybe you’ll indulge a question I always wonder about these kinds of things:

    Why are answers about what’s “provable within ZFC” telling us about the “boundary between the knowable and the unknowable”? I mean, we “know” this thing doesn’t halt, right? Even though that’s not provable in ZFC.

  51. Scott Says:

    Raunak #40: Thanks for bringing that to our attention! We have a few other minor revisions to make to the paper, and hope to post an updated version within a day or two.

  52. “The 8000th Busy Beaver number eludes ZF set theory” | Logic Matters Says:

    […] you haven’t seen it: this blog post by Scott Aaronson (on a new paper he has co-authored with Adam Yedidia), written with his usual clarity and zest, is […]

  53. Scott Says:

    DavidC #50: Yes, fine, you’re right, there’s not a single “boundary between the knowable and the unknowable.” What there is, is more like a concentric series of boundaries, with signs warning you that as you proceed outward you need to take on bolder and bolder axioms (which might be large-cardinal axioms and might be something else). What’s significant about going beyond ZFC is just that, by that point, you’ve clearly passed the first of those boundaries.

  54. Nick Says:

    Great post! One of my favorites ever on this blog, in fact. Definitely worth the wait after the recent post drought.

    I have two (sets of) questions:

    1) The technical topic at hand is provability in ZFC, but several times you use the vaguer and more philosophical-sounding word “knowable”. To what extent do you identify knowability with formal provability? Does it make a difference if we’re talking about knowability wrt humans or knowability wrt gods? What about provability wrt ZFC vs some weaker or stronger system? Do you construe this result as a result about the general knowability of some Busy Beaver numbers? (Obviously you’ve spent a lot of time talking about this kind of thing over the years, but I’m interested to hear your thoughts about it as it concerns this specific example.)

    2) Imagine that tomorrow you wake up and discover that the Z machine has halted. After you change your pants, what’s the next thing you do? What happens long-term?

  55. DavidC Says:

    Scott #53:

    “Yes, fine, you’re right …”

    Sorry, I really wasn’t trying to nitpick! Your answer helps me understand what’s going on. Thanks.

  56. Scott Says:

    Nick #54:

    1) For ZFC versus other axiom systems, see my comment #53, which I just posted.

    For provability versus knowability: as a practical matter, we might never be able to know even BB(7), even assuming (as seems plausible) that its value is provable in ZFC. On the other hand, if you want to know that there’s a sense in which it’s unknowable whether some specific mathematical statement is true or false, then the only clear way we know how to do that is to prove the statement independent of a strong axiom system. That at least tells you that you can “know” the statement, only at the price of assuming axioms well beyond the ones used in almost all of modern mathematics.

    2) If Z halted, then the most probable explanation by far would be that our Turing machine had a bug. 😉

    The second most probable explanation would be that Friedman’s order-theoretic statement had a bug.

    And the third most probable explanation would be that the large-cardinal theory SRP was inconsistent, even though ZFC was consistent. That would be a big deal to set theorists, but less of a big deal to other mathematicians.

    A good goal for future research would be to design a Turing machine such that, not only is its halting provably equivalent to Con(ZFC), but a formal proof of the equivalence can be fully written out and machine-verified. That would address all three issues above.

  57. Stefan O'Rear Says:

    Scott #46: the code I posted does search for a contradiction, or rather a proof of “all sets are unequal to themselves”. I’ve found two bugs so far: the proof checker recurses infinitely on zero (which should be invalid), and I left out the three “propositional logic” axioms (ax-1, ax-2, and ax-3).

  58. Bram Cohen Says:

    To make your interpreter as simple as possible you should consider using unlambda or brainfuck.

    This result is a weird yin-yang of high school student understandability. All the constructions are straightforwardly HSSU. The work of Friedman which it depends on, not so much.

  59. Scott Says:

    Stefan O’Rear #57: Wow, OK, awesome! Then my remaining question is: what’s the axiomatic strength of the theory in which you’re searching for a contradiction? Is it really full ZF?

    And also: would it be easy to change your code to get rid of the nullary functions and make it compile? (Regardless, I’ve already asked Adam to add nullary support to the Laconic compiler.)

  60. Scott Says:

    Bram #58: Yes, other people also suggested unlambda and brainfuck to us—footnote 6 in our paper talks about them. Writing a Turing machine interpreter for those languages, and seeing how the state count (i.e., the count for the interpreter plus the count for an interesting program to be interpreted) compares to what we got, would be an excellent project for the next motivated student or hobbyist!

    And yes, it was fun for me to work on something so high-school-student-understandable, given how much of what I do isn’t.

  61. Stefan O'Rear Says:

    Scott #59: It’s ZF without Regularity (but for bugs), so yes. I can try to deal with the nullary functions later; the hope was that a function call without arguments would generate very few bits in the intermediate description, which is useful since we’re encoding the axiom schemes as sequences of function calls.

  62. Vadim Kosoy Says:

    The fact the Riemann Hypothesis is equivalent to a Pi_1 sentence is really cool. It makes me wonder whether it’s possible to find a Pi_1 formulation of P=/=NP. On the face of it, P=/=NP is a Pi_2 sentence: for all K there is N and a satisfiable instance of SAT of size N s.t. it takes Levin’s universal search more than N^K + K steps to find the answer.

    It might also be interesting to explore the formulation of higher-order sentences as oracle machines. For example Pi_2 sentences correspond to halting of machines with a halting oracle. Of course this requires fixing the encoding of Turing machines that the oracle accepts.

  63. John Sidles Says:

    Andrew asserts (#31) “In Python, integers (actually long ints, which were unified with ints) have infinite precision i.e. can be arbitrarily big until your machine runs out of memory.”

    This statement is true on machines with at most 2^63 bytes of memory, which includes all of the physical machines that presently exist. However, all of the languages (known to me) that support what are called “bignum” calculations fail for a different reason when integer lengths exceed 2^63 bytes: namely, they fail by pointer overflow.

    For details, see for example Pat Shaughnessy’s essay “How big is a bignum?”.

    In comparison, it has been estimated that a pointer length of 63 bits would suffice to address all the words ever spoken, so it is plausible that no real-world “bignum” computer code has yet encountered this limit.

    Overcoming pointer-length restrictions is entirely practical from an engineering point-of-view, and need not incur any substantial computational overhead: consider for example a compile-target consisting of an unbounded chain of laptop computers, each linked by a USB cable to a left-neighbor and a right-neighbor, with an execution token passing from computer-to-computer along the chain as required, and with new computers (in unbounded number) added to the chain upon request of the executing computer.

    Conclusion: when rigorously implementing unbounded-runtime / unbounded-memory calculations (pending a scrupulous review of the compiler used, with due consideration of pointer-length bounds), it’s good practice to code in languages that compile to explicit Turing Machines, in the style of Yedidia and Aaronson.

    Needless to say, folks who compute Gröbner bases commonly encounter these big-memory problems in real life. Just because a state-space is formally a variety doesn’t mean it’s feasible to work with an explicit representation of it; explicit Gröbner representations are commonly far too big for that.

  64. Scott Says:

    Royce Peng #35 and David Wu #43: Enormous thanks for the links! I’m astounded that I’d never discovered the “Googology Wiki” until today—it contains so much information of so much interest to me. I updated my post to include the spectacular lower bounds on BB(7) and BB(23) that were recently found by Googology users. The obvious question now is whether these constructions give us any handle on how to construct tiny TMs (with 30 states or whatever) whose behavior is unprovable in strong axiomatic theories. Maybe one could at least get such a machine whose behavior was independent of PA, using some variant of the hydra game?

  65. Scott Says:

    Stefan #61: Awesome!

    In that case, let’s just wait for Adam to add support for nullary functions to his compiler. I met with him this afternoon, and he said it would be simple to do. But he also quickly noticed various bugs in your program (e.g., functions missing return statements). He might get in touch with you directly to talk about this.

    If this indeed works, then Adam predicts that it will indeed lead to a TM with fewer states than what you get from Friedman’s statement — he says your program is about as complicated as his program for the Riemann hypothesis, which suggests that we might get under 6000 states.

    Meanwhile, regarding “minimalist” languages like Unlambda and Brainfuck—Adam’s view is that Laconic is almost as minimalist as he knew how to make it, if you need a function stack (which Unlambda and Brainfuck apparently lack). So, if we wanted further huge reductions in the state count (e.g., below 4000 states), we might need to give up on a function stack in the high-level language.

  66. Scott Says:

    Vadim #62: I can tell you that a Π1 equivalent for P≠NP would be a breakthrough in the field. In particular, it would mean that we could do a straightforward computer search for polynomial-time SAT algorithms (or at least, certificates that such algorithms exist), in the same way one does computer searches for counterexamples to Goldbach or Riemann!

  67. Adam Yedidia Says:

    Thanks, everybody, for all the extremely kind comments! I am surprised and flattered that my project generated so much interest.

  68. Adam Yedidia Says:

    Sniffnoy #8: As Scott said, that is a very interesting question! If you can find a nice simple Pi-1 sentence that is independent of PA, you could go and code it up in Laconic, and see what you get! Unfortunately, with my current design, you won’t be able to get below 4,000 states (so a factor of 2 from ZFC) because that is the fixed cost of the “interpreter” part of the Turing Machine. But if your program is simple enough, you might be able to do better than that by redesigning the interpreter to be smaller and correspondingly weaker. That could be an interesting direction for future research!

  69. Adam Yedidia Says:

    Anonymous #13, Scott #14: Yes, that is the gist of my project! To Scott’s response, I’d just add that general conversion of Java to Turing Machine is quite a large undertaking just from an engineering perspective! Indeed, one thing that is too bad is that I had to make my own, reduced-instruction language (Laconic) which would compile to Turing Machine, instead of compiling from a well-known language like Java. The problem is that Java is huge in scope, even if you only allow a fraction of the commands it supports (and if you make that fraction small enough, well, Laconic is approximately what you get!)

  70. Adam Yedidia Says:

    code golf addict #20: It’s quite possible you’re correct, but a few hundred states is quite an ambitious goal! With my current design, you won’t be able to get that low, but you might be able to get there with an overhaul of the “interpreter” part of the Turing Machine. The “interpreter” as currently designed has support for some things, in particular the low-level function stack, which aren’t necessary for the Goldbach program.

    This relates to what I was saying to Sniffnoy #8, so let me elaborate a little bit. There is a trade-off to be had here, where a larger interpreter involves a larger fixed cost but allows you to represent more complex programs more cheaply. In the limit as your programs get more and more complex, it is always worth it to have a beefier interpreter!

    I optimized my interpreter size, roughly speaking, for the central result of the paper: the result about the Busy Beaver function. For much simpler statements, like Goldbach’s conjecture or (potentially) something independent of Peano Arithmetic, this interpreter size is probably too big. Ideally, I would have some kind of “sliding scale” of progressively more powerful interpreters, which could be chosen according to the complexity of the program I was trying to represent! Maybe one day 🙂

  71. Adam Yedidia Says:

    David Eppstein #22: I don’t usually find that simulation speed is that important–for example, my independent-of-ZFC TM almost immediately begins execution by writing (8!)! 1’s onto the tape. An order-of-magnitude simulation speedup is unlikely to help much in seeing what’s going on! 🙂

    Of course, for TMs like the squaresaresmall TM, or more seriously the Goldbach or Riemann TMs, it would be nice if simulation were quicker, just to give viewers a better sense of what they’re doing in a shorter time. It sounds at a glance like the caching methods you’re describing could indeed be applied to make them faster, since the execution histories of the TMs are extremely repetitious!

  72. Adam Yedidia Says:

    John Sidles #30: Your comments always bring a smile to my face, John Sidles. 🙂

  73. Adam Yedidia Says:

    Ronan Lamy #44: Wow, that is very good to know! I didn’t know about that. Thanks for sharing!

  74. Adam Yedidia Says:

    Stephen O’Rear #28, #57, #61: Wow, I am extremely impressed that you did all this just in the last half-day! I am very curious about how your program works. There are a few kinks to be worked out, of course, and on my end I can easily modify Laconic to support nullary functions if you give me a day or two. But I would be very interested in speaking with you about your program! Email me at adamy at mit dot edu and we can schedule a Skype meeting from there.

  75. Jerry Schwarz Says:

    Scott #48

    An abstract C program does have infinite memory, because it has an unbounded stack. It isn’t obvious to me how to use that to simulate a Turing machine’s infinite tape. But proving it can’t be done would require a more subtle argument than just the finiteness of the memory.

  76. Scott Says:

    Jerry #75: Ah, thanks for that observation! But keep in mind that, if your only infinite resource is a stack, then what you’ve got is a pushdown automaton, which is certainly not as powerful as a Turing machine. An infinite queue, on the other hand, or two infinite stacks, do let you simulate a TM. (I actually give that as a homework problem in my undergrad class.)
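    For concreteness, here’s a minimal Python sketch of the two-stack construction Scott mentions (the class and method names are mine, purely illustrative): the top of `right` is the cell under the head, and `left` holds everything to the head’s left, nearest cell on top.

```python
class TwoStackTape:
    """A Turing machine's unbounded tape, represented by two stacks."""
    def __init__(self, blank='0'):
        self.blank = blank
        self.left = []   # cells left of the head; top = cell nearest the head
        self.right = []  # head cell and everything to its right; top = head cell

    def read(self):
        return self.right[-1] if self.right else self.blank

    def write(self, sym):
        if self.right:
            self.right[-1] = sym
        else:
            self.right.append(sym)

    def move_right(self):
        # head cell migrates onto the left stack; a blank appears if needed
        self.left.append(self.right.pop() if self.right else self.blank)

    def move_left(self):
        # nearest left cell becomes the new head cell
        self.right.append(self.left.pop() if self.left else self.blank)
```

    Each tape operation is a constant number of stack pushes and pops, which is why two stacks (unlike one) recover full Turing power.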

  77. Adam Yedidia Says:

    Bram Cohen #58, Scott #65: Indeed, I have little doubt that a Brainfuck interpreter would be smaller than the interpreter I currently have! It’s very similar to TMD, but with less flexibility in what is representable on each tape, and with no function stack, both of which are going to make the interpreter smaller! But that means that for programs like the independent-of-ZFC program, you will need to make your own function stack, which will make your program correspondingly bigger, potentially by a lot. This relates to the tradeoff I was talking about with Sniffnoy #8 and code golf addict #20. I think you’re right that for programs like the Goldbach program, you’d be better off with a smaller interpreter, like a Brainfuck interpreter.

    As for tiny lambda-calculus-based languages like Unlambda, and its cousins Iota and Jot–well, you might indeed be right that the interpreter would be much smaller than the TMD interpreter, although that is less obvious to me (and, of course, you’d still want a higher-level language that compiles to Unlambda/Iota/Jot). But yes, I agree that this is a promising direction if you are looking for interpreters that “dominate” the current one I’m using (that is, that are both smaller and read programs that are more concise than this one). David Madore warns on his page about Unlambda that Unlambda is very difficult to write interpreters for–but that maybe just means it will be a fun challenge! 🙂 In any case, I would encourage you to give this a shot if you are interested–you are likely to be able to reuse my code and documentation that works at the TM level.

  78. Amanda Says:

    Possibly stupid (or more likely irrelevant) question: do you suppose this has any application for understanding the minimum complexity required for consciousness (a la Hofstadter)?

  79. Adam Yedidia Says:

    *Stefan* O’Rear: Sorry for misspelling your name earlier!

  80. AdamT Says:

    Adam #77,

    I have a new Iota/Jot/Zot interpreter written in C++, as well as a simple language based on the SKI calculus that transcompiles to/from lambda calculus and has some efficient speedups, here and here:

    So in theory you could write an interpreter in “high level” lambda calculus for the SKI calculus-like Hof language and compile that to Hof and have a self-hosting interpreter.

  81. Sniffnoy Says:

    I’d advise staying away from Unlambda, personally; Brainfuck is sensibly minimalist, but Unlambda is a strange mix of sensible minimalism and deliberate perversion. Iota and Jot are better suggestions (or any other universal combinator — relevant math.stackexchange question). Or just the plain old SKI- or SK-calculus; any of these possibilities can then be written with an Unlambda-like syntax. And if you use two bits to encode each of the three possibilities of “apply”, “S”, and “K”; well, now you’ve reinvented binary combinatory logic. If you use the universal combinator idea, like Iota or Jot, then you only have two possibilities and can get away with only one bit.

    Other interesting possibilities: Binary lambda calculus; any of various OISCs.
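    To make the binary-combinatory-logic encoding concrete: one common prefix-free convention (as in Tromp’s binary combinatory logic) actually spends one bit on “apply” and two each on S and K. A sketch, with SK terms as nested Python tuples (my own toy representation):

```python
def encode(term):
    """Prefix code for SK terms: S -> '00', K -> '01',
    application (f, x) -> '1' + encode(f) + encode(x)."""
    if term == 'S':
        return '00'
    if term == 'K':
        return '01'
    f, x = term
    return '1' + encode(f) + encode(x)

def decode(bits, i=0):
    """Inverse of encode; returns (term, index just past the term)."""
    if bits[i] == '0':
        return ('S' if bits[i + 1] == '0' else 'K', i + 2)
    f, i = decode(bits, i + 1)
    x, i = decode(bits, i)
    return (f, x), i
```

    Because the code is prefix-free, a term can be decoded unambiguously from the front of any bit string, with no separators needed.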

  82. Adam Yedidia Says:

    Ernie Davis #24:

    To your first question: 24,652 squares of tape memory were used in the execution of the squaresaresmall TM.

    To your second question: I’m unfamiliar with Chaitin’s hypercompact language. But I assume what this is essentially asking is: if we compressed all of the redundancy out of the TMD binaries, then what would be the size of the independent-of-ZFC program? That’s an extremely interesting question! I don’t know the answer, but you could get a rough upper bound on it by taking some TMD binaries outputted by the example programs in the tmd_dirs directory, building some kind of compression scheme (maybe Lempel-Ziv? I’m not sure) to compress them as much as you possibly can, and then applying that compression scheme to the independent-of-ZFC program’s TMD binary. I’d be very curious to see what this kind of thing might yield!
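    As a quick-and-dirty stand-in for building a custom Lempel-Ziv scheme, Python’s standard zlib (DEFLATE, an LZ77-family compressor) already gives a rough compressed-size estimate. The file path in the comment below is hypothetical.

```python
import zlib

def compressed_size(data: bytes, level: int = 9) -> int:
    """Bytes needed after DEFLATE compression: a crude upper bound
    on the incompressible content of `data`."""
    return len(zlib.compress(data, level))

# e.g., against one of the TMD binaries (hypothetical path):
# with open('tmd_dirs/some_program.tmd', 'rb') as f:
#     print(compressed_size(f.read()))
```

    This would only bound the redundancy that DEFLATE can see, of course; a scheme tuned to the TMD format could do better.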

  83. javra Says:

    Without having read about SRP: isn’t it kind of cheating that you need the consistency of what I assume is more than ZFC to prove the non-termination of your TM?

  84. Scott Says:

    Amanda #78: I suppose that if you agreed with Penrose’s ideas about consciousness, the minimum complexity of a Turing machine needed to elude ZF set theory might be related in some way to the minimum complexity needed for consciousness. But

    (1) Z is still a Turing machine! So presumably Penrose wouldn’t find it an acceptable model for consciousness. Maybe he’d instead consider it our current candidate for the smallest artifact that human consciousness can understand but computers can’t??

    (2) In any case, this whole discussion is academic, because I don’t agree with Penrose’s ideas about Gödel and consciousness, for reasons I’ve spelled out in detail elsewhere.

    So to summarize: probably no relevance. 🙂

  85. Vadim P. Says:

    The call stack in a C program is actually quite limited, either by the compiler, like in a Windows environment, or by an OS environment variable, like in Linux, which is why iteration is so strongly preferred over recursion in C. But one can allocate memory from the heap, which is “unlimited” subject to physical memory constraints (which would include disk space if virtual memory is enabled). One can make use of that heap space as part of any data structure. For a Turing machine, a doubly linked list seems perfect.
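    Vadim’s doubly linked list tape can be sketched in Python rather than C (the `Cell` allocations below play the role of `malloc` on the heap; all names are illustrative):

```python
class Cell:
    """One tape cell, heap-allocated on demand."""
    __slots__ = ('sym', 'prev', 'next')
    def __init__(self, sym='0'):
        self.sym, self.prev, self.next = sym, None, None

class LinkedTape:
    """Unbounded tape: the head walks a doubly linked list,
    growing it with a fresh blank cell whenever it steps off an end."""
    def __init__(self, blank='0'):
        self.blank = blank
        self.head = Cell(blank)

    def read(self):
        return self.head.sym

    def write(self, sym):
        self.head.sym = sym

    def move_right(self):
        if self.head.next is None:
            c = Cell(self.blank)
            c.prev, self.head.next = self.head, c
        self.head = self.head.next

    def move_left(self):
        if self.head.prev is None:
            c = Cell(self.blank)
            c.next, self.head.prev = self.head, c
        self.head = self.head.prev
```

    Unlike a stack, the list grows at both ends, which is exactly what a two-way-infinite tape needs.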

  86. Adam Yedidia Says:

    AdamT #80: That’s really neat! So Hof is your language, and is like the other minimalist lambda-calculus-based languages but supports random behavior?

  87. Scott Says:

    javra #83: It’s not cheating, it’s just an opportunity for further research! (The scientific version of “it’s not a bug, it’s a feature!”) 🙂

    More seriously: if Stefan O’Rear’s Laconic program from earlier in this thread ends up working, then not only will it decrease the number of states from 8000 to possibly around 6000, it will also immediately fix the issue with ZFC vs SRP.

  88. AdamT Says:

    Adam #86,

    Yes, it is just the SKI calculus + transcompiler to/from lambda calculus + a couple of extra combinators for introducing randomness and output. Planned usage is for a quine in Hof that will self-mutate via the random combinator. Anyway, the transcompiler would give you the ability to write in “high-level” lambda calculus but run in nothing more than the SKI calculus, which, other than the universal combinators and possibly binary lambda calculus, is about as simple as a Turing-complete language gets.

  89. John Tromp Says:

    Hi Scott!

    We should have been discussing this at our recent dinner:-)

    To demonstrate the compactness of Binary Lambda Calculus, I constructed (and pictured) a 267-bit binary lambda term that normalizes iff there’s a counterexample to Goldbach’s Conjecture.

  90. Nick Says:

    Just for clarification, when you say “the 8000th Busy Beaver number eludes ZF set theory”, is that equivalent to saying that there is no number n such that ZF proves n = BB(8000)? If not, can you spell it out a little more concretely?

  91. Scott Says:

    Nick #90: Yes, although actually something stronger than that is true: there’s no integer n such that ZFC proves n≥BB(8000).

  92. Cat Freak Says:

    Scott? This seems like one of the biggest discoveries in the history of computer science. Please tell me you are taking Adam with you to Texas and that you will continue working on these results!

  93. Jason Gross Says:

    Great work! Congrats, Adam and Scott! I’d not seen any previous discussion of TMs whose halting behavior was provably independent of ZF. This answers a question I’d been wondering about for some time: are there programs that halt whose computational complexity depends on which axiom system you assume?

    I’d like to ask a slightly different question: I don’t know anyone who thinks ZF+¬Con(ZF) is a reasonable system to work in, however consistent it might be. Are there any axioms A such that both ZF+A and ZF+¬A are considered “reasonable”, such that you can make a TM behave differently at infinity depending on which is “true”? Or can the behavior of TMs at infinity only be determined by axioms about countable infinities, and all such axioms have a somehow “natural” truth value?

  94. Scott Says:

    Cat Freak #92: I’m delighted that you like it, but no, it’s not “one of the biggest discoveries in the history of CS.” 🙂

    I said this in the post, but maybe it bears repeating: the fact that there’s a program whose behavior is independent of ZF is not the slightest bit new. That’s just Gödel’s Theorem. The only new part is that this time we also fought to minimize the size of the program.

    And Adam is currently a PhD student of Martin Rinard, doing programming languages. This project involved an amazing synergy between my interests and his, but sadly for me, he’s already more-or-less moved on…

  95. Scott Says:

    Jason Gross #93:

      This answers a question I’d been wondering about for some time: are there programs that halt whose computational complexity depends on which axiom system you assume?

    Sorry, I don’t understand that question, let alone how our work answers it! Different axiom systems can differ in how much they can prove about the running time of a program—but the running time itself is just an intrinsic property of the program.

      Are there any axioms A such that both ZF+A and ZF+¬A are considered “reasonable”, such that you can make a TM behave differently at infinity depending on which is “true”? Or can the behavior of TMs at infinity only be determined by axioms about countable infinities, and all such axioms have a somehow “natural” truth value?

    Again, to be clear: whether a TM halts or not has no dependence on which axioms you use to reason about the TM, any more than whether a bird flies or not depends on which words you use to discuss the bird.

    If ZF+A and ZF+¬A disagree about whether a TM halts, then the only possible conclusion is that either A or ¬A has a consequence that’s arithmetically false. So then, the one that I’d consider “natural,” is simply whichever one I believe doesn’t have arithmetically false consequences!

    So for example, if ZF+Con(ZF) seems more “natural” than ZF+¬Con(ZF) to me, that’s because I believe that Con(ZF) is a true statement of arithmetic.

    This is completely different from (say) CH versus ¬CH or AC versus ¬AC, where it’s possible to argue that you really do get to pick which one you like better as a matter of taste. This is the difference between statements about transfinite sets (which, in cases like CH and AC, can be shown to have no arithmetical consequences) and arithmetical statements.

  96. Stefan O'Rear Says:

    Scott #0: In §8.3.2 you state that a Turing machine with \(n\) states can store \( 2(\log_2(n)+1) \) bits of information. I think this is not quite accurate because the space of possible Turing machines is greatly reduced by isomorphism; I think the factor of \(2\) approximately disappears due to the \(n!\) legal state relabelings, but I haven’t worked out all the details.
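
    Sketching the count (my own back-of-the-envelope, under the usual conventions: one tape, two symbols, next state ranging over the \(n\) states plus HALT): each of the \(2n\) (state, symbol) pairs gets a triple (written symbol, head direction, next state), giving \( (2 \cdot 2 \cdot (n+1))^{2n} \) machines, i.e. about \( 2n(\log_2 n + 2) \) bits in total. Quotienting by the \(n!\) legal relabelings would subtract roughly \( \log_2 n! \approx n \log_2 n \) bits, which is where the halving of the leading factor comes from.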

    Artem #29: To elaborate on Scott’s answer, CH is not only not an arithmetical statement, it provably has no arithmetical consequences; both Gödel’s model of CH and Cohen’s model of ¬CH have the same natural numbers as the models they are built in.

    Vadim Kosoy #62: The question of \(\Pi_1\)-sentences which approximate \(P \neq NP\) is touched on in §5 of

    Adam Yedidia #74: Sent you an email. Also willing to answer any questions about it here.

    Adam Yedidia #77: I’m not very optimistic that Unlambda will help you; the process of compiling down to combinators blows up the state count quite a bit. The pure untyped lambda calculus would be a better idea; I’m also partial to Forth-style stack machines, as you may have noticed. BTW, David Madore seems to have stopped updating the page; I emailed him an Unlambda-to-OCaml compiler several years ago and the page still lists it as an open problem.

    Scott #87: I do think there’s still a lot of interesting space to explore with these arithmetical statements: even if the current statement is in a tight race with direct FOL implementations for the Con(ZFC) question, better statements could be discovered, and you can also move the goalposts (make the problem “consistency of SRP” 🙂).

    Meta: Hi everyone! Been reading this blog for a few years, first comment.

  97. Ward Cleaver Says:

    Since the “Name the bigger number” game is your idea you obviously get to make and interpret the rules, but when I looked at your essay my reaction was that using the busy beaver function should get you disqualified precisely because it’s not computable. The most straightforward interpretation of your instructions to the contestants (“Be precise enough for any reasonable modern mathematician to determine exactly what number you’ve named, by consulting only your card and, if necessary, the published literature”) is that the modern mathematician should be able to compute the number, meaning write a computer program that with unlimited running time would print the digits and then halt. If you write BusyBeaverFunction(Graham’s Number) then there is some ambiguity in the instructions about whether the modern mathematician can “determine exactly what number you’ve named.”

  98. Ward Cleaver Says:

    What’s up with the Theory of Computing journal? It’s almost Mother’s Day and there are zero articles for 2016. Even if Babai uses this journal for his graph isomorphism paper it’ll be a slow year for open access theoretical CS.

  99. Linda Janourova Says:

    Sorry if this is a dumb question, but I recall that Conway was able to exhibit an explicit FRACTRAN program consisting of very small set of fractions (fewer than 30?) capable of universal (and therefore undecidable) calculation. Is there any relation between Conway’s result and this one?

  100. Joseph Van Name Says:

    Scott Aaronson. Have you looked at (or has Harvey Friedman pointed you towards) Laver tables to provide the desired Turing machines with few states where the question of whether such a Turing machine halts is possibly independent of ZFC (but settled under strong large cardinal hypotheses)? Let me now present two programs that illustrate how easy it is to produce computer programs using Laver tables where whether the program halts may be independent of ZFC.

    Program A

    table:=function(n,x,y)
    if x=2^n then return y;
    elif y=1 then return RemInt(x,2^n)+1;
    else return table(n,table(n,x,y-1),x+1);fi;end;
    n:=5; while table(n,1,32)=2^n do n:=n+1; od;

    Program A is a program (written in a meager 5 lines) that halts when it finds some 2^n by 2^n classical Laver table in which 1*32 is less than 2^n. Program A is not known to halt in ZFC, but under the very strong large-cardinal hypothesis that for all n there is an n-huge cardinal, program A halts. However, Dougherty has shown that program A runs for at least Ack(9,Ack(8,Ack(8,254))) steps.
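
    For readers who want to experiment, the recursion in program A transcribes directly into Python (a sketch of mine; the function name and the memoization are incidental, and RemInt(x,y) is GAP’s x mod y):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def laver(n, x, y):
    """x*y in the 2^n-element classical Laver table, following the
    same recursion as program A."""
    if x == 2 ** n:
        return y
    if y == 1:
        return x % (2 ** n) + 1   # RemInt(x, 2^n) + 1
    return laver(n, laver(n, x, y - 1), x + 1)

# The first row of the 4-element table A_2 is periodic with period 2:
print([laver(2, 1, y) for y in range(1, 5)])  # prints [2, 4, 2, 4]
```

    Program A’s loop then simply runs while laver(n, 1, 32) = 2^n.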

    Program B

    table:=function(n,x,y)
    if x=2^n then return y;
    elif y=1 then return RemInt(x,2^n)+1;
    else return table(n,table(n,x,y-1),x+1);fi;end;
    test:=true; n:=0;
    while test=true do
    for m in [0..n] do if table(n,2,2^m)=2^n and not table(n+1,2,2^m)=2^(n+1) then test:=false; fi; od; n:=n+1; od;

    If for all n there exists an n-huge cardinal, then program B does not halt. However, it is not known whether ZFC implies that program B does not halt. Furthermore, it is not known whether ZFC along with the hypothesis “program A does not halt when 32 is replaced by any other power of 2” implies that program B does not halt. As with program A, it is known that program B runs for at least Ack(9,Ack(8,Ack(8,254))) steps without halting.

    Turing machines for these programs

    It seems that not only are program A and program B easy to write in GAP or any other popular programming language, but also that one should be able to construct Turing machines with few states that simulate them.

    In particular, I believe that the Turing machines that one can construct to compute program A or program B would use closer to 10 states than 8000 states. It looks like one could build Turing machines with few states that compute program A or program B directly without needing the 4000 state interpreter (it should not be too hard to make a Turing machine that calculates table(n,x,y) simply by simulating a program running on a stack of numbers but I have not gone through the details).

  101. Scott Says:

    Ward Cleaver: Sorry, I confess that I don’t know what’s up with ToC. Maybe Laci has been too busy traveling the world giving lectures about GI!

    Eliezer Yudkowsky also suggested to me that the biggest number contest should have the rule that, to “name” a number, you must have in mind a computer program that halts and outputs that number. To me, though, that’s just an interestingly different rule, which would need to be spelled out explicitly in the instructions.

    The first point I can make is that your and Eliezer’s rule has an ambiguity of its own. Namely, how do you convince the judges that your program really does halt and output an integer? Presumably you need to give them a proof? OK then, what axioms are you allowed to assume in that proof?

    This is far from an academic worry, because it turns out that, the wilder the large-cardinal axioms you’re willing to assume, the longer-running the computer programs for which you can give proofs that they halt! (Basically because stronger and stronger large-cardinal axioms let you prove the well-definedness of wilder and wilder ordinal notations, which you can then use to define more and more rapidly-growing functions.)

    For this reason, even your and Eliezer’s contest, with two skilled participants, will probably devolve into an argument about transfinite set theory.

    But the second point I can make is that, from my standpoint, there’s a clear difference between a number’s being “determined” and its being “computable.” For me, “determined” means definable arithmetically (or at least, using ordinal notations that I agree are well-founded).

    More concretely: sure, we’ll never “know” BB(1000) by a list of its digits, but by that criterion we’ll never “know” Graham’s number either. But regardless of whether we have the digits, can anyone doubt that, just as there’s exactly one positive integer that answers to “Graham’s number,” so there’s exactly one that answers to “BB(1000)”? There’s a finite list of 1000-state Turing machines, and each one either halts or doesn’t halt, and there’s some maximum number of steps that any of the halting ones run for. And we can prove lower bounds; we know, for example, that BB(1000) dominates Graham’s number.

    Up in computational heaven, where every angel gets issued an oracle for the halting problem, they find BB(1000) as straightforward to compute as we find 99. To them, free from the epistemic limitations imposed by our laws of physics, BB(1000) is just another positive integer. And yet even in computational heaven, the angels still argue about whether the Continuum Hypothesis is true or false—and for example, they regard

    “min{100, the number of intermediate cardinalities}”

    as an ill-defined positive integer—because not even a HALT oracle can help them with those sorts of perplexities. 😉

  102. Scott Says:

    Linda #99: Conway’s construction gives a small universal machine in Fractran. I explained in the post why I regard that as a fundamentally different sort of quest from finding a small machine such that set theory can’t prove what it does on a blank input.

    To say it another way: if M is universal, you’re right that there’s some input x such that ZF can’t prove whether M(x) halts, assuming ZF is consistent. But in that case, the complexity that I care about is going to be the complexity of M plus the complexity of x, not one or the other in isolation! Alas, in practice, all the constructions of “small universal TMs” that I’ve seen minimize the complexity of M only by shoehorning a huge amount of complexity into x, which they then don’t count towards the total.

    If we want to close that loophole, and get rid of the “offshore complexity havens in Panama,” one good way is to consider TMs that already need to do something interesting even when they’re run on a blank tape.

  103. Scott Says:

    Joseph van Name #100: Thanks so much for the reference to Laver tables! I confess I’d never heard of them before your comment.

    Your program B does indeed look like an outstanding candidate for converting into a small Turing machine! But:

    (a) There’s the obvious difficulty that no one knows, at present, whether this program’s running forever is provable in ZFC—are there any related programs for which that’s known not to be provable?

    (b) I’ve learned, from experience, to withhold judgment whenever anyone confidently guesstimates how few Turing machine states will be needed to accomplish something (e.g., “probably only ten or twenty!”). I used to do that myself: you can see an example in my “Who Can Name the Bigger Number?” essay, where I guesstimated that there should be a simple 100-state machine for Goldbach’s Conjecture. But … how to say this? The rubber meets the road where the head meets the tape. 🙂

    Of course, even if Laver tables “merely” led to (say) an 800-state machine that eluded ZFC, that would still be fantastic.

    Incidentally, in your code, what does the RemInt function do?

  104. Jason Gross Says:

    Scott #95: I think what you do here is equivalent to making a Turing Machine that enumerates all ZF proofs and halts iff it finds a proof of 1=0. This Turing Machine halts iff ZF is inconsistent, i.e., if Con(ZF) is false. You can turn this into a TM that is in P if Con(ZF) is true but in EXPTIME if Con(ZF) is false by using a trick similar to the one that lets you write a concrete SAT solver that runs in polynomial time iff P=NP.

    My follow-up question is whether you can do this with other axioms, e.g., Choice or CH. I think the answer is no, though, because of the difference you mention about transfinite axioms.

    (If I believe in Tegmark IV (or level V, maybe?), then there should be a universe modeling ZF+¬Con(ZF), and it’s not clear to me how to distinguish this universe from the one we live in. The confusing phrasing of my question probably comes from this philosophical background, and what I meant by “what axioms you assume” was roughly “which axioms are true” or “which axiom system does our universe model”?)

  105. Ben Zinberg Says:

    Re #34:

    Makes sense. I.m.o., your comment #34 merits mention in the paper, both for interest and for technical correctness (if I’m reading it right, we don’t yet know whether BB(7918) is independent of ZFC, if we only assume ZFC is consistent).

    I’d be really interested to see if we could craft an encoding such that the corresponding Rosser sentence is equivalent to a natural-sounding math problem. It seems really far off, but maybe that’s mostly a reflection of how little tooling we have.

    Viewed from that perspective, Parsimony/Laconic could be the first step in a project to establish practical heuristics for how hard it is to prove a given statement (such as the size of a partially optimized Turing machine to search for a proof). I’d guess that for purposes of estimating how hard things would be for a real-world mathematician, practical heuristics could be better than ideal quantities like Kolmogorov complexity. Better yet, unlike the latter, we can compute the former without knowing the answer to the search problem. I wonder if Coq or other proof assistant researchers are working on this?

    Very cool stuff, congratulations to Adam and you.

  106. Luke Says:

    Congrats, guys. It’s satisfying to see concrete bounds in the face of uncomputable functions.

    I firmly believe (and some of the other commenters seem to agree) that the next step is an interpreter for either the lambda calculus, or SKI combinatory logic. In either case the program is essentially a tree, and as unpleasant as it will be to write trees as strings and manipulate them with a Turing machine, I can’t imagine the interpreter being more than a few hundred states. I’d guess combinatory logic is the easier of the two (you can judge the difficulty for yourself by reading the rules here, for example), but it tends to have larger programs (see below) for the kinds of tasks you’re interested in.

    As John Tromp (#89) points out above, he already has a short program for the Goldbach conjecture. It ends up compiling down to

    (\1 ((\1 1) (\\\(\\6 2 (1 (5 5 2))) (\1 1 2 (\1 4) 1)) (\\\\1))) ((\1 1) (\\\1 (\\1) ((\4 4 1 ((\1 1) (\2 (1 1)))) (\(\1 (1 (4 2))) (\\\\1 3 (2 4))))) (\\\\1 (\\2) (2 4)))

    as a lambda term (in de Bruijn notation), a tree of 247 S and K combinators, a 267 bit binary lambda calculus representation, or a 740 bit (!!) binary combinatory logic representation. If I’m doing the math right, that would mean either 45 + 77 = 122 or 106 + 87 = 232 states for the blob + extractor, so I’d say 500 states for Goldbach is a realistic goal.

    I assume you’d be able to compress Friedman’s statement by a similar factor. But even if it doesn’t compress (or you just can’t stomach rewriting it in the lambda calculus), an interpreter for the lambda calculus would connect a lot of existing research in algorithmic information theory (like Tromp’s) to Turing machines in a concrete way.
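
    To give a flavor of how little machinery the combinator interpreter core needs, here is a toy SKI reducer (my own sketch, unrelated to Tromp’s actual tooling); a real Turing-machine version would of course have to linearize the tree onto a tape:

```python
# Terms: the atoms 'S', 'K', 'I', or a pair (f, x) meaning "f applied to x".

def whnf(t):
    """Reduce a term to weak head normal form with the three rules
    I x -> x,  K x y -> x,  S x y z -> x z (y z)."""
    while True:
        args = []
        while isinstance(t, tuple):      # unwind the application spine
            t, a = t
            args.append(a)
        args.reverse()                   # args[0] is the first argument
        if t == 'I' and len(args) >= 1:
            t, rest = args[0], args[1:]
        elif t == 'K' and len(args) >= 2:
            t, rest = args[0], args[2:]
        elif t == 'S' and len(args) >= 3:
            x, y, z = args[0], args[1], args[2]
            t, rest = ((x, z), (y, z)), args[3:]
        else:
            for a in args:               # no redex at the head: stop
                t = (t, a)
            return t
        for a in rest:                   # re-apply leftover arguments
            t = (t, a)

# S K K x reduces to x, so S K K behaves like the identity combinator:
print(whnf(((('S', 'K'), 'K'), 'K')))    # prints K
```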

  107. Joseph Van Name Says:

    Scott Aaronson #103.

    Response to (a) So far program A and program B are the only computer programs about the classical Laver tables where the halting problem is solved with large cardinals but where the solution to this instance of the halting problem has not been obtained in ZFC. In fact, these two results are the only published results about Laver tables which have been established under large cardinal hypotheses but which have not been established in ZFC.

    For program B, there are no known independence results, and I would be very surprised if someone were to actually show that the halting problem for program B is independent of ZFC. The only independence result about the classical Laver tables is the result by Randall Dougherty which states that the statement “program A halts even when 32 is replaced by some other power of 2” is independent of PRA (Primitive Recursive Arithmetic), which is much weaker than even Peano arithmetic. This independence result follows from the fact that the halting time for program A (again when 32 is replaced by some other power of 2) is not a primitive recursive function. Randall Dougherty in [1] has given a lower bound for the halting time of program A which grows slightly faster than the Ackermann function. These lower bounds obtained by Dougherty have not been improved upon, so it is still unknown whether the halting problem for program A is undecidable in Peano Arithmetic or in any other system stronger than PRA.

    In [2], Randall Dougherty did work attempting to show that program A always terminates, and that the time it takes for program A to terminate is a function that grows slightly faster than the Ackermann function. However, this attempt was abandoned as the Laver table calculations became too complicated.

    I would not be too surprised if someone proves these results in ZFC or Peano Arithmetic, since people have often proven results using large cardinal hypotheses where the large cardinal hypotheses were later removed from the results. For example, Richard Laver showed that the word problem for the free left-distributive algebra on one generator was decidable using rank-into-rank cardinals (Laver’s results on the free left-distributive algebras are very closely related to his results on the Laver tables). However, later Patrick Dehornoy (I think) proved in ZFC that the word problem for the free left-distributive algebra on one generator is decidable.

    Response to (b) One reason why I am still hopeful for a simple Turing machine for calculating Laver tables is the similarity between what I think a good Laver-table-computing Turing machine would look like and the Turing machines computing Ackermann-like functions. I guess I will only find out how easy it is to construct the Laver-table-computing Turing machine once the Turing machine is built.

    The RemInt(x,y) function in the language GAP simply takes x mod y.

    [1]. Critical points in an algebra of elementary embeddings I

    [2] Critical points in an algebra of elementary embeddings II

    [3] Handbook of Set Theory 2010. Chapter 11. Elementary embeddings and algebra

    [4] Braids and self-distributivity. 2000. Patrick Dehornoy. (the primary reference for the classical Laver tables.)

  108. Jared S Says:

    Scott #33: I have a 73-state, two-symbol, one-tape TM that halts iff Goldbach’s conjecture is false, and which checks N<=1000 in about 10 seconds and N<=2000 in about 2 minutes on a desktop computer. You want me to e-mail you?

    Hash of proof of concept:

  109. Scott Says:

    Jason #104: Yes, you’re correct, there’s no way to do anything analogous with AC or CH, because there’s no way to make a finitary computer program do one thing versus another depending on whether AC or CH are true or false.

    (At most, you could make a computer program do something if it found, say, a proof or disproof of AC or CH in ZF—but we already know from Gödel and Cohen that it’s not going to find them!)

    Related to that, it’s hard to imagine how anything measurable in the physical world could depend on whether AC or CH were true or false. (Indeed, it almost sounds like a joke…)

    FWIW, I don’t think even Tegmark would assign a Platonic reality to the truth or falsehood of AC and CH—in his book, he’s very explicit that only “computable” mathematical structures are Platonically real (and therefore physically real!), in his conception.

  110. Scott Says:

    John Tromp #89 and Luke #106: Thanks so much for the thoughts!

    Yes, I completely agree that using one of these lambda calculus or combinatory logic thingies would be the natural next step. Having just warned against shoot-from-the-hip estimates of Turing machine size in comment #103, I’m now going to disregard that, and say that I think 500 states for Goldbach, 750 for Riemann, and 1000 for Con(ZFC) are all reasonable goals for an enterprising individual to aim for.

  111. Scott Says:

    Jared #108: Yes, of course!! (And is my last comment already laughably obsolete? 🙂 ) Either email me or post a comment here, or write a short article about it—I’d love to know the details! Make sure to explain the ideas behind how you got down to 73 states, in addition to listing the machine.

  112. Gabriel Nivasch Says:

    In the paper you give a short argument that BB grows faster than any computable function. Actually your argument only proves that, if f is computable, then you cannot have f(n)>=BB(n) for *all* large enough n.

    If I recall correctly, Tibor Radó in his original 1962 paper on the busy beaver function gives a somewhat more involved argument that you must actually have f(n)<BB(n) for all large enough n. (Correct me if I'm wrong.)

  113. Scott Says:

    Gabriel #112: Yes, you’re right, it takes another argument to show that for all computable functions f, we have f(n)<BB(n) for all large enough n. But I can give you that argument right now.

    Here it is: let M be a k-state Turing machine that computes f(n). Then we can clearly design another machine, Mn, that first prints n on its tape, then computes f(n), then runs for f(n)+1 steps. This Mn will have k+O(log(n)) states. Therefore

    BB(k+O(log(n))) > f(n).

    Therefore, since BB is nondecreasing and k+O(log(n)) ≤ n for all sufficiently large n, we get BB(n) > f(n) for all sufficiently large n.

  114. Paul Chapman Says:

    Scott #76: As well as the functions stack, Laconic has another infinite (unbounded) resource: the size of integer variables. Gödel encoding can therefore be used to simulate any other unbounded computational resource, including any number of stacks or queues, or even a heap.
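
    For instance (a toy sketch of mine, not Laconic’s actual mechanism), two arithmetic operations on a single unbounded integer suffice to simulate an unbounded stack:

```python
def push(s, v):
    # Encode "v on top of stack s" as 2^v * (2s + 1): the exponent of 2
    # carries the top value, the odd part carries the rest of the stack.
    return 2 ** v * (2 * s + 1)

def pop(s):
    # Inverse of push (undefined on the empty stack, encoded as 0).
    v = 0
    while s % 2 == 0:
        s //= 2
        v += 1
    return v, (s - 1) // 2

s = push(push(0, 5), 3)   # push 5, then 3, onto the empty stack
print(pop(s))             # prints (3, 32): top is 3, and 32 encodes [5]
```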

  115. Ehud Schreiber Says:

    What about a Turing machine that scans all variable values of an explicit Matiyasevich type polynomial that has no root (while this fact cannot be proved in ZFC), and halts on zero? Would this do the trick? How many states would it have?

  116. Scott Says:

    Paul Chapman #114: Right! There’s no issue with unbounded memory in Laconic, because it was explicitly designed not to have that issue. We were talking about C.

  117. Scott Says:

    Ehud #115: Once again, the issue is that the description length of the polynomial itself would also need to be counted toward the state complexity, because given a blank input, the machine would need to print the polynomial before searching for its zeroes. And even if the zero-testing machine was small, the polynomial that encoded ZFC might be quite enormous.

  118. Gabriel Nivasch Says:

    Scott #113: Your argument does not even use the undecidability of the halting problem! Am I missing something?

  119. John Tromp Says:

    Note that a 500 state TM for Goldbach still takes 11000 bits to represent its transition function. Compared to a 267 bit lambda calculus term, you’d be spending 97.6% of your description on cramming a tiny program into the straitjacket of the Turing Machine model.

    Perhaps we should admit that Turing Machines are not the most suitable for these types of investigations?

    With the lambda calculus there is also no suspicion whatsoever that its choice as programming language is artificially helping to make the program small.

  120. Scott Says:

    Gabriel #118: No, you’re not missing anything! There’s a sort of diagonalization built into the argument, but it never needs to invoke the undecidability of HALT.

  121. Timothy Chow Says:

    Adam and Scott, congratulations on a fine achievement! I hope that your work will help people understand the importance of Friedman’s achievements, which IMO have been seriously undervalued by the mathematical/scientific public.

    In terms of reducing the bound further, my gut instinct is that currently, a “direct” encoding of ZFC could be competitive with an encoding using Friedman-type statements, but only because there’s probably still an order of magnitude to be gained by “engineering” improvements. After engineering improvements “max out”, further improvements will require significant new mathematical insights, and the power of Friedman’s methods will become even more apparent.

    Regarding Vadim #62 on P ≠ NP, Friedman has emphasized that mathematics is very much a Π_1 enterprise. If someone were to prove that P ≠ NP then one of the first reactions would be to attempt to strengthen the result as much as possible. So for example, the following “slight” strengthening of P ≠ NP is Π_1: “For every n > 100, the smallest AND/OR/NOT circuit that solves all SAT instances of size at most n has more than n^(log n) gates.” So if you don’t mind overshooting a bit, you can play the same game.

    By the way, I wonder if you are aware of Jones’s work on undecidable Diophantine equations. It has the same flavor as your work and I think it would be similarly interesting to see “how low you can go” with a Diophantine equation whose lack of solutions is unprovable in ZFC.

  122. Jared S Says:

    Scott #111: Unfortunately, my dramatic reduction is specific to Goldbach, so only one part of your comment #110 is obsolete.

    I’m going to be a bit vague for a couple of days, purely for TM code golf reasons: in case other commenters (“code golf addict”?) want to try, I think Goldbach is amenable to TM code golf. This weekend I’ll post my TM, possibly slightly improved, and an explanation. But I’ll be nice and e-mail you today.

    Here’s my vague commentary. The reason that Goldbach can be reduced so much is that the desired computation is something that I can fully model in my head, consider totally different approaches, and then understand in my head how the approach would play to a TM’s strengths and weaknesses. My TM was written directly as a state machine with no abstraction.

    I expect that for more complex problems, possibly including both of yours, an interpreter-type approach like yours will dominate.

  123. Scott Says:

    John Tromp #119 (and others): I have to confess that I’ve never really understood lambda calculus (or indeed, any functional programming language), and that’s a major hole in my understanding of CS. Like, I can read an exposition of λ-calculus and say OK, I guess I more-or-less understand these rules for manipulating λ-expressions. But what a roundabout way of trying to get a Turing machine! 🙂

    Given how many people rave about λ-calculus and functional programming, there’s obviously something I’m missing.

    But it now occurs to me: if λ-calculus turned out to be the best way to build a tiny Turing machine whose behavior was independent of ZF, that’s a development that would force me to finally learn what other people keep raving about!

    Of course, you might complain that I’m still using the number of Turing machine states as my “gold standard”—saying that λ-calculus needs to win in a metric like that one (rather than the number of bits in a functional program) before I recognize its superiority. But that’s because I instinctively think about computation in terms of how you could actually implement these operations using a machine compatible with the laws of physics—something that Turing machines seem to make much more explicit than λ-calculus. And when I look at the fairly complicated manipulation rules for λ-expressions, I immediately ask myself how much Turing-machine complexity is getting swept under the hood.

    I.e., if the Turing machine model is a straitjacket, then maybe I’m only comfortable in a straitjacket? 🙂

  124. Dave Says:

    Scott #84: “In any case, this whole discussion is academic.”


  125. Scott Says:

    Dave #124: Maybe I should have a new phrase, “This whole discussion is academic²”—i.e., academic even by the standards generally operative here.

  126. Brian Muhia Says:

    This is blowing my mind, and I also love your blog. To nitpick a little, there’s a missing word in the first sentence of Section 1.1. The word ‘century’ is missing, between ‘twentieth’ and ‘which’.

  127. fred Says:

    Scott #123

    For those wanting to learn the magic of Functional Programming, I recommend reading about it and doing the programming exercises on an iPad using something like Lisping (a Lisp interpreter with a nice editor).

    Very pleasant/relaxing/fun!

  128. Sniffnoy Says:

    OK, I’ll bite: What’s so complicated about the rules for manipulation of lambda-expressions?

  129. John Tromp Says:

    Scott #124

    “It’s hopeless to design a Turing machine by hand for all but the simplest tasks, so as a first step, Adam created a new programming language, called Laconic,”

    Are these the words of someone comfortable in their straitjacket, or eager to escape it:-?

    I am comfortable enough with lambda calculus to write the Goldbach-conjecture program by hand.

  130. Scott Says:

    Sniffnoy #128: Well, look at the Wikipedia page! There’s α-conversion, η-conversion, and β-reduction, each one of which has rules about how to handle bound and free variables, renaming, etc. Maybe it would help to see how a λ-expression evaluator gets implemented with a small amount of machine code.

  131. Joe Shipman Says:

    If you look for a small TM where “does it halt on blank tape” is notoriously unsolved, then one way you might do much better than Goldbach’s conjecture is to ask whether a small instance of a 3x+1-type problem ever reaches 1. By Conway’s results, this eventually escapes the logical power of any axiomatic system. The original 3x+1 problem isn’t well suited to this because it conjecturally always reaches 1, but a TM executing the “halt on 1, cut evens in half, take odds >=3 to 3x+1” loop is a great candidate for “smallest TM where it’s notoriously hard to say if it computes a total function”.

  132. Scott Says:

    John #129: It now seems clear that Laconic was overkill for Goldbach! See comment #108 above: Jared S appears to have built a Goldbach Turing machine completely by hand with only 73 states (pending verification). Even in terms of bit-length, this is within a factor of 3 of your 267-bit binary λ term for Goldbach. We should’ve tried it ourselves.

    But, OK, what about Riemann and Con(ZFC)? Are you comfortable writing programs for those directly in λ-calculus? If not, then it seems like Turing machines and λ-calculus are both currently 1 for 3 on our example problems… 🙂

  133. Scott Says:

    Joe Shipman #131: Yeah, we actually thought about the 3x+1 problem. The drawback, from our standpoint, is just that it’s not a Π1-sentence—i.e., we don’t know how to phrase it in terms of whether a TM halts on a blank input.

  134. Mitchell Porter Says:

    Nice to see Harvey Friedman’s work crossing over in this way.

  135. Rasmus Pagh Says:

    Here is a dot file with the 7918-state machine:
    However, Graphviz crashes when I try to render it on my machine… Not sure what to make of that!

  136. Adam Yedidia Says:

    Luke #106: It isn’t totally obvious to me why lambda calculus should yield an interpreter that is much smaller than the TMD interpreter that I am currently using. But I am happy to be proven wrong! If someone is interested in trying to build an interpreter of this kind, the software and documentation in the parsimony/src/tm/ directory will probably prove quite helpful.

  137. Timothy Chow Says:

    Scott #133: Something very close to the 3x+1 problem that is equivalent to a TM halting on blank input is, “There is a cycle that does not contain 1.” The only mildly tricky point is that you need to multitask over all starting numbers at once; i.e., if C(i,j) denotes the result of iterating the Collatz map j times starting with i, then you need to traverse all pairs in such a way that any fixed (i,j) is reached after a finite number of steps.
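
    Concretely, a bound-limited sketch of that traversal (mine; the real search simply runs forever unless such a cycle exists):

```python
def collatz(x):
    return x // 2 if x % 2 == 0 else 3 * x + 1

def finds_nontrivial_cycle(max_rounds):
    """Dovetailed search: in round N, run each start 2 <= i <= N for up
    to N steps, halting if the orbit returns to i without passing
    through 1 (i.e., i lies on a cycle not containing 1)."""
    for N in range(2, max_rounds + 1):
        for i in range(2, N + 1):
            x = i
            for _ in range(N):
                x = collatz(x)
                if x == 1:
                    break
                if x == i:
                    return i
    return None

print(finds_nontrivial_cycle(100))   # prints None: no cycle found in range
```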

  138. John Tromp Says:

    Scott #133 A 73 state TM takes 73*2*(ceil(log(73))+1+1) = 1606 bits, 6 times more than goldbach.blc, but indeed quite impressive for a TM!
    I’d be comfortable writing Riemann and Con(ZFC) in a lambda calculus with some syntactic sugar on top, which is basically what Haskell is.

  139. Sniffnoy Says:

    Scott #130: Aside from some slight naming and bound/free weirdnesses that I don’t think are that practically relevant — and that are unlikely to trip you up if you keep in mind what the rules actually mean — I don’t think any of that is really complicated. Or if you do consider it complicated, it’s certainly less complicated than formal logic, which has pretty much exactly the same rules for dealing with renaming and bound/free variables, but whose rules for dealing with ∀ and ∃ are, at least in every formulation I’ve seen, more complicated than the three (two once you drop renaming) rules of the lambda calculus.

    You might say, well, that’s not a problem with first order logic, because we know what ∀ and ∃ mean, and don’t need so much to rely on the formal rules; but we know what λ means too, so, like, I don’t think that’s a real difference.

    And I mean, the three rules are just:
    α-renaming: What variables are called does not matter. I.e., you can rename them.
    β-reduction: Functions do what they say they do.
    η-conversion: “The function that takes x and returns f(x)” is just the same thing as f.

    The one really weird thing I think, is the possibility of doubly-binding a variable — you need some convention to define what the hell is meant by λx.λx.x; there’s nothing intuitive about that. As it is, it’s defined that inner bindings override outer ones, so the above is equivalent to λx.λy.y. But once again, formal logic has the same problem; you need to define what’s meant by ∀x∃x P(x), and the convention is the same, that it means ∀x∃y P(y). (Kind of a dumb statement since it has no real dependence on x, but…) And of course you might object that nobody would ever write a statement like that, but the same applies to the lambda-calculus — why would anyone ever write a function like λx.λx.x? Just write λx.λy.y so people can tell what you mean.

    Note by the way that β-reduction doesn’t care about whether variables are bound or free; and while α does, well, α is just a rule saying you can rename variables, so how that interacts with boundness should already be intuitive even if writing it out formally is annoying — as I said, it’s no different than how you can rename variables in formal logic, or, well, anywhere in math. The only nontrivial rule where it’s an issue is η. But again, keep in mind what it means and it’s simple; η is supposed to mean “the function that takes x and returns f(x) is just the same thing as f”, not “the function that takes x and returns f_x(x) is just the same thing as f”, because the latter statement is obviously false.

    And if you really want to avoid renaming and bound/free issues, you can always use De Bruijn index notation… though personally I find that to be just about unreadable, and it makes the two remaining rules (α is unnecessary when variables don’t have names!) considerably less clear.
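    The three rules above are compact enough to implement directly. Here is a minimal sketch in Python (my own toy term encoding, not any standard library): a variable is a string, ("lam", x, body) is λx.body, and ("app", f, a) is application.

```python
def free_vars(t):
    """Free variables of a term."""
    if isinstance(t, str):
        return {t}
    if t[0] == "lam":
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def subst(t, x, s):
    """Capture-avoiding substitution t[x := s], α-renaming when needed."""
    if isinstance(t, str):
        return s if t == x else t
    if t[0] == "app":
        return ("app", subst(t[1], x, s), subst(t[2], x, s))
    _, y, body = t
    if y == x:                        # inner binding shadows x: stop here
        return t
    if y in free_vars(s):             # α-rename y to avoid capturing it
        fresh = y + "'"
        while fresh in free_vars(s) | free_vars(body):
            fresh += "'"
        body, y = subst(body, y, fresh), fresh
    return ("lam", y, subst(body, x, s))

def beta_step(t):
    """One leftmost-outermost β-step; returns None if t is in normal form."""
    if isinstance(t, str):
        return None
    if t[0] == "app" and not isinstance(t[1], str) and t[1][0] == "lam":
        return subst(t[1][2], t[1][1], t[2])   # (λx.body) a → body[x := a]
    for i in (1, 2):
        r = beta_step(t[i])
        if r is not None:
            u = list(t)
            u[i] = r
            return tuple(u)
    return None

def normalize(t):
    """β-reduce to normal form (may loop forever if none exists)."""
    while (r := beta_step(t)) is not None:
        t = r
    return t

def eta_contract(t):
    """η: λx. f x → f, provided x is not free in f."""
    if (not isinstance(t, str) and t[0] == "lam"
            and not isinstance(t[2], str) and t[2][0] == "app"
            and t[2][2] == t[1] and t[1] not in free_vars(t[2][1])):
        return t[2][1]
    return t
```

    For example, with TRUE = λx.λy.x encoded as ("lam", "x", ("lam", "y", "x")), normalizing ("app", ("app", TRUE, "a"), "b") yields "a", and eta_contract applied to λx. f x yields "f".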

  140. Scott Says:

    John Tromp #138: Sorry, I was counting the number of bits needed to describe an n-state TM as growing like n(log₂n + O(1)) rather than 2n(log₂n + O(1)), because of the need to mod out by permutation symmetry (as Stefan O’Rear #96 pointed out).

    But John Tromp and Haskell: OK, it sounds like I should just set aside a week sometime to learn how to code in Lisp or Haskell!

  141. John Tromp Says:

    Scott #131: A simple machine implementation of lambda calculus is shown in Figure 1 of
    An obfuscated C implementation of the Krivine machine is shown at the top of my homepage.

  142. Sniffnoy Says:

    Haskell is really nice, it’s a lot like just writing math!

  143. John Baez Says:

    Scott wrote: “if λ-calculus turned out to be the best way to build a tiny Turing machine whose behavior was independent of ZF, that’s a development that would force me to finally learn what other people keep raving about!”

    The pattern of this remark is familiar to category theorists: “if you can promise that the mathematics you love turns out to be the key to solving whatever highly specialized, technical problem I’m interested in at the moment, then I’ll learn it.” And there’s an unspoken addendum: “But otherwise, forget it.” 🙂

    Turing machines are idealized, ultra-simple hardware; the λ-calculus is an idealized, ultra-simple programming language. So, their charms are complementary.

  144. Michael P Says:

    ‘… if you’re in a biggest-number-naming contest, and you write “BB(10000),” you’ll destroy any opponent …’

    That was a wonderful essay indeed. IIRC, one of the requirements for the game in that essay was that one should name a number that would be unambiguous to a reasonable mathematician… Wait, didn’t Yedidia and you just disqualify the entry of “BB(10000)” in that game by proving that it’s not computable? 🙂

  145. Scott Says:

    John Baez #143: Yes, I’ve noticed a strong analogy between λ-calculus and category theory—both in terms of how their proponents talk about them and in terms of how I respond!

    But come on: it’s not like I said, “unless the mathematics you love helps me with my problem, I’m going to condemn it and ban you from studying it.” I just said I probably won’t get around to studying it myself!

    But in any case, this time it looks highly plausible that λ-calculus will help us—in which case, I will indeed have no choice but to learn it!

  146. Scott Says:

    Michael P. #144:

      Wait, didn’t Yedidia and you just disqualify the entry of “BB(10000)” in that game by proving that it’s not computable?

    Please see my comment #101 to Ward Cleaver, which answers exactly that.

  147. Stefan O'Rear Says:

    Paul Chapman #114: You may have noticed that my ZF/FOL program does, in fact, use unbounded integers to simulate a stack 🙂

    John Tromp #119: while I think that, with the benefit of hindsight, you may be right that Turing machines are not the best choice for algorithmic information theory, they are the rules Radó gave us to work with.

    Timothy Chow #121: I think you are absolutely right, the future belongs to Friedman-type statements. Many people overestimate the complexity of FOL, but there’s still a limit to how far it can go.

    Jared S #123: I’ll be interested to see what you’ve done, and I’m also kind of interested in what a skilled TM-golfer can do with a minimal FOL enumerator like mine.

    Sniffnoy #139: it looks like Tromp et al are already using de Bruijn numbers. IIRC, it’s not that bad if you stick to normal order reduction, as you won’t have to adjust any numbers that way.

    NOT A REPLY: There was a brief thread on the FOL program on the Metamath list. Norman Megill, creator of the axioms I’m using, pointed out that I can’t just drop Regularity because it’s required in order to recover standard Infinity from his axinfnd; I’ve revised the gist to include a version of Regularity.

  148. Gabriel Nivasch Says:

    Timothy #137: No big deal. Do as follows: For i = 1, 2, 3, … test whether, starting from any j<=i, you fall into a loop in at most i steps. You just start from scratch each time!

  149. John Baez Says:

    Scott #145 – I hope you learn the λ-calculus, just to see if you too start trying to convert others to that religion. 🙂

    You may be amused to know that when Jim Lambek supposedly proved that the λ-calculus was just a formalism for working with cartesian closed categories, a number of category theorists said “whew, so now I don’t need to learn the λ-calculus!”

    I don’t know if any experts on the λ-calculus said “whew, so now I don’t need to learn cartesian closed categories!”

    (When I first tried to learn the λ-calculus, I thought it might be true that it was just a formalism for working with cartesian closed categories – but I discovered it was not. There’s more to it, even from a purely category-theoretic view.)

  150. Stefan O'Rear Says:

    Gabriel Nivasch #148: You don’t even need a cycle finder, since a Collatz cycle which does not contain 1 is the same as a Collatz cycle which starts at >4. (That reminds me, I found a way to derandomize your logspace cycle finder a few months ago using a variant of the Cole-Vishkin graph coloring. I’ll write up something if you haven’t already heard it.)

  151. code golf addict Says:

    I’ve got 31 states for Goldbach. Props to Jared S for being the first to put up a number! My solution’s SHA-256 is e24e155537f3dc3537265ec29d3118c4b00cfa1542a20ae61af0266da5400066.
    Hmm, when should we code golfers reveal our solutions?

  152. Scott Says:

    code golf addict #151: Wow!!! That’s incredible. Could you email me your solution, along with an explanation of it—the more details the better? Then I’ll be happy to call it out in the post, whenever you’re ready for that.

    This whole “posting the SHA-256 of your solution” thing is awesome, but not normally how we’ve rolled in academia since the 1600s or so (when Galileo did something similar, not with SHA-256 of course, but with a cryptogram of his own design)… 😉

  153. John Tromp Says:

    The basic question this research is trying to answer is: “What is the complexity of undecidability?”

    And we’re talking about descriptional complexity: the size of a description of some machine or program whose behaviour is undecidable.

    An important concept in Algorithmic Information Theory is universality. A description method is universal if it gives sizes at most a constant larger than any other (computable) method. This requires comparable size measures, so we normally measure sizes in bits.

    For Turing machines to be universal in this sense of additive overhead, they must allow for binary input, and have their size measured as the combined length of an encoding of their transition rules and the input consumed (we also need to be able to tell these apart, but let’s ignore that for now). We can then in fact consider a fixed Universal Turing Machine that parses a description of another TM from its input and simulates that TM on the remainder of the input. In this view, TMs are just another programming language.

    This would get around the need for “introspective encoding”.

    I think TMs will be more efficient if they can read input from a separate input tape, but we don’t want to complicate the transition function too much. Perhaps, in addition to the special HALT state, there could be a special INPUT state: if you transition from state i to INPUT, you actually end up in state i+1 with the next bit of input written, or XORed, into the current cell. Has anything similar to that been proposed before?

  154. John Baez Says:

    I got so interested in Laver tables that I wrote a semi-pop article about them here. It’s truly jaw-dropping math.

  155. wolfgang Says:

    @John Baez #143

    >> the λ-calculus is an idealized, ultra-simple programming language

    It seems that for every programming language there is a book “how to learn … in 24 hours – for dummies and complete idiots.”

    Is there a similar thing for the λ-calculus, if possible freely available? 😎

  156. Jay Says:

    John Baez #154

    Suppose we design a Turing machine that iterates over the P(n) and halts when it finds 42. This machine does have a well-defined behavior: either it stops, or it doesn’t.

    But if it stops, then we don’t need I3 to prove it: just construct the TM and wait (a physically impossibly looong time) until it stops. So it must run forever.
    But if it runs forever, then the negation of I3 must be inconsistent within ZF (or within any system of axioms powerful enough to construct the above TM), even if this particular proof is physically impossible to reach. So I3 must be true.
    But no one seems to say that I3 has been proven, so what am I missing?

    PS: thanks all for the WOW post and amazingly interesting comments! 🙂

  157. fred Says:

    in your 1998 essay “Who can name the bigger number”, you wrote

    “And in Go, which has a 19-by-19 board and over 10^150 possible positions, even an amateur human can still rout the world’s top-ranked computer programs.”

    At the time, would you have been very surprised if someone had told you that this would no longer be true 18 years later?

  158. Scott Says:

    Jay #156: From reading John’s post, I believe the situation is the following. If P(n) reaches a finite maximum, then the I3 axiom is inconsistent with ZFC. (We might also say, “I3 is false”—but when possible, I prefer to talk about the arithmetical consequences of large-cardinal axioms, rather than the axioms themselves.) If, on the other hand, P(n) grows without bound, then I3 might or might not be consistent with ZFC. Meanwhile, ZFC might or might not prove all by itself that P(n) grows without bound; that’s an open problem.

    But notice that neither eventuality—P(n) growing without bound or it reaching a finite maximum—can be certified by a Turing machine’s halting! A Turing machine can only halt and tell you that P(n) has reached some specific bound, like 32.

    In any case, though, I personally have no strong intuition as to whether I3 should be consistent with ZFC or not.

    I completely agree that Laver tables are amazing, and am glad to have learned about them!

  159. Scott Says:

    fred #157: Hard to say … I think I would’ve been interested and impressed but not shocked.

  160. Emilio Pisanty Says:

    Your big-numbers article is strangely topical, particularly the part that goes

    > And in Go, which has a 19-by-19 board and over 10^150 possible positions, even an amateur human can still rout the world’s top-ranked computer programs.

  161. Dave Says:

    Forgive me if I’m missing something (or you’ve answered this already), but when you say:

    “To see why, consider again a Turing machine M that halts if and only if there’s a contradiction in ZF set theory. Clearly such a machine could be built, with some finite number of states k. ”

    Is the implication that the minimum k such that you could construct such a Turing machine is the first k for which you cannot prove it will run forever in ZFC, or is it possible that even this bound is not tight – i.e. there may be even a smaller k?

  162. Philip White Says:

    This is a really neat project! I wonder: how hard would it be to code a universal Turing machine in Laconic? (Has anyone already done this?)

  163. Scott Says:

    Dave #161: Great question! Yes, it’s conceivable that there exists a k for which no k-state Turing machine’s running forever provably implies Con(ZFC), and yet there’s nevertheless a k-state machine that does run forever, and that ZFC can’t prove runs forever, for some other reason.

    On the other hand, in practice, I believe that essentially the only way we know how to show that a Turing machine’s running forever is unprovable in ZFC, is to show that its running forever implies Con(ZFC). (Any experts can correct me if I’m wrong.)

  164. Scott Says:

    Philip #162: I don’t think it would be hard at all to code up a universal Turing machine in Laconic (at least, were Laconic appropriately extended to deal with Turing machines that take input). But since we already know tiny universal Turing machines, what would be the point?

  165. Timothy Chow Says:

    Jay #156: The gap in your argument comes when you say, “So it must run forever.” You are correct that if the TM stops then this fact can be proven without I3. In this sense I3 is not “needed.” But when people say that I3 is “needed” to prove that P(n) eventually hits 32, they mean only that nobody knows a proof of this fact that doesn’t invoke I3 (or something similar), not that there couldn’t be one.

  166. Philip White Says:

    Scott #164: I hadn’t initially realized that Laconic isn’t designed to construct Turing machines that take input. But if Laconic programs can be compiled to Turing machines, and since every variable when declared is initialized to 0, shouldn’t the inner workings of the Turing machine that gets compiled allow the TM to still work on other inputs…even if the Laconic interpreter itself can’t simulate the TM on other values?

  167. Stefan O'Rear Says:

    Scott #163: One approach that comes to mind is to have a machine that searches for ZFC-proofs of its own non-termination, à la the first incompleteness theorem. I think this would not give you Con(ZFC), but an expert would have to weigh in.

  168. Timothy Chow Says:

    Stefan #167: The Gödel sentence G of the first incompleteness theorem (which states “G is not provable in T”) is equivalent to Con(T). To see this, note first that if we assume G, i.e., if we assume that G is not provable in T, then in particular T is consistent, since inconsistent systems prove everything. Conversely, if we assume that T is consistent, then by the first incompleteness theorem, we can conclude that G is not provable in T; i.e., we can conclude G.

  169. John Baez Says:

    Wolfgang #155 wrote:

    “Is there a similar thing for the λ-calculus, if possible freely available? 😎”

    My own route to the λ-calculus went through category theory, so I’m not the best guy to ask. There’s this:

    • Palmström, The Lambda Calculus for Absolute Dummies (like myself) .

    but personally I find this clearer:

    • Wikipedia, Lambda calculus.

    together with this stripped-down, formal treatment:

    • Wikipedia, Lambda calculus definition.

  170. John Baez Says:

    Scott #158 wrote:

    “In any case, though, I personally have no strong intuition as to whether I3 should be consistent with ZFC or not.”

    Indeed, I3 was intended to come almost as close as possible to being inconsistent without actually being inconsistent. So this is risky territory!

    There’s a beautiful kind of large cardinal called a Reinhardt cardinal. Unfortunately, it was proved to be inconsistent with ZFC in 1971. So, set theorists backed down a bit and formulated axioms I0, I1, I2, and I3, which state the existence of somewhat smaller large cardinals (with I0 being the biggest).

    Nobody has been able to prove these are inconsistent with ZFC. But they’re trying to get as close to the fire as they can without getting burned! It’s a high-powered version of “trying to name the biggest number”.

    Indeed, Eliezer Yudkowsky used an I0 rank-into-rank cardinal to win a contest for naming the largest computable natural number. But if the existence of this cardinal turns out to be inconsistent with ZFC, his number will turn out to have been a mirage.

  171. Stefan O'Rear Says:

    Scott #110: I’ve got some work in progress that looks like it’ll hit all three of your benchmarks, although it’s far too general to compete with code golf addict #151 …

  172. Scott Says:

    Stefan #167, Timothy #168: While, indeed, Stefan’s idea doesn’t quite work (because the Gödel sentence G(ZFC) is equivalent to Con(ZFC)), I thought of something else that in principle would work: the Rosser sentence, R(ZFC). That is, we could build a Turing machine M that halts iff it finds a ZFC proof that M runs forever, preceded by no shorter ZFC proof that M halts.

    We then have:
    (1) Con(ZFC) implies that M runs forever.
    (2) ZFC can’t prove that M runs forever, without being inconsistent.
    (3) There’s at least no obvious proof that M running forever implies Con(ZFC). (For even if M runs forever, ZFC could still have an inconsistency, as long as the proof that M halts is shorter than the proof that it runs forever.)

    In practice, though, it’s hard to imagine how a machine whose running forever was equivalent to the Rosser sentence would be smaller than a machine whose running forever was equivalent to the Gödel sentence and to Con(ZFC).
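    For concreteness, the control loop of such a machine M might look like this (a toy sketch: `proofs` and `proves` are stand-ins for a real ZFC proof enumerator and proof checker, which are of course the expensive part):

```python
def rosser_search(proofs, proves):
    """Sketch of the Rosser machine's control loop.

    `proofs` yields candidate proofs in order of length; proves(p, s)
    checks whether proof p establishes statement s. The real machine
    halts in the 'halt' case and enters an infinite loop in the 'loop'
    case; returning labels here just makes the sketch testable.
    """
    for p in proofs:
        if proves(p, "M runs forever"):
            return "halt"   # found a non-halting proof first: M halts
        if proves(p, "M halts"):
            return "loop"   # a halting proof came first: M runs forever
    return None             # no relevant proof ever found
```

    The point of checking proofs in order of length is exactly Rosser’s trick: M’s behavior depends only on which of the two proofs shows up first, not on ZFC’s consistency.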

  173. jonas Says:

    You mention how the Collatz conjecture and many other conjectures are not Π₀ problems, so we can’t disprove them with a Turing machine. Could you tell me good examples of famous conjectures or open problems that are thought to be very difficult to resolve, but not generally conjectured to be independent of ZFC, and that are Π₀ statements? The best I can come up with are the value of the sixth Ramsey number (this is actually Δ₀; we can compute it with a program with a known time bound), or the Hadwiger conjecture about graph coloring.

  174. Scott Says:

    jonas #173: I think you mean Π1 statements. This post already mentioned two famous examples: Goldbach’s Conjecture and the Riemann Hypothesis! But there are many, many others:

    – Any conjecture about a Diophantine equation not being solvable, or only having a fixed list of solutions (e.g., your favorite generalization of Fermat’s Last Theorem)

    – Strengthenings of P≠NP, such as 3SAT having no circuits of size n^log(n) (more generally, Π1 strengthenings of almost any asymptotic conjecture in combinatorics or theoretical computer science)

    – The conjecture that the frequencies of the digits of π never deviate from their means by more than a prescribed value; other such conjectures in probability theory

    For the “other” Clay problems—Navier-Stokes existence and smoothness, the Yang-Mills mass gap, the Birch and Swinnerton-Dyer conjecture, etc.—their logical complexity is an interesting question to which I don’t know the answer. I don’t know that anyone has even bothered to check! But if nothing else, I’m pretty sure all these statements have plausible Π1 variants “in their neighborhood” (e.g., that imply them or are implied by them).

  175. Benoit Says:

    I read this post with interest, and I have a layman’s question: why focus on ZFC, and not on Peano or Robinson arithmetic? Since Gödel’s incompleteness theorems apply to formal systems containing Robinson arithmetic, the latter seems like a “distinguished” system, more so than ZFC (please correct me if I’m wrong). Besides, it might be easier to handle, since it is finitely axiomatized, with no axiom scheme.
    Maybe the difficulty is in proving that the consistency of Robinson arithmetic implies that the constructed TM runs forever?

  176. Sniffnoy Says:

    I too basically learned the basics of λ-calculus from Wikipedia. Basically the one big thing about it I remember learning elsewhere was that it’s confluent, which wasn’t mentioned on Wikipedia at the time, but now it is, so! (I’m not sure Wikipedia’s presentation is really the greatest at the moment, but, it’s probably good enough?)

    Here’s a neat little λ-calculus challenge I like to give people: Write a function that takes (in the usual curried way) first a whole number n, followed by n arguments, and returns them in a list. (By a list I mean a linked list. You can represent all this however you like; basically I’m assuming that you’ve already built up sensible “primitives” for working with numbers, with ordered pairs, and with “nil”.)

    Note that this isn’t possible, in say, Haskell; there’s no possible type for such a function! It’s only possible in the untyped λ-calculus. But it’s a little tricky.

    BTW, this has been linked here before, but here’s a MathOverflow question where people try to place the various Millennium Prize problems on the arithmetic hierarchy. For the most part, they don’t come to any conclusions beyond what’s already been said, but apparently Navier-Stokes is Π_2.

  177. Ricardo Barreira Says:

    Regarding this sentence in the paper:

    “It is therefore impossible to prove the value of BB(4,888) without simultaneously proving or disproving Goldbach’s conjecture.”

    This says that any proof of BB(4888) would prove whether G halts or not. Is that actually true?

    A proof of BB(4888)=x implies that G either doesn’t halt or halts in a number of steps <= x (which still leaves Goldbach's conjecture undecided). Why does it need to imply anything more than that?

  178. Scott Says:

    Benoit #175: We chose ZFC because it’s around the “maximum” of what working mathematicians would normally assume (whereas PA is clearly less, since it doesn’t even allow ordinal induction); because Harvey Friedman had ZFC-independent statements that were suitable for our purpose; and because the most famous PA-independent statements, like Goodstein’s Theorem and the non-losability of the hydra game, are not Π1. But as Adam and I said before, the smallest TM whose halting is independent of PA is one of the most interesting directions for future research! And conceivably a PA-independent TM could be found that was vastly smaller than the smallest known ZFC-independent TM.

    I don’t understand the power of Robinson arithmetic well enough to be able to comment much, but isn’t it the case that Robinson arithmetic can’t even prove that addition is commutative?

  179. Scott Says:

    Ricardo #177: A more careful statement would be that, if you knew BB(4888)—or scratch that, BB(31)—then the problem of proving or disproving Goldbach’s Conjecture would be reduced to a “mere finite calculation.” You’d simply need to run your Goldbach TM for BB(31) steps, and see if it had halted by then! If not, then it can never halt.
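    At the level of pseudocode, the search that the Goldbach machine performs is simply the following (a Python sketch of the standard brute-force search, not Adam’s actual TM; the `bound` parameter is my own addition to illustrate the “mere finite calculation” point):

```python
def is_prime(n):
    """Trial division; fine for a sketch."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def has_goldbach_split(n):
    """Is the even number n a sum of two primes?"""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def goldbach_machine(bound=None):
    """Halts (returning n) iff some even n >= 4 has no Goldbach split.
    With `bound` set, this becomes the finite calculation: if no
    counterexample appears up to the bound, return None."""
    n = 4
    while bound is None or n <= bound:
        if not has_goldbach_split(n):
            return n
        n += 2
    return None
```

    Knowing BB(31) would, in effect, let you replace `bound` with a step budget large enough that a None result settles the conjecture outright.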

  180. Sniffnoy Says:

    Yeah, despite being enough for Gödel, Robinson arithmetic is really weak. On that note, here’s an interesting paper (stemming from this MathOverflow answer, also worth looking at) I encountered recently. It considers the problem — given a Diophantine equation, is it consistent with Robinson arithmetic? Like, is there at least some model of Robinson arithmetic where it has a solution? For stronger theories like PA this is undecidable, of course, but for Robinson arithmetic instead it is NP-complete! The models constructed for the proof are pretty far from resembling the actual natural numbers, unsurprisingly, and I think do a good job demonstrating the weakness of Robinson arithmetic.

    If you really want to go weaker than PA, maybe elementary function arithmetic might be an appropriate challenge, in light of “Friedman’s grand conjecture”? I don’t know, this is really not my area either!

  181. Ricardo Barreira Says:

    @ #179 Thanks Scott, that’s what I suspected.

  182. James Graber Says:

    I don’t understand this thread well enough to properly ask my question, and I don’t understand Hamkins–Miasnikov, arXiv:math/0504351 (H-M), well enough either. Nevertheless, I want to ask whether it would be possible to use techniques similar to those in Yedidia–Aaronson (Y-A) to tighten up H-M so that it says linear or quadratic or whatever in place of polynomial, and also so that a specific TM representation, or a specific UTM, is specified. If it is not obvious, the point is to make H-M into a real working tool.

  183. Sam Howley Says:

    Off topic:

    Do you have any thoughts on Donald Hoffman’s interface theory of consciousness, and his idea to start by mathematically describing consciousness and then deriving down to physical reality, rather than the standard approach of going the other way?

  184. John Sidles Says:

    In regard to Sniffnoy’s mention (in #180) of Elementary Function Arithmetic (EFA) and Friedman’s Grand Conjecture: Friedman’s original discussion (1999) particularly focused upon Hilbert’s elegantly short, but transfinite and hence nonconstructive, proof of what is today called Hilbert’s Basis Theorem, as contrasted (by Friedman) with modern-day proofs that are longer and stodgier, but on the other hand are EFA-compatible and explicitly constructive.

    As Friedman observes

    “These are seriously different proofs!!!”

    (punctuation by Friedman).

    For practical calculations the more-modern, less-elegant (?) EFA-compatible constructive methods have substantial advantages. Two well-regarded texts that teach these methods are David Cox, John Little, and Donal O’Shea’s Using Algebraic Geometry (2nd edition, 2005) and their Ideals, Varieties, and Algorithms (3rd edition, 2007).

    Here’s a Ken Wilson quote that illuminates the above considerations (and also my earlier comment #63 on Gröbner bases):

    I particularly got interested in the question if I had a computer big enough, how would I do this computation [in quantum statistical mechanics]? That became sort of a guiding point to drive me to how to think about the [quantum simulation] problem.

    How can I convert this from a problem of infinite number of degrees of freedom, which you can’t deal with anyhow, to a problem which is finite even if it was so large that you would have to have an astronomical size computer?

    I just wanted to convert it from an infinite number of degrees of freedom to a finite number.

    From “Interview with Kenneth G. Wilson” (2002, Google finds it).

    Conclusion  Plenty of enduringly successful projects in mathematics and physics have begun with insights gleaned from explicitly constructive, explicitly finitistic computations, similar to Adam Yedidia’s project.

    As the poet Pope might have put it:

    Reduce the space,
          if possible, with grace.
    If not, by any means
          reduce the space.

    Congratulations to Adam Yedidia and his colleagues for a fascinating research program that is well-begun.

  185. Scott Says:

    Sam #183: No, sorry, I don’t know anything about that.

  186. Will Says:

    We can attempt to classify instances of the halting problem into a few different types:

    1. We know the answer to this question.
    2. We can prove that this question is equivalent to some open mathematical problem.
    3. We can prove this question is equivalent to the consistency of some theory, or is otherwise undecidable in some standard theory.
    4. We have no idea how to turn this question into a human-comprehensible mathematical problem – the program looks totally random but it inexplicably continues to run for a very long time. As far as we know, we could run it for a few more steps and it would halt, or we could find a proof the next day that it runs forever.
    5. We are quite confident that we will never figure out how to turn this question into a human-comprehensible one and would be shocked if it was proved unsolvable.

    Of course these boundaries are somewhat subjective and problems can move between the different categories. Still I think they are a pretty good classification.

    What I want to know is – in some model of computation, is the first hard instance of type 2,3,4, or 5?

    The pessimistic answer is of course 4, that we reach the limits of our understanding before we reach the precious jewels of interesting mathematics. For n-state 2-symbol Turing machines, I think the answer is pretty likely 4. In particular note that the shortest Turing machine currently of type 2 has 31 states, and that it is only known to be of type 2 to one person (code golf addict #151), and that any other mathematician looking at the Turing machine might very well fail to detect the pattern. Given the Busy Beaver number data, it seems likely that there are very many Turing machines of at most 30 states that do not halt in the time we can run them on our computers.

    This question gets more interesting if we use more human-comprehensible languages, and in particular human-comprehensible languages that are useful for representing mathematical concepts. The extreme end here might be Peano Arithmetic, which I guess does not always produce an instance of the Halting problem. Is it true that every statement in Peano arithmetic using at most 33 symbols is one of the first two types?

    One could also ask this about standard programming languages. Do we reach total combinatorial chaos before we hit the ordinary mathematical kind of difficulty?

    Surely for really weird Turing complete languages like Wang tiling we reach 4 long before we reach 2 and 3.

  187. Gabriel Nivasch Says:

    Regarding TM’s that look for a counterexample to Goldbach’s conjecture: Here is how I would build one.

    Let us first allow more than two tape symbols. There is one basic nonblank symbol “b”, plus other symbols, which we can think of as post-it labels of different colors that the machine uses to mark different places.

    Say the machine wants to check whether the even number n is the sum of two primes. It has n written out in unary in the tape as bbb…b.

    To represent the splitting of n into two numbers x+y, it sticks a “blue” label on the x-th “b”.

    To check whether x (or y) is prime, the machine tries, for each i = 2,3,…x-1 the following: Hop i steps at a time along the representation of x, and see whether in the last hop you fall exactly at the end of x. (Then x is not prime.)

    To do the hopping, you have to have the number i written in unary somewhere else on the tape, and you have to go back and forth between the representation of x and the representation of i, using and moving around labels of a few other colors.

    The details are left as an exercise to the reader… 🙂

    How do we convert this into a two-symbol machine? Well, if the number of distinct label colors is k, then we leave k spaces between each pair of adjacent “b”s, and we use these spaces to put “flags” that mark the presence of labels of different colors.

    I did not work out the details, but I cannot imagine that the number of states would reach into the thousands…
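
    Gabriel's hopping procedure translates directly into ordinary code. Below is a minimal Python sketch of the same idea (my own illustration, not code from the paper; the function names are made up): x is composite exactly when some hop length i in 2..x−1 lands precisely on the end of x's unary representation.

```python
# A direct translation of the hopping procedure into ordinary code
# (an illustration of the idea only, not code from the paper).

def is_prime_by_hopping(x: int) -> bool:
    """x is composite iff, for some i in 2..x-1, hopping i cells at a
    time along x's unary representation lands exactly on its end
    (i.e. i divides x)."""
    if x < 2:
        return False
    for i in range(2, x):
        pos = 0
        while pos < x:
            pos += i              # one hop of length i
        if pos == x:              # landed exactly at the end of x
            return False
    return True

def goldbach_holds_for(n: int) -> bool:
    """Try every split n = x + y (the position of the 'blue label')
    and test both parts for primality."""
    return any(is_prime_by_hopping(x) and is_prime_by_hopping(n - x)
               for x in range(2, n - 1))
```

    A Goldbach-counterexample machine would then just loop this check over even n forever, halting only if some goldbach_holds_for(n) comes back false.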

  188. Michael Says:

    And in some sense Busy Beaver is just an explanation device for the real answer: Arithmetic Beaver.

    Even Yudkowsky’s definition already depends on the chosen natural number model: «the length of the shortest contradiction in ZFC+CH» can be ill-defined or well-defined depending on the chosen model of PA (or ZFC, or ZFC+CH)

    And if we allow such dependence, we should use it to the max.

    So my answer to BB(1000000) is «the largest number that can be described as the minimal number (finite ordinal) satisfying some formula in ZFC language (including the functional symbols for standard operations and constant ø) with no more than 1000000 predicate symbols, functional symbols and constant symbols combined»

  189. Aula Says:

    Scott #174: You use “etc.” to stand for just the Hodge conjecture, as it’s the only unsolved Millennium Prize Problem that you don’t mention. While the other MPPs are indeed quite likely to be Pi_1, I’m not so sure that HC is. Of course that could just be because I don’t really understand anything about HC, but if there is a proof that it’s Pi_1, I’d like to know.

  190. Beaver sempre più Busy | Massimo Morelli's blog Says:

    […] to the Busy Beaver; you’ll forgive me if this time I don’t do any background reading at all and limit myself to pointing you to this interesting article (by the usual Scott Aaronson), which popularizes a new result about the charming little beast. If you […]

  191. jonas Says:

    Scott re #174: yes, I mean Π₁ instead of Π₀, and even with that change the question I asked wasn’t very good.

    I’m not sure about the status of Goldbach conjecture itself. Do the number theorists who actually understand that topic still consider it such a hard conjecture that they don’t expect it to be cracked in a few decades? There have been a lot of advances on it.

    The Fermat generalizations and statements about the digits of π are a good suggestion, there are probably some that people consider very difficult. Although now you mention it, are there also Π₁ versions of Schanuel’s conjecture? Something like that might be even more convincing.

    About “3SAT having no circuits of size n**log(n)”, that’s a nice idea, if you put some realistic lower bound on n (like n is greater than 2**64 or something) then indeed you’d get a Π₁ statement that sounds like a good strengthening of P!=NP. I didn’t think there were Π₁ versions like this, it’s obvious only in retrospect, so thank you for mentioning that.

  192. Scott Says:

    Michael #188: Aha, but if I’m setting the rules of the biggest number contest, then I reject any number whose identity depends on an “intended model of set theory.” As it happens, I just had an email exchange about this with Agustin Rayo—he’s my colleague at MIT who invented Rayo’s number—and Agustin was clear and explicit that, if you reject the idea of an “intended model of ZFC” (something that would even decide, e.g., the Axiom of Choice and Continuum Hypothesis), then you should also reject the well-definedness of his number. I believe similar comments apply to Eliezer’s number.

    My rule is that the number you name need not come with an algorithm to compute it, but it does need to have a definition that “bottoms out in arithmetic and computation,” as BB(1000) clearly does and as Rayo’s number clearly doesn’t. Now, I freely admit that my rule is still somewhat ambiguous: in particular, once you generalize the Busy Beaver function to the ordinal levels of the arithmetic hierarchy, the whole big-number contest degenerates into a debate about which notations for computable ordinals we can “know” to be well-founded—which will in turn likely boil down to which large-cardinal axioms we can “know” to be consistent with ZFC.

    OK, but it’s crucial to understand that the point at which my contest dissolves into arguments about the consistency of this or that large-cardinal axiom, is still vastly before the point at which we’re talking about “intended models” for all of ZFC! 🙂

  193. Scott Says:

    jonas #191: FWIW, Terry Tao has expressed optimism about Goldbach, which seems reasonable to me, since there are so many related statements (Vinogradov’s Theorem) that have been proven. My opinion certainly doesn’t count for much, but I’d bet plenty of money against Goldbach being independent of ZFC.

    (All the more so because, as unlikely as it seems to me that Goldbach is independent of ZFC, it seems even more unlikely that we’d prove it if so, and therefore that my counterparty could collect! 🙂 )

  194. Stefan O'Rear Says:

    There seems to be a general pattern here of cases where diagonalization is “the only easy way out”.

    All natural arithmetical statements known to be independent of ZFC imply Con(ZFC).

    All natural computable predicates known to be outside P are DTIME(superpoly)-hard. (hey, if someone can prove NP=EXP via succinct certificates for generalized chess it would neatly solve both P=NP and ETH!)

    All natural functions known to be non-computable are hard for the halting problem (i.e. they lie at or above 0′, as opposed to intermediate Turing degrees).

  195. code golf addict Says:

    I’ve got Goldbach down to 27 now. At the link you can see code for the 27- and 31-state versions, as well as pseudocode explaining the 31-state version.

  196. anon Says:

    Has Terry Tao expressed optimism about binary (strong) Goldbach? I’m not an expert on this, but as far as I know, there’s an obstacle (known as “the parity problem”) which seems to make methods such as those used for the weak Goldbach unusable for the strong case. I believe that experts think that something completely new is needed.

  197. Timothy Chow Says:

    Scott #193: Where has Terry Tao expressed optimism about Goldbach? Last time I checked, there was no plausible way around the notorious “parity problem” that makes Goldbach much more challenging than the odd Goldbach conjecture or Chen’s theorem.

  198. jonas Says:

    Scott re #192, by “bottoms out in arithmetic and computation”, do you mean to say that you want a definition somewhere within the arithmetical hierarchy? You probably don’t mean that exactly, because then you’d just name a number much higher than BB(1000000).

  199. Scott Says:

    anon and Timothy: Sorry, it was wrong of me to put words in his mouth. I remember he made a list of current open problems about structure and pseudorandomness among primes, talking very pragmatically about what techniques might or might not work for each, in which Goldbach appeared as “just another entry.” (In any case, I can’t find that list now—can anyone else?) But the fact that he included Goldbach on what read like a to-do list doesn’t imply any particular “optimism” on his part—maybe he just put it there for completeness!

  200. John Tromp Says:

    TM golfer #195: gratz on reaching 3^3 states / 378 bits.

    Is there a straightforward way to encode the transition table to avoid the n log n redundancy in state ordering?

    When listing a transition, one could use only enough bits to encode the number of already-listed states, plus one to allow introducing a new state. But that would appear to save only a linear number of bits…
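
    For concreteness, the 378-bit figure for the 27-state machine matches a naive table encoding: 2n transitions, each taking one write bit, one move bit, and ceil(log2(n+1)) bits of next-state (n states plus halt). The sketch below is my reconstruction under that assumption, together with the log2(n!) state-ordering redundancy Tromp is asking about:

```python
import math

def table_bits(n: int) -> int:
    """Bits for a naive n-state, 2-symbol transition table: 2n entries,
    each with 1 write bit, 1 move bit, and ceil(log2(n+1)) bits for
    the next state (n states plus halt)."""
    return 2 * n * (2 + math.ceil(math.log2(n + 1)))

def ordering_redundancy_bits(n: int) -> float:
    """Roughly n log n bits 'wasted' on the arbitrary ordering of
    states: any of the n! relabelings encodes the same machine."""
    return math.log2(math.factorial(n))
```

    For n = 27 this gives 2·27·(2+5) = 378 bits, matching the figure above, while log2(27!) is about 93 bits of ordering redundancy.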

  201. John Tromp Says:

    A search for Laver tables with periodicity > 16 takes at most 212 bits in binary lambda calculus, so it’s somewhat simpler than Goldbach. See

    courtesy of

  202. Weekly reading post #1 | David R. MacIver Says:

    […] 0.4 stochastic hours reading The 8000th Busy Beaver number eludes ZF set theory […]

  203. Avi Says:

    @Stefan #194

    >All natural functions known to be non-computable are hard for the halting problem (i.e. they lie at or above 0′, as opposed to intermediate Turing degrees).

    I don’t know very much about this, but what degree is the following function:

    Take Chaitin’s constant, and rearrange its binary digits as follows: for each of the sets {1st digit}, {2nd–3rd digits}, {4th–7th digits}, …, {2^n-th through (2^(n+1)−1)-th digits}, order the digits within each set in ascending order, i.e. zeros then ones. I can’t think of a good argument that this can be computed, and yet an oracle for it doesn’t seem to yield a halting oracle as well, because you don’t know the order of the digits. So even if you simulate enough programs to get something with the same number of 0s and 1s as my constant, maybe running more will turn a 0 to a 1 and a 1 to a 0.

    Is this trivially wrong somehow? Does this count as a “natural function”?
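
    Concretely, Avi's rearrangement applied to any finite bit string looks like this (a sketch with a made-up function name; the actual input, Chaitin's Ω, is of course uncomputable):

```python
def rearrange(bits: str) -> str:
    """Within each dyadic block of digit positions
    (1; 2-3; 4-7; ...; 2^k .. 2^(k+1)-1, 1-indexed),
    sort the digits in ascending order: zeros before ones."""
    out, k = [], 0
    while 2**k - 1 < len(bits):
        block = bits[2**k - 1 : 2**(k + 1) - 1]   # the k-th dyadic block
        out.append("".join(sorted(block)))        # zeros first, then ones
        k += 1
    return "".join(out)
```

    Each block's output reveals only the count of 1s in that block, which is exactly why an oracle for the rearranged sequence doesn't obviously recover the original digits.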

  204. John Sidles Says:

    Scott wonders (#199) “I can’t find that list [Terry Tao’s pragmatic discussion of structure and pseudorandomness among primes] now — can anyone else?”

    Terry Tao’s three-part Marker Lecture post (from November of 2008) may be the “pragmatic discussion” that you recall. The three posts, which span four Marker Lectures, are titled “linear equations in primes,” “small gaps between primes,” and “sieving for almost primes and expanders.”

    In any event, this material is well worth reading.

  205. Juan Says:

    This is really cool.

    Sorry if the following is too trivial.
    I read your writing on big numbers and also found it entertaining. I saw the comments about the Berry paradox, and they seem similar to what happens in Tarski’s undefinability of truth. So maybe one could circumvent that by saying something like “let D(N) be the smallest number not definable by formulas of quantifier complexity less than 100 (say), with less than N symbols.” Definable meaning, for example, in ZFC; this would be a formula of quantifier complexity 101 (metamathematical) in the variable N (mathematical), and it captures the spirit of that idea. This looks pretty weird though; I don’t even know if this depends on the model of ZFC. I would guess that roughly quantifier complexity 1 is something like busy beaver, and higher would be something like the generalizations of busy beaver?

  206. Stefan O'Rear Says:

    Avi #203: I think that counts as a natural function. But we know that Chaitin’s constant is non-computable because it can be used to solve the halting problem. I suspect one of the following holds for your function:

    1. You can’t prove that your function is actually non-computable.
    2. You can, and I can extract a halting oracle from your proof.

  207. Joseph Van Name Says:

    John Tromp #201. Very nice. On Code Golf Stackexchange, I just asked for the shortest program that searches for the least n where 2^n>1*32 in the n-th classical Laver table. I also plan on asking another question there for the shortest code that halts if and only if it finds a counterexample to the statement that if 2*2^m=2^n then 1*2^m=2^n in the n-th classical Laver table. Feel free to post your solution there.

  208. mjgeddes Says:

    Sam #183

    I checked out the Hoffman TED talk. In so far as he’s saying anything true, it’s nothing new. Basically he’s just making the point that our perceptions are not reality, but rather ‘virtual reality’ models or icons created by our brains. All OK so far.

    But it appears he’s a fan of Chopra’s mystical nonsense (he works with the Chopra foundation), and he’s basically just trying to promote the old discredited idea of idealism, that consciousness is the base-level of reality. It’s been refuted ever since the beginning of last century, but the new age movement still rakes in a lot of money telling people what they want to hear (basically there’s still lots of dollars to be made pandering to wishful thinking).

    It’s pretty clear that Hoffman and co. are dead wrong and Tegmark is basically correct: it’s math that’s actually the base-level of reality. Physical properties are emergent higher-level properties of mathematics, and mental properties such as consciousness are even higher-level properties generated by physical processes. So the mystics have everything arse-backwards.

  209. John Tromp Says:

    Stefan #206: Avi’s function is non-computable, because it has unbounded information about omega.

    We know that K(Omega_n) >= n − O(1), but knowing how many 0s and 1s are among the second n/2 of those n bits would allow you to save about (log n)/2 bits. This gives a contradiction for large enough n of the form n=2^i…
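
    A sketch of the counting behind that savings (my reading of the argument, not Tromp's exact computation):

```latex
% Knowing the number m of 1s among a block of N bits specifies the block
% up to one of \binom{N}{m} possibilities, and by Stirling's formula
\log_2 \binom{N}{m} \;\le\; \log_2 \binom{N}{N/2} \;=\; N - \tfrac{1}{2}\log_2 N + O(1).
% So with N = n/2, a description of Omega's first n bits that is handed
% the 1s-count of the second block needs only about
n - \tfrac{1}{2}\log_2\tfrac{n}{2} + O(1)
% bits, contradicting K(\Omega_n) \ge n - O(1) once n = 2^i is large,
% since the (log n)/2 saving eventually swamps any additive constant.
```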

  210. David Wu Says:

    Scott: Out of curiosity… 🙂

    Let n be the first natural number such that ZFC does not prove an explicit upper bound for BB(n). Conditional on ZFC being consistent, for what value of k would you currently put even odds on n >= k vs n < k?

  211. Richard Says:

    To understand what you’ve accomplished, what was the situation before this result? It was known that there exists an n such that BB(n) is independent of ZFC set theory, but was it known that it would be “straightforward” (time-consuming but doable without requiring creative insight) to also determine an explicit value of n for which BB(n) is independent of ZFC? Without using clever constructions, would people’s hunch be that this n would be much larger than 7,918 but not crazy large? Say, n would be less than a billion, as opposed to a googolplex.

  212. Avi Says:

    Tromp #209:

    Does that allow you to turn an oracle for my function into a halting oracle? Intuitively it seems not because it’s not giving *enough* information quickly enough.

  213. Scott Says:

    Juan #205: Yes, that’s apparently what Agustin Rayo and Eliezer Yudkowsky did—namely, attempt to define “the largest number definable in ZFC with 1,000,000 symbols or fewer” (which isn’t itself definable in ZFC, so there’s no Berry paradox). The problem is that, unless I’m seriously misunderstanding something, this could still depend on your intended model of ZFC, and therefore wouldn’t be allowed by my contest rules.

  214. Scott Says:

    David Wu #210: I dunno, maybe k=15?

  215. Scott Says:

    Richard #211:

      was it known that it would be “straightforward” (time-consuming but doable without requiring creative insight) to also determine an explicit value of n for which BB(n) is independent of ZFC? Without using clever constructions, would people’s hunch be that this n would be much larger than 7,918 but not crazy large? Say, n would be less than a billion, as opposed to a googolplex.

    I can actually address this question empirically! As a first step, we did encode Friedman’s statement in a straightforward way, without using any ideas to reduce the state count, and we got about 1,000,000 states. My initial expectation would’ve been that encoding ZFC directly in the same way would lead to even more states (though not more than 10,000,000), but Stefan O’Rear has now changed my view on that; I now think that too would lead to about 1,000,000 states. I don’t know whether people would’ve predicted higher or lower before we tried it, but certainly no one would’ve predicted a googolplex.

  216. Allan C. Wechsler Says:

    @code golf addict, #195: I tried to understand your 27-state machine, and made a couple of guesses about your notation convention, which must be wrong, because my hand simulations went off the rails quickly. Contact me at, please?

  217. John Tromp Says:

    Avi #212: I’m not sure, but I suspect you might.
    To recover the first n bits of Omega, you could request the reordered first 2^2^n bits of your oracle, and then approximate Omega from below until the segment frequencies match. At that point one would expect the first n bits to be correct.
    This is just intuition, and one needs to find a way to formalize it into a real proof…

  218. John Sidles Says:

    mjgeddes criticizes (#208) “the old discredited idea of idealism, that consciousness is the base-level of reality.”

    It is plausible that two essays that have been mentioned recently in Shtetl Optimized may help to rehabilitate and reinvigorate these idealistic notions; these two essays are Frank Wilczek’s Quanta essay “Entanglement Made Simple” (of April 2016) and Robert Langlands’s “Is there beauty in mathematical theories?” (2013; search engines find both on-line).

    Wilczek’s (shorter) essay boils quantum entanglement down to two key principles, which Wilczek states explicitly as:

    (1) “A property that is not measured need not exist”, and (2) “measurement is an active process that alters the system being measured.”

    Langlands’s (longer) essay authorizes and encourages us to apply Wilczek’s reasoning along the following lines:

    (A.1) Measurements are dynamical processes that reduce the entropy of the system being measured; (A.2) shared entropy-reducing processes produce flows of information that compose our shared objective reality.

    (B.1) Algorithms are dynamical processes that transform informatic flows; (B.2) shared informatic flows compose networks that distill and distribute our shared informatic reality.

    (C.1) Theorem-proving is a dynamical process that conditions our choice of algorithmic postulates; (C.2) shared algorithmic postulates compose inferential frameworks that diversely comprise our shared mathematical realities.

    Here (A.1-2) are simply a restatement of Wilczek’s entanglement principles, in a form that permits statements (B.1-2) and (C.1-2) to summarize Langlands’s worldview as manifestly parallel to Wilczek’s.

    For details of this parallelism, see Langlands’s discussion of quantum dynamical processes, in light of which (for example) Langlands’s Diagram A presents an evolutionary tree of quantum mathematical theories and concepts, while Diagram B surveys some of the mathematical names that are associated with these theories and concepts.

    Langlands’s perspective on mathematics in general, and quantum dynamics in particular, is algebra-centric (unsurprisingly, given Langlands’s research interests). This perspective turns out to be advantageous, in that Wilczek’s perspective on quantum entanglement lends itself very naturally to algebraic elaboration along Langlands’s lines.

    Note too that Adam Yedidia’s two-symbol Turing machines can themselves be naturally described in algebraic terms, as dynamical processes that operate upon Boolean state-spaces. Turing processes being universal, it is plausible (to me) that our deepest and most fundamental appreciation of the Wilczek/Langlands worldview will emerge from Yedidia-style investigations. Certainly Adam has made an outstanding start.

    Perhaps it’s legitimate to wonder whether there are any STEM disciplines that modern-day algebraists do not aspire to colonize and transform? Not everyone welcomes this increasing algebraic dominance, needless to say. 🙂

    Conclusion  Wilczek’s and Langlands’s essays, considered jointly and algebraically, rehabilitate and reinvigorate (what #208 calls) “that old discredited idea of idealism, that consciousness is the base-level of reality”. The expanded notion is that “algebraic processes compose a natural base-level of reality”; a base-level that can be concomitantly realized physically (via A.1-2), informatically (via B.1-2), and cognitively (via C.1-2).

    What’s invigorating (for me) about the illumination that the Wilczek/Langlands worldview casts upon Shtetl Optimized concerns is that its implications are concrete and practical, and its implementations are (at present) largely unexplored.

    Is this broadening and deepening good news, for young researchers especially? Surely it’s reasonable to hope so.

  219. Avi Says:

    Tromp #217:

    Each piece of information we get is independent from the others. So if we want Omega’s first n digits, the only useful info is the frequencies within the block of digits from n/2 to n. Frequencies above that don’t help at all. But that count can never be known for certain, unless the ones all come first. Because if the actual sequence is xxxx0xxxx1xxxx, then once we get there we can’t know that the real sequence isn’t maybe xxxx1xxxx0xxxx, and some other programs will halt to make it so. This isn’t a proof, because it might be useful some other way, but we can’t do it directly like you’d think.

  220. Avi Says:

    Someone on thinks it’s below 0′. So if that’s right and your proof above is right, then this is a “natural” example.

  221. John Tromp Says:

    Avi #219: Yeah, my suspicion was wrong. Information on later omega bits cannot help determine earlier bits (at least not by more than the complexity of specifying the “later”), else you could compress the whole lot.

  222. Sam Says:

    Very interesting post, and congrats to you and Adam for this work. I’m just curious how you find (as you say in the post) the problem of origin of life related to this BB function or your results.

  223. Shtetl-Optimized » Blog Archive » Three announcements Says:

    […] The Blog of Scott Aaronson. If you take just one piece of information from this blog: Quantum computers would not solve hard search problems instantaneously by simply trying all the possible solutions at once. « The 8000th Busy Beaver number eludes ZF set theory: new paper by Adam Yedidia and me […]

  224. Scott Says:

    Sam #222: I tried to explain that in the post—it’s just that they’re both questions about the minimum size of a computational artifact needed to get over some threshold of interesting behavior. In both cases, you know on abstract grounds that the object can exist, but you care about the actual numerical value of its size. In both cases, there’s a tradeoff between various factors, like the size of a higher-level description of what you want (RNA, DNA, Laconic, etc.) and the size of the machinery to interpret that description. Finally, in both cases, there are external chemical environments (or programming formalisms) that can make the problem harder or easier, but we care about the minimum size in a “generic” environment that’s conducive to such things in the first place.

    I don’t know that the analogy goes any deeper than that.

  225. mjgeddes Says:

    John #218

    I’ve read both the Wilczek and the Langland essays. Very interesting, but they provide no support at all for idealism. On the contrary, they actually support mathematical realism, the idea that mathematics is the base-level of reality!

    In the case of Wilczek’s essay, he’s talking about a fairly conventional interpretation of quantum mechanics, the Copenhagen interpretation. Indeed, according to standard quantum mechanics, classical physical properties cannot be said to exist prior to measurement in a predefined experimental setup. But there’s still an underlying reality given by the mathematics of the quantum wave-function. And that’s the key point…the wave-function is not a physical entity, it’s mathematical! But consciousness is not involved here. The underlying reality is mathematical.

    In the case of Langland, he’s talking about the extraordinary power of mathematics (and algebra in particular) to explain reality. It does strongly suggest that the underlying reality is mathematical, but again, consciousness is not involved here.

    But we don’t need these results to see that mathematics is prior to consciousness. The ancient Godel theorems point to the same thing. Godel showed that in any formal system of sufficient complexity, there exist true statements in the language of that system that cannot be proved within that system.

    So mathematics always escapes or ‘slips the net’ of mind. This actually strongly suggests that mathematics is beyond (or prior to, greater than) the existence of minds, since mathematics can never be fully captured within minds. This is the polar opposite of idealism! (If consciousness really were the base-level of reality, mathematics would be fully comprehensible to mind, and in reality it just isn’t.)

  226. Stefan O'Rear Says:

    Does anyone have up to date information on the status of BB(5), ideally a complete list of machines whose halting status is unknown? By my estimation, there are about \(10^{11}\) isomorphism classes of 2-symbol 5-state machines, well within the reach of modern cloud computing.
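
    Stefan's estimate of roughly \(10^{11}\) isomorphism classes can be sanity-checked with a back-of-envelope count (my own rough sketch; the exact figure depends on which symmetries one quotients by):

```python
from math import factorial

def raw_machine_count(n: int) -> int:
    """Naive count of n-state, 2-symbol machines: each of the 2n table
    entries chooses a write symbol (2), a direction (2), and a next
    state out of n+1 possibilities (including halt)."""
    return (4 * (n + 1)) ** (2 * n)

def iso_class_estimate(n: int) -> int:
    """Crude order-of-magnitude estimate: quotient by the n! state
    relabelings and the left/right mirror symmetry."""
    return raw_machine_count(n) // (factorial(n) * 2)
```

    For n = 5 this gives 24^10 ≈ 6.3 × 10^13 raw machines and a few times 10^11 crude isomorphism classes, the same ballpark as the comment's figure.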

  227. Scott Says:

    Stefan #226: Good question! The last I heard publicly, it had been narrowed down to about forty 5-state machines for which it’s still not proved that they run forever. See this webpage, which has a list of the machines. Heiner Marxen’s webpage or the Googology Wikia might also have more up-to-date information.

    Now, in 2009, I met with an MIT undergraduate named Daniel Briggs who said he’d narrowed it down still further, to about twenty 5-state machines. Alas, I don’t have his list (let alone proofs!). I believe Daniel then graduated and became a grad student at UMass Amherst. Alas, I can’t find a current email address for him—anyone else want to try (or maybe Daniel himself reads this blog? 🙂 )

  228. Royce Peng Says:

    Stefan #226

    The most well-known attack on BB(5) was conducted by Skelet (Georgi Georgiev) no later than 2003. His program was able to reduce the number of outstanding 5-state Turing machines to 164. Of these, 54 write a 0 on the first step, so they cannot write a higher number of 1’s than some machine that starts by writing a 1 (although they could still run the longest). Of the remaining 110, he claims to have proven all but 42 to not halt by hand.

    The unresolved Turing machines are listed here:

    Cloudy176 of the Googology Wiki was able to prove 14 of the unresolved machines to be nonhalting using a computer; unfortunately, I am unable to find the page where he identifies those 14 machines and describes his proof process. As far as I know, this is the only further progress on Skelet’s work in the last 13 years.

    Joachim Hertel also approached the problem (around 2009?) and was able to reduce the number of unresolved machines to 100. I don’t know where to find a list of his unresolved machines; it would be interesting to cross-reference Skelet’s and Hertel’s list.

    Also last decade, the RAIR lab launched an attack on the Busy Beaver problem, but used the “quadruple” version of Turing machines; that is, at each step the machine either wrote a symbol or moved left or right, but not both. They were able to reduce the number of unresolved 5-state machines to 98, and the number of 6-state machines to 42166. More information here:

    Currently, James Harland seems to be the main researcher into the Busy Beaver problem; he says he plans to find the Busy Beavers for both 5 and 6 states. (a lofty goal!) More generally, he wants to solve BB(m,n) for m states and n symbols given mn <= 12.

    The seeming inertia on BB(5), despite how close it seems, makes me rather doubtful that we will ever fully solve BB(6) and above. It's kind of like asking "when will we pick that fruit at the top of that 1000-foot tree?" when there is a fruit within reach that remains unpicked.

  229. Avi Says:

    Archived pages from Daniel Briggs’ MIT site: (This is a list, although without proofs.)

  230. Avi Says:

    (The proofs are in the first link, actually, in the zip file downloads.)

  231. Scott Says:

    Avi #229, #230: Thank you!!! That was a superb contribution to the underdeveloped field of Turing-machine archaeology.

  232. Sniffnoy Says:

    Correction: Twitter user shachaf points out to me (why he didn’t just post this here, I don’t know) that in fact the function I said is not possible in Haskell is possible in Haskell due to some trickery with typeclasses. I’m pretty confused as to how this is possible, but as somebody’s written a Haskell equivalent of printf that works roughly that way, apparently it is.

    So, I should correct my statement from “this isn’t possible in Haskell” to “this isn’t possible in the Hindley-Milner type system”.

  233. Sniffnoy Says:

    You know, I can’t help but compare the attempt to compute BB(n) for small n to the attempt to compute R(n,n) for small n. Surely we can’t be closer to determining BB(5) than we are to determining R(5,5)? I guess there’s not a direct relation, but it would still seem very strange if true.

    I mean, I guess there is a relation between the two functions just due to the fact that R(n,n) is computable, but I don’t know about a relation between their respective difficulties of determination. Like, maybe when you consider a Turing machine with enough states (call it m) to compute R(n,n), you find that the machine that actually achieves BB(m) is entirely different — like, without having to even find the maximizing machine, you can easily find ones that do more than your Ramsey-computing machines — and so the difficulty of computing R(n,n) is irrelevant to the difficulty of determining BB(m). Indeed, I’d be pretty surprised if that weren’t the case; why should BB(m) have to do with Ramsey theory computations? That seems like a strangely specific hypothesis.

    And then there’s the problem of formalizing what we even mean by the difficulty of determining R(5,5)! Like, if you make it R(n,n), then OK, R(n,n) is computable and BB(n) isn’t, so BB is the more difficult function, done. But that doesn’t help with individual values! And with the Goldbach example, well, that’s an unproven theorem, a Π_1 theorem, that can’t just be proven by computation. So it makes sense to say something is “at least as hard as the Goldbach conjecture”, in the sense that it would reduce the Goldbach conjecture to a finite computation. But determining R(5,5) is already a finite computation! Just a really difficult one.

  234. Stefan O'Rear Says:

    From my view the difference is that there’s an upper bound on \( R(5,5) \) which is easily computable, of reasonable size, and has all of the same essential properties. If I ever need \( R(5,5) \) for a proof, I can almost certainly just use 49 and it will work.

  235. This Turing machine should run forever unless maths is wrong - Ac Stories Says:

    […] size of these Turing machines, pushing them to the limit. Already a commenter on Aaronson’s blog claims to have created a 31-state Goldbach machine, although the pair have yet to verify […]

  236. This Turing machine should run forever unless maths is wrong | abidakun Says:

    […] size of these Turing machines, pushing them to the limit. Already a commenter on Aaronson’s blog claims to have created a 31-state Goldbach machine, although the pair have yet to verify […]

  237. Void Says:

    I know Dan socially (and I also enjoy recreational mathematics :)). I haven’t seen him in quite some time, but this is fascinating and I’ll point him at it. Certainly going to enjoy spending a few hours poking at the 27-state machine. 🙂 Great stuff!

  238. sep332 Says:

    Wolfgang #155: In addition to the links from John Baez, here’s a video of someone introducing and using lambda calculus, aimed at a general audience.

  239. This Turing machine should run forever unless maths is wrong | 神刀安全网 Says:

    […] size of these Turing machines, pushing them to the limit. Already a commenter on Aaronson’s blog claims to have created a 31-state Goldbach machine, although the pair have yet to verify […]

  240. Paul Chapman Says:

    The linked article states,

    “Z is designed to loop through its 7918 instructions forever, but if it did eventually stop, it would prove ZFC inconsistent. Mathematicians wouldn’t be too panicked, though – they could simply shift to a slightly stronger set of axioms.”

    I’ve sent an email to the author stating that this is wrong; you can’t remove inconsistency in an axiom scheme by strengthening it. He disagrees, saying this is what better-informed sources have told him. Have I embarrassed myself?

  241. Scott Says:

    Paul #240: You’re absolutely right. Mathematicians would need to switch to a weaker (or at least incomparably different) set of axioms, one that didn’t prove all the theorems of the set that had been found to be inconsistent, not to a stronger set.

    Having said that, I suspect Jacob Aron meant the word “stronger” in the colloquial sense of “tough,” “resistant to breaking,” which just happens to be the opposite of the logicians’ sense of “able to prove a lot of theorems”! 🙂

  242. Jacob Aron Says:

    Just to confirm, Scott is correct – sorry for any confusion!

  243. Paul Chapman Says:

    Scott #241: I also suggested that there would probably be rather a lot of panic if ZFC were proved inconsistent. 🙂

    It’s a shame that the author didn’t attempt to talk about the busy beaver problem, but I understand to some extent the constraints upon writers for the mathematical laity (of which I generally classify myself a member).

    Now, I have a question: Let “God’s busy-beaver number” be the least integer N such that there is an N-state, two-symbol TM which does not halt but cannot be proved from (say) ZFC not to halt. We know so far that 5 <= N <= 7918. Do we know whether N is computable?

  244. Scott Says:

    Paul #243: The notion of “computable” applies only to functions, not to individual integers. (In some sense, every integer N is computable: just write a program that prints N as its output!)

    But probably you meant, “do we know whether the value of N is itself provable in ZFC?” The answer is no, we don’t: for all we know that’s also independent.

  245. Stefan O'Rear Says:

    Scott #244: The value of N cannot be proven in ZFC alone, because the existence of N implies Con(ZFC) (since it’s stating that something is not provable). [Also, stay tuned for a reduction of the upper bound in a day or two.]

  246. Scott Says:

    Stefan #245: You’re technically right of course … but come on! 🙂 One could certainly imagine that ZFC would prove the values of BB(k) for all k<N, and would also prove that a determination of the value of BB(k) for any k≥N would imply Con(ZFC) (i.e., that ZFC can’t prove those values conditional on its own consistency). Whether ZFC determines the precise threshold in that way is itself an interesting question.

    Eagerly awaiting your reduction of the upper bound!

  247. Zi Wang Says:

    I don’t see the point of doing busy beavers; or, to put it this way: if you specialise in recursion theory, doing such stuff won’t get you a job. But shall I (where are the logicians?) remark that Razborov and Rudich have shown that almost (I put “almost” here because I don’t follow the developments) all known circuit lower bounds can be proven in a very weak fragment of arithmetic that doesn’t even prove the pigeonhole principle.

  248. Scott Says:

    Zi #247:

      I don’t see the point of doing busy beavers; or, to put it this way: if you specialise in recursion theory, doing such stuff won’t get you a job.

    Yes, that’s precisely the nice thing about having tenure! 😉

  249. Mateus Araújo Says:

    This is pure, unfiltered coolness! Congratulations!

    I’m afraid I’m a bit late to the party, but I’d like to ask one question: is it ok to assert Con(SRP)? Do I get into any trouble if I simply assert the consistency of any of these large cardinal theories?

    I ask this because I believe that Turing machines either halt or run forever, and I’d like to have a theory that can prove that. Do I get that if I assert the consistency of some large cardinal theories?

    To be more precise, the Turing machine that looks for an inconsistency in ZF can be proven to run forever in ZF+Con(ZF). And the Turing machine that looks for an inconsistency in ZF+Con(ZF) can be proven to run forever in ZF+Con(ZF)+Con(ZF+Con(ZF)). Can I define some kind of “true believer” theory as some kind of limit of this tower of theories? Can this “true believer” theory prove that all Turing machines either halt or run forever? Is the “true believer” theory related to these large cardinal theories?

  250. Stefan O'Rear Says:

    Mateus Araújo #249:
    These are very good questions, and ones that have been driving the field for a very long time!

    Is it OK to assert …: Maybe! They haven’t been disproved yet. Another thing foundations of mathematics people look for is the consistency strength relation; if SRP is strictly weaker than some axiom X which has been studied for decades, you can have more confidence in it. (I haven’t read Friedman’s work enough to know what X might be, but the context of the post makes it sound like people think SRP is weak.)

    I believe that Turing machines either halt or run forever…: Good. Statements that can be recast as “Turing machine X runs forever” are known as \( \Pi_1 \) statements in the arithmetical hierarchy. Opinions vary on which mathematical statements are considered objectively meaningful, but \( \Pi_1 \) statements are accepted by nearly everyone.

    … and I’d like to have a theory that can prove that: Careful, you just stepped on the second incompleteness theorem. Any theory which is strong enough to handle the theory of Turing machines and is recursively axiomatizable (you have finitely many axioms, or failing that a program that can output all of the axioms in order) cannot prove any statement equivalent to its own consistency; and every recursively axiomatizable theory T has a Turing machine whose non-halting is equivalent to Con(T), so T cannot resolve all Turing machines.

    In other words, finitely many new axioms cannot help. You could add an infinite number of axioms for each Turing machine, or equivalently an axiom for every true \( \Pi_1 \) sentence; this theory isn’t recursively axiomatizable, but it gets used occasionally.

    … as some kind of limit of this tower of theories?: Yes you can! You just need to take the infinite number of consistency axioms; but they can be enumerated by a computable process, so your true believer theory is recursively axiomatizable, and while it can prove \( \mathrm{Con}(\mathrm{ZF}_1) \), \( \mathrm{Con}(\mathrm{ZF}_2) \), etc., it cannot prove its own consistency. Essentially, you’ve constructed a sequence of theories, and then your “limit” is the theory at the first limit ordinal \( \omega \). You can keep going by adding a consistency axiom for the true believer theory, on to \( \mathrm{ZF}_{\omega+1} \) and so forth.

    This has nothing to do with large cardinals, but it does send you deep into the theory of large countable ordinals. You can keep going as long as the ordinals are describable by a Turing machine; once you get to the Church-Kleene ordinal your theory \( \mathrm{ZF}_{\omega_1^{\mathrm{CK}}} \) might be able to decide all Turing machines — but it has stopped being recursively axiomatizable!
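
    For concreteness, the fact that this “true believer” limit theory is still recursively axiomatizable just means that a single program can list its axioms. A toy sketch in Python (the strings are stand-ins for the actual axiom schemas, which would of course be formal sentences):

```python
from itertools import islice

def zf_tower_axioms():
    """Enumerate (names of) the axioms of the limit theory at omega:
    all of ZF, plus Con(ZF_1), Con(ZF_2), ..., where ZF_{k+1} = ZF_k + Con(ZF_k).
    Since this loop is an ordinary computable process, the limit theory
    is still recursively axiomatizable."""
    theory = "ZF"
    yield theory                      # stand-in for the ZF axiom schema itself
    while True:
        con = "Con(" + theory + ")"   # consistency statement for the tower so far
        yield con
        theory = theory + "+" + con   # the next stage of the tower

print(list(islice(zf_tower_axioms(), 3)))  # -> ['ZF', 'Con(ZF)', 'Con(ZF+Con(ZF))']
```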

  251. Sniffnoy Says:

    Stefan: Actually, it seems that this particular sequence in a sense stops at ω+1?

  252. Stefan O'Rear Says:

    Sniffnoy #251: ah yes… that post is a much better reply than the one I wrote.

  253. Scott Says:

    Mateus #249: Excellent questions! Stefan already made a good stab at them, but to expand on what he said:

    In his 1939 PhD thesis, Alan Turing considered PRECISELY the question of whether, by adding enough consistency statements (Con(ZF), Con(ZF+Con(ZF)), and so on into the ordinal hierarchy), you can eventually prove all the Π1-sentences. Alas, he discovered that the answer is unsatisfying: in order even to define these ordinal theories, you need a Turing machine to list the axioms, and you can use that Turing machine to smuggle in any Π1-sentence you want, in a cheap way. For more, see my blog post What Alan T. Did For His PhD.

  254. Scott Says:

    Stefan #250: Awesome!!! I will call this out in my updates post. Keep us apprised of any further lowerings.

  255. Mateus Araújo Says:

    Stefan #250, Sniffnoy #251, Scott #253: Thanks a lot for answering my question! In retrospect it is clear that this question is the first that one should ask after learning about Gödel’s theorem, and I could have simply searched it instead of asking it.

    But… arghhhh!! This is the stuff nightmares are made of! Especially François’ last comment in Scott’s old post; it implies in a strong sense that it is not enough to simply have ZF and an infinite amount of faith in consistency to be able to prove all true $\Pi_1$ sentences; and it suggests that Stefan’s hope that going up to the Church–Kleene ordinal is enough is probably misplaced.

    But I guess I should be happy with this (I’m not), as otherwise Gödel’s theorem would be kind of lame; it would be an easy way out of its conclusion.

    One thing that I think is incorrect is Stefan’s statement that this has nothing to do with large cardinals; if I understood correctly they imply at least the consistency of ZF, so they are related at least to ZF+Con(ZF), if not to something higher in the tower.

    Is there any hope I can get the $\Pi_1$ sentences out of these large cardinals instead?

  256. Scott Says:

    Mateus #255: Well, for any fixed (true) Π1-sentence, there’s some sound theory that proves that sentence (just tack the sentence on as an additional axiom!), and Turing 1939 tells you that there’s a cheap encoding trick to make that theory “look like” just ZF plus some number of iterated consistency statements. (Which I believe is equivalent to: ZF plus some number of large-cardinal axioms.) On the other hand, Gödel’s Theorem tells you that, yes, however many consistency statements you tack on, as long as there’s still a computable enumeration of those consistency statements, you can construct a true Π1-sentence that you still can’t prove: namely, the consistency of your current theory!

  257. Mateus Araújo Says:

    Scott #256: Actually I did not understand why Turing’s cheap trick is interesting. He did write out the \(\Pi_1\) sentence explicitly (or rather the Turing machine M) in the Turing machine that was enumerating the axioms of F; so for human eyes it does not “look like” just ZF plus some consistency stuff.

    Are you suggesting that it is not possible to define some kind of “clean” axiom-enumerating Turing machine in order to rule out this sort of shenanigans?

  258. Stefan O'Rear Says:

    Mateus Araújo #255:

    FYI, the MathJax trigger is “backslash left-paren MATH HERE backslash right-paren”, NOT “dollar MATH HERE dollar” as TeX itself uses.

    You can run “MathJax.Hub.Typeset()” in a JavaScript console to make math blocks display in the preview area; would be nice if it did so automatically.

    There seem to be some other subtle differences between HTML scrubbing in the preview and HTML scrubbing after I post, which I’m slowly figuring out. <pre> and friends seem unreliable.

  259. Mateus Araújo Says:

    Stefan #258: Cool, thanks! I didn’t know you could run your own JavaScript in webpages like this. Very useful (in particular for this blog, before Scott decides to hack the preview area). Just a tip for others that may try this: you need to type your LaTeX, and then call MathJax.Hub.Typeset(). It is obvious in retrospect (if only the things that are obvious in retrospect to me were just obvious to me…)

    But I had already figured by trial and error which was the correct command, as you can see from my last post.

  260. Avi Says:

    It is obvious in retrospect (if only the things that are obvious in retrospect to me were just obvious to me…)

    Insert P=NP joke here

  261. Scott Says:

    Mateus #257:

      Are you suggesting that it is not possible to define some kind of “clean” axiom-enumerating Turing machine in order to rule out this sort of shenanigans?

    For specific computable ordinals, you can easily write a “clean” Turing machine to enumerate the consistency axioms up to that ordinal. But what I don’t think anyone knows how to do, is to give a general criterion to decide whether a given axiom-enumerating Turing machine is “clean” or “shenanigany.” And without such a criterion, you also don’t have a clean answer to the question of whether, by going high enough up in the computable ordinals, you eventually hit all the true Π1-sentences.

  262. Mateus Araújo Says:

    Scott #261: I think you’re asking too much if you demand that a criterion exist to differentiate between clean and shenanigany Turing machines.

    It would be enough if you could define a single clean Turing machine that takes the definition of the ordinal as an input and enumerates the axioms up to that ordinal.

  263. Scott Says:

    Mateus #262: Aha! The problem is, what exactly does it mean to “take the definition of an ordinal as an input”? When you get into really enormous ordinals, the only principled system we have for defining ordinals at all is to give a Turing machine that computes the ordinal’s order relation. (Well, either that or else something even worse, like a first-order definition of the ordinal in ZFC.) So then we’re right back where we started, trying to find some criterion to eliminate the shenanigany definitions.

  264. Mateus Araújo Says:

    Scott #263: 0.0

    I was reading the definition of some of these ordinals, and things do get pretty hairy.

    So what you’re claiming is that to even define those ordinals I need a Universal Turing machine?

    If so I would understand that it is pretty hard to tell whether you smuggled a \(\Pi_1\) sentence inside the definition of the ordinal or not.

  265. Royce Peng Says:

    Avi #230: Do you know exactly where the proofs are in the linked zip file? I see a bunch of computer files, but no text file explaining how it all works.

    I did manage to find the web page where Cloudy176 described his results on BB(5):

    So Briggs apparently proved that 23 of the outstanding 42 machines of Skelet do not halt, and Cloudy176’s results add another 4, bringing the number of outstanding machines down to 15. (However, Briggs does mention another machine that should be labelled as an HNR, which brings it back up to 16.)

  266. Avi Says:

    Royce #265:

    $ cd Turing/proofs/
    $ find

    ./3/comments on 3rd.rtf
    ./3/specs for 3rd.rtf
    ./7/specs for 7th.rtf
    ./7/comments on 7th.rtf
    ./10/specs for 10th.rtf
    ./10/example for 10th.rtf
    ./10/comments on 10th.rtf
    ./8/comments on 8th.rtf
    ./8/specs for 8th.rtf
    ./8/proof for 8th.rtf
    ./8/supplement to eighth.rtf
    ./27/comments on 27th.rtf
    ./24/comments on 24th.rtf
    ./9/comments on 9th.rtf
    ./9/Comments2 on 9th.rtf
    ./9/comments3 on 9th.rtf
    ./9/9th specs.rtf
    ./9/9th output.rtf
    ./9/example for 9th.rtf
    ./15/comments on 15th.rtf
    ./17/comments on 17th.rtf
    ./11/specs for 11th.rtf
    ./11/11th output.rtf
    ./11/comments on 11th.rtf
    ./30/comments on 30th.rtf
    ./1/comments on 1st.rtf
    ./25/comments on 25th.rtf
    ./16/comments on 16th.rtf
    ./2/comments on 2nd.rtf
    ./18/comments on 18th.rtf
    ./12/specs for 12th.rtf
    ./12/comments on 12th.rtf
    ./5&6/comments on 5th & 6th.rtf
    ./26/specs for 26th.rtf
    ./23/comments on 23rd.rtf
    ./13/specs for 13th.rtf

    You want the file.

  267. John Tromp Says:

    Stefan O’Rear #96 Do you have an implementation of your ZFC proof searcher in a more traditional language (e.g. Haskell/Perl/Javascript) ?

  268. Joseph Van Name Says:

    Mateus Araújo. #249

    The transfinitely iterated consistency of ZFC (or SRP or another theory) only gives us a very small boost in consistency strength (for reasonably small countable ordinals) while large cardinal axioms provide a much larger boost in consistency strength and theorem proving power. For example, the existence of an inaccessible cardinal, the smallest large cardinal axiom, proves the iterated consistency of ZFC for many countable ordinals. Furthermore, the existence of an inaccessible cardinal proves the existence of many transitive models of ZFC while the existence of a single transitive model of ZFC proves the iterated consistency of ZFC. More generally, if J is a large cardinal axiom and K is a stronger large cardinal axiom, then K in general implies the transfinitely iterated consistency of J and much more. See the bottom of this linked page for a description of the gap between the iterated consistency hierarchy and the beginning of the large cardinal hierarchy.

    Another difference between the transfinitely iterated consistency hierarchy and large cardinals is that one can prove much more elegant theorems when one assumes large cardinal axioms, while the iterated consistency hierarchy only seems to provide messy combinatorial statements which are independent of ZFC.

    I should mention that at least the smaller large cardinals (from inaccessibility to slightly above weak compactness) are in a sense a true believer theory. For example, if one believes that the universe satisfies ZFC, then it is not too much of a stretch to believe that there exists a cardinal k where V_k satisfies ZFC. However, the inaccessible cardinals are precisely the cardinals k such that V_k satisfies second order ZFC. Other arguments have been put forth in favor of large cardinal axioms all the way up the large cardinal hierarchy, and set theorists are generally in favor of the consistency and the actual existence of all large cardinal axioms. For instance, the mere fact that no inconsistency has been found in the large cardinal hierarchy up through I0, even though much mathematics has been developed involving large cardinals, suggests that large cardinals are actually consistent and truly exist.

    Large cardinals are the way to go if one wants to prove results which cannot be established in ZFC. That being said, the extra theorem proving power that large cardinals provide has so far not been used to prove theorems in subjects such as complex analysis or about Riemannian manifolds, but Harvey Friedman believes that in the future, large cardinals will be systematically used for a wide variety of Concrete Mathematics in an essential, unremovable, way [Friedman, Boolean relation theory and incompleteness]. For the past year, I have been working to fulfill Harvey Friedman’s prediction.

  269. Stefan O'Rear Says:

    John Tromp #267: No.

  270. Mateus Araújo Says:

    Joseph #268:

    I’m afraid that if I spend too much time reading your wiki I’ll end up like Cantor 0.0

    But your statement that large cardinals give you elegant theorems while iterated consistency gives you only messy combinatorial statements interests me: is there any example of such theorems that I would understand? I’m a physicist, with no background in set theory.

    About your final paragraph: so you find it plausible that some concrete problem like the Riemann hypothesis will be shown to be undecidable in ZF but provable in ZF together with some large cardinal axiom (stronger than just Con(ZF))? That would be absolutely fascinating.

  271. John Sidles Says:

    Joseph Van Name (#268) references Harvey Friedman’s text Boolean Relation Theory and Incompleteness (2011), whose Introduction contains these remarks:

    The finite, the discrete, and the continuous (on nice spaces) are generally considered concrete – although, generally speaking, only the finite is beyond reproach. […] To this day, no candidates for Concrete Mathematical Incompleteness have arisen from the natural course of mathematics.

    It’s natural to wonder which (if any) of the still-open Clay Millennium Prize problems plausibly are candidates for being “Concretely Incomplete” in Friedman’s sense.

    Note too that the StackExchange’s community wiki “For which Millennium Problems does undecidable -> true?” partially addresses this question.

    Instance 1  Terry Tao (among others) has speculated that the Navier–Stokes equations may support universal computation; even so it’s not obvious (to me at least) that universal “fluid computation” would imply “Concrete Incompleteness”.

    Instance 2  The infinite-dimensional quantum Hilbert spaces of the Clay Problem “Yang–Mills has a mass gap” are plausibly “not nice” in Friedman’s sense; still it would be mighty unsatisfying (to me) if the existence/nonexistence of a mass-gap was “Concretely Incomplete”.

    Instance 3  In the Complexity Zoo, the countably infinite set of Turing Machines that decide languages in P and NP is not obviously “nice” in Friedman’s sense, in the sense that appeals to computational oracles are essential to deciding complexity class membership (in all formulations of P versus NP that are known to me).


    A concrete conjecture  In regard to (1-3) above, issues of “Concrete Incompleteness” can be addressed within ZFC/HoTT not by extension, but rather by restriction, as follows:

    Concrete Resolution 1  “Concretely” accept that Navier Stokes flows are computationally universal (which to me doesn’t seem all that shocking).

    Concrete Resolution 2  Pullback the “not-nice” infinite-dimensional Hilbert spaces of quantum field theory onto the “nice” varietal spaces of countable-rank tensor product spaces (AKA algebraic joins); then prove a mass gap (“Concretely”) on these nicer spaces.

    Concrete Resolution 3  Pullback the “not-nice” set of all Turing Machines onto the “nice” set of Turing Machines whose runtimes are formally decidable; then separate P from NP on this (“Concretely”) nice set of Turing Machines.

    A concrete conclusion  “Concrete” progress on at least some Millennium Prize problems may require that we pull back these problems onto “Concretely nicer” state-spaces (in Friedman’s sense of “Concretely nice”).

  272. John Baez Says:

    John Sidles wrote:

    The infinite-dimensional quantum Hilbert spaces of the Clay Problem “Yang–Mills has a mass gap” are plausibly “not nice” in Friedman’s sense; still it would be mighty unsatisfying (to me) if the existence/nonexistence of a mass-gap was “Concretely Incomplete”.

    I feel sure this won’t be an example of incompleteness. There are extensive numerical simulations showing that SU(2) Yang-Mills theory exists and has a mass gap. The challenge is just going from this to a rigorous proof. This is a very interesting challenge in analysis. But it’s much less “iffy” than global existence for Navier-Stokes equations, where I don’t think there’s a consensus as to which way it will fall.

    (Proving that SU(2) Yang-Mills theory exists is almost as hard as proving the mass gap: I believe the Clay Mathematics Institute combined these problems into one because they didn’t want an existence proof that used techniques that couldn’t settle further questions.)

  273. John Tromp Says:

    Stefan O’Rear: gratz on your incredible reduction from 5349 to 1919 states. what techniques enabled that?

  274. Stefan O'Rear Says:

    John Tromp #273:

    Deficient time estimation. I would have published the 1919-state version first, but that would have involved breaking my promise at #245, so I published a completely unoptimized version in order to keep that. It’s easy to get large improvements on a completely unoptimized thing.

    If you’re wondering what specifically I changed, look at the git history; the largest step was the first one, when the representation of the register machine was changed (as a second-order effect, this made all of the inc/dec subroutines the same size, which increased the structural regularity of the compiled expressions, improving BDD compression).

  275. The Busy Beaver Game | Azimuth Says:

    […] Aaronson, The 8000th Busy Beaver number eludes ZF set theory: new paper by Adam Yedidia and me, Shtetl-Optimized, 3 May […]

  276. John Sidles Says:

    John Baez (#272), please let me say that I agree with your points entirely, with the extension that “proving that SU(2) Yang-Mills theory exists” can be concretely realized as “proving that SU(2) Yang-Mills theory is realized as the effective dynamics of some well-posed post-Hilbert (stringy? varietal?) dynamical theory.”

    Last month’s Aguilar/Binosi/Papavassiliou review “The gluon mass generation mechanism: a concise primer” (April 2016) surveys the very many, immensely subtle, field-theory features that any such stringy/varietal post-Hilbert dynamical theory would have to exhibit, concretely and more-or-less naturally.

    Any such exhibition would be well-worthy of a Millennium Prize (needless to say). 🙂

  277. Joseph Van Name Says:

    Mateus Araújo #270. When I mentioned the elegant theorems from large cardinals, I was to a large extent referring to set theoretic results about objects of large cardinal size. However, large cardinals are often used to prove results about objects whose size is at most that of the reals or of some other relatively small size in forcing extensions. In particular, large cardinals prove the consistency of some results about objects whose cardinality is at most that of the real numbers. Large cardinals also sometimes (though not often enough) prove results about small cardinalities outright. For example, large cardinals prove that the continuous image of the complement of the continuous image of a closed subset of the real numbers is Lebesgue measurable. Furthermore, similar statements cannot be proven without large cardinal hypotheses.

    The only lines of investigation that I know of which use large cardinals to prove results about finite or countable objects are the work of Harvey Friedman and the work on self-distributive algebras (which are both mentioned here). Out of these lines of investigation, perhaps the most well-known result states that if a rank-to-rank cardinal exists, then the free self-distributive algebra on one generator can be embedded into an inverse limit of finite algebras (I have shown that this result extends to an arbitrary number of generators).

    In my biased opinion, it seems like in the near future through the self-distributive algebra line of investigation, people will prove many results in various fields of mathematics such as algebraic geometry or knot theory or other areas which cannot be proven without using large cardinal hypotheses. In fact, one could use groups or rings together with algebras of rank-into-rank embeddings to produce new self-distributive algebras, so perhaps these constructions could tell us something about the groups or rings being used (these constructions are very recent). However, actually proving that a statement about self-distributive algebras is independent of ZFC seems at the moment intractable. So yes, I could imagine that in the not too distant future someone would prove using large cardinals a result like the Riemann hypothesis or some other result.

    The predicted prolific use of large cardinals in finite or countable structures in the future is vastly different from the present state of mathematics, since Friedman’s Grand Conjecture predicts that “every theorem published in the Annals of Mathematics whose statement involves only finitary mathematical objects (i.e., what logicians call an arithmetical statement) can be proved in EFA”, where EFA (Exponential Function Arithmetic) is a very weak fragment of Peano Arithmetic.

  278. Heiner Marxen Says:

    Stefan O’Rear. I had a quick look at your “zf.lac”, and either I’ve not understood it, or it has a “bug”.

    I take it, that function arguments in Laconic are passed “by reference”. There are not any (hidden) copies of the passed in vars/tapes. The way you pass “t0” from function “car()” into function “unpair()” conflicts with its usage inside of “unpair()”: the statement out2 = t1; clobbers out2 aka t0, and the next statement out1 = t0 - t1; does not find the intended value in t0, anymore.

    Excuse me if this is the wrong platform, or the above is gibberish. Anyhow, what makes you confident that your code is free of bugs? Do you do any checks?

    Otherwise: wow! I admire this kind of creative work!
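
    For anyone trying to reproduce the suspected bug, here is a toy illustration in Python (this is not Laconic itself; lists emulate the by-reference tape variables, and the names follow Heiner’s description):

```python
def unpair(t0, t1, out1, out2):
    """Mimics the suspected clobbering: if out2 aliases t0, the first
    assignment destroys t0 before the second assignment reads it."""
    out2[0] = t1[0]           # clobbers t0 when out2 aliases t0...
    out1[0] = t0[0] - t1[0]   # ...so this no longer sees the intended value

t0, t1, out1 = [7], [3], [0]
unpair(t0, t1, out1, out2=t0)  # pass t0 itself as out2, as car() effectively does
# intended result: out1 == [7 - 3] == [4]; aliasing instead yields [3 - 3] == [0]
```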

  279. Stefan O'Rear Says:

    Heiner Marxen #278:

    I’m aware of that issue, I mentioned it last week and PR 3 addresses it. I’ll deal with that in a couple days when I get to addressing Mario’s (optimization) feedback as well.

    I agree that more validation is needed. I’ve done a fair amount of validating on squaresaresmall.nql, but nothing on the machines that are actually expected to run forever.

  280. Busy beavers defeat Infinity | Drawing Blanks Says:

    […] The 8000th Busy Beaver number eludes ZF set theory: new paper by Adam Yedidia and me […]

  281. Donal Tromp – All About Donald Trump Says:

    […] The 8000th Busy Beaver number eludes ZF set theory: new paper by Adam Yedidia and me […]

  282. Robert Sawyer ("r.e.s.") Says:

    The number of states in machine Z is easily reducible from 7918 to 7767 (= 7918 – 151) by simply merging 151 pairs of complementary “half-used states”. Because the ERROR state never occurs for this machine, any pair of states of the form [abcd(ERROR-|efghij), klmn(opqrst|ERROR-)] can be merged into a single state of form abcd(opqrst|efghij), provided only that every “goto klmn” in the machine also be replaced by “goto abcd”. E.g., the pair of states [0041(ERROR-|0042bL), 0000(0001bR|ERROR-)] can be merged into 0000(0001bR|0042bL), with every “goto 0041” elsewhere being replaced by “goto 0000” (in this instance, there’s only one such occurrence, namely 0037(0038aR|0041bR) must be changed to 0037(0038aR|0000bR)). In machine Z, I count 151 states of the first kind above, and 180 of the second kind; hence, 151 merges can be done without essentially altering the machine. (I used the same principle to reduce the number of states in what was possibly the first TM to compute a number larger than Graham’s number — see the webpage linked to my name above.)
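
    The merge Robert describes can be mechanized; here is a toy Python sketch (the state/transition encoding is invented for illustration, not the paper’s actual format): states map names to a pair of transitions (on reading a blank, on reading a mark), with None marking the never-reached ERROR branch.

```python
def merge_half_used(states):
    """Merge complementary half-used states: a state whose second branch is
    ERROR absorbs the second branch of a state whose first branch is ERROR,
    and every goto to the absorbed state is redirected to the kept one.
    A transition is (write_symbol, direction, next_state), or None for ERROR."""
    keeps = [s for s, (t0, t1) in states.items() if t1 is None and t0 is not None]
    drops = [s for s, (t0, t1) in states.items() if t0 is None and t1 is not None]
    for keep, drop in zip(keeps, drops):
        states[keep] = (states[keep][0], states[drop][1])   # absorb the live branch
        del states[drop]
        for s, (t0, t1) in list(states.items()):            # redirect gotos drop -> keep
            fix = lambda t: (t[0], t[1], keep) if t is not None and t[2] == drop else t
            states[s] = (fix(t0), fix(t1))
    return states

# Robert's example: 0041(ERROR-|0042bL) and 0000(0001bR|ERROR-) merge into
# 0000(0001bR|0042bL), and 0037's "goto 0041" becomes "goto 0000".
tm = {"0000": (("b", "R", "0001"), None),
      "0041": (None, ("b", "L", "0042")),
      "0037": (("a", "R", "0038"), ("b", "R", "0041"))}
merged = merge_half_used(tm)
```

    Since zip pairs the two lists off, the number of merges is the smaller of the two counts, matching the 151 merges Robert reports for Z.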

  283. This Turing machine should run forever unless maths is wrong | TreasureReading Says:

    […] size of these Turing machines, pushing them to the limit. Already a commenter on Aaronson’s blog claims to have created a 31-state Goldbach machine, although the pair have yet to verify […]

  284. Why Writing Correct Software Is Hard (Part 1) – Cloud Up2date Says:

    […] past May, Adam Yedidiya and Scott Aaronson published a program that compiles down to a Turing machine that works by that principle (or something equivalent to […]

  285. donal tromp Says:

    […] The 8000th Busy Beaver number eludes ZF set theory: new paper by Adam Yedidia and me […]

  286. foobar Says:

    Is this just an example of the witchcraft that is non-constructive mathematics? (joking)

  287. Pascal Ochem Says:

    A notorious open problem with a hopefully small TM is the existence of an odd perfect number.
    In pseudo C:
    int i, s, d;
    for (i = 1; ; i = i + 2) {              /* odd candidates only */
        s = 0;
        for (d = 1; d < i; d++)             /* s = sum of proper divisors of i */
            if (i % d == 0) s = s + d;
        if (s == i) return;                 /* halt: i is an odd perfect number */
    }
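
    The same search in runnable form, as a quick Python sketch (function names are invented for illustration):

```python
def sigma_proper(n):
    """Sum of the proper divisors of n (the accumulator s in the pseudo-C)."""
    return sum(d for d in range(1, n) if n % d == 0)

def is_odd_perfect(n):
    return n % 2 == 1 and sigma_proper(n) == n

# The corresponding Turing machine halts iff this loop terminates, i.e. iff an
# odd perfect number exists; none is known, and none exists below 10^1500
# (a published bound due to Ochem and Rao):
#     n = 1
#     while not is_odd_perfect(n):
#         n += 2
```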

  288. Tim Converse Says:

    This blogpost, and in particular the question of whether there are super-short programs that already exhibit “Gödelian behavior”, inspired me to write a prediction-market question:–100-states-be-found-with-behavior-that-cannot-be-characterized-by-mathematical-proof/

    Scott, I implicitly made you the arbiter of how the question resolves. 🙂

    Comments welcome, on either or here.