## Rosser’s Theorem via Turing machines

*(Thanks to Amit Sahai for spurring me to write this post!)*

**The Background**

We all remember Gödel’s First Incompleteness Theorem from kindergarten. This is the thing that, given a formal system F, constructs a sentence G(F) that’s a mathematical encoding of

“This sentence is not provable in F.”

If F proves G(F), then F proves both that F proves G(F) and that F doesn’t prove G(F), so F is inconsistent (and hence also unsound). Meanwhile, if F proves Not(G(F)), then it “believes” there’s a proof of G(F). So either that proof exists (in which case it would render F inconsistent, by the previous argument), or else it doesn’t exist (in which case F is unsound). The conclusion is that, assuming F is powerful enough to express sentences like G(F) in the first place, it can’t be both *sound* and *complete* (that is, it can’t prove all and only the true arithmetical statements).

OK, but the way most people like to state Gödel’s Theorem is stronger than that: **no sufficiently-powerful formal system F can be both complete and consistent.** Note that soundness implies consistency, but not vice versa. (If I believe that there’s a giant purple boogeyman on the moon, then presumably my belief is *unsound*, but it might be perfectly *consistent* with my various other beliefs about boogeymen.)

Unfortunately, neither Gödel’s original proof, nor computer scientists’ favorite alternative proofs, actually give you the stronger statement about completeness and *consistency*. And this has been a persistent problem when I teach Gödel in my undergraduate computability and complexity class.

Where’s the catch in Gödel’s argument? It’s in the case where F proves Not(G(F)) (i.e., that G(F) is provable), even though in reality G(F) is true (i.e., G(F) *isn’t* provable). In that situation, F would clearly be *unsound*, but it might not contain any *contradiction*—basically because, no matter how long you looked, you could never rule out F’s (false) belief that G(F) is provable. Indeed, F would be what I like to call a *self-hating theory*: a theory, like F+Not(Con(F)), that pessimistically believes in its own inconsistency, even though in fact it’s perfectly consistent. (By contrast, if F arrogantly believes in its own *consistency*, then it *can’t* be consistent by the Second Incompleteness Theorem! There’s a lesson there somewhere…)

The way Gödel himself solved this problem was by introducing a new concept called ω-consistency, which is intermediate between consistency and soundness. He then showed that F can’t be both complete and ω-consistent. (Why didn’t Gödel simply talk about soundness? Because unlike consistency or ω-consistency, soundness is a “metatheoretic” concept that’s impossible to formalize in F. So, if he used soundness, then the First Incompleteness Theorem couldn’t even be *stated*, let alone proved, in F itself, and that in turn would create problems for the proof of his *Second* Incompleteness Theorem.)

Anyway, from the standpoint of an undergrad class, the fear is that, once you start talking about “ω-consistency,” all the romance and self-referential magic of Gödel will vanish in a morass of technicality.

So surely we can dot the i’s here (or rather, put the umlauts on the ö’s), and prove the stronger, cleaner statement that F can’t be both complete and *consistent*?

Yes we can—but to do so we need Rosser’s Theorem: a strengthening of Gödel’s Theorem from 1936 that’s much less well-known among the nerd public than it ought to be. In Rosser’s proof, we replace G(F) by a new sentence R(F), which is a mathematical encoding of the following:

“For every proof of this sentence in F, there’s a shorter disproof.”

If F proves R(F), then it also proves that there’s a *disproof* of R(F) that’s shorter than the proof of R(F) whose existence we just assumed. So we can *look* for that disproof (since there are only finitely many strings of symbols to check), and either we’ll find it or we won’t—but in either case, we’ll have revealed F to be inconsistent. Meanwhile, if F proves Not(R(F)), then it proves that there *is* a proof of R(F) with no shorter disproof. So in particular, it proves that there’s a proof of R(F) that’s *no longer* than the proof of Not(R(F)) whose existence we just assumed. But once again, we can *look* for that proof (there are only finitely many strings to check), and either we’ll find it or we won’t, and in either case, F is revealed to be inconsistent.

Notice what the Rosser sentence accomplishes: it creates a *symmetry* between the cases that R(F) is provable and R(F) is *dis*provable, correcting the asymmetry between the two cases in Gödel’s original argument.

Alright, so then why don’t I just teach Rosser’s Theorem in my undergrad class, instead of Gödel’s?

I’ll tell you why: because, when I teach Gödel to computer scientists, I like to sidestep the nasty details of how you formalize the concept of “provability in F.” (From a modern computer-science perspective, Gödel numbering is a barf-inducingly ugly hack!)

Instead, I simply observe Gödel’s Theorem as a trivial corollary of what I see as its *conceptually-prior (even though historically-later) cousin*: Turing’s Theorem on the unsolvability of the halting problem.

For those of you who’ve never seen the connection between these two triumphs of human thought: suppose we had a sound and complete (and recursively-axiomatizable, yadda yadda yadda) formal system F, which was powerful enough to reason about Turing machines. Then I claim that, using such an F, we could easily solve the halting problem. For suppose we’re given a Turing machine M, and we want to know whether it halts on a blank tape. Then we’d simply have to enumerate all possible proofs in F, until we found *either* a proof that M halts, *or* a proof that M runs forever. Because F is complete, we’d eventually find one or the other, and because F is sound, the proof’s conclusion would be *true*. So we’d decide whether M halts. But since we already know (thanks to Mr. T) that the halting problem is undecidable, we conclude that F can’t have existed.
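In code, the argument is just proof search. Everything below is hypothetical: `enumerate_theorems` stands for a program listing the theorems of the supposed sound and complete F (no such F exists, which is the point), and the mock enumerator is purely illustrative:

```python
def decides_halting(enumerate_theorems, M):
    """Hypothetical halting decider, given a sound and complete F.

    enumerate_theorems() yields F's theorems as strings, in some order.
    Completeness guarantees that one of the two statements below
    eventually appears; soundness guarantees whichever appears is true.
    """
    for theorem in enumerate_theorems():
        if theorem == f"{M} halts on a blank tape":
            return True
        if theorem == f"{M} runs forever on a blank tape":
            return False

# Purely illustrative mock of a theorem enumerator:
def mock_theorems():
    yield "2 + 2 = 4"
    yield "M0 runs forever on a blank tape"

print(decides_halting(mock_theorems, "M0"))  # → False
```

Since the halting problem is undecidable, no real `enumerate_theorems` can make this loop always terminate with a true answer.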

“Look ma, no Gödel numbers!”

**The “New” Observation**

The above discussion leads to an obvious question:

**Is there also a proof of Rosser’s Theorem that uses only Turing machines—analogous to the computer-scientist-friendly proof we just gave for the “original” Incompleteness Theorem?**

My initial worry was that the answer would be no. For not only is Rosser’s sentence more complicated than Gödel’s, but it depends essentially on an *ordering* of proofs—and it’s not clear what such an ordering would correspond to in the world of Turing machines.

Why did this worry me? Because it threatened my conviction that computer programs are “really” at the core of Gödel’s Theorem, whatever any tradition-minded philosopher or logician might claim to the contrary. If even the modest step from Gödel to Rosser required abandoning the computability perspective, then my faith in the Almighty Turing Machine would be shaken.

But never fear! A few months ago, I found a short, simple, Turing-machine-based proof of Rosser’s Theorem. While this seemed too small to write up as a paper, ~~I’d never seen it before (please let me know if you have!)~~, so I thought I’d share it here. (**Update:** Makoto Kanazawa points me to a basically-similar argument in Kleene’s 1967 textbook. So, you can consider what follows to be a “popularization” of Kleene.)

The first step is to define the following variant of the halting problem:

**The Consistent Guessing Problem**

*Given as input a description of a Turing machine M:*

- *If M accepts on a blank tape, then accept.*
- *If M rejects on a blank tape, then reject.*
- *If M runs forever on a blank tape, then either accept or reject, but in any case, halt!*

It’s easy to show that there’s no Turing machine to solve the Consistent Guessing Problem, by a modification (arguably, even a simplification) of Turing’s original argument for the halting problem. Indeed, I put the unsolvability of the Consistent Guessing Problem on last semester’s midterm, and at least half the students got it. (Damn! I guess I can’t use that one again.)

See it yet? No? Alright, so let P be a Turing machine that solves the Consistent Guessing Problem. Then we can easily modify P to produce a new Turing machine Q that, given as input a description ⟨M⟩ of another Turing machine M:

- Rejects if M(⟨M⟩) accepts.
- Accepts if M(⟨M⟩) rejects.
- Halts (either accepting or rejecting) if M(⟨M⟩) runs forever.

Now run Q on its own description ⟨Q⟩. If Q(⟨Q⟩) accepts, then it rejects; if Q(⟨Q⟩) rejects, then it accepts. So the only remaining option is that Q(⟨Q⟩) runs forever, violating the third condition.

From the unsolvability of the Consistent Guessing Problem, I claim that Rosser’s Theorem follows. For suppose we had a complete and consistent (but *not* necessarily sound!) formal system F, which was powerful enough to talk about Turing machines. Then using F, we could solve the Consistent Guessing Problem, as follows. Given as input a Turing machine description ⟨M⟩, start enumerating all possible proofs and disproofs of the statement “M accepts on a blank tape.” Accept as soon as you find a proof, or reject as soon as you find a disproof.

Because F is complete, you must eventually find either a proof or a disproof (and therefore halt, either accepting or rejecting). Also, because F is consistent, if M really rejects then F can’t prove that M accepts, while if M really accepts then F can’t prove that M doesn’t accept (since in either case, the contradiction could be discovered in finite time). So you’ll accept if M accepts and reject if M rejects. But this means that you’re solving Consistent Guessing! Since we already showed there’s *no* Turing machine to solve Consistent Guessing, we conclude that F can’t have existed.
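The same proof-search loop works here, except that the roles of the hypotheses change: we enumerate proofs *and* disproofs, and consistency alone (no soundness needed) keeps the verdict honest for machines that halt. A hypothetical Python sketch (the generator and mock are illustrative stand-ins, not real objects):

```python
def consistent_guessing(enumerate_proofs, M):
    """Hypothetical Consistent Guessing solver from a complete,
    consistent F.

    enumerate_proofs() yields pairs (statement, sign), with sign +1
    for a proof and -1 for a disproof.  Completeness guarantees the
    target statement eventually appears with some sign; consistency
    guarantees the sign is right whenever M actually halts.
    """
    target = f"{M} accepts on a blank tape"
    for statement, sign in enumerate_proofs():
        if statement == target:
            return sign == +1  # accept on a proof, reject on a disproof

# Purely illustrative mock of F's proofs and disproofs:
def mock_proofs():
    yield ("M1 accepts on a blank tape", -1)  # a disproof
    yield ("M2 accepts on a blank tape", +1)  # a proof

print(consistent_guessing(mock_proofs, "M2"))  # → True
```

For a machine that runs forever, F may prove either the statement or its negation (it only has to pick one, consistently), and the loop halts either way, which is exactly what Consistent Guessing demands.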

QED: the moral order of the universe is restored, and the Turing machine’s exalted position at the base of all human thought reaffirmed.

(Incidentally, you might wonder whether Gödel’s *Second* Incompleteness Theorem, on the impossibility of a consistent F proving its own consistency, can also be proved in a Turing-machine-centric way. To anticipate your question, the answer is yes—and better yet, it even involves Kolmogorov complexity! See, for example, this beautiful recent paper by Shira Kritchman and Ran Raz.)

So, will Gödel’s Theorem always and forevermore be taught as a centerpiece of *computability* theory, and will the Gödel numbers get their much-deserved retirement? I don’t see a reason why that *shouldn’t* happen—but alas, the consistency of my prediction isn’t enough to imply its metatheoretic truth.

Comment #1 July 21st, 2011 at 5:43 pm

Nice trick with avoiding the Godel numbers. However, I always considered them as the first compiler, and analogized them to what a compiler does. And yes, I know they compile statements in first-order logic, not statements in some programming language.

The Godel numbering hack is the source of so many, many reductions from one system to another, I would think. In some way, I think the technique of them is as important a contribution as the result he got with it.

Comment #2 July 21st, 2011 at 5:53 pm

“(From a modern computer-science perspective, Gödel numbering is a barf-inducingly ugly hack!)”

It doesn’t have to be this way. Gödel numbering doesn’t *HAVE* to use prime decomposition or the Chinese remainder theorem. If it’s in a computer science program, define the Gödel number of a formula to be the number which, when written in binary, equals the .txt file containing the formula written in ASCII.
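For instance, a minimal sketch of exactly that scheme (function names are mine, purely illustrative):

```python
def godel_number(formula: str) -> int:
    """Interpret the ASCII bytes of the formula as one big binary integer."""
    return int.from_bytes(formula.encode("ascii"), "big")

def godel_decode(n: int) -> str:
    """Invert the numbering: recover the formula from its number."""
    length = (n.bit_length() + 7) // 8
    return n.to_bytes(length, "big").decode("ascii")

n = godel_number("forall x: x = x")
print(n)                # a unique number for this formula
print(godel_decode(n))  # → forall x: x = x
```

The map is injective by construction, and both directions are trivially computable, which is all a Gödel numbering really has to provide.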

Comment #3 July 21st, 2011 at 6:18 pm

Jay and Sam: Thanks for the comments!

I completely agree that Gödel numbering was a major advance in human thought (how could it not be? it helped pave the way for Turing machines 🙂 ), and also agree that there’s no fundamental reason for the numbering scheme to be as ugly as the usual one.

But I still maintain that, for a CS class, there’s a major advantage to the halting-problem-based proof: namely, it doesn’t require giving *any* justification for why sentences like G(F) ought to be expressible in the language of F. Indeed, we don’t even need to describe F at all, except hand-wavingly.

It’s true that we need to believe that one could construct a Turing machine to enumerate all the theorems of F. But for people with programming experience, *that* claim (unlike claims about expressibility in formal systems) is “intuitively obvious.”

Comment #4 July 21st, 2011 at 6:23 pm

This is basically how Kleene’s 1967 textbook presents Rosser’s Theorem.

http://books.google.com/books?id=uV6Sp8gl3PcC&lpg=PR7&ots=HfXuowZ5og&dq=Kleene%20%22mathematical%20logic%22&lr&pg=PA275#v=onepage&f=false

Comment #5 July 21st, 2011 at 6:55 pm

Makoto: Thanks for the reference! I added a comment about it to the post.

Comment #6 July 21st, 2011 at 7:01 pm

It just occurred to me that I don’t know the answer to the following extremely-interesting question: do my “Consistent Guessing Problem” and the Halting Problem belong to the same Turing-degree?

Clearly Consistent Guessing ≤_T HALT, but I don’t see the other direction. *If* the answer were negative, then Consistent Guessing would be the first example I’d ever seen of a “natural” problem intermediate between computable and the halting problem.

Obviously Consistent Guessing is not a language, but that shouldn’t prevent us from talking about its Turing-degree (just like we can talk about a promise problem being NP-complete, BPP-complete, etc.).

Comment #7 July 21st, 2011 at 8:16 pm

Many thanks for this great post 😉

Comment #8 July 21st, 2011 at 8:35 pm

I’m not so sure I favor the complete retirement of Godel numbers. They are a useful concept when one wants to talk about other decidability issues. For example, while they aren’t strictly used in the resolution of Hilbert’s tenth problem, it is probably a lot easier to see what is going on there if one has seen something like Godel numbering before.

Comment #9 July 21st, 2011 at 8:39 pm

Sam, unfortunately if one does use a Godel numbering that isn’t something like the standard one used by Godel then the technical details of showing that it actually does what you want become uglier. Godel numbering in a more or less standard form makes it much easier to see that Robinson arithmetic is essentially the minimal object to which Godel’s theorems apply. (Although that’s really much more of a foundations and a logic issue than a compsci issue.)

Comment #10 July 21st, 2011 at 9:47 pm

Joshua: Alright, I agree that the Gödel numbers can come out of retirement when one wants to talk about the decidability of weaker or stronger theories of arithmetic (in other words: “which theories can and can’t reason about Turing machines?”). I was suggesting retiring them from their “day job,” of supporting the conclusion that Gödel’s Theorems (and Rosser’s, etc.) hold for those theories that *are* strong enough to reason about Turing machines.

Comment #11 July 21st, 2011 at 10:13 pm

Thanks, Scott, for the post.

Here’s a direct proof that encodes Rosser’s trick (that I started sketching on Scott’s white board yesterday). It assumes, like Scott’s version above, that the formal system F can encode statements that talk about TMs halting or not halting, and that proofs based on a sequence of valid configurations of Turing Machines, and the meaning of special instructions “Halt” and “Loop”, constitute valid proofs in F:

Consider the following TM T (which ignores its input):

1) Via the recursion theorem, obtain own description T.

2) Let A be the statement “T does not halt (on empty input).”

3) Enumerate all strings y in lex order, and for each one:

3a) First check if y is a valid F-proof of (Not A). If so, Loop.

3b) Then check if y is a valid F-proof of (A). If so, Halt.

Now suppose that F is both complete and consistent.

Then there are only two cases, either (1) or (2) below:

(1) There is a proof y of A, but no proof of (Not A). Let y be the lexicographically first such proof.

But then the sequence of configurations of T enumerating all proofs up to y is a proof that T Halts, i.e. it is a proof of (Not A). This is a contradiction.

(2) There is a proof y of (Not A), but no proof of A. Let y be the lexicographically first such proof.

But then the sequence of configurations of T enumerating all proofs up to y, together with the instruction “Loop”, is a proof that T Does Not Halt, i.e. it is a proof of (A). This is a contradiction.

QED

Comment #12 July 21st, 2011 at 10:44 pm

To answer Scott #6:

HALT does not reduce to Consistent Guessing (CG).

Here’s a proof sketch:

We’ll build a language L that’s a solution (i.e., refinement) of CG. On stage i we’ll try to set finitely many bits of L so as to ensure that the ith machine M_i doesn’t solve HALT when given oracle access to L.

If this works, then we’re done! So assume for contradiction’s sake that we get stuck on some stage i. We’re going to use this premise to solve HALT—without an oracle. This absurdity will prove the result.

So—say we’re stuck at stage i, and we have some finite setting L[i] to some of the bits of L, consistent with CG.

Let L* be L[i] augmented with all of the bits ‘forced’ by CG: i.e., L*(M) = 0 whenever M halts with output 0, and similarly for 1. So, L*(M) still is undefined for all but finitely many of the machines M that run forever.

Now, for each input M to M_i, by our assumption we have that

M_i^{L’}(M) = HALT(M)

whenever L’ is an extension of L*. An application of Konig’s lemma tells us that for each M there is a number n = n(M) such that any extension of L* with arbitrary values up thru input lengths n, causes M_i(M) to output HALT(M).

So, here’s how we can determine the value HALT(M): we dovetail simulations of all Turing machines. On stage j we simulate the first j machines for j steps each. Whenever a machine halts with output (0 or 1), we extend our finite approximation of L*. (This approximation begins with the finitely many values of L[i], hard-coded into the algorithm). Let L*[j] be the finite approximation to L* on the j-th stage.

On the second part of stage j, we simulate M_i(M) on *all* extensions of L*[j] up to length j. That is, we take the values we have so far, and we try extending them in all possible ways. We then simulate M_i(M) on all of these extensions as oracle. If on *all* such simulations, the machine halts with some output b, without ever trying to access an undefined oracle bit, then we output “HALT(M) = b”.

The claim is, this always halts with the right answer. Using our earlier analysis with Konig’s lemma (and the fact that L*[j] converges to L*), we find that for large enough j, we will output the correct bit, IF we reach stage j.

So we just need to argue that we never output a wrong bit earlier. But this is easy: at every stage j, at least one of our simulations is an initial segment of a correct simulation of M_i(M) on a valid CG oracle. Thus that simulation always either halts with the correct output, or doesn’t halt, or tries to access an undefined oracle bit. So we never get a unanimous incorrect vote.

Comment #13 July 21st, 2011 at 10:47 pm

When I said

“We then simulate M_i(M) on all of these extensions as oracle”

I must add, “for j steps each.”

(We don’t want to get stuck on the j-th stage.)

Comment #14 July 21st, 2011 at 11:14 pm

Scott #6:

If the answer were negative, then Consistent Guessing would be the first example I’d ever seen of a “natural” problem intermediate between computable and the halting problem.

There are actually many examples of this, as long as you’re reasonably generous about what counts as “natural”. For instance, let a “computable tree” have as nodes a subset of the natural numbers; we should be able to compute membership in the tree, as well as children and parent (assume max fanin is 2).

Every infinite tree of finite fanout has an infinite path (Konig’s lemma). But does every *computable* infinite tree have a *computable* infinite path? The answer is no, as can be proved through a diagonalization argument.

A natural next question is whether there’s a computable infinite tree, such that any infinite path in the tree can be used to solve HALT. However, it’s a theorem that for any computable infinite tree T of bounded fanout, there is some infinite path in T that cannot be used to solve HALT. In fact, the same is true if we replace HALT with any noncomputable language.

You can read about fun results of this type in Bill Gasarch’s article “A Survey of Recursive Combinatorics”.

Comment #15 July 21st, 2011 at 11:18 pm

Thanks so much, Andy!!

I think this counts as, by far, the *most* natural example of a Turing-intermediate problem that I’ve ever seen. The key is that, while there *is* a Friedberg-Muchnik-like diagonalization buried inside the argument that CG is not equivalent to HALT, that diagonalization is left blithely unspecified by the problem statement, rearing its head only in the *analysis*.

Indeed, to me this example seems *so* natural that it might count as a counterexample to Wolfram’s “Principle of Computational Equivalence.” 🙂

Comment #16 July 21st, 2011 at 11:33 pm

Is this the same Rosser who has a book on math logic that has been on my shelf for 30 some years, and I have always planned to start reading “real soon now”?

Comment #17 July 21st, 2011 at 11:48 pm

Raoul: While I can’t *prove* the answer is yes, the circumstantial case is an extremely strong one… 🙂

Comment #18 July 21st, 2011 at 11:48 pm

Amit #11: Thanks so much—your version is slick (and would work very well in a computability class as long as one had first proved the Recursion Theorem)! Now *get back to writing your talk.* 🙂

Comment #19 July 22nd, 2011 at 6:04 am

Scott, I know you know this, I just wanted to mumble some Google search terms for anyone probing further.

Even after Friedberg and Muchnik provided an answer to Post’s Problem (find an r.e. degree not computable and not equivalent to HALT, a so-called Turing incomplete degree) people kept working on “Post’s Program”, to answer the question by formulating a stated property to guarantee incompleteness. There’s a paper by Soare and Harrington, “Post’s Program and incomplete recursively enumerable sets”, which outlines a lattice-theoretic property which can be stated in a 3-line formula (with what looks like 4 alternations of quantifier!).

This property doesn’t explicitly reference the numbering of Turing machines at all, which is, after all, a Goedel numbering which you’ve slickly hidden by treating M as both a description and a numerical input to machine descriptions.

Comment #20 July 22nd, 2011 at 8:53 am

Woops—in my “proof” above (#12), I neglected an important, tricky case: the case where M_i can’t be induced to make an incorrect output by any finite extension, but can be induced to run forever on some input and on some *infinite* extension.

My apologies, and I’ll think more about this.

Comment #21 July 22nd, 2011 at 1:55 pm

What’s the “right” way to make CG concrete for figuring out the question of Turing degree? Should we think of CG as a set of languages, and a Turing reduction to CG as a machine which works for any choice of oracle in CG? Unless I’ve misunderstood, that’s the interpretation Andy D is using. Let’s call that set CGmax.

The reduction in the easy direction (CG Problem <=T HALT) gives oracle Turing machines deciding some, but not all, elements of CGmax. There are languages in CGmax which do not reduce to halting, e.g. that encode an undecidable language in the undetermined bits.

It probably doesn't matter: any reduction to CG has to be able to handle weak and strong oracles. But could there be a reduction which needs an upper bound on the power of a CG oracle, not just a lower bound?

Would it be useful to try to define a subset CGmin which is in some sense minimal? Candidate definition: a language L is in CGmin if it solves the Consistent Guessing Problem AND is Turing reducible to every language in CGmax?

Answer: Nope! (realized while writing the comment) If a problem Q can be reduced to CGmin, then it can be reduced to CGmax by composing the reductions. Duh! Well at least that rules out some exotic approaches to CG vs. HALT 🙂

Comment #22 July 22nd, 2011 at 2:19 pm

Hmm… it seems likely that CGmin, as I defined it, is empty unless HALT reduces to CG — so it’s not going to be much help in proving that HALT doesn’t reduce to CG. Oops! I liked the aesthetics of the minimization construction, but it doesn’t work.

Comment #23 July 22nd, 2011 at 2:23 pm

Chris H.:

What’s the “right” way to make CG concrete for figuring out the question of Turing degree? Should we think of CG as a set of languages, and a Turing reduction to CG as a machine which works for any choice of oracle in CG? Unless I’ve misunderstood, that’s the interpretation Andy D is using.

Yeah, that’s also the interpretation I had in mind. It’s the one that most closely models the way promise problems are treated in complexity theory.

Comment #24 July 22nd, 2011 at 3:17 pm

En route to a famed complexity zoo in a foreign land, you come to a crossroads. You don’t know which path to take, but fortunately there are three local Consistent Guessing Problem Oracles present to help you, dressed in the traditional manner. A nearby sign informs you that, should you ask an oracle about a non-terminating machine, one oracle always claims it accepts, one oracle always claims that it rejects, and one oracle makes an arbitrary response (it would be scandalous for any oracle to lie about a machine which halts).

Puzzle #1: How many questions must you ask of an oracle to get your PhD thesis?

Puzzle #2: Your travelling companion wishes to continue on, but you prefer to linger at the bucolic crossroads. How can you solve your halting problem by questioning the oracles?

Comment #25 July 22nd, 2011 at 4:51 pm

Could we define an alternative “non-uniform” (might be overloading the term too much) reduction to a promise class like CG? A problem Q is non-uniformly Turing-reducible to CG if for any language solving CG, there exists a Turing machine which solves Q using that language as an oracle. The choice of machine is allowed to depend on the choice of oracle, but every oracle must be powerful enough to support the reduction.

Scott’s suggested quantification for taking a class A relative to a promise problem oracle set B is exists(a:A).forall(b:B).a(b). Non-uniform reduction reverses that to forall(b:B).exists(a:A).a(b). There could be problems which reduce to CG non-uniformly but not uniformly.

(We can contrast the usual relativization A^B = exists(a:A).exists(b:B).a(b), but it’s not a perfect analogy, because B is a class of problems, not a set of solutions.)

I suspect this is another dead end: natural oracle sets for promise problems will include uncountably many unhelpfully random or perverse oracles. Non-uniformity is just an invitation to cheat in the definition of the oracle set.

Comment #26 July 22nd, 2011 at 6:51 pm

The “direct” proof I sketched above can be strengthened with a minor addition that also yields a version of the Second Incompleteness Theorem:

Namely, the proof establishes the following claim:

Claim: Let F be a formal system that can encode statements about TMs halting or not halting, and such that proofs based on a sequence of valid configurations of Turing Machines, and the meaning of the special instructions “Halt” and “Loop”, are valid proofs in F.

Then there exists a specific statement A, expressible in F, such that if F is consistent, then we have that: (1) A is true, (2) there is no proof for A in F, and (3) there is no proof for (Not A) in F.

[[Note that the proof I gave above shows just (2) and (3), but by the definition of the Turing Machine T, this implies (1).]]

An immediate corollary is that of course such an F cannot be both complete and consistent.

But it also implies the following version of the Second Incompleteness Theorem: Suppose F is also sufficiently expressive that the proof of the Claim sketched above can also be encoded into a valid F-proof that consistency of F implies (A). Then it follows that if F can prove its own consistency, it must be inconsistent.

Comment #27 July 22nd, 2011 at 8:08 pm

Does anybody know which textbooks cover Godel’s incompleteness theorems and related results using this very nice Turing machine-based approach?

I see that the updated version of this post cites Kleene’67, but is there anything more recent ?

Comment #28 July 23rd, 2011 at 11:03 am

I think you may have rediscovered a 1972 theorem of Jockusch and Soare that a Turing degree computes a completion of PA if and only if it is a DNR2 degree, in other words if and only if it computes a separating set for {e : \phi_e(0) = 0} and {e : \phi_e(0) = 1}. There is some info on that in the Wikipedia article on PA degrees, with a reference.

Comment #29 July 23rd, 2011 at 11:04 am

BTW this post was mentioned here: http://math.stackexchange.com/questions/53321/prove-godels-incompleteness-theorem-using-halting-problem

Comment #30 July 23rd, 2011 at 11:34 am

Re Christopher H: the thing you have called “non-uniform reducibility” is known as “Muchnik reducibility” or “weak reducibility” in the literature. The uniform version is called “Medvedev reducibility” or “strong reducibility”.

Comment #31 July 23rd, 2011 at 11:39 am

First, I would like to congratulate you for this very, very nice post! Very clear explanation. This way of proving by Turing Machines (TM) is certainly much more attractive than the original ones for computer scientists like me.

I have just a remark:

These TM-based proofs assume the Church-Turing Hypothesis, right? So when you say that a formal system F decides, you are assuming that this is (always) the same as “some Turing machine decides”.

Although the Church-Turing Hypothesis is not currently threatened or doubted at all, the original proofs seem to have the advantage of not needing it.

Or am I missing something?

Best regards.

Comment #32 July 23rd, 2011 at 12:48 pm

P.A.S.: No, the original proofs have no such advantage over the halting-problem-based proofs.

For the purpose of the arguments in this post, you can think of Turing machines as just some formal mathematical objects, which

(a) can’t solve their own halting problem, but

(b) *could* solve their own halting problem if there were a formal system F with suitable properties.

You don’t need the extra philosophical belief that Turing machines represent the “right” model of computation.

Now, it’s true that the most general formulations of Gödel’s and Rosser’s Theorems talk about formal systems that are “recursively-axiomatizable”, and that here, “recursively” is defined in terms of Turing machines. But crucially, that’s true even for the *original* proofs based on Gödel numbering, if you want to state them in the most general way—not just for the proofs based on the halting problem!

Comment #33 July 23rd, 2011 at 2:30 pm

OK, I have a better handle now on the complexity of Consistent Guessing. As suspected, it can’t be used to solve HALT; in fact, it can’t be used to solve *any* fixed incomputable language.

The proof is simple. Let CGsol denote the collection of languages which solve CG. I claim that CGsol can be defined as the set of infinite paths in a computable binary tree. (Proof: exercise!)

Any such collection is called a Pi^1_0 class. As I mentioned earlier (#14), it is known that for any incomputable L, any Pi^1_0 class has a member A, such that L is not Turing-reducible to A. Thus this is true of CGsol. QED.

I wrote up a self-contained proof (link to follow shortly); somehow I wrote it all up before realizing that the proof boils down to this known result. It is at least a good review of a nice computability technique. In this approach, rather than constructing an oracle by finite extensions (i.e. standard diagonalization), we instead build a nested sequence of infinite *trees*, such that our oracle will be an infinite path in the intersection of all the trees.

By the way, there are powerful variants of the result on Pi^1_0 classes; one of the strongest is the “Low Basis Theorem” (see wiki), although I don’t see that that result implies the one I mentioned. I also don’t have a reference on hand for the result I mentioned; Carl M or others, maybe you can help me out? Thanks!

Comment #34 July 23rd, 2011 at 3:05 pm

My writeup about CG is here. The definitive reference on Pi^1_0 classes, by Douglas Cenzer, is here, or in case of limited attention span, here is their wiki entry.

Comment #35 July 23rd, 2011 at 3:06 pm

Sorry, should be “Pi^0_1 classes”.

Comment #36 July 23rd, 2011 at 5:53 pm

Re Andy D: the paper that presented both the low basis theorem and the cone avoidance theorem you mention is “Pi^0_1 classes and degrees of theories”, Jockusch and Soare, Trans. AMS, 1972. That paper is full of other interesting results as well. You may also be interested in the Wikipedia article on PA degrees and the other references there.

Comment #37 July 24th, 2011 at 8:46 am

I am concerned with your interpretation of soundness. I think it means that truth is preserved, rather than that truth is created. You say that, because the system is sound, logical conclusions are true. This is false, because there exists a theory that is sound and whose axioms and theorems are false.

Comment #38 July 24th, 2011 at 1:08 pm

Zirui: You’re confusing soundness with consistency.

Comment #39 July 24th, 2011 at 3:16 pm

Like Micki St. James I’m suspicious. A Godel numbering is essentially the same trick as one uses to make the code of a Turing machine available as input on the tape of another Turing machine. You haven’t avoided using Godel numbers at all!

Comment #40 July 25th, 2011 at 1:34 am

Tom and Micki St. James: Obviously, which proof you prefer is largely a matter of taste. But having said that, if you don’t agree that the proof based on the halting problem is superior, then your taste is mistaken! 😉

Seriously: I agree that the “traditional” proof and the halting-problem-based proof both involve a mathematical object that takes as input a coded description of itself—and that this represents an advance in human knowledge that made its first appearance with Gödel numbers and that’s up there in importance with fire and the wheel. In that sense, you’re absolutely right that I “haven’t avoided using Gödel numbers at all.”

All I’ve done, if you like, is to recognize the fact that in the 80 years since Gödel’s paper, the syntax/semantics switch he pioneered has changed our concepts so thoroughly that, in proving Gödel’s Theorem today, there’s no longer any need to explicate the switch as painstakingly as Gödel did the first time around! In fact, trying to do so might only confuse people.

For me, the “modern” (meaning post-1936 🙂 ) proof of Gödel’s Theorem has the great advantage of modularity. First you show (or better yet, depending on your purposes, simply assume) that your formal system is capable of expressing statements about computer programs. Then you reduce the entire rest of the proof to some straightforward exercises in computer programming.

By contrast, since the modern concept of a computer didn’t exist yet in 1931, Gödel was forced to resort to “programming without recognizing it for what it is”: programming in the hacky, inconvenient, low-level programming language of first-order formulas with addition and multiplication over the natural numbers. My claim is precisely that, because he suffered for us, today we don’t need to!

Comment #41 July 25th, 2011 at 2:37 am

What a nice way of putting it! Gödel was programming in an inconvenient low-level programming language and today there’s no need to belabor programming to a class of programmers. At least Gödel didn’t have to worry about multiplication of natural numbers throwing an overflow exception.

Which makes me wonder, what will we be able to explain so much more compactly once we all have experience with quantum algorithms?

Comment #42 July 25th, 2011 at 11:53 am

“Which makes me wonder, what will we be able to explain so much more compactly once we all have experience with quantum algorithms?”

I think it’s already the case that a large part of the content of modern physics becomes easier to explain once people have some experience doing mathematical manipulations with qubits! (Which doesn’t itself require any physics background.) Aspects of QM that would require ponderous explanations by Einstein, Heisenberg, or Bohr can now be seen with a few lines of qubit manipulations.
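As one illustration of “a few lines of qubit manipulations” (my own addition, using numpy): the perfect correlations of a Bell pair fall out of ordinary linear algebra.

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2): apply H to the first qubit of |00>, then CNOT
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
state = CNOT @ np.kron(H, np.eye(2)) @ np.array([1, 0, 0, 0])

# Measurement probabilities in the 00, 01, 10, 11 basis:
probs = np.abs(state) ** 2
# outcomes 00 and 11 each occur with probability 1/2; 01 and 10 never do
```

No interpretation of QM is needed to compute this; the correlations are just there in the vector.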

Comment #43 July 26th, 2011 at 3:26 pm

I absolutely agree that the computational viewpoint is superior! In fact, by complete coincidence, I’d been thinking exactly of a computational proof of incompleteness last week. I also came up with what I thought of as the “consistent extension problem”.

The way I arrived at the consistent guessing problem is, if I understand you correctly, the opposite of the way you arrived at it. Assuming Rosser’s theorem there is no complete extension of PA, and therefore no computable consistent extension of the Turing machine which accepts theorems of PA and rejects negations of theorems. (So not only is consistent extension not possible in general, it’s not possible in this *particular case* either!)

In fact, the deduction rules of first-order calculus are themselves a (non-deterministic) computational process. I’m not sure if this (rather trivial) observation is fully recognised.
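The observation can be made concrete: forward-chaining with a deduction rule is an ordinary fixed-point computation. A minimal sketch (mine, propositional only, with implications given as pairs rather than full first-order formulas):

```python
def derive(axioms, implications):
    """Close a set of propositions under modus ponens.

    implications: set of (p, q) pairs meaning "p implies q".
    Deduction here is literally a search to a fixed point.
    """
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for p, q in implications:
            if p in known and q not in known:
                known.add(q)
                changed = True
    return known

theorems = derive({"p"}, {("p", "q"), ("q", "r"), ("s", "t")})
# theorems == {"p", "q", "r"}
```

Nondeterminism in a proof calculus corresponds to the choice of which rule to fire next; taking the closure, as above, sweeps that choice away.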

However, I’m not sure I agree with you that Godel numbering is as important as you think it is … or at least there’s more to be said on this issue. One can well imagine a computational engine represented by a directed graph which manipulates other directed graphs. No encoding or “Godel numbering” is then required to state the halting problem.

In fact I think lambda terms might be an example of such a computational process which acts on structures of the same form as itself. Self-referential reasoning is much more convincing in such a context, I think.

Comment #44 July 26th, 2011 at 5:04 pm

the book “an introduction to the general theory of algorithms” by michael machtey and paul young (late seventies) is a nice reference for how to present logic material to computer scientists (including the use of recursively inseparable sets for incompleteness and undecidability results). not by accident: the authors were well-trained in classical recursion theory but were moving to complexity theory in its early days.

lorenzo

Comment #45 July 27th, 2011 at 11:55 am

“It’s true that we need to believe that one could construct a Turing machine to enumerate all the theorems of F. But for people with programming experience, that claim (unlike claims about expressibility in formal systems) is ‘intuitively obvious.’”

This is not true at all: Turing machines are terrifically limited and clumsy, and typical undergraduates (even CS majors at top universities) have a lot of trouble programming them. It’s not at all intuitively obvious that they are actually equivalent to more flexible notions of computation, even to people with substantial programming experience.

So merely by talking about “Turing machines” you are already skimming over a huge mess of coding details. Why not go all the way, and just use a modern programming language instead? Then you’ll have something powerful enough that it genuinely is obvious to any programmer that it can do everything you want it to. (Of course, the reason is that talking about “Turing machines” is traditional and sounds more theoretical. This is the same reason why people talk about “Goedel numbering” even though it’s really a very simple idea.)

In any case, there’s no reason to complain that Goedel numbering is complicated and technical while cheerfully talking about Turing machines, which are equally complicated and technical, in exactly the same way.

Comment #46 July 27th, 2011 at 1:00 pm

WRT Turing machines,

in the new book, “P, NP, and NP Completeness”, Oded Goldreich says something to the effect that Turing Machines, Formal Languages, and Automata are basically worthless and a huge impediment to learning Complexity theory.

I am far from being competent to judge the merit of that opinion, but I like it. While I have degrees in math and physics, I am totally self taught in CS, with the usual result that I study the things that I think are fun but avoid those that seem tedious. Thus I am happy to study any book on algorithms, data structures, discrete math, or programming. Even the logistics of software development is somewhat interesting, in an engineering sort of way.

But every time I try to read about Turing Machines, Formal Languages, or Automata theory, I need a nap. And when I wake up, I pick up a different book!

On a related note, I am impressed by the hobby of building Turing machines out of Legos, e.g.,

http://www.youtube.com/watch?v=i-1LxIEMC58

http://www.youtube.com/watch?v=cYw2ewoO6c4

Comment #47 July 27th, 2011 at 3:53 pm

continuing; Oded’s Ph.D. adviser was Shimon Even. It is too bad they didn’t write a book together, it would probably be called “Even and Odd”

Comment #48 July 27th, 2011 at 4:01 pm

Anonymous #45: In the context of this post, when I say “Turing machines,” I mean any modern programming language of your choice (or better yet, pseudocode). That’s how little the coding details matter here.

Comment #49 July 27th, 2011 at 4:02 pm

Raoul Ohio says “So merely by talking about “Turing machines” you are already skimming over a huge mess of coding details. Why not go all the way, and just use a modern programming language instead?”

the concept of “acceptable programming system” is kind of nice as well…

lorenzo

Comment #50 July 27th, 2011 at 10:50 pm

For the last 10+ years, I have been teaching Discrete Math to CS students, many of whom have yet to discover the joy of TCS. These students regard DM and TCS as obstacles on the path to making big bucks working on computer games. Since some of them will in fact be rich pretty soon, I point out that there are lots of rooms in the new building that don’t have anyone’s name on them yet.

Most students quickly understand the halting problem when presented in C/C++/Java. I.e., suppose you have a C++ function with prototype

```cpp
bool halt(string prog); // prog a valid C++ program
```

that lives in a file halt.h, and an implementation that returns true if and only if the program prog halts. Then consider a program named bummer with the following code:

```cpp
#include "halt.h"

int main() {
    while (halt(bummer));
    return EXIT_SUCCESS;
}
```

This makes a pretty nice test problem. I give a couple of points for reproducing equivalent code, a couple more for an analysis of whether program bummer halts or not, and a couple more for demonstrating why this shows there cannot be a halting function.

Extra credit: why won’t the program bummer compile as written?
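The same paradox can be sketched in Python without touching the compile question, by passing the would-be halting oracle in as a parameter (a hypothetical function, of course; the whole point is that no correct implementation of it can exist):

```python
def bummer(halt):
    """Loops forever iff halt claims that bummer halts, so halt must be wrong."""
    while halt("bummer"):
        pass  # spin forever whenever the oracle says we halt
    return 0

# If a claimed oracle answers False ("bummer doesn't halt"),
# then bummer promptly halts, contradicting the oracle:
result = bummer(lambda prog: False)
```

If instead the oracle answered True, `bummer` would spin forever, again contradicting it; either way the oracle is refuted.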

Comment #51 July 28th, 2011 at 10:23 am

I’m going to push back some against the people who are arguing for using “modern programming languages” rather than extremely minimal formal models of computation in presenting CS and logic theory results. Preemptive disclaimers: (1) I agree that the modern programming language approach is best when you only have a fairly limited time to interact with your audience — it is the approach I have used and would use again when giving a single 50 minute lecture on these topics or writing a single blogpost. (2) I am defending minimal formal models, not Turing Machines in particular. All of my arguments would also apply to, for example, RAM machines.

All of these points come down to one basic point: Modern programming languages are easy to program in, but hard to simulate.

(1) Suppose you prove Godel or Rosser’s theorem as described in this post. Then you have proved that there is no complete consistent theory which can describe the behavior of an idealized modern computer. A smart student will respond “But I’m not trying to prove theorems about how computers behave, I’m trying to prove theorems about elementary number theory!” To respond to him, you have to talk about some variant of Goedel numbering. By the way, I really did encounter such a smart student, after presenting a talk like this at PROMYS (although it took quite a while to hash out what his objection was).

(2) Suppose you want to prove that 3-coloring is NP-complete. So, given an algorithm, you must give a way to encode its behavior in a graph to be 3-colored. If the algorithm comes in the form of a Turing machine (or other minimal formal model), this can be presented in an hour. Describing how to compile C code to a graph is plainly absurd. To give a computability example, rather than a complexity one, the same issue arises when presenting the solution to Hilbert’s 10th problem.

(3) It is easy to understand that you can’t solve Halting for programs that can explicitly include other programs. It is not so easy to understand that very basic programming structures, such as those used in PostScript files or LaTeX macros, are already enough to make halting unsolvable. See http://stackoverflow.com/questions/235984/the-halting-problem-in-the-field for some more entertaining examples.

Comment #52 July 28th, 2011 at 10:41 am

Hi David,

Yeah, I suspect that a presentation of Godel’s Theorem using (say) C or Lisp would only work well for C or Lisp hackers respectively. As I said, personally I just use pseudocode!

On the other hand, when I teach my class I’m ALWAYS in the situation of very limited time! And while I don’t have time to develop first-order logic, by the time I get to Godel I HAVE already explained and justified the Church-Turing Thesis, and the equivalence of all “reasonable” programming formalisms.

Comment #53 July 28th, 2011 at 2:12 pm

Let’s say I have an oracle for CG, but I know that it answers “0” on all TMs that do not halt and accept (so if it rejects or loops forever, then it outputs 0). This is also known as the ACCEPTANCE problem.

Is it known whether HALT reduces to ACCEPTANCE?
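For what it’s worth, the standard reduction does go through: wrap the machine so that halting on the given input forces accepting. A toy Python sketch (machines modeled as functions; `accepts` is a hypothetical ACCEPTANCE oracle, and `toy_accepts` below is a stand-in that is only valid on wrappers that actually halt):

```python
def halts(accepts, machine, inp):
    """Reduce HALT to ACCEPTANCE: machine halts on inp iff wrapper accepts."""
    def wrapper(_):
        machine(inp)   # runs forever iff machine doesn't halt on inp
        return True    # reached only if machine halted: accept
    return accepts(wrapper)

# Toy 'oracle' that just runs the wrapper (fine here, since our test machine halts):
toy_accepts = lambda m: m(None) is True
```

So ACCEPTANCE is at least as hard as HALT; the interesting question in this thread is whether the weaker CG promise already gives you that.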

Comment #54 August 5th, 2011 at 1:15 am

Here is a new preprint that is related:

http://arxiv.org/abs/1108.0992

Comment #55 August 6th, 2011 at 7:11 am

This is not a pipe!

I first stumbled across this blog today from a reference in MetaFilter. It’s been a long time since I’ve read Hofstadter, but the way I felt when I first read GEB is the way I feel after reading some of your stuff.

Thank you!

Comment #56 August 10th, 2011 at 1:48 am

Nice post. Just one question in regards to this part: “Also, because F is consistent, if M really rejects then F can’t prove that M accepts, while if M really accepts then F can’t prove that M doesn’t accept (since in either case, the contradiction could be discovered in finite time). ”

I am not sure how the contradiction could be found. Suppose that F proves that M accepts, but that in reality M rejects. Is the contradiction found by deriving more theorems from F? Or is the contradiction found by running M? If it’s the latter, that really isn’t a contradiction within F.

I am sure I must be missing something…

Comment #57 April 26th, 2012 at 4:16 am

It seems that your proof depends on the supposition that there can be a formal system that is “powerful enough to talk about Turing machines”. But isn’t the notion of a Turing machine a notion that inherently involves actual physical entities, whereas a formal system is a system that is entirely independent of physical entities?

In particular, the notions of halting and not halting are notions that refer to physical events. So, isn’t it the case that if a system contained within itself references to such events, then it cannot be a formal system? And doesn’t that completely invalidate your proof?

Comment #58 April 26th, 2012 at 7:47 am

Rudi: No, there’s no difference whatsoever between Turing machines and formal systems regarding their “physicality.” Both of them are well-defined mathematical objects. (Have you seen the formal definition of a Turing machine as a 5-tuple? If so, you’d know that halting is definable purely as the Turing machine reaching a certain state, which in turn boils down to a function reaching a certain value after being iterated a finite number of times.)

Sure, you can build a physical artifact (out of transistors, Legos, whatever) that implements the logic of a Turing machine, but you can also build a physical artifact that implements the logic of ZF set theory!
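To underline how unphysical the definition is, here is a complete (if tiny) simulator of my own, with the machine given as a transition table and “halting” meaning nothing more than the iteration reaching the state `"halt"`:

```python
def run(delta, tape, steps=10_000):
    """Iterate a transition function delta: (state, symbol) -> (state, symbol, move).

    Halting is purely mathematical: the iteration reaches the state "halt".
    """
    state, head, cells = "start", 0, dict(enumerate(tape))
    for _ in range(steps):
        if state == "halt":
            return "".join(cells[i] for i in sorted(cells))
        state, cells[head], move = delta[(state, cells.get(head, "_"))]
        head += 1 if move == "R" else -1
    raise TimeoutError("no halt within step bound")

# Made-up example machine: flip each 0/1 until hitting a blank, then halt.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
```

Nothing here refers to tape material, electricity, or time; it is a function iterated on a finite description, exactly like derivations in a formal system.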

So the question just boils down to one of convenience—i.e., which of the two formalisms is better for explaining Gödel to CS majors? My contention is that the Turing machine formalism is better.

Comment #59 April 29th, 2012 at 4:05 pm

Clearly, there is a difference between a Turing machine that includes physical descriptions in its definition and a formal system that does not. I think what you mean is that you assert the thesis that asserts that there is a formal system which can be interpreted as corresponding precisely to a Turing machine in terms of inputs and outputs. That certainly makes discussion of statements regarding Turing machines much easier, since we can then ignore any physical descriptions, which tend to confuse the issues.

Now, given that there is a formal system which operates in the same way as a Turing machine, then such a formal system will consist of an alphabet of symbols, and a set of rules that determine which combinations of the symbols are valid combinations of the system, and which determine how one valid combination of symbols is generated from another valid combination.

But your proof says: Given as input a description of a Turing machine M …

But, in terms of the formal system, that is saying that you can have as an initial value for some function of this formal system, the entire definition of the formal system itself.

That can’t be possible, since it requires more symbols than are in the alphabet of the formal system to define the formal system. And that means that all your proof shows is that that isn’t possible. Your proof says nothing about another formal system that is a meta-language system to your Turing formal system, i.e., a formal system that can make statements about your Turing formal system – because your proof relies on the step that any such formal system of your proof is required to run on its own description, as in ‘Now run Q on its own description ⟨Q⟩.’

Which, as pointed out above, is not possible.

Comment #60 April 29th, 2012 at 7:53 pm

“That can’t be possible, since it requires more symbols than are in the alphabet of the formal system to define the formal system.”

That’s complete nonsense. I can define the C programming language using nothing but the ASCII characters on my keyboard—the very same ASCII characters that are used in C itself. And I can easily feed a C program its own code as input. And C is Turing-universal, so there’s no difference between it and Turing machines in this regard. All these things are not merely talked about, but done, in intro CS courses.

Now, it’s true that one can (at least formally) define a more powerful class of machines able to solve the halting problem for Turing machines, similar to how one can define a more powerful formal system ZF+Con(ZF) able to prove the consistency of ZF. In no way does that conflict with anything that I wrote—indeed, it’s part of what we learn from Gödel and Turing’s results, of which I was giving an exposition.

Anything further from you will go to the moderation queue, unless you demonstrate comprehension of the above.

Comment #61 May 3rd, 2012 at 7:03 am

You are of course correct regarding the alphabet. This is the problem when one tries to summarize briefly what might otherwise take several pages. My apologies.

I shall try to put my principal point in another way. Would you agree that any complete definition of a formal system can only be stated in a language that is a meta-language to the formal system? And in such a meta-language, the alphabet and all rules of the formal system are defined. So there can be a symbol string of such a meta-language that contains the complete definition of the formal system.

When you say that one can feed a C program its own code as input, isn’t that rather missing the point? – since the code of a program is not the complete definition of the programming language of the program – the complete definition requires, for example, the rules of the programming language. And while, of course, that entire symbol string can be a string value input, isn’t it the case that your proof assumes that the input can be analyzed by the formal system to retrieve all of the information in the symbol string that is given by that symbol string within the syntax of the meta-language? But if this were possible, wouldn’t that mean that a symbol string of the formal system, in the syntax of the formal system, would contain the complete definition of the formal system, including all of its own rules? In your proof, I don’t see any proof that that is possible.

Comment #62 May 3rd, 2012 at 7:42 am

Rudi Marron #61:

Would you agree that any complete definition of a formal system can only be stated in a language that is a meta-language to the formal system?

Absolutely not, and that’s a very profound misunderstanding.

It’s possible, for example, to give a complete definition of ZF set theory within ZF set theory (and indeed, Gödel does exactly that as one step in the proof of his theorems).

Of course, you can’t prove the consistency of ZF set theory within ZF set theory, but that’s a separate question, which has nothing to do with the theory’s definability within ZF.

I keep telling you this, and you keep not getting it. I suggest you read up on logic until you understand this point, and then we can talk more. 🙂

Comment #63 May 29th, 2012 at 10:49 am

[…] Comments by Tsuyoshi Ito and Sasho Nikolov have helped greatly in disambiguating the terms decidable versus independent versus neither provable nor refutable. Further references in this regard are Jeremy Avigad’s course notes “Incompleteness via the Halting Problem” and Scott Aaronson’s weblog essay “Rosser’s Theorem via Turing machines“. […]

Comment #64 May 14th, 2013 at 5:24 am

[…] “Incompleteness via the Halting Problem“, Scott Aaronson’s weblog essay “Rosser’s Theorem via Turing machines” and Luca Trevisan’s weblog post Two interesting […]

Comment #65 July 29th, 2013 at 9:24 am

i think you can use the CG problem to solve the Halting problem. i’ll show this by using a proof of Rice’s theorem that uses nontrivial decision procedures to solve the Halting problem:

Let T be a program we can call in JavaScript that, given a string p which codes for a program in JS as input, returns 1 if p(“0”) = 1 or returns 0 if p(“0”) = 0 or returns either 0 or 1 otherwise.

Then:

```javascript
function Halt(p, i) {
  if (T("function (j) { while(true); }"))
    return !T("function(j) {" + p + "(" + i + "); return 0;}");
  else
    return T("function(j) {" + p + "(" + i + "); return 1;}");
}
```

(Note: logical negation `!` is used here rather than bitwise `~`, which would return -1 or -2 rather than a boolean.)

this can be used to solve the halting problem for decision procedures.

some people above claim that CG can not be used to solve the halting problem. i can’t follow their reasoning.

Comment #66 August 10th, 2013 at 8:27 am

i tried to leave this comment weeks ago but somehow it hasn’t appeared.

i realised that if two TMs fail to terminate on an empty tape and are extensionally equivalent then they may still have different descriptions and a CG function isn’t required to map those to the same value. this shows that the reduction to Rice’s Theorem i tried above doesn’t work.

Comment #67 November 17th, 2013 at 11:36 am

[…] Gödel’s First Incompleteness Theorem can be proven as a corollary of the undecidability of the halting problem (e.g. Sipser Ch. 6; blog post by Scott Aaronson). […]

Comment #68 March 8th, 2016 at 10:00 am

Minor question: when you say, “sound and complete (that is, it can’t prove all and only the true arithmetical statements),” do you mean only and all true arithmetical statements?

Comment #69 March 8th, 2016 at 11:10 am

venky #68: What’s the difference? I mean the system proves “the truth, the whole truth, and nothing but the truth”—i.e., that it proves S iff S is a true statement of arithmetic.

Comment #70 March 8th, 2016 at 3:03 pm

I over-read the statement and equated sound with “all” and complete with “only” because of the order in which the words appeared. That threw me off a bit, because I am learning all these terms as I go along. Thanks.

Comment #71 March 16th, 2016 at 10:57 pm

[…] Background: I’m working on a talk that presents Godel’s first Incompleteness Theorem from a computability-theoretic perspective. The idea is to show that the first incompleteness theorem follows from the unsolvability of the halting problem. See Scott Aaronson’s post for a high-level description: http://www.scottaaronson.com/blog/?p=710 […]

Comment #72 May 22nd, 2016 at 11:43 pm

[…] Scott Aaronson gives a good definition for the first theorem, […]

Comment #73 May 24th, 2016 at 12:09 am

[…] Scott Aaronson gives a good definition for the first theorem, […]