Sidesplitting proofs

One thing I’ve always liked about theoretical computer science is the number of proofs that are patently ridiculous—whose concluding steps seem to call more for a gong or cymbal than a “QED” box.  This is a property that CS theory inherited from logic, and that it shares with several other mathematical fields (though by no means all of them).  The titans of the comedy-proof genre are of course Gödel and Turing’s undecidability results, the latter of which arguably found its best expression as a poem.  But there are other examples all over the place, and many aren’t as well known as they should be.

I was reminded of this side of theory when my student Andy Drucker and I came up with yet another absurd proof: basically, a theorem that’s true for one reason if a certain algebraic problem is hard, and true for a completely different reason if the problem is easy.  We’re still writing it up, so at Andy’s request I won’t spill the joke yet.  For now, please content yourself with the following tried-and-true komedy klassics.

Theorem 1 (folklore): E (that is, the class of problems solvable in 2^O(n) time) does not equal PSPACE, the class of problems solvable in polynomial space.  (Though we have no idea how to prove which one is bigger than the other one—or that they’re incomparable, as seems most likely.)

Proof: Suppose E=PSPACE.  Then E=EXP by padding, where EXP is the class of problems solvable in 2^poly(n) time.  But that would contradict the Time Hierarchy Theorem.

Theorem 2 (classic, attributed to Levin): One can give a fixed, explicit algorithm, which finds satisfying assignments to Boolean formulas in polynomial time whenever they exist, assuming P=NP.

Proof: let M1, M2, … be a list of Turing machines that take a SAT instance φ as input.  The algorithm is as follows: dovetail (that is, run a step of M1, then another step of M1 and a step of M2, then another step of M1 and M2 and a step of M3, etc.), halting when one of the machines has output a valid satisfying assignment for φ.  If P=NP, then there’s some Turing machine Mi in the list that solves SAT, and that causes the whole algorithm to work in polynomial time assuming φ was satisfiable.  (The fact that you’re also simulating quadrillions of other machines merely slows things down by a “polynomial factor,” independent of the input size n.)
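Levin’s trick can be sketched in a few lines of Python. This is strictly a toy: the generator functions below (`brute_forcer`, `spinner`) and the two-variable formula are my own stand-ins for the enumeration of all Turing machines, not part of the folklore proof.

```python
from itertools import count

def levin_search(phi_check, machines):
    """Dovetail the 'machines' (generator factories standing in for Turing
    machines), halting as soon as any of them emits an assignment that
    phi_check verifies -- so wrong machines can never fool us."""
    running = []
    source = iter(machines)
    exhausted = False
    for _ in count():
        if not exhausted:
            try:
                running.append(next(source)(phi_check))  # start one more machine
            except StopIteration:
                exhausted = True
        for gen in list(running):
            try:
                candidate = next(gen)                    # one step of each machine
            except StopIteration:
                running.remove(gen)
                continue
            if candidate is not None and phi_check(candidate):
                return candidate                         # verified winner
        if exhausted and not running:
            return None                                  # everyone halted, no winner

# Hypothetical stand-ins for the enumeration M1, M2, ...
def brute_forcer(check):
    for bits in range(4):                                # all assignments to 2 vars
        yield [(bits >> i) & 1 for i in range(2)]

def spinner(check):
    while True:                                          # a machine that never halts
        yield None

# phi = (x0 OR x1) AND (NOT x0 OR x1)
phi = lambda a: bool((a[0] or a[1]) and ((not a[0]) or a[1]))
print(levin_search(phi, [spinner, brute_forcer]))        # → [0, 1]
```

The verification step is the whole point: because a claimed assignment is checked before we halt, it doesn’t matter that quadrillions of bogus machines are running alongside the real SAT solver.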

Theorem 3 (Gutfreund, Shaltiel, Ta-Shma): Let A be an algorithm that’s supposed to solve SAT in polynomial time (that is, find a satisfying assignment whenever one exists), but that actually fails on some SAT instance of size n.  Then if someone gives you the source code of A, you can, in time polynomial in n, find a specific SAT instance that actually witnesses A’s failure.

Proof: By the Cook-Levin Theorem, you can create a SAT instance φ(x) which encodes the statement, “x is a SAT instance of size n on which A fails (that is, either there’s a satisfying assignment A fails to find, or A outputs an assignment for x that isn’t satisfying).”  Then feed φ as input to A.  There are two cases: on the one hand, if A succeeds, then it’s helpfully provided you with a SAT instance on which it itself fails.  On the other hand, if A fails on φ, then φ itself is the SAT instance you were looking for.

Theorem 4 (Barak et al.): There exist programs that can’t be obfuscated—that is, for which having the actual code of the program lets you do something that you couldn’t do if you could only run the program as a subroutine.

Proof: Let P be a program that takes a string x as input, and does the following.  First, if x=a, where a is some n-bit “secret string” hardwired into P’s source code, then P(a) outputs another n-bit secret string b.  Second, if x is the source code of a program Q such that Q(a) outputs b (after some fixed number of steps, say t=O(n)), then P outputs a third secret string c.  Third, if x satisfies neither constraint, then P(x) outputs “FAIL.”  Now, given the source code of P, it’s easy to find c: just run P with its own code as input.  On the other hand, if you can only run P as a subroutine, then (unless you get extremely lucky) it will take exponential time to find any x for which P(x) outputs anything besides “FAIL.”  Hence it’s infeasible to find c by running P, and yet there’s no way to obfuscate P’s source code so as to hide c.
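The skeleton of the construction fits in a few lines of Python. This is not the real Barak et al. proof: checking “x is a program Q with Q(a)=b” is replaced here by comparing x against a sentinel SOURCE string, which is the one input that matters for the punchline. All names are my own.

```python
import secrets

def make_unobfuscatable(n=16):
    """Toy skeleton of the unobfuscatable program P: a, b, c are the hidden
    secrets, and SOURCE stands in for P's own source code."""
    a = secrets.token_bytes(n)     # secret input point
    b = secrets.token_bytes(n)     # secret value P(a)
    c = secrets.token_bytes(n)     # the prize only the code-holder can reach
    SOURCE = b"<the source code of P itself>"

    def P(x):
        if x == a:
            return b
        if x == SOURCE:            # stand-in for "x computes a -> b in t steps"
            return c
        return b"FAIL"

    return P, SOURCE, c

P, SOURCE, c = make_unobfuscatable()
assert P(SOURCE) == c                            # code in hand: c is one call away
assert P(secrets.token_bytes(16)) == b"FAIL"     # a blind query almost surely sees nothing
```

Black-box access only ever returns “FAIL” unless you stumble on the n-bit secret a, which takes expected 2^(n-1) queries; with the source code, c falls out immediately.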

Theorem 5 (attributed by Razborov and Rudich to Wigderson): No natural proof can prove better than a half-exponential lower bound on the circuit complexity of the discrete logarithm problem.  (Here half-exponential refers to a function f—which exists, but can’t be described analytically—such that f(f(n)) grows exponentially with n.)

Proof Sketch: Suppose we can prove an f(n) lower bound on the circuit complexity of discrete log, using a natural proof.  Then by the definition of natural proof, there exists a 2^O(n)-time algorithm to distinguish a truly random function g:{0,1}^n→{0,1} from a function with circuit size f(n).  This means that any efficiently-computable pseudorandom function family (PRFF), with seed length m=f(n), can be distinguished from a random function in exp(f^-1(m)) time.  By standard equivalence theorems in cryptography, this means that any purported one-way function—so for example, the modular exponentiation function—can be inverted in exp(f^-1(n)) time.  In other words, to prove a natural f(n) lower bound for the discrete logarithm problem, you must also discover an exp(f^-1(n))-time algorithm for the discrete logarithm problem.  As you show the discrete log problem to be harder, you simultaneously show it to be easier!  And when f is greater than half-exponential, the lower bound becomes greater than the upper bound.
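A quick back-of-the-envelope check, in the proof’s own notation, of why half-exponential is exactly the crossover point (my sketch):

```latex
% Substituting n = f(m), the two bounds from the proof read:
\text{lower bound: } f(n) = f(f(m)), \qquad
\text{upper bound: } \exp\!\bigl(f^{-1}(n)\bigr) = \exp(m).
% These collide exactly when f(f(m)) = 2^{\Theta(m)}, i.e.\ when f is
% half-exponential; for any faster-growing f, the ``lower bound''
% would exceed the ``upper bound'' -- a contradiction.
```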

What is it that makes these theorems funny?  (Alright, maybe not ha-ha funny, but at least snort-snort funny?)  This is one case where a few readers might want me to break the rule about never explaining a joke.  Theorems 1 and 2 are sort of like “you’re lost,” as an answer to the balloonist’s plea of “where am I?”: they’re logically impeccable yet tell you nothing whatsoever about what you wanted to know.  Theorems 3 and 4 are like someone who’s so hungry he devours the menu at a restaurant, not even realizing that the menu itself was listed on the menu.  They seem to involve a category mistake: a reference gluttonously repurposed as a referent only to become a reference again.  (This, of course, is the joke behind Gödel’s theorem as well.)  Theorem 5 is like a literary critic proving there’s no ‘reality’ separate from race, class, and gender biases, using arguments that are so well-grounded, even a white male plutocrat would have to concede their truth.  The case is a self-immolating one: every argument that makes it stronger necessarily makes it weaker as well.

So what’s your favorite sidesplitting proof?

41 Responses to “Sidesplitting proofs”

  1. Roona Says:

    “whose concluding steps seem to call more for a gong or cymbal than a “QED” box.”

    Rimshot, perhaps?

    “And thus by reducing exact cover to the halting problem, we see that P = NP.”

    Da dum boom chish!

  2. Erik Says:

    Of type 2, there’s always the mother of all diagonalization arguments, Cantor’s proof that the real numbers have a strictly greater cardinality than the natural numbers.

    Of type 1, my favorite has always been the following proof that there exist irrational numbers x and y such that xy is rational:
    Proof: Consider √2^√2. This is either rational or irrational. If it is rational, then we are done. If it is irrational, then (√2^√2)^√2 = √2^(√2·√2) = (√2)^2 = 2, which is rational.

    I think that the key to humorous proofs of type 3 is that they aren’t your typical proofs by contradiction, where some effort goes into proving something that directly (or nearly so) contradicts the assumption. Whatever humor I may have gotten out of the first reductio ad absurdum proof I saw has long faded. What makes these special is that both of the contradictory facts are quite a bit removed from the assumption. The humor is proportional to the similarity between the proofs of each half of the contradiction. Of course, I can’t think of any good simple candidates off the top of my head.

  3. Carlos Says:

    I recently learned about the proof of the non-deterministic time hierarchy theorem (that’s in Arora & Barak, chapter 3) and it strikes me as a pretty ridiculous one. After chasing the exponentially-many implications that are set up for the lazy diagonalization, maybe the dashed arrow on Figure 3.1 should be labeled “The Aristocrats”.

  4. Scott Says:

    Erik: LOL, I was ready to ask whether your proof assumes √2 is irrational, and if not how you handle the first case, and if so why it’s not completely obvious that there exist irrational x,y such that xy is rational (x=y=√2). Then I realized that your “multiplication” must be exponentiation, WordPress having eaten the superscripts. I took the liberty of fixing it for you. :-)

  5. Jadagul Says:

    Almost anything relating to the Axiom of Choice can be pretty funny. Banach-Tarski is well known, but my favorite is the infinite hats problem. I’ve always been fond of both the proof that the rationals have the same cardinality as the natural numbers, and the proof that the reals don’t.

    My second favorite proof is an analytic number theory paper (unfortunately, I don’t know what the paper was). The proof went something like: “Suppose the Riemann hypothesis is true. Then we make a long, convoluted argument which proves X. Now suppose the Riemann hypothesis is false. We now make a completely different convoluted argument which proves X. Therefore X.”

    But my absolute favorite (again, I don’t know the actual paper, and if anyone knows these papers I’d love a link): “First, assume the continuum hypothesis. If the continuum hypothesis is true, we’ve proved Y. Now, we prove a theorem that says ‘if a proof exists that relies on the continuum hypothesis, some other proof which doesn’t rely on the continuum hypothesis also exists.’ Thus we’ve proven that there is a proof of Y. QED.”

  6. John Armstrong Says:

    whose concluding steps seem to call more for a gong or cymbal than a “QED” box

    Gene-Gene the Turing Machine?

  7. Jair Says:

    I’m a fan of the following proof that the cardinality of the set A of all functions from R to R is higher than R. Suppose there exists function f from R onto A. Then consider the function g from R to R defined as g(x) = f(x)(x) + 1. Then since f is onto, there exists a number a so that f(a) = g; that is, f(a)(x) = g(x) for all x. This includes a; thus f(a)(a) = g(a) = f(a)(a) + 1, a contradiction.

  8. Erik Says:

    It’s interesting. While I love both the proof that |N|≠|R| and the proof that |N|=|Q|, the first one strikes me as amusing (though not nearly as amusing as the first time I saw it, when I’d never even heard of a diagonalization argument), while the second one just strikes me as clever. There’s just something about diagonalization that hits the same emotional buttons that poetic justice, irony, self-fulfilling prophecies, and looping time-travel stories do.

    I think the feeling may be even stronger with arguments in complexity and computability. This is possibly because we conceptualize the constructions as actual programs, and as a result, we imagine them with a more physical existence, thus increasing our “surprise” when they suddenly display impossible features.

    I think there are results which are funny in themselves, even if their proofs don’t have that humor. Just about anything extremely nonconstructive that makes indispensable use of the axiom of choice or of classical logic does that. (There should be words to describe such results…”choosy” and “nonconstructable”?) There are also results that are funny primarily because of the way they’re presented, usually due to a silly, but usefully memorable, name, like the Ham Sandwich Theorem. And then there are those that are funny only because of the name, like the Hairy Ball Theorem, which is funny sounding, even if your mind is not in the gutter. (As it is, it’s enough to reduce a topology class full of grad students to giggling 7th graders.)

  9. Anonymous Says:

    I don’t understand the claim “Then E=EXP by padding”. Can someone elaborate on that?

  10. micah Says:

    (Assuming the axiom of choice) For any uncountable cardinals a and b, the Ramsey number R(a,b)>2^(aleph_0).

    Proof: It suffices to produce a 2-edge-coloring of a complete graph whose vertex set has size 2^(aleph_0), such that any monochromatic subgraph is at most countable. Let the vertex set of our graph consist of the real numbers. Now, take a well-ordering of the reals. For any edge {x,y}, color it red if the well-ordering agrees with the standard ordering; color it blue if it disagrees. Then any monochromatic subset of R is either well-ordered or anti-well-ordered under the standard ordering. Either way, it’s at most countable.

  11. Scott Says:

    Anonymous: Here’s a more self-contained argument.

    Suppose PSPACE=DTIME(2^O(n)). Then for some polynomial p, there exists a p(n)-space algorithm to simulate arbitrary 2^n-time Turing machines. Now, if we feed a 2^n-time Turing machine M an input x∈{0,1}^n followed by n^2−n trailing zeroes, then the input has n^2 bits—so M is now allowed to run for 2^(n^2) steps. (This is what “padding” means.) On the other hand, the amount of space needed to simulate M only increases to p(n^2), which is still a polynomial in n. So DTIME(2^O(n^2)) is still contained in PSPACE, and hence is equal to PSPACE (since even DTIME(2^O(n)) contained PSPACE).

    So by transitivity, DTIME(2^O(n))=DTIME(2^O(n^2)). We now have our violation of the Time Hierarchy Theorem.
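Scott’s padding argument compresses to a single chain of containments (my rendering):

```latex
% Assume E = PSPACE, i.e. PSPACE = DTIME(2^{O(n)}).  Then:
\mathrm{DTIME}\bigl(2^{O(n^2)}\bigr) \;\subseteq\; \mathrm{PSPACE}
  \;=\; \mathrm{DTIME}\bigl(2^{O(n)}\bigr)
  \;\subsetneq\; \mathrm{DTIME}\bigl(2^{O(n^2)}\bigr),
% where the first step is padding, the equality is the assumption, and
% the strict containment is the Time Hierarchy Theorem -- contradiction.
```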

  12. g Says:

    Jadagul@5: I think the RH-or-not-RH proof you’re thinking of is Littlewood’s proof that pi(x)-li(x) changes sign infinitely often.

  13. Rahul Says:

    Kleene’s recursion theorem is comedy in its purest form.

    In complexity theory, the Impagliazzo-Kabanets-Wigderson proof that NEXP in SIZE(poly) implies NEXP=EXP is to me the most perversely delightful. The statement of the theorem has nothing to do with randomness, yet randomness and derandomization are essential to the proof. The idea of randomness appears to act as some sort of catalyst here…

  14. Thomas Says:

    Here’s one from fixed parameter complexity theory I think is quite funny. Two relevant definitions:
    * A parameterised problem is called Fixed Parameter Tractable (FPT) if it admits an algorithm with runtime O( p(n) * f(k) ) for some polynomial p and arbitrary f.
    * A kernelisation is a “preprocessing” algorithm that takes time p(n) and reduces the instance to size f(k), again for polynomial p and arbitrary f.

    Theorem. A (decidable) parameterized problem has a kernelisation iff it is Fixed Parameter Tractable.

    Proof. One way is simple, the other seems like cheating.

    (Kernel => FPT, simple) Run the kernelisation. This takes time p(n) and gives an instance of size f(k). Solve the kernel by any algorithm, say with running time f_2. Total running time is O( p(n) + f_2(f(k)) ). This is fixed parameter tractable.

    (FPT => Kernel, the fun part) Since the problem is FPT, there exists an algorithm for it with runtime O( p(n)f(k) ).
    If n

  15. Thomas Says:

    Hm, less-than signs ate the fun part of my previous post. :( Apologies. Retrying without them…

    (FPT => Kernel, the fun part) Since the problem is FPT, there exists an algorithm for it with runtime O( p(n)f(k) ).
    If n less than f(k), then the instance already fulfills the conditions of being a kernel; do nothing.
    Otherwise f(k) less than n. Then O( p(n)f(k) ) is polynomial in n. Use the FPT algorithm to solve the problem (in time polynomial in n) and output some YES or NO instance (of constant size) accordingly.
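Thomas’s fun direction can be written out schematically. The case split below is his; the helper names and the toy parameterized problem (“does the string contain at least k a’s?”, with an arbitrarily chosen bound f(k)=2^k) are my own illustrations.

```python
def kernelize(instance, k, f, fpt_solve, make_yes, make_no):
    """FPT => kernel: either the instance is already small, or it's so big
    that the FPT algorithm runs in polynomial time and we can just solve it."""
    n = len(instance)
    if n < f(k):
        return instance                  # already of size < f(k): its own kernel
    # Otherwise f(k) <= n, so time p(n)*f(k) <= p(n)*n is polynomial in n:
    answer = fpt_solve(instance, k)      # cheat: solve the problem outright...
    return make_yes(k) if answer else make_no(k)   # ...and emit a stub kernel

# Toy parameterized problem: "does the string contain at least k a's?"
f = lambda k: 2 ** k                     # assumed FPT bound (hypothetical)
fpt_solve = lambda s, k: s.count("a") >= k
make_yes = lambda k: "a" * k             # tiny YES instance of size k <= f(k)
make_no = lambda k: ""                   # tiny NO instance

print(kernelize("aaab", 2, f, fpt_solve, make_yes, make_no))  # → aa
print(kernelize("ab", 3, f, fpt_solve, make_yes, make_no))    # → ab
```

The “cheating” is visible in the second branch: the kernel carries no structure at all, just the precomputed answer.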

  16. MattF Says:

    Physicist’s proof of the Pythagorean theorem:

    The figure is a right triangle with acute angle t and with sides a, b, c, where c is the hypotenuse.
    A line segment from the right angle meets the hypotenuse c at a right angle and divides the big right triangle into two smaller similar right triangles.

    A scaling argument shows that the area of the big right triangle, Area(c,t), must be of the form c^2 f(t):

    Area(Lc,t) = L^2 Area(c,t);


    Area(c,t) = c^2 Area(1,t)
    = c^2 f(t)

    But the dissection of the big right triangle into two smaller right triangles shows that

    Area(c,t) = Area(a,t) + Area(b,t)

    so that

    c^2 f(t) = a^2 f(t) + b^2 f(t).

    and therefore

    c^2 = a^2 + b^2

    QED (Rimshot)

  17. David T Says:

    you guys took my favorites but here are two other nice ones.
    The first is Euclid’s Proof of the Infinitude of Primes (c. 300 BC):
    Nice because its simple and is one of the first ones ever.

    The second is Proof of the Euler product formula for the Riemann zeta function:
    When i first looked at the two sides i said to myself however do you reach from one side to the other…

  18. Paul Goldberg Says:

    This is an interesting topic; the idea of humorous proofs could even maybe play a part in our sporadic attempts to give mathematics more popular appeal in the wider world…

    I am wondering whether proofs associated with mathematical puzzles are considered to be of this kind. e.g. the dominoes-on-chessboard seems like an obvious example.

  19. Qiaochu Yuan Says:

    David, I’m not sure where the Euler product formula fits in. Once you see the connection to prime factorization, I think it falls under clever rather than amusing. (Perhaps the corollary that there are infinitely many primes because the harmonic series diverges might be considered amusing.)

    Anyway, along those lines, I think Furstenberg’s proof of the infinitude of the primes is amusing because it’s not clear (at least to me!) which step of the proof is doing the work; I guess you could call it mathematical sleight of hand. Tim Gowers has a nice unraveling of this proof, but I can’t seem to find it.

  20. Narad Rampersad Says:

    My favourite is Alon and West’s solution to the Necklace splitting problem. It seems utterly bizarre to me that the only proofs of this fundamentally combinatorial result require techniques from topology or measure theory!

    What about all the various famous results obtained by the probabilistic method? For example, Erdos’ proof that there exist graphs with arbitrarily large girth and chromatic number, etc.

  21. gowers Says:

    A number of years ago, Thomas Schlumprecht discovered a Banach space X with the property that every complemented subspace Y (that is, subspace on to which you can project continuously) had a further complemented subspace Z that was isomorphic to the entire space. At the time the following two problems were open. 1. Find two non-isomorphic spaces Y and Z such that Y is isomorphic to a complemented subspace of Z and Z is isomorphic to a complemented subspace of Y. 2. Find a new space (a few very classical examples, such as Hilbert spaces, were known) that is isomorphic to all its complemented subspaces.

    It is easy to see that if Schlumprecht’s space does not solve problem 2 then it solves problem 1. I’m not sure whether even now anyone has managed to decide which. This I’ve always found funny in your sense, even if it’s in a slightly different category from your two examples. It raises the question: what is the appropriate credit for a proof that for two famous problems one of the four possible answer pairs yes-yes, yes-no, no-yes, no-no is ruled out? (E.g. what would the Clay Foundation say if you could prove that P doesn’t equal NP but needed to assume RH?)

  22. anonymous Says:

    “what would the Clay Foundation say if you could prove that P doesn’t equal NP but needed to assume RH?”

    They would start an advertising campaign saying that the prize money for the *positive* resolution of RH is temporarily raised to $2M USD, for as long as P vs. NP stays unresolved. :)

  23. Scott Says:

    Tim and anonymous: Or maybe the prover of the original result should get $1M, held in escrow until RH is proved? (In other words, $1M conditional on RH, just like the result itself?) Other people who were extremely confident of RH’s provability could then choose to loan that person money, fully expecting to get it back as soon as RH was proved.

    A possible problem I foresee: what if, in the meantime, someone else proved P≠NP unconditionally?

  24. Raoul Ohio Says:

    Reminds me of the WV $M Sweepstakes; $1/Yr for a MYr.

  25. asdf Says:

    First seen on

    For any mathematical proposition P, it is easy to build a physical machine M(P) with two lights labelled “yes” and “no”. The “yes” light will be lit if P is true, and the “no” light will be lit if P is false.

    Proof: build two machines, one lighting the “yes” light, the other lighting the “no” light. For any P, one of these two machines is M(P). You just don’t necessarily know which one.

  26. odd thought Says:

    What if you prove that the number of Clay Millenium Prize conjectures that are true is even?

  27. ki Says:

    An oldie but a goody: often, the shortest path to a truth about real numbers is through the complex plane.

  28. Jonathan Says:

    Does Theorem 2 really incur only a constant factor slowdown? It seems to me that it roughly squares the running time of Turing machine Mi.

  29. GMHurley Says:

    To Jadagul and g:
    The proof you mention is described in Littlewood’s Miscellany (pp110-12, 1986 edition): it’s not Littlewood’s proof that pi(x)-li(x) changes sign infinitely often, but Skewes’ proof of an upper bound for the first sign change.

    To David T:
    Also in the Miscellany, Littlewood comments on the proof of the Euler product formula for the Riemann zeta function: “it was introduced to us at school, as a joke (rightly enough, and in excellent taste).” (p89)

  30. Joe Shipman Says:

    The theorem that was proved by assuming the Continuum Hypothesis and then showing that the Continuum Hypothesis can be eliminated from any proof is the Ax-Kochen theorem on the decidability of the first-order theory of the p-adics.

    Littlewood has a bunch of proofs in his Miscellany that qualify as jokes, in addition to his reference to the Euler product.

    A list of funny RESULTS (not proofs) must include the Banach-Tarski paradox in first place.

    The proof of the Hales-Jewett theorem (which is much stronger than and easily implies Ramsey’s and Van Der Waerden’s theorems without the proof being longer) is side-splittingly funny because the induction is set up in such a ridiculously inefficient way — the main obstacle to finding the proof is simply an inability to think big enough.

    Conway’s algorithm for infallibly finding your way out of any maze (defined as a finite undirected connected graph with a distinguished node, where the edges from each node are labeled with distinct integers, and your only options are to pick an integer and if such a label exists on one of the edges from your vertex you will be moved along it to the other end, without receiving any information on whether you have actually moved until you happen upon the exit, and you initially know nothing about which graph you are in or where you are in it) is funny for a similar reason (those who have read this far should be able to figure it out).

    The double negation interpretation embedding classical logic into intuitionistic logic, substantially deflating a major foundational controversy, is also pretty funny (is it also due to Godel?).

    There is a very funny (and perfectly rigorous with a little care) visual proof of the non-cancellation theorem for knots, which turns a finitely presented polygonal knot into an infinite “wild” knot in order to perform a dazzling sleight-of-hand.

    Mascheroni’s theorem that you don’t need a straightedge to do straightedge and compass constructions is funny. That reminds me of another Godel result that is funny in the same way — you don’t need exponentiation because it can be defined in terms of addition and multiplication.

    I’ve got lots more but let’s see if anyone responds to what I’ve already said….

  31. Arne Peeters Says:

    I think Cantor’s diagonalization is funnier if you “sharpen” it to |T| > |A| (with T the transcendentals and A the algebraic numbers), since now the seemingly smaller set turns out to be not merely equal to, but strictly greater than, the seemingly bigger one.

  32. Scott Says:

    Jonathan: Yeah, you’re right, if you want a constant-factor slowdown you need to do the dovetailing in a different way (e.g., spend half your time on M1, a fourth on M2, an eighth on M3, etc.)
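Scott’s fix, the geometric time-sharing schedule, has a neat concrete form: at global step t, advance the machine indexed by the number of trailing zeros in t (the “ruler sequence”). The function name and the step count below are my own illustration.

```python
from collections import Counter

def machine_for_step(t):
    """Geometric dovetailing: at global step t >= 1, advance machine i where
    2^i is the largest power of 2 dividing t.  Machine i then gets roughly a
    2^-(i+1) share of all steps, so simulating it alone costs only a factor
    ~2^(i+1) -- a constant depending on i but not on the input size n."""
    i = 0
    while t % 2 == 0:
        t //= 2
        i += 1
    return i

shares = Counter(machine_for_step(t) for t in range(1, 1025))
print(shares[0], shares[1], shares[2])  # → 512 256 128
```

Compare with naive round-robin, where by the time machine i has executed T of its own steps you’ve done ~(i+T)^2 total work, i.e. a polynomial rather than constant-factor slowdown.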

  33. David T Says:

    How about this?
    A proof that there is at least one thing we will both disagree upon.

    Claim: There is at least one thing we will both disagree upon.
    Proof: You can either agree or disagree with the claim.
    If you agree then QED.
    If you disagree then QED.

  34. Anonymous Says:

    If you agree then QED.

    No, it means I agree with you that there is something we disagree upon, but we might both be wrong, in which case we would think we disagreed about something even though in fact we always agreed.

    This could actually occur in practice:

    You: “Hey, didn’t we disagree about something years ago?”

    Me: “Yeah, that sounds familiar, although I can’t remember what it was. I guess this means there is indeed at least one thing we disagree about.”

    Even assuming our opinions haven’t changed (let’s assume we’re both terribly set in our ways), we could be wrong. For example, we could be hazily remembering a time when we disagreed vehemently with some third party.

    The upshot is that agreeing on something doesn’t mean it must be true. :-)

  35. Nathaniel Johnston Says:

    @Anonymous – By disagreeing with his proof, I think you just proved David T right ;)

  36. komponisto Says:

    Surely the proof of Russell’s paradox (viewed as the assertion that there exist classes which are not sets) has to qualify.

    Mathematical neophytes might also be amused by “joke theorems” like this (and variations):

    Theorem: Every natural number is interesting.
    Proof: If not, then by well-ordering, the set of uninteresting natural numbers has a least element n. The number n, being the smallest uninteresting natural number, is therefore pretty interesting. Hence the theorem follows by contradiction.

  37. Jeremy H Says:

    I’m actually a bit surprised that no one has mentioned random oracles yet. These definitely fall under the “You’re lost” category.

    In particular, P^A ≠ NP^A with probability 1.

  38. Jonathan Vos Post Says:

    David T’s Comment #33 is related to the first known lawsuit between a law teacher and a law student. The story is said to have happened in ancient Greece.

    The Law Teacher advertised that he was so good, that his students were all guaranteed to win their first lawsuit, or their tuition money would be returned.

    His first student sued the teacher immediately after graduating.

    Student: “This is my first case. If I win, then I win, so I get my money back. If, on the other hand, I lose, then I will have lost my first case, so then I win, because according to the contract, I get my money back. Hence, whether I win or lose, I get my money back. QED.”

    In front of the judge, the teacher waits his turn, and then speaks in his defense.

    Teacher: “My student has learned a little, but not enough. It is not germane whether that is my fault or his. Be that as it may, if I win the case, then I win, he loses, and he collects nothing besides the experience. Not a drachma of mine need be forked over. If, to the contrary, I lose the case, then, per the contract, the student did NOT lose the first case, and thus is not entitled to the money back. Hence, if I win, he loses, and if I lose, he loses. Either way, I need pay nothing. QED.”

    So, what does the judge do?

    [Comment: the Law is not axiom-based, but precedent-based]

  39. ambrosiac Says:

    How about “Any language L, not in P, can be obtained by diagonalization” (meaning: by presenting strings appropriately to a fixed standard enumeration of polynomial-time Turing machines M_0, M_1, …)

    Indeed, for each M_i there are infinitely many strings s on which M_i gives the “wrong” answer: s in L but not in L(M_i), or s not in L but in L(M_i). Consider M_0 and feed it one after the other strings from the standard string enumeration until you find an s as above; call it s_0. Consider M_1 and do the same until you find an s as above and different from s_0; call it s_1. And so on. Every string eventually appears in the sequence s_0, s_1, … (since any single string is accepted by infinitely many M_i and rejected by infinitely many M_i). Thus s_0, s_1,… is a permutation of the original enumeration of strings and when fed into M_0, M_1, … the diagonalization criterion produces L.

    It is safe to say that the above joke teaches us nothing whatsoever about P, NP, PSPACE… or for that matter anything else. :)
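The per-machine step of ambrosiac’s construction can be sketched over a finite toy universe (infinite enumeration, and the bookkeeping that guarantees every string eventually appears, omitted; the “language” and “machines” below are my own stand-ins):

```python
def diagonal_permutation(L, machines, universe):
    """For each machine M_i, scan the fixed string enumeration for the first
    unused string on which M_i disagrees with the target language L, so that
    the diagonalization criterion against this reordering produces L."""
    used, order = set(), []
    for M in machines:
        for s in universe:
            if s not in used and M(s) != L(s):
                used.add(s)
                order.append(s)
                break
    return order

# Toy "language" and "machines" over the strings 0..9:
L = lambda s: s % 3 == 0
machines = [lambda s: False, lambda s: True, lambda s: s % 2 == 0]
universe = range(10)
print(diagonal_permutation(L, machines, universe))  # → [0, 1, 2]
```

As the comment says, such a string always exists in the real setting because every M_i is wrong about L infinitely often; the construction does all its “work” in that observation.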

  40. Ted Diesel Says:

    I took a few theoretical computer science classes years ago. I seem to recall an argument in which one already had two algorithms that were individually inadequate but with complementary characteristics. One then constructed a new (randomized) algorithm with favorable properties by simply flipping a coin to determine which of the two pre-existing algorithms to run.

    For all I know, this sort of thing could be a common technique (perhaps on the order of “Why did the chicken cross the road?” for this crowd), but I found it amusing. (After a little searching, I think my class experiences were likely related to MAX-SAT.)

  41. asdf Says:

    Nobody has mentioned the proof of Goodstein’s theorem, using transfinite ordinals. The much later Paris-Harrington theorem showed that Goodstein’s theorem (a fairly straightforward arithmetic statement) can’t be proved with Peano arithmetic alone. I’ve wondered for a long time whether Goodstein’s theorem can be proved with classical analytic number theory (e.g. using contour integrals) in a well-motivated and traditional way (i.e. not with reverse-mathematics games like encoding transfinite induction into sets of reals or anything like that).