Sean Carroll, who many of you know from Cosmic Variance, asked the following question in response to my last entry:

I’m happy to admit that I don’t know anything about “one-way functions and interactive proofs.” So, in what sense has theoretical computer science contributed more in the last 30 years to our basic understanding of the universe than particle physics or cosmology? (Despite the fact that I’m a cosmologist, I don’t doubt your statement — I’d just like to be able to explain it in public.)

I posted my response as a comment, but it’s probably better to make it an entry of its own. So:

Hi Sean,

Thanks for your question!

Of course I was joking when I mentioned “objective standards” for ranking scientific fields. Depending on which questions keep you up at night, different parts of “humankind’s basic picture of the universe” will seem larger or smaller. (To say that, of course, is not to suggest any relativism about the picture itself.)

What I can do, though, is to tell you why — by my own subjective standards — the contributions of theoretical computer science over the last 30 years rival those of theoretical physics or any other field I know about. Of course, people will say I only think that because I’m a theoretical computer scientist, but that gets the causal arrow wrong: I became a theoretical computer scientist because, as a teenager, I thought it!

It’s probably best to start with some examples.

- We now know that, if an alien with enormous computational powers came to Earth, it could prove to us whether White or Black has the winning strategy in chess. To be convinced of the proof, we would *not* have to trust the alien or its exotic technology, and we would *not* have to spend billions of years analyzing one move sequence after another. We’d simply have to engage in a short conversation with the alien about the sums of certain polynomials over finite fields.
- There’s a finite (and not unimaginably-large) set of boxes, such that if we knew how to pack those boxes into the trunk of your car, then we’d also know a proof of the Riemann Hypothesis. Indeed, *every* formal proof of the Riemann Hypothesis with at most (say) a million symbols corresponds to some way of packing the boxes into your trunk, and vice versa. Furthermore, a list of the boxes and their dimensions can be feasibly written down.
- Supposing you do prove the Riemann Hypothesis, it’s possible to convince someone of that fact, *without revealing anything other than the fact that you proved it*. It’s also possible to write the proof down in such a way that someone else could verify it, with very high confidence, having only seen 10 or 20 bits of the proof.
- If every second or so your computer’s memory were wiped completely clean, except for the input data; the clock; a static, unchanging program; and a counter that could only be set to 1, 2, 3, 4, or 5, it would still be possible (given enough time) to carry out an arbitrarily long computation — just as if the memory weren’t being wiped clean each second. This is almost certainly *not* true if the counter could only be set to 1, 2, 3, or 4. The reason 5 is special here is pretty much the same reason it’s special in Galois’ proof of the unsolvability of the quintic equation.
- It would be great to prove that RSA is unbreakable by classical computers. But every known technique for proving that would, if it worked, simultaneously give an algorithm for *breaking* RSA! For example, if you proved that RSA with an n-bit key took n^{5} steps to break, you would’ve discovered an algorithm for breaking it in 2^{n^{1/5}} steps. If you proved that RSA took 2^{n^{1/3}} steps to break, you would’ve discovered an algorithm for breaking it in n^{(log n)^2} steps. As you show the problem to be harder, you simultaneously show it to be easier.
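The first example above rests on interactive proofs, whose engine is the sum-check protocol — exactly the “sums of certain polynomials over finite fields” the alien would chat about. Here is a minimal sketch in Python. The particular polynomial `g`, the modulus `P`, and the variable count are illustrative assumptions, not the actual chess protocol; the point is only that the verifier does a handful of cheap checks at random points instead of summing over all 2^k assignments itself.

```python
import random

P = 2**31 - 1  # prime modulus for our toy finite field (an assumption)
K = 3          # number of variables
DEG = 2        # max degree of g in any single variable

def g(x1, x2, x3):
    # example 3-variate polynomial the prover claims to have summed
    return (2*x1*x1*x2 + x2*x3 + x1 + 5) % P

def brute_sum():
    # the claimed value: sum of g over all boolean inputs
    return sum(g(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)) % P

def lagrange_eval(points, r):
    # evaluate the unique degree-<=DEG polynomial through `points` at r, mod P
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (r - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def prover_round(fixed, var_index):
    # honest prover: the univariate polynomial in variable var_index,
    # with earlier variables fixed and later variables summed over {0,1},
    # sent as its evaluations at 0..DEG
    pts = []
    free = K - var_index - 1
    for t in range(DEG + 1):
        s = 0
        for mask in range(2**free):
            args = fixed + [t] + [(mask >> b) & 1 for b in range(free)]
            s = (s + g(*args)) % P
        pts.append((t, s))
    return pts

def verify():
    claim = brute_sum()  # value the prover claims
    fixed = []
    for i in range(K):
        pts = prover_round(fixed, i)
        # consistency check: g_i(0) + g_i(1) must equal the running claim
        assert (pts[0][1] + pts[1][1]) % P == claim
        r = random.randrange(P)          # verifier's random challenge
        claim = lagrange_eval(pts, r)    # new claim at the random point
        fixed.append(r)
    # final spot-check: one evaluation of g at the random point
    assert g(*fixed) % P == claim
    return True
```

A cheating prover would have to guess the verifier’s random field elements in advance, which it can do with only negligible probability — that is the whole trick behind the alien’s chess proof.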

Alright, let me stop before I get carried away. The examples I’ve listed (and hundreds more like them) are not exactly discoveries about *physics*, but they don’t have the flavor of pure math either. And even if they have some practical implications for computing (which they do), they certainly don’t have the flavor of nitty-gritty software engineering.

So what are they then? Maybe it’s helpful to think of them as “quantitative epistemology”: discoveries about the capacities of finite beings like ourselves to learn mathematical truths. On this view, the theoretical computer scientist is basically a mathematical logician on a safari to the physical world: someone who tries to understand the universe by asking what sorts of mathematical questions can and can’t be answered within it. Not *whether* the universe is a computer, but what *kind* of computer it is! Naturally, this approach to understanding the world tends to appeal most to people for whom math (and especially discrete math) is reasonably clear, whereas physics is extremely mysterious.

In my opinion, one of the biggest challenges for our time is to integrate the enormous body of knowledge in theoretical computer science (or quantitative epistemology, or whatever you want to call it) with the rest of what we know about the universe. In the past, the logical safari mostly stayed comfortably within 19^{th}-century physics; now it’s time to venture out into the early 20^{th} century. Indeed, that’s exactly why I chose to work on quantum computing: not because I want to build quantum computers (though I wouldn’t mind that), but because I want to know what a universe that allows quantum computers is *like*.

Incidentally, it’s also why I try hard to keep up with your field. If I’m not mistaken, less than a decade ago cosmologists made an *enormous* discovery about the capacity of finite beings to learn mathematical truths: namely, that no computation carried out in the physical world can ever involve more than 1/Λ ~ 10^{122} bits.

Best,

Scott

Did anyone actually read Dr. Karp’s NYT essay?

“Algorithms are good at describing dynamic processes, while scientific formulas or equations are more suited to static phenomena. Increasingly, scientific research seeks to understand dynamic processes, and computer science, he said, is the systematic study of algorithms. Biology, Dr. Karp said, is now understood as an information science. And scientists seek to describe biological processes, like protein production, as algorithms. ‘In other words, nature is computing,’ he said.”

Seems to me that Dr. Karp was pointing out an unarguable truth: a world of exponentially complex evolved systems (like galaxies, genomes, and brains), as described by exponentially large datasets (like the HGP, the PDB, and the SDSS), is a wonderfully good match to a world of exponentially complex computing machines (like Intel’s multicore processors), and exponentially many complexity classes and algorithms (the informatic theory that Scott and his computer science colleagues are charting).

Just to be provocative, is it really credible anymore that all this complexity is merely masking an underlying simplicity?

Perhaps the idea that the world is algorithmically compressible should be regarded as merely a quaint assumption of previous centuries, since disproved by theory, experiment, and observation.

The great challenge being that even if the previous paragraphs are true, they are poorly matched to human cognitive preferences.

After all, who wants to spend up to a decade in graduate school, just to master an infinitesimally small fraction of an irreducibly complex body of knowledge?

Anyone who has a good answer to this question, has a key that will unlock the world of 21st Century science and technology.

“If you proved that RSA took 2^{n^1/3} steps to break, you would’ve discovered an algorithm for breaking it in n^{(log n)^2} steps.”

Why does this happen (could you please link a paper?). Also, wouldn’t this mean that you just couldn’t show a lower bound of 2^{n^{1/3}} since the outcome wouldn’t make sense?

Despite my computer science degree, I haven’t seen a lot of these results before — or at least, I don’t recognize them the way they’re formulated. Do you think you could reference the main theorem or paper behind each of your five points?

I’d love to see a longer list of examples of amazing discoveries in TCS (or “QE”, as you suggest). The first 5 were so fascinating!

So first I’ll pimp this arXiv article about some interesting intersections between physics and computer science:

NP-Complete Problems and Physical Reality [PDF]. From the comments here, it looks like people are very surprised and keenly interested to hear about these concepts in computer science.

You’re a pretty good writer — have you thought about writing a pop science book? There are zillions of pop science books about physics, math, biology… but honestly, I can’t think of any notable popular theoretical computer science books. There are *lots* of books about how computers are affecting (or will be affecting) human society, but those are about technology more than anything else.

I think it would be really cool to read about “Big Ideas” in theoretical computer science. Most people in the IT industry (myself included) have no idea what Ph.D computer scientists do all day.

I thought your essay describing TCS as mathematical theology did a better job of accomplishing the goal of this essay. Maybe that’s because it gave more detail about one example. Or maybe just because I saw it first.

Thanks! I should try to come up with a corresponding list for particle theory and cosmology.

The business about 10^122 bits is extremely hypothetical, but possible. It depends both on the dark energy being a cosmological constant that will never decay, and on the accessible bits being described by the holographic principle. Both quite plausible, but far from established.

Sean

Oh, alright:

(1) IP=PSPACE (Shamir 1990).

(2) 3-dimensional bin-packing is NP-complete.

(3) NP has zero-knowledge proofs (Goldreich-Micali-Wigderson 1986); the PCP Theorem.

(4) Width-5 branching programs can compute NC1 (Barrington 1986); corollary pointed out by Ogihara 1994 that width-5 bottleneck Turing machines can compute PSPACE.

(5) Natural proofs (Razborov-Rudich 1993); in particular Wigderson’s observation about natural proofs for discrete log.

Scott, if you keep writing and posting things like this, then with probability > 2/3, I’m going to get sucked out of physics and back into CS — which is where I thought I was heading in 1994, until I showed up at Kenyon College and found out it didn’t *have* a CS department. Oops. Shoulda read the catalog. It had Ben Schumacher, though.

Anyway, this comment is nothing more than a compliment and an exhortation to (a) keep doing it, (b) write the damn book, and (c) work in a reference to the Dept. of Computer Science & Applied Theology, which is the phrase that singlehandedly pushes *A Fire Upon the Deep* onto my top 10 list of science fiction.

“The business about 10^122 bits is extremely hypothetical, but possible. It depends both on the dark energy being a cosmological constant that will never decay, and on the accessible bits being described by the holographic principle. Both quite plausible, but far from established.”

Yeah, yeah, alright — I thought of these caveats, but they interfered with the rhetorical point I was building up to, so I decided to absorb them into “If I’m not mistaken”

“they don’t have the flavor of pure math either”

Why not? Pure math comes in many flavors, and I don’t see why this can’t be one of them.

Maybe the point is that these theorems — all of your examples are rigorous theorems — are associated with practical algorithms. So what? A lot of mathematics is associated with other areas. Even if it is completely satisfactory as pure mathematics, it may still have applications.

I think that part of the reason that computer scientists “get no respect” is that they are kind of elitist. What percent of CS papers are available on the web for free (not at ludicrous prices like 30 bucks each)? For example, what percent of the papers you mentioned for your 5 examples are available on the web for free? Why is it that it took physicists instead of so-called “computer” scientists to invent arXiv? It seems that theoretical computer scientists look down their noses at computers and computer programmers.

I doubt that arXiv has anything to do with this. The physicist community happens to be very good at explaining all the cool and mind-blowing concepts in their discipline, and they’ve been doing that consistently for decades — much longer than arXiv has been in existence.

“There are zillions of pop science books about physics, math, biology… but honestly, I can’t think of any notable popular theoretical computer science books.”

I respect popular science writing as much as anyone, but it is easy to spend too much time kicking yourself if your area doesn’t benefit from it. At the end of the day, popular science is written to entertain people, and hopefully to attract interest, but not to teach them. It isn’t another form of school. Popular science will inevitably overreward some areas and underreward others.

In particular, popular science has a “glass ceiling” that will prevent you from communicating many great ideas. I had an epiphany about this when I was at the house of a relative many years ago. He was a smart guy with a far-above-average interest in popular science, but he did not have advanced training in mathematics. I think that he had more mathematics books than I do; certainly he had more math biographies.

At one point he was interested in the Rubik’s cube. The Rubik’s cube is different from other popular mathematics, because it goads you to actually solve the damn thing instead of just reading about mathematicians at work. So he bought four books on how to solve the Rubik’s cube. But with a good understanding of undergraduate group theory, one would have been more than enough! Without some solid mathematical intuition of some kind (not necessarily a group theory course), books on the Rubik’s cube are downright painful. That really revealed the glass ceiling.
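The group theory that makes one book enough boils down to a simple fact: every move sequence on the cube is a permutation, and repeating it brings you back to the start after its order, the lcm of its cycle lengths. A small sketch in Python with a toy permutation (an illustration of the general fact, not an actual cube encoding):

```python
from math import gcd
from functools import reduce

def cycle_lengths(perm):
    # perm is a list mapping position i to perm[i]; return its cycle lengths
    seen, lens = set(), []
    for start in range(len(perm)):
        if start not in seen:
            n, cur = 0, start
            while cur not in seen:
                seen.add(cur)
                cur = perm[cur]
                n += 1
            lens.append(n)
    return lens

def order(perm):
    # order of the permutation = lcm of its cycle lengths
    return reduce(lambda a, b: a * b // gcd(a, b), cycle_lengths(perm), 1)

# a toy permutation on 6 points: one 3-cycle (0 1 2) and one 2-cycle (3 4)
p = [1, 2, 0, 4, 3, 5]
assert order(p) == 6
```

Encode each face turn of the cube as a permutation of sticker positions and the same two functions tell you how many repetitions of any move sequence return the cube to its starting state.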

It seems that TCS is generally beyond the glass ceiling of popular science. I think that it’s because TCS is made up of theorems. It’s not so hard to explain what a theorem is. But it is hard to explain, except to those who have joined the club, how much it matters to prove theorems. In fact some science journalists have discussed their impression that theorems are just a chore that can be skipped, postponed, or hopefully soon relegated to computers.

The responses so far are all very interesting.

Would it be impolite to encourage “lurkers” to speak up? I have many friends who are far more creative & smarter than me, and it is remarkable how much encouragement these people need in order to speak up.

Perhaps this need for encouragement is BECAUSE my friends are so creative and smart (while, on the other hand, village idiots like me don’t need much encouragement, duh!).

Anyway, all you shy creative people, please consider yourself encouraged.

Greg: I’m delighted you agree with me that TCS, whatever else it is, is also “perfectly satisfactory as pure math”! But alas, not all mathematicians see things the way you do: to many of them, the goal of “real” or “deep” math is to start from dizzyingly-general theorems about infinite-dimensional spaces and then generalize them even further. The TCS goal, of course, is to take laughably-concrete problems that we can’t solve and then identify special cases that we *still* can’t solve. Same standards for deciding what’s true; different aesthetic.

That said, the TCS and math aesthetics drew noticeably closer in just the past decade (see Luca’s blog for more on that theme), and maybe they’ll be indistinguishable at some point in the future. Basically, I see TCS and math as Galapagos finches that lived on different islands for quite a while, but that are probably still interfertile.

n^{(log n)^2} is o(2^{n^1/3}). Does this mean no natural proof can show a 2^{poly(n)} lower bound for discrete log?

I think that the result is stronger, that there is no natural proof that discrete log is not in P. You can’t even get an O(n^{log n}) lower bound, for example.

“n^{(log n)^2} is o(2^{n^1/3}). Does this mean no natural proof can show a 2^{poly(n)} lower bound for discrete log?”

Yes, it does. Good eyes.

Indeed, you can conclude something stronger: that no natural proof can show a better lower bound for discrete log than “half-exponential,” meaning a function f such that f(f(n)) is exponential. Such functions exist, but can’t be defined analytically.
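The asymptotic comparison behind the question — that n^{(log n)^2} is o(2^{n^{1/3}}) — can be sanity-checked numerically by comparing logarithms (taking log base 2 throughout, an arbitrary choice since the asymptotics don’t depend on the base). Notice that at moderate n the quasipolynomial actually looks *larger*; the crossover only kicks in for huge n:

```python
import math

def log2_f(n):
    # log2 of n^{(log2 n)^2} = (log2 n)^3
    return math.log2(n) ** 3

def log2_g(n):
    # log2 of 2^{n^{1/3}} = n^{1/3}
    return n ** (1 / 3)

# at n = 2^30 the quasipolynomial n^{(log n)^2} is still the bigger function...
assert log2_f(2**30) > log2_g(2**30)   # 27000 vs ~1024

# ...but at n = 2^90 it is vastly smaller, as o(2^{n^{1/3}}) requires
assert log2_f(2**90) < log2_g(2**90)   # 729000 vs ~10^9
```

This also gives a feel for why “half-exponential” functions, which sit between all quasipolynomials and all exponentials, can’t be written down in familiar closed form.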

Greg: No, that’s not correct (see my above comment). To get that you need to make a stronger assumption, e.g. that there exists a one-way function that takes 2^{n^epsilon} time to invert.

You are my hero now. Seriously, that list was one of the absolutely most funky things I’ve read in quite some time.

Regarding the book: yep, alright, definitely, will do, point taken, duly noted, excellent idea, high on the list, comin’ any day, no doubt about it…

Greg — that’s a very interesting and insightful story about the Rubiks cube.

I have to respectfully disagree about Theoretical Computer Science being beyond the glass ceiling of popular science. The five paragraphs that Scott posted were perfectly readable — and totally fascinating. I have no doubt that a sufficiently motivated Ph.D could write a *great* book to elaborate on these topics for the educated layperson.

Just think, in a year-and-a-half we could all be reading Scott’s blockbuster new title, *Quantitative Epistemology: What Can We Know, and When Can We Know It?*

C’mon, you know it’ll sell like hotcakes! Well, I’ll buy one, anyway.

Anonymous 1:21, I’m pretty sure all of the results mentioned can be found at one of two sites, Citeseer and ECCC. Journal articles can usually be found here, as well as on authors’ websites. TCS people have less funding than physicists and the free-access spirit seems pretty strong.

If I can’t find the exact result I’m looking for on a search, I find an article on a related topic, then hop to the bibliography entry that seems closest, and repeat. Sort of like internet routing, and works OK.

Your original list was confusing, and the explanation helped a lot. Btw, RSA is not based on discrete logarithm… the best algorithms for factoring and DL have similar — but not equal — runtime.

Scott:

“But alas, not all mathematicians see things the way you do: to many of them, the goal of ‘real’ or ‘deep’ math is to start from dizzyingly-general theorems about infinite-dimensional spaces and then generalize them even further. The TCS goal, of course, is to take laughably-concrete problems that we can’t solve and then identify special cases that we still can’t solve. Same standards for deciding what’s true; different aesthetic.”

Speaking as a student of “pure” mathematics who (nonetheless?) finds TCS to be compellingly “mathematical” and generally fascinating, I think that these apparently different aesthetics are actually equivalent. After all, when you try to identify an unsolvable special case of some unsolvable concrete problem, what you are actually doing, it seems to me, is trying to home in on the problematic phenomenon more closely. If you think about it, this is exactly what mathematicians do: when they try to generalize a theorem, they are trying to identify the theorem’s “root cause”.

Both of these cases are manifestations of a desire to understand the “real” (i.e. deep, philosophical, theological, if you will) reasons why things are and must be the case, as opposed to being content with enumerating truths for “practical” use. This, if anything, is what distinguishes the aesthetic of “pure” mathematics from that of its “applied” counterpart.

Hi Scott

Let me try to suggest a skeleton for a good answer to Sean Carroll on some main insights developed in theoretical computer science which, with more effort, can be explained to a layperson. I would personally prefer a description which at the end is not polemic, apologetic, or too sophisticated.

1. The notion of an algorithm. Some basic insights from the theory of algorithms: the importance of sorting, of hashing, and of backtracking.

2. The idea that a computer cannot do everything. Intractability and the NP = P problem

3. What computers can (magically) do very quickly. FFT and other basic algorithms in mathematical and scientific computing.

4. What computers can do, but only with difficulty: linear programming, semidefinite programming, and related optimization problems.

5. “Beyond a reasonable doubt:” the notion of probabilistic proof

6. The notion of interactive proofs. Like in court, an interactive process can raise the ability to reach the truth.

7. Secrets and lies. The depth and fundamental importance of cryptography. (And while talking about cryptography, sure, the bizarre and amazing notion of zero-knowledge proofs.)

8. (If we want to reach the cutting edge:) The notion of probabilistically checkable proofs and hardness of approximation.

9. Distributed systems. The difficulty of cooperation and synchronization.

10. Understanding, using tools developed in theoretical computer science, the notions of “learning” and “knowledge”.

11. Merging computation and economics: lies and strategies.

12. Quantum computation.

Of course, this refers only to a small part of theoretical computer science (STOC/FOCS).
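Item 3 in the list above names the FFT as a flagship of what computers can magically do quickly: it computes the discrete Fourier transform in O(n log n) steps instead of the naive O(n^2). A minimal radix-2 Cooley-Tukey sketch in Python (illustrative only; real libraries are far more careful):

```python
import cmath

def fft(a):
    # radix-2 Cooley-Tukey FFT; len(a) must be a power of two
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2])   # transform of even-indexed samples
    odd = fft(a[1::2])    # transform of odd-indexed samples
    out = [0] * n
    for k in range(n // 2):
        # twiddle factor combines the two half-size transforms
        w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

# the DFT of an impulse is flat: [1, 0, 0, 0] transforms to [1, 1, 1, 1]
assert all(abs(x - 1) < 1e-9 for x in fft([1, 0, 0, 0]))
```

The recursion halves the problem at every level, which is where the log factor comes from, and it is the reason convolution, polynomial multiplication, and signal processing are feasible at scale.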

“What percent of CS papers are available on the web for free (not at ludicrous prices like 30 bucks each)? For example, what percent of the papers you mentioned for your 5 examples are available on the web for free?”

I found 4 out of 6 of those papers free online, and the other two (both written in 1986) on the ACM Digital Library, where they could be accessed by joining SIGACT for $10. Many, many papers are available for free online at Citeseer, and many more for a small fee by getting an ACM DL account.

And if you can’t find the original paper (say, because the result is old), try course lecture notes.

I myself have been exhorting my colleagues to post *all* their papers (including early ones) on their homepages. As for why many of them haven’t, I think the answer has less to do with elitism than with laziness, inertia, and misguided copyright-wussiness.

Interesting list, but I think it would do good to be more honest about the caveats. If I’m not wrong, the following are among them:

(2) Not true as stated. You need to know the length of the purported proof in advance.

(3) Dependent on an assumption. It would be true if NP were especially hard or especially easy but not in between.

(4) The fact that the memory wipe occurs precisely in intervals of a second, not more, not less, is critical.

(5) This says nothing about computation; it has nothing to do with science. It’s merely about the paucity of techniques we have to prove lower bounds.

cheshire cat:

(2) Nope. I only gave the “if” direction initially; for the “only if” direction, I was careful to include an upper bound on the length of the proof.

(3) Yes, I was assuming one-way functions.

(4) As long as you have a reasonable *lower bound* on the interval between memory wipes, you’re fine.

(5) You’re entitled to your view of Razborov-Rudich, but I strongly disagree with you. RR kicks in as soon as you’ve “broken the code” for a complexity class — that is, discovered a feasible means to test if a function is in the class or not. It tells us that any P!=NP proof will have to work by going through a back door, *without* breaking the code of P. While not a reason to give up, this strikes me as something fundamental, and not at all a trivial artifact of our ignorance.

Scott, yeah, Statement 2 was very cleverly phrased

But the whole “breaking the code” argument is awfully weak to me. There are a lot of complexity results showing we are unlikely to “break the code” for P/poly, it certainly doesn’t follow that we are unlikely to ever show P not equal to NP. In general, proof strategies trying to conclude B by proving A => B where A is false are not very effective.

Here is a great book bordering on pop comp sci, written by a physicist – go figure:

The Dreams of Reason

Heinz Pagels

Pagels also wrote a wonderful book on cosmology.

To wit, among other comments: “…in order to express the invariant structure of a theory we must always use some representation — a specific language, often mathematical — but the invariant is really independent of that specific representation.”

And so, “if algorithms are good at describing dynamic processes” then this is only true within the context of the narrow set of their representations as we understand them, based on our previous theories, or perhaps better, said dynamic processes can be described in an infinite multitude of representations of which we with our limited “cognitive preferences” are only privy to an achingly small few. And really, aren’t static phenomena only snapshots of the dynamic continuum anyway?

–deltacephei

“Not all mathematicians see things the way you do: to many of them, the goal of ‘real’ or ‘deep’ math is to start from dizzyingly-general theorems about infinite-dimensional spaces and then generalize them even further.”

Of course I can’t be happy about this description of pure mathematics, and frankly I’m a little surprised to hear it from you. As I said, pure math comes in many different flavors. Naturally, when you see a flavor of it that is far from your own, it might impress you as excessively abstract and on top of that pretentious. But wouldn’t it be safe to suppose, if well-regarded people are doing the research, that their area can also be made to look unpretentious and beautiful?

More specifically, the presence of infinite-dimensional spaces is a strange criterion for deciding that a topic is counting angels on the heads of pins. I don’t see that any of the seven Clay Prize problems directly refer to infinite-dimensional spaces. The Yang-Mills problem comes the closest, but that’s because it’s a problem from physics!

But even if an area does use infinite-dimensional spaces, is that so bad? After all, P (i.e., boolean functions computable in polynomial time) is an infinite-dimensional vector space over the field with two elements, and so is Delta_n^P. Should I say that the conjecture that the polynomial-time hierarchy does not collapse is exactly one of your absurdly general conjectures about infinite-dimensional spaces?

Granted, non-collapse is a conjecture rather than a theorem. The set of oracles is also an infinite-dimensional vector space over Z/2, and the set of generic oracles is a highly abstract subset of this vector space. Do you think that it pushes the limits to prove a separation result using generic oracles?

But maybe these examples are too clever by half. If any theorem naturally fits the description of an absurdly general fact about infinite-dimensional spaces, it would be the spectral theorem for self-adjoint operators on a Hilbert space. Do you seriously mean to say, given that you are a quantum information theorist, that the spectral theorem is too abstract to matter?

“The TCS goal, of course, is to take laughably-concrete problems that we can’t solve and then identify special cases that we still can’t solve.”

That is sometimes the goal, except that it’s usually a disingenuous goal because everyone knows that the special cases are just as hard as the general cases. At least, so it went with Avi Wigderson’s special case of P vs #P at the ICM in Madrid.

“That said, the TCS and math aesthetics drew noticeably closer in just the past decade (see Luca’s blog for more on that theme), and maybe they’ll be indistinguishable at some point in the future. Basically, I see TCS and math as Galapagos finches that lived on different islands for quite a while, but that are probably still interfertile.”

I don’t know quite when or whether TCS drew closer to pure mathematics, but I can point to the fact that both Luca and Avi gave invited talks at the International Congress of Mathematicians. Could the TCS community be disregarding some important olive branches? It is true that TCS and math are on different “islands”, in the trivial sense of being in different departments. That, I think, is the only real schism. It’s like what they say about America and Britain: two nations divided by a common language.

“I have to respectfully disagree about Theoretical Computer Science being beyond the glass ceiling of popular science.”

The glass ceiling is in different places for different people, and in different venues. I am thinking of what gets published in the New York Times Science section. That is not at the level of what is written by a PhD that you might find interesting, but rather more at the level of what you might write for people on the next rung down the ladder.

Look at how much the Times watered down Sara Robinson’s article on the result that primality is in P. It was still a very nice article, but it was radically restricted to the basics. The Times really seems to believe that the significance of theorems is all but beyond their readership, that it is only worth mentioning when the story is as big as the Poincare Conjecture or Fermat’s Last Theorem. Although I personally think that the Times is too conservative on this point, they probably aren’t completely wrong.

Greg, there are tons of popular science books on the Riemann hypothesis, Godel’s proof, string theory, etc. Are you seriously suggesting that it is possible to “dumb down” these topics enough to write about them, but that TCS is not? In my own experience TCS is actually easier to explain to lay people than those topics. When my own pop sci TCS book gets published (in the alternate reality where it actually gets finished) we can debate whether or not I was right.

Also, let’s not forget that GEB, one of the granddaddies of popular science books, was largely about computer science.

“Greg, there are tons of popular science books on the Riemann hypothesis, Godel’s proof, string theory, etc.”

I didn’t have the impression that there are “tons” of popular books on the Riemann hypothesis and Godel’s theorem. There are a few. Unlike with a newspaper article, with a book you have the freedom to limit your audience. So yes, you can have popular expositions about a theorem like Godel’s theorem, but it will sell less well than popular expositions of other things. Which is not to say that I am against these efforts. I think that popular exposition is very important. Popular exposition of TCS is a good idea, but it is unlikely to be a great idea.

The one guy who managed to penetrate, or maybe temporarily raise, the glass ceiling of exposition in mathematics was Martin Gardner. In his heyday, he was fantastic. (He is more or less retired now.) But there isn’t anyone else like Martin Gardner.

String theory is a rather different case, as is GEB. Even though string theory is growing closer to rigorous mathematics, its popular expositions still dwell on the physics. They don’t discuss theorems. Likewise GEB slipped in Godel’s theorem in the midst of many non-mathematical topics. Its discussion of computer science tended to wander away from rigor. But I agree that GEB is a partial exception.

Another partial exception is Benoit Mandelbrot.

Okay, I have to admit to the fantastic book “The Mathematical Experience”, by Davis and Hersh. It has full respect for theorems and I know from examples that it puts fire in the bellies of mathematics students. This book would have completely contradicted my point, if it had sold better.

Another interesting example is Raymond Smullyan. He introduced formal logic and Godel’s theorem by way of logic puzzles. That was very clever and I admire Smullyan too, but he did not sell on the same scale as Feynman and Hawking. The only mathematical writer who has come close to that in very recent years is John Allen Paulos, who did it by bemoaning society’s ignorance of mathematics.

About example 1: That would have been true if chess were in PSPACE. Alas, it is known to be EXPTIME-complete:

Aviezri S. Fraenkel and David Lichtenstein, “Computing a Perfect Strategy for n×n Chess Requires Time Exponential in n”, ICALP 1981, pp. 278–293.

Perhaps if we had a few aliens, and we could talk with each of them while preventing them from talking with each other…

anonymous, that’s only one generalization of chess. For example, I think Fraenkel and Lichtenstein ignore the threefold-repetition rule. Games are generally EXPTIME-complete if they are loopy, and otherwise not. There are generalizations of chess which are PSPACE-complete.

So in short: Scott could weasel his way out of this objection too.

Ah, just put a polynomial upper bound on the number of moves…
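A toy sketch of why that bound helps: depth-first evaluation of a bounded-length game uses memory proportional to the remaining number of moves, not to the (exponential) number of positions examined, which is the intuition behind “polynomially bounded games are in PSPACE.” The game below is invented purely for illustration — positions are integers, a move subtracts 1 or 2, and a player with no move loses.

```python
# Why a polynomial move bound puts a game in PSPACE, in miniature:
# the recursion depth (hence the space used) is bounded by the number
# of remaining moves, even though the tree explored is exponential.

def moves(pos):
    """Legal successor positions of the toy subtraction game (made-up rules)."""
    return [pos - d for d in (1, 2) if pos - d >= 0]

def first_player_wins(pos):
    """True iff the player to move has a winning strategy.

    Space usage is linear in the move bound: only the current line of
    play lives on the stack at any moment.
    """
    return any(not first_player_wins(nxt) for nxt in moves(pos))

# In this game, positions that are multiples of 3 are losses for the
# player to move.
print([first_player_wins(n) for n in range(7)])
```

The same depth-first shape works for any game once you cap the number of moves; without the cap (a “loopy” game), the recursion need not terminate, which is where the EXPTIME-completeness results live.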

We are not chess gods yet.

But we are getting close.

Both Gardner and Smullyan had that rare gift: the common touch. E.g., I have Smullyan’s Chess Mysteries of Sherlock Holmes, which is a breezy account of chess analysis … retrograde analysis, naturally!

As for Martin Gardner, his Flight of Peter Fromm is superior moral fiction (pretty heavy-duty fiction, actually) that explains WHY Gardner shifted his attention from theology to mathematics.

But for purely “Aaronsonesque” mathematical fiction, isn’t there only one modern-day contender: Rudy Rucker? After all, Rucker’s White Light is the only fiction known to me where the protagonist begins by physically checking into Hilbert’s Hotel. And Rucker’s Master of Space and Time is even more hilarious.

Dystopian mathematical fiction is a rare category. By far the most dystopian example I know is Carter Schultz’ Radiance. Within US defense circles, even knowing that this book exists is grounds for raised eyebrows—Schultz’ fiction is perhaps the nearest thing the West has to scientific samizdat.

Radiance‘s author is Scholz, not Schultz.

How about the arithmetic hierarchy, and the fact that you can classify logical complexity by quantifier patterns? For classical logic, bounding the quantifiers links it up with the polynomial hierarchy. Pure mathematical logic, driven by concerns about provability and the foundations of mathematics, meets resource bounds and turns into TCS.
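The quantifier correspondence alluded to here can be made explicit. A rough sketch, using the standard textbook definitions (not taken from the comment itself):

```latex
% Arithmetic hierarchy: alternating unbounded quantifiers over the
% naturals, with a decidable matrix R.
A \in \Sigma^0_k \iff \Bigl( x \in A \Leftrightarrow
  \exists y_1\,\forall y_2\,\cdots\,Q y_k\; R(x, y_1, \ldots, y_k) \Bigr)

% Polynomial hierarchy: the same alternation pattern, but each
% quantifier ranges over strings of polynomial length and the matrix
% is decidable in polynomial time.
A \in \Sigma^p_k \iff \Bigl( x \in A \Leftrightarrow
  \exists y_1\,\forall y_2\,\cdots\,Q y_k\; R(x, y_1, \ldots, y_k) \Bigr),
  \qquad |y_i| \le \mathrm{poly}(|x|),\quad R \in \mathrm{P}
```

Same shape, different resource bound: replace “decidable” with “polynomial-time decidable” and restrict the quantified objects to polynomial size, and the arithmetic hierarchy becomes the polynomial hierarchy.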

As to the comment about games being generally EXPTIME-complete, or less, note that Go is EXPSPACE-complete and there are variants of chess and Go whose complexity is probably even worse.

Now, what about areas of TCS other than complexity theory?

While examples are great, there are also some general principles behind why TCS is so interesting.

Computation is an inherently interesting object. The hardness of computation explains why physics, even if it reached the “theory of everything”, would not solve all the problems of biology, chemistry, psychology, and so on.

What is fascinating is how little we know about it at the moment. TCS seems to be at the level corresponding to 17th-century physics. This is what makes it so interesting: not what TCS has already done, but what it might do in the future.

Scott, could you give me a good reference specifically on the Riemann Hypothesis angle?

“Scott, could you give me a good reference specifically on the Riemann Hypothesis angle?”

What I said applies to formal proofs of any statement whatsoever. The Riemann Hypothesis was just an example.

“Now, what about areas of TCS other than complexity theory?”

A main reason I went into complexity theory is that I couldn’t find so many tender nuggets in the other parts of TCS. If any distributed computing or programming language people disagree with that assessment, don’t flame me — nugget me!

I wasn’t flaming, just asking a sincere question. There is, however, an ambiguity: whether we’re asking for significant developments in TCS that would be worthy of writing up in a book, or for Sean’s original question — significant developments, especially recent ones, that help us understand the universe.

The discovery of phrase-structure grammars, or that you can build set-theoretical models for the lambda calculus, or the discovery of lazy evaluation strategies for programming languages, or the discovery of the various linear logics are all important, but they have limited relevance outside CS, except sometimes to cognitive science. It’s difficult to think of anything outside complexity theory that applies more broadly.

Scott, what TCS results do you intend in example #1 with the chess strategy? It sounded like the god has a proof of a theorem describing the winning strategy, and this is then encoded as a probabilistically checkable proof. But maybe you mean something else.

Scott, as for interesting work in distributed computing, a ‘friend’ of mine did some work in robust file distribution techniques. You might find the actual theory behind it to be so straightforward as to be trivial, and I would generally be inclined to agree, but there’s this funny business of other smart people having attacked the same problem and come up with much more complicated, less serviceable solutions.

Bram, funny you should mention that — while your friend’s file distribution software might not be theory, it is something that I downloaded just a week ago, and that I’ve been happily using ever since to watch South Park episodes. Tell your friend I said thanks — he’s enabled a whole new level of procrastination!

“Scott, what TCS results do you intend in example #1 with the chess strategy?”

As I said before, the result in question is IP=PSPACE [Shamir, 1990].
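The “short conversation about sums of polynomials over finite fields” from the original post is the sum-check protocol, the engine behind IP=PSPACE. Here is a minimal honest-prover/verifier loop; the polynomial, the prime, and the degree bound are all made up for illustration.

```python
# Sum-check sketch: an honest prover convinces a verifier that the sum
# of a multivariate polynomial g over {0,1}^n equals a claimed value,
# one variable per round, instead of enumerating all 2^n terms.
import itertools
import random

P = 2**31 - 1   # a prime field (choice is illustrative)
D = 2           # per-variable degree bound of g
N = 3           # number of variables

def g(xs):
    """Toy polynomial: g(x1, x2, x3) = x1*x2 + 2*x3^2 + 1 over F_P."""
    x1, x2, x3 = xs
    return (x1 * x2 + 2 * x3 * x3 + 1) % P

def lagrange_eval(points, x):
    """Evaluate the unique degree-<=D polynomial through `points` at x, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def partial_sum(prefix, var_value):
    """Prover's round polynomial: sum g over boolean tails of the later variables."""
    rest = N - len(prefix) - 1
    return sum(g(prefix + [var_value] + list(tail))
               for tail in itertools.product((0, 1), repeat=rest)) % P

# The claim the prover wants to establish (computed honestly here).
claim = sum(g(list(xs)) for xs in itertools.product((0, 1), repeat=N)) % P
prefix = []
for _ in range(N):
    # Prover sends the round polynomial as D+1 evaluations.
    poly = [(t, partial_sum(prefix, t)) for t in range(D + 1)]
    # Verifier checks consistency with the running claim...
    assert (lagrange_eval(poly, 0) + lagrange_eval(poly, 1)) % P == claim
    # ...then challenges the prover at a random field element.
    r = random.randrange(P)
    claim = lagrange_eval(poly, r)
    prefix.append(r)
# Final spot-check: a single evaluation of g at the random point.
assert g(prefix) == claim
print("sum-check accepted")
```

The verifier does only O(N·D) field operations plus one evaluation of g, while a cheating prover survives each round with probability at most D/P — the soundness that, bootstrapped through IP=PSPACE, lets the alien settle chess (in its PSPACE formulations) with a short conversation.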

One interesting thing about the chess example is that it’s not sufficient for the alien merely to know a winning strategy for chess: it has to prove which side wins, which implicitly requires accounting for the entire game tree. So the alien’s job would be much harder than merely playing optimally against any opponent.

My last post should have been qualified by assuming some sort of plausible conjecture, of course.