The Aaronson $25.00 Prize

[Update]

For those of you who’ve been living in a non-wifi-enabled cave, four days ago Stephen Wolfram awarded a $25,000 prize to a 20-year-old undergraduate named Alex Smith, for proving that a particular two-state, three-symbol Turing machine is universal. The prize was to celebrate the fifth anniversary of Wolfram’s paradigm-smashing, foundation-shaking masterpiece, A New Kind of Science. (More from Bill Gasarch’s blog, Nature, and Scientific American.)

Smith sounds like a swell guy who entered this contest for exactly the right reasons: he was at his parents’ place over summer break and had nothing better to do. He deserves the money, and I sincerely hope the CS theory community hasn’t heard the last from him.

Predictably, though, as soon as this story broke I started getting emails from journalists asking me about the far-reaching scientific implications of the new universality proof. In trying to give them an honest answer — one that wouldn’t be misunderstood, or spun to support a pre-existing storyline with which I disagreed — I inevitably came off like an ornery old sourpuss. From Scientific American:

Of course, there may be a reason the problem languished. “Finding the smallest universal [Turing machines] is a neat recreational pursuit,” quantum computation researcher Scott Aaronson of the Massachusetts Institute of Technology says, but “it’s no longer seen as connected to the central questions of the field.” …

“The impact of NKS on all the areas of computer science and physics I’m familiar with has been basically zero,” he says. “As far as I can tell, the main impact is that people now sometimes use the adjective ‘Wolframian’ to describe breathtaking claims for the trivial or well-known.” [Martin] Davis offers a sunnier take: “The book has a lot of beautiful pictures.”

And from Nature:

The solution isn’t hugely relevant to modern computer science, says Scott Aaronson, a computer scientist at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts. “Most theoretical computer scientists don’t particularly care about finding the smallest universal Turing machines,” he wrote in an e-mail. “They see it as a recreational pursuit that interested people in the 60s and 70s but is now sort of ‘retro’.”

Having partially degrumpified, in the remainder of this post I wish to offer something positive.

But first some background: a month after NKS came out, I wrote a review of it for the journal Quantum Information and Computation, in which I examined Wolfram’s claims about quantum mechanics and computational complexity, and explained what I saw as the problems with them. (Rather than rehash the review, I’ll just point you there if you’re interested.)

Today I’d like to celebrate the fifth anniversary of my critical review of NKS, by offering a $25 prize for stodgy, conventional work in the field of quantum complexity theory.

The Aaronson $25.00 Challenge

In NKS, Wolfram places himself among those computer scientists and physicists who doubt the possibility of quantum computers, not for any practical reason but because they disbelieve quantum mechanics itself. As he writes on page 771:

Indeed within the usual formalism [of quantum mechanics] one can construct quantum computers that may be able to solve at least a few specific problems exponentially faster than ordinary Turing machines. But particularly after my discoveries in Chapter 9 [‘Fundamental Physics’], I strongly suspect that even if this is formally the case, it will still not turn out to be a true representation of ultimate physical reality, but will instead just be found to reflect various idealizations made in the models used so far.

Here, then, is the challenge:

If a quantum computer can efficiently solve a problem, can it also efficiently convince Wolfram that the solution is correct? More formally, does every language in the class BQP admit an interactive protocol where the prover is in BQP and the verifier is in BPP?

In other words: can quantum computers always “show their work”? It’s obvious, for example, that if a quantum computer spit out the factors of a 5,000-digit number, you wouldn’t have to believe quantum mechanics (or even know what it was) to check whether the answer was right. I’m asking whether every problem solvable by a quantum computer has the same property. And to make things fair to the quantum computer, I’ll let it give not just a static proof but also an interactive protocol, by which a distrustful polynomial-time classical verifier could become convinced, to arbitrarily high confidence, that the quantum computer knows the right answer.
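
To make the factoring case concrete, here is the entire check a skeptic would need to run (a toy snippet: the function name is mine, and the small numbers stand in for the 5,000-digit case):

    def check_factorization(n, factors):
        # Multiply the claimed factors back together; no quantum mechanics required.
        product = 1
        for f in factors:
            if f <= 1:
                return False      # reject trivial "factors"
            product *= f
        return product == n

    print(check_factorization(15, [3, 5]))   # True
    print(check_factorization(15, [3, 4]))   # False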

(An example for the uninitiated: suppose you had two graphs G and H, and suppose you picked one of the graphs at random, randomly permuted its vertices, and gave the result to a quantum computer. And suppose the quantum computer could unfailingly tell you which graph you started with. Clearly this should convince you that G and H are not isomorphic — since if they were isomorphic, then the quantum computer couldn’t have done better than guessing! And this is true even though you never received a proof of non-isomorphism that you could hand to someone else.)
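
Here is a toy classical simulation of that protocol, just to pin down who sends what. The brute-force isomorphism test stands in for the quantum computer's claimed ability, and all the names are mine:

    import random
    from itertools import permutations

    def permute_graph(edges, perm):
        # Apply a vertex permutation to an undirected edge set.
        return frozenset(frozenset((perm[u], perm[v])) for u, v in edges)

    def recognizes(edges1, edges2, n):
        # Brute-force isomorphism test: a stand-in for whatever lets the quantum
        # computer unfailingly tell which graph the challenge came from.
        target = frozenset(frozenset(e) for e in edges2)
        return any(permute_graph(edges1, p) == target for p in permutations(range(n)))

    def prover_answer(challenge, G, H, n):
        # The prover announces which of G, H the scrambled graph came from.
        return 0 if recognizes(G, challenge, n) else 1

    def verifier_accepts(G, H, n, rounds=20):
        # Accept "G and H are non-isomorphic" only if the prover names the source
        # graph correctly in every round.  If G and H were isomorphic, the challenge
        # would be distributed identically either way, so any prover would be caught
        # with probability 1 - 2^(-rounds).
        for _ in range(rounds):
            b = random.randrange(2)                       # verifier's secret coin
            perm = list(range(n)); random.shuffle(perm)   # verifier's secret permutation
            challenge = permute_graph([G, H][b], perm)
            if prover_answer(challenge, G, H, n) != b:
                return False
        return True

    # A 4-cycle versus a triangle plus an isolated vertex: genuinely non-isomorphic.
    G = [(0, 1), (1, 2), (2, 3), (3, 0)]
    H = [(0, 1), (1, 2), (2, 0)]
    print(verifier_accepts(G, H, n=4))   # True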

I’ll award $25 either for a proof that every quantum computation can be “counter-Wolframized,” or for an oracle relative to which some quantum computation provably can’t be. If both problems are solved then I’ll award $25 for each. Every serious submission will be reviewed by a Prize Committee consisting of me. The Committee may also choose to award smaller prizes for partial results.

Note: Much as I’d like to “pull a Wolfram,” the beautiful question above was (to my knowledge) first asked by Daniel Gottesman, at a conference in February 2004. Also, the idea of a $25 prize was suggested to me by Mike Mosca.

Update (10/30): A commenter pointed me to this thread in the Foundations of Mathematics (FOM) mailing list, which contains an actual technical discussion of Smith’s universality proof. Of particular interest:

  1. an argument by Vaughan Pratt that Smith’s universality proof is wrong,
  2. a response by Todd Rowland of Wolfram Research,
  3. a post from Wolfram himself, which, though written in his trademark way, comes closer than anything else I’ve seen from him to addressing actual hard questions about the definition of universality, and
  4. this comment from John McCarthy: “In the 1950s I thought that the smallest possible (symbol-state product) universal Turing machine would tell something about the nature of computation. Unfortunately, it didn’t. Instead as simpler universal machines were discovered, the proofs that they were universal became more elaborate, and [so] did the encodings of information.”

In judging the correctness of Smith’s proof, the key question is what counts as “universality.” As I explained to the journalists who emailed me, the rules of Wolfram’s prize left a huge gray area by explicitly refusing to specify this. In particular: what kinds of input and output encodings are allowed? How do we make sure the real computational work is done by the Turing machine itself and not the encoding procedures? Does the tape have to be filled with 0’s initially, or can it be filled with other patterns, and if so which ones? Since the two-state Turing machine in question has no “halt” state, what external conditions can we impose to determine when the machine has halted?

Still, I decided not to make a fuss about such things in my original post, since it seemed clear from Smith’s writeup that (1) he was aware of these issues, and (2) there was some nontrivial sense in which he proved universality. I wasn’t going to lose sleep over which sense, for the simple reason that I’d never lost sleep over the (2,3) universality question itself!

74 Responses to “The Aaronson $25.00 Prize”

  1. Fuggs Says:

    Twenty five bucks??? You must be pretty sure no one’s gonna solve your problem to risk half your research budget.

  2. Bram Cohen Says:

    If they’d asked me I’d have sounded like an even bigger grouch. None of the actual work here was done by Wolfram (same for the rule 110 cellular automaton result), and Wolfram’s ‘principle of computational equivalence’, which he’s predictably touting this result as evidence for, is totally busted. Specifically, there’s Presburger arithmetic, and there are weakened logics which can prove their own consistency because they’re incapable of diagonalization.

    Even more damning, I’m quite certain that a bunch of the cellular automata Wolfram classifies as nontrivial fall into the middle category, in particular one for which he proudly proclaims in his book that he went through two million initial starting states and that they all did something semi-complicated before eventually settling down. Duh.

    I will double your offer Scott, and give a prize of $50 to anyone who demonstrates that one of the automata which Wolfram classifies as non-trivial, and hence supposedly universal, is actually non-universal.

  3. Job Says:

    Is NBPP a class? Sounds like you’re asking if BQP ⊆ NBPP. NBPP would contain NP, so if the answer is no then BQP ⊄ NP, and that sounds like it’s worth more than $25. 😀

  4. Scott Says:

    You must be pretty sure no one’s gonna solve your problem to risk half your research budget.

    Fuggs, I’m even further out on a limb than you think: we’re talking $25 of my personal money.

  5. anonymous Says:

    Wolfram should be envied. He is rich and can do whatever research he likes without worrying about what others think.

    Much of his research may lead nowhere. But at least he’s having fun.

  6. Greg Kuperberg Says:

    The first case that I would ask about is Simon’s algorithm (with respect to an oracle). How would a prover, even one with unlimited computational power, establish that there is no hidden period?

  7. Scott Says:

    Job: The class I’m talking about has no standard name, but I’ve been calling it IP_BQP (that is, interactive proofs with a BQP prover). It’s clear that IP_BQP ⊆ BQP, and the question is whether IP_BQP = BQP.

    To answer your first question: no, I don’t know of any class called NBPP (though there is a class called ∃BPP, as well as MA and AM). NBPP would not be a good name for the class I have in mind: the name needs a ‘Q’ somewhere to refer to the prover being a quantum computer, as well as an IP for interactive protocol.

    We already know, from the famous IP=PSPACE theorem, that if a quantum computer can solve a problem then an omniscient prover can convince you of the answer. (This follows since BQP⊆PSPACE.) So the interesting question is whether the quantum computer itself can do so: if it’s powerful enough to solve a problem, is it also powerful enough to convince a classical computer that it did?
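
    In symbols (using the ad hoc IP_BQP notation from above), the containments in play are

        IP_BQP ⊆ BQP ⊆ PSPACE = IP,

    so every BQP language has some interactive proof with an all-powerful prover; the question is whether the honest prover can always be taken to be a BQP machine.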

    Proving the answer is no unconditionally would indeed be worth more than $25, as it would imply P≠PSPACE! That’s why I only asked for an oracle relative to which the answer is no. Obviously that’s still a hard problem (if it weren’t I wouldn’t be offering money for it), but I think it’s entirely within the realm of possibility.

  8. Greg Kuperberg Says:

    Okay, there is something that troubles me about using Simon’s problem as a counterexample. Namely, I am told that statistical difference is a problem in SZK, and that SZK is closed under complementation. So statistical indistinguishability is in SZK, and therefore in AM. So it looks like some versions of Simon’s problem are in AM, even if Merlin is asked to prove non-existence of a period. However, I am getting confused as to what is really going on.

  9. Scott Says:

    Greg: Simon’s problem is the first example I thought of as well, and I still see it as the best route to an oracle separation.

    The difficulty is this: we know Merlin can convince Arthur that a black-box function is one-to-one rather than two-to-one, using either approximate counting or the “pick a random x, send f(x), tell me what x was” trick. The “only” problem with these protocols is that Merlin’s strategy can’t be implemented in quantum polynomial time, because of the BBBV lower bound. But how can we characterize all possible protocols that convince Arthur that f is one-to-one, and show that Merlin’s strategy can’t be implemented in BQP for any of them?
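
    (A toy classical sketch of that second trick, just to make the soundness intuition concrete; the helper names and the tiny example functions are mine:)

        import random

        def arthur_accepts(f, N, merlin, rounds=30):
            # Arthur tests the claim "f is one-to-one on {0,...,N-1}": he sends f(x)
            # for a secret random x and demands x back.  Against a two-to-one f, even
            # an all-powerful Merlin returns the wrong preimage half the time, so the
            # soundness error drops to 2^(-rounds).
            for _ in range(rounds):
                x = random.randrange(N)      # Arthur's secret
                if merlin(f(x)) != x:
                    return False             # caught: f is (probably) not one-to-one
            return True

        def honest_merlin(f, N):
            # Unbounded Merlin: inverts f by brute force and answers with a preimage.
            table = {}
            for x in range(N):
                table.setdefault(f(x), x)
            return lambda y: table[y]

        N = 16
        one_to_one = lambda x: (5 * x + 3) % N   # a bijection on {0,...,15}
        two_to_one = lambda x: x // 2            # collapses pairs

        print(arthur_accepts(one_to_one, N, honest_merlin(one_to_one, N)))  # True
        print(arthur_accepts(two_to_one, N, honest_merlin(two_to_one, N)))  # False w.h.p.

    (The brute-force inversion inside honest_merlin is exactly the step that, by the BBBV bound, a BQP Merlin can’t carry out with only black-box access to f.)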

  10. Scott Says:

    Much of his research may lead nowhere. But at least he’s having fun.

    I’m having fun as well.

  11. Greg Kuperberg Says:

    At the moment I still want to fish for an oracle relative to which BQP is not even in IP.

    Here is a case to consider. In our paper we discuss an imitation random quantum oracle, which we think sends |00…0> to an approximately random state |psi>. Now post-condition the oracle (or rather the classical oracle that is the source of its entropy) on the outcome that the first qubit of |psi> is either |0> or |1>. The result is a kind of hardest-possible oracular problem which is in BQP, similar to the way that the random oracle can be used to make very hard NP-complete problems. Is this problem in IP?

  12. Greg Kuperberg Says:

    I am not convinced that Wolfram is having fun with his shtick. He’s having ego, but that may not be any fun at all.

  13. Scott Says:

    Greg, that problem might indeed be outside IP, if only we knew how to analyze it.

    (Note, on the other hand, that Recursive Fourier Sampling is in IP and even IP_BQP.)

  14. Greg Kuperberg Says:

    If you could show that BQP = IP_BQP for all oracles, then you would know how to analyze the mock quantum oracle in particular. I have the feeling that if you’re smart enough to do the latter, you’re also smart enough to do the former.

    I also note that you could theoretically owe $50. Someone might show that BQP = IP_BQP for, say, all algebraic oracles; and at the same time find some oracle for which they are not equal.

  15. Greg Kuperberg Says:

    (Argh, tricked again by a preview that’s better than the final result.)

  16. Scott Says:

    I also note that you could theoretically owe $50. Someone might show that BQP = IP_BQP for, say, all algebraic oracles; and at the same time find some oracle for which they are not equal.

    Yeah, I said that. My own guess, for whatever it’s worth, is that BQP = IP_BQP in the real world but not in all relativized worlds.

  17. Peter Sheldrick Says:

    anonymous,

    I don’t care if he’s having fun. If you give me tons of cash on the condition that I go totally bat-shit crazy, I would pass.

    I don’t know what you guys think, but the thought that everything could be understood in terms of tiny cellular automata just makes me feel sick.

  18. Thomas S. Says:

    Proving the answer is no unconditionally would indeed be worth more than $25, as it would imply P≠PSPACE!

    A nonrelativistic BQP≠IP_BQP would also separate BPP from BQP; wouldn’t that be a lot more interesting than P≠PSPACE?

  19. Thomas S. Says:

    Rats, I didn’t get the subscripts on IPBQP… it showed up correctly in preview, when I used html ‘sub’ ‘/sub’ tags.

  20. Kris Says:

    Hi, I tried to read the pdf of the review, but the typesetting was badly broken (see e.g. page 3). Is the file corrupted?

  21. Douglass Says:

    Scott, I am a computer science major!! But while I was searching Google, I found your blog. What am I supposed to do now?

  22. Andy Says:

    And I thought you wanted everyone to like you … I guess everyone doesn’t include Wolfram =D

  23. Scott Says:

    Kris: Sorry about that! The PDF displays fine for me, but I guess not in everyone’s viewer. I changed the link to point to the arXiv page.

  24. teaspoon Says:

    I don’t know what you guys think, but the thought that everything could be understood in terms of tiny cellular automata just makes me feel sick.

    Funny… I thought the criticism of Wolfram was that it’s obvious that everything can be understood in terms of cellular automata.

  25. egal Says:

    You have every right to spend your $25 as you wish, even to be the only member of your prize committee. I wish more people and companies would finance research, any kind of research, even if you think it is not worth it. But that is only your personal opinion; the field of small Turing machines has been fruitful and is still productive. You should inform yourself a little more…

  26. onymous Says:

    Wait, Wolfram said something about “my discoveries in Chapter 9 [‘Fundamental Physics’]”? I read that chapter, but I must have overlooked the part where there were “discoveries” and not “vigorous but unconvincing hand-waving.” Oops.

  27. Dave Bacon Says:

    Scott I would work on your problem, but only if you promise not to work on it 😉

    BTW the other contribution Wolfram has made was that we get to use “A New Kind of X” to mock “X”. I do hope he writes a sequel, “An Even Newer Kind of Science.”

  28. Peter Sheldrick Says:

    Funny… I thought the criticism of Wolfram was that it’s obvious that everything can be understood in terms of cellular automata.

    Sorry, I forgot that on this blog the equation

    Everything = All a CS guy would ever want to know

    holds. Excuse my faux pas ;).

  29. Scott Says:

    Scott I would work on your problem, but only if you promise not to work on it

    I can promise I won’t have much time to work on it!

  30. Wolframian Says:

    My favorite Wolfram story is that his company had to put orange juice all over the office to remind him to eat something besides meat.

    Why, you may ask? Because homeboy gave himself scurvy.

    I’m all for the typical eccentricity that comes with our field, but seriously. Fucking scurvy?

  31. El Christador Says:

    Still, you have to admit the invention of tungsten was some nice work.

  32. Greg Kuperberg Says:

    Because homeboy gave himself scurvy.

    Aha! The truth comes out on his attitude towards piracy.

  33. Greg Kuperberg Says:

    BTW the other contribution Wolfram has made was that we get to use “A New Kind of X” to mock “X”. I do hope he writes a sequel, “An Even Newer Kind of Science.”

    Indeed. The best review of his book on Amazon is a single punctuation mark: “A New Kind-Of Science.”

  34. anonymous Says:

    Wolfram must have got a lot of flack for claiming priority on so many things. He probably thinks of himself as being embroiled in a modern day Newton/Leibniz controversy, although he probably refers to the “Newton/Leibniz” thing as the “Newton/Leibniz/Wolfram” thing.

  35. Scott Says:

    Not the “Wolfram/Wolfram/Wolfram” thing?

  36. Coin Says:

    Yeah, I said that. My own guess, for whatever it’s worth, is that BQP = IP_BQP in the real world but not in all relativized worlds…

    A nonrelativistic BQP≠IP_BQP would also separate BPP from BQP; wouldn’t that be a lot more interesting than P≠PSPACE?

    Hello, please excuse the dumb question, but what does “relativized” / “nonrelativistic” mean in this context? Does “in a relativized world” just mean “assuming some set of oracles to be accessible”?

  37. Oleg Says:

    While the ‘principle of computational equivalence’ is invalid as stated – there is plenty of room between trivial and universal – I am wondering if something can be salvaged.
    For example: what can we say about the probability of a random binary string to be an index of a complete set? Define “probability” and “completeness” to taste.

    The mere fact that it took 12 years to solve Post’s problem suggests that these in-between sets might be rare beasts.

    Or then, maybe not. It’s one thing to find a set that looks non-universal and it’s quite another to find a set that is provably so.

  38. Scott Says:

    Coin: Not a dumb question, just a basic one. Yes, “in a relativized world” means “assuming some oracle A that every machine has access to.” An “unrelativized separation” is one that holds in the real world, with no oracle around.

    “Nonrelativistic” means far from the speed of light, where the Newtonian approximation remains valid. 🙂

  39. Scott Says:

    Oleg:

    what can we say about the probability of a random binary string to be an index of a complete set?

    Under the usual probability measure (where strings of length n get weighted by ~2^(-2n)), the probability of a random binary string being the index of a complete set will be some uncomputable real number between 0 and 1 (like Chaitin’s constant). Not sure what that tells us though.

    The mere fact that it took 12 years to solve Post’s problem suggests that these in-between sets might be rare beasts.

    Indeed, fifty years later we still don’t have any “natural” example of a problem that’s intermediate between computable and r.e.-complete. On the other hand, we do have natural problems that we think are intermediate between P and NP-complete (factoring and graph isomorphism being two of them).

    For me, Wolfram’s PCE ultimately boils down to “all computational systems are either universal or ‘obviously simple,’ except for the exceptions.”

  40. Coin Says:

    Scott, thanks.

  41. Jack in Danville Says:

    Scientists, correspondents, lend me your blog.
    I write not to praise Wolfram, but to bury him.
    The ego researchers reveal is forever.
    The good applications software they produce
    Should be forgotten. So let it be with Wolfram.
    The noble Kuperberg sees an explanation in
    Wolfram’s scurvy; and the indefatigable
    Aaronson mocks filling in the minutia
    Of Science. And this scientist we are here to dis
    Hasn’t held an academic position in decades,
    Started his own software company, and
    Echoed some cellular automata can model anything.
    And so, with the permission of our host, and
    Regular correspondents, for they are all
    Scientists and mathematicians, I will
    Put Wolfram in the ground.
    He wasn’t the first. Not to see universal
    Computation in automata, not to declare
    Modeling reality in software, and certainly
    Not to espouse Strong AI!
    But isn’t all this “We can model it to
    Arbitrary granularity” nothing more than
    A posteriori conjecture? For who has actually
    Done such a thing? Or having modeled
    Even a single human life up til NOW,
    Or filled Cyc with an ontology
    Overflowing, coaxed that code to
    Actually do something interesting.
    And so, when scientists make conjectures,
    Like Strong AI, does than make it Science?

    …I offer the Fox challenge: $250 to anyone who can show (to my satisfaction) Strong AI is more than a conjecture.

    P.S. Where’s Democritus?

  42. Jack in Danville Says:

    Damn..should have been “…does that make it Science?”

  43. anonymous Says:

    2,3 Turing Machine Proof of Universality Flawed?

    http://cs.nyu.edu/pipermail/fom/2007-October/012156.html

  44. Job Says:

    Strong AI sounds like a Gödel sentence; that’s $250 I won’t take a shot at.

  45. Oleg Says:

    Scott,
    I was thinking more about the uniform probability measure – i.e. what fraction of n-bit strings belongs to a given set and how that fraction behaves as n→∞. It’s not because this definition is any more fundamental than the usual one, but simply because it makes the problem more fun.

    Here’s an example of what I had in mind: The set of non-minimal programs is simple, therefore not m-complete. OTOH I think I can prove it has positive uniform density. This doesn’t answer my question just yet but it gives me hope.

    Basically, I am just trying to do a kind of a binary search, trimming away obviously true and obviously false versions of PCE until something interesting remains.

    The question whether P < NP definitely fits the bill but I am afraid I am out of my depth here…

  46. anonymous Says:

    More here:

    http://forum.wolframscience.com/showthread.php?s=c34045c0feab14e8092c93f9d4f0268b&threadid=1472

  47. Scott Says:

    anonymous: Thanks so much for pointing me to that FOM thread! That’s the first technical discussion of Smith’s universality proof I’ve seen anywhere.

    As I explained to the journalists who emailed me, the rules of Wolfram’s prize left a huge gray area, since they explicitly refused to specify what counts as “universality.” For example, what kinds of input and output encodings are allowed? Does the tape have to be filled with 0’s initially, or can it be filled with other patterns, and if so, which ones? Also, the Turing machine in question has no “halt” state, so what external conditions can we impose to define when the machine has halted?

    Still, I decided not to make a fuss about such things, since it seemed clear from Smith’s writeup that (1) he was aware of these issues, and (2) there was some nontrivial sense in which he proved universality. I wasn’t going to lose sleep over which sense, since I’d never lost sleep over the (2,3) universality question itself!

    In particular, this post from Wolfram himself, though written in his trademark way, comes closer than anything else I’ve seen from him to addressing some actual technical questions about what constitutes universality.

  48. Greg Kuperberg Says:

    All right, then, suppose that a d-dimensional CA X is “fixed-pattern universal” if, given any other d-dimensional CA Y, there is a mapping from cell states of Y to square blocks of X of a fixed size, so that one time step of Y corresponds to some T time steps of X. And I guess it could be okay, if needed, to map states of X in the square blocks back to states of Y, other than just those that arise in the mapping from Y to X.

    With these very stringent conditions, are there fixed-pattern universal CAs? If so, what are the simplest known examples?

  49. Oleg Says:

    @Bram Cohen:

    Consider the famous Collatz 3x+1 function, which can be encoded as a cellular automaton (see Wolfram’s book or http://arxiv.org/abs/nlin/0502061). As far as we know, an iterated application of the function always converges to 1→4→2→1, which makes it very unlikely to be universal.

  50. Bram Cohen Says:

    Oleg: Yes, his new claim in the FOM thread that machines which are undecidable but not universal are rare is more than a little bit absurd. I specifically conjectured about machines he has already drawn attention to and claimed to be universal, though, not just any random machine.

    Scott: It’s funny how even when acknowledging for the first time that, gee, universality ought to have an actual definition (a concept which ANKOS seems to make clear he doesn’t understand) Wolfram still manages to sound like he invented the entire field. There are lots more encoding issues which are quite unclear. For example, one of the simple cellular automata with extremely grungy looking output can only transmit information in one direction.

  51. Kurt Says:

    For Jack: For the purposes of your challenge, can you give us the precise definition of “strong AI” that you’re using? Hmmm, while you’re at it, maybe you could give a definition of “to my satisfaction” and “is more than a conjecture”. Or is this a Wolfram-style challenge?

    Perhaps you should give the $250 to Scott to hold as an underwriter for the challenge…I’m sure he’d safeguard it well. 🙂

  52. Stas Says:

    Here is a reply from Alex Smith to Vaughan Pratt:
    http://cs.nyu.edu/pipermail/fom/2007-October/012164.html

    He suggests Pratt’s reasoning is mistaken.

  53. Jack in Danville Says:

    Kurt,

    The Wikipedia article on Strong AI, http://en.wikipedia.org/wiki/Strong_AI, attributes the definition “An artificial intelligence system can think and have a mind” to John Searle.

    Job might be right, but I don’t know how to turn Strong AI into a Gödel sentence. If anyone can, maybe we can decide if it’s decidable. Short of that, “to my satisfaction” will have to do. That’s a Boolean observable, btw.

    An hypothesis, for instance, is more than a conjecture. Strong AI is less than an hypothesis because it is not falsifiable, notwithstanding the Turing Test chorus that may well arise in response to that assertion. I would go a step further and say it’s not even a scientific conjecture, because scientific conjectures have the hope of becoming hypotheses, if not theorems, someday.

    Anyway, I thought folks were being a bit catty about Wolfram (and y’all are above that!), so I thought it would be fun to play Marc Antony for a bit. 🙂

  54. lylebot Says:

    Jack, note that the Wikipedia definition is not very precise. What does “think” mean? What does “mind” mean? If you have precise definitions for those, then you have the answers to questions that have troubled generations of philosophers. That, I believe, was Kurt’s point.

    See also this classic Scott Aaronson post.

  55. Amir Michail Says:

    What I liked about Wolfram’s NKS is his systematic exploration of cellular automata to find ones with interesting behavior.

    So maybe we can do something similar in a different context.

    For example, how would one conduct a systematic study of web apps to identify those with the most interesting behaviors? Is it possible to simulate users to some extent to make predictions about the virality of an app? Is it possible to automate our search for viral apps in some way?

  56. roland Says:

    Does Wolfram always put every sentence in a seperate paragraph?

  57. roland Says:

    i meant separate

  58. anonymous Says:

    Could someone help me understand the subtlety here within the proof? In my mind, the CA can either simulate a TM or it cannot.

  59. Greg Kuperberg Says:

    Could someone help me understand the subtlety here within the proof? In my mind, the CA can either simulate a TM or it cannot.

    The question is what counts as a simulation. The CA could be a Rorschach test that simulates a TM in the opinion of some but not others.

    I would still be interested in the strict notion of simulation that I outlined: a CA X might be universal in the sense that it can simulate every CA Y in the same dimension by rescaling space and time by fixed factors.

  60. Greg Kuperberg Says:

    Okay, the principle of computational equivalence is given two formulations on the Wolfram web site.

    Almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication (Wolfram 2002, pp. 5 and 716-717).

    It’s not clear what “almost” means. Maybe it is true with respect to some reasonable probability distribution on CAs. If it means that the counterexamples are somehow constricted or hard to construct, then it isn’t true for TMs or CAs. A TM can be hard-coded to execute only one specific algorithm, and can therefore have any intermediate amount of computational power. A CA can be hard-coded to only simulate one TM.

    More specifically, the principle of computational equivalence says that systems found in the natural world can perform computations up to a maximal (“universal”) level of computational power, and that most systems do in fact attain this maximal level of computational power.

    This “more specific” formulation is actually less specific and different. It refers to natural processes rather than mathematical processes, and it does not “almost” exclude intermediate complexity. This version is a plagiarism of the Church-Turing thesis.

  61. Bram Cohen Says:

    anonymous: The subtlety has to do with the ‘encoding’ – basically, the tape has to be pre-populated with some data for the prize-winning proof to work, and there’s a question of how much of the heavy lifting is done by the computation needed to generate that data. One could certainly cheat by having the pre-population be completely universal, then having the machine be one that moves to the right on either a 0 or a 1, leaving each cell it passes unchanged, so that it just ‘reads’ a universal computation, hence simulating it.
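
    (To make that cheat concrete, here is roughly what such a degenerate ‘scanner’ machine looks like; the transition format and names are just illustrative, and this is of course not Smith’s machine:)

        # One state, two symbols: it never writes anything new and always moves right,
        # so all the computational work lives in whatever pre-populated the tape.
        SCANNER = {
            ('A', 0): (0, 'R', 'A'),   # read 0: write 0 back, move right, stay in A
            ('A', 1): (1, 'R', 'A'),   # read 1: write 1 back, move right, stay in A
        }

        def run(machine, tape, state='A', head=0, steps=20):
            # Step the machine and return the symbols it visits ("reads out").
            seen = []
            for _ in range(steps):
                symbol = tape[head] if 0 <= head < len(tape) else 0
                write, move, state = machine[(state, symbol)]
                if 0 <= head < len(tape):
                    tape[head] = write     # here write == symbol, so the tape never changes
                seen.append(symbol)
                head += 1 if move == 'R' else -1
            return seen

        # Whatever the pre-population encodes, this machine merely reads it off:
        print(run(SCANNER, [1, 0, 1, 1, 0, 0, 1]))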

    This is made particularly difficult by the apparent fact that universality isn’t a nondeconstructible primitive. One can find a computation for pre-population which is nonuniversal and a machine which is nonuniversal but which are capable of universal computation when put together. The prizewinning entry may in fact have hit such an edge case, but what qualifies as such an edge case isn’t terribly well defined, and the committee of esteemed people who were supposed to decide whether the prize was to be handed out weren’t actually consulted. Having read through much of ANKOS, I’m pretty sure that Wolfram doesn’t have a firm grasp of these issues himself.

  62. Greg Kuperberg Says:

    Having read through much of ANKOS, I’m pretty sure that Wolfram doesn’t have a firm grasp of these issues himself.

    What’s depressing is that so many people would mistake a load of hand-waving and intellectual plagiarism for serious research, just because it has good salesmanship. Wolfram is what he is; it would be easy to spend all day complaining about him. The real problem is that so many journalists and others listened to him in the bleak year of 2002. The same summer that the New York Times published a 2000-word exclusive on Wolfram (“Did This Man Just Rewrite Science?”), they did not devote even a single word to the Beijing ICM. Jiang Zemin (the President of China) was at the opening ceremony, but it still just wasn’t a story.

    It’s almost worse that many of those who were distracted by Wolfram’s book didn’t realize that it’s intended as mathematics. If you do work that cuts across all fields of science, then it’s mathematics. Or in the case of ANKOS, if you aspire to do such work. I have the feeling that if the fans had just thought of it as math, they would have ignored it.

    ANKOS is useful for this, though. Soon after it was published, Sam Brownback invited Wolfram to an exclusive Senate hearing and ranted that he needs to be funded somehow. So that is good evidence, if anyone still needs it, never to trust Brownback.

    (The Times did do better in 2006. They had good articles on Perelman and Tao, and they even gave 80 words to Okounkov and 40 to Werner. Also the New Jersey local edition did give Voevodsky 40 words in 2002. It should also be said that the Times has by far the best science section of any American newspaper.)

  63. logopetria Says:

    I have a couple of newbie questions about interactive proofs, and in particular the variations discussed by Scott in which the prover has only finite capability. Following Scott’s notation above, I’ll call these classes IP_X, where X is the complexity class of the prover (so in the problem discussed above, X=BQP).

    First, is it the case that X ⊆ Y implies IP_X ⊆ IP_Y (and so, in particular, IP_X ⊆ IP for all X)? At first I thought it must be the case — since any answers the less powerful prover can come up with can also be produced by the more powerful one. But then I remembered the other side of the balance between the prover and the verifier, namely that the prover should only be able to mislead the verifier with low probability. Might there be problems for which a less powerful prover X can demonstrate answers to the verifier by an interactive proof, but a sufficiently more powerful prover Y would furthermore be able to mislead the verifier, thus putting the problem outside IP_Y?

    Second, I see that it’s obvious that IP_X ⊆ X (i.e. if a prover can demonstrate the answer to a problem, it must certainly be able to find that answer). In the other direction, is there anything obvious one can say about when we would expect IP_X=X? (Scott’s question above would be “Is BQP a class of this type?”)

    In particular, am I right in thinking that IP_BPP=BPP? If the problem is in BPP, the prover can just send an algorithm that the verifier can run to solve the problem for himself, right? If that’s right, the same should hold whenever the prover is no more powerful than the verifier.

    Sorry if this is just too trivially obvious, but I don’t have much practice in thinking about this kind of problem, and I’d like to check I’m not going astray right at the start.

  64. Scott Says:

    logopetria: Those are extremely good questions!

    First, is it the case that X ⊆ Y implies IP_X ⊆ IP_Y (and so, in particular, IP_X ⊆ IP for all X)?

    Yes, but this is because of a subtlety in the definition of IP_X. We say that if the answer is yes, then some X machine should cause the verifier to accept w.h.p., whereas if the answer is no, then no interaction with the prover (whether it’s an X machine or anything else) should cause the verifier to accept w.h.p.

    In other words, in convincing itself that the answer is yes, the verifier is not allowed to use the assumption that the prover is “merely” an X machine. The complexity of the prover’s strategy is only the prover’s business, not the verifier’s!
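
    Spelled out (my paraphrase of the definition above, with the usual 2/3 and 1/3 thresholds assumed):

        L ∈ IP_X  ⇔  there is a probabilistic polynomial-time verifier V such that
          (completeness)  x ∈ L  ⇒  some X prover makes V accept with probability ≥ 2/3, and
          (soundness)     x ∉ L  ⇒  no prover whatsoever makes V accept with probability > 1/3.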

    If the verifier can assume the prover is merely an X machine, then you get a new and very interesting topic called computationally sound proofs. Basically, these are proofs of mathematical theorems that you should only believe if you were told them by a dumb person, not by a smart person (since a smart person could’ve tricked you into believing a falsehood)! But that’s a subject for another day.

    is there anything obvious one can say about when we would expect IP_X=X? In particular, am I right in thinking that IP_BPP=BPP?

    Yes, you’re correct that IP_BPP=BPP.

    Also, since IP_X ⊆ IP ⊆ PSPACE for all X, we can’t have IP_X=X for any class larger than PSPACE.

    We know that IP_PSPACE = PSPACE and that IP_{P^#P} = P^#P. On the other hand, it’s a major open question whether coNP ⊆ IP_PH. (It’s also open, I believe, whether IP_⊕P = ⊕P.) So, what other classes do we know that are significantly smaller than P^#P yet not known to be contained in PH? Why, BQP of course! 🙂

  65. Kovas Boguta Says:

    Greg: The first statement of the PCE does indeed include physical processes, as well as ‘abstract’ ones. If you take a look at the actual NKS book rather than the “cheat sheet”, this is quite clear.

    The second statement is not a statement of the principle itself, but rather a statement of specific implications. I hope you are just being sloppy rather than intentionally mean-spirited, since you obviously have an anti-Wolfram agenda, but the Church-Turing thesis clearly does not say anything at all about the distribution of universality, or the relationship of universality to complex behavior, which is certainly the point of the second statement you quote. I recommend actually understanding NKS before accusing people of plagiarism.

    To understand the relationship of the PCE to the Church-Turing thesis, it is important to note that the PCE is not a statement about systems themselves, but rather a statement about specific computations performed by systems. The PCE does make a prediction about the distribution of universality, but that is ultimately a derivative statement that does not capture the full principle.

    Finally, there is a conceptually straightforward way to quantify “almost”, at least for abstract computational systems. One just enumerates simple programs in a variety of frameworks, and attempts to answer the question in its various cases. If, say, 95% of simple programs that exhibit complexity end up universal, most reasonable people would agree that means “almost all”. (To actually do this would require a significant research program, complete with automated theorem proving to deal with all the different cases… but that is what NKS advocates be done). If one gets to 45% and then finds certain classes that don’t seem to be provably universal, then one becomes suspicious.

    (of course there then remains the question of how the distribution of behavior of simple programs relates to that of more complicated ones, and to systems in nature)

  66. logopetria Says:

    Thanks, Scott! Is this general question — “For which X is IP_X=X” — one that has been extensively studied already? Is there a name for classes that meet that criterion?

    Thinking about it further, it seems that there’s a sense in which these classes are the only “useful” ones to us. By that, I mean that if you had a super-powerful computer that could solve problems vastly beyond your own abilities, but which couldn’t reliably convince you that it had got the right answer, then it wouldn’t do you much good to consult it. You may as well ask a Magic 8-Ball. Viewed in that way, your question about BQP is worth a lot more than $25!

    We’ve been assuming that the verifier is BPP, but I guess we can get some interesting questions by allowing that to vary as well. If we let IP(X,Y) be the class of problems that have an interactive proof with prover of class X and verifier of class Y, then what we’ve called IP_X above becomes IP(X,BPP). It’s still the case that IP(X,Y) ⊆ X for all Y, and IP(X,Y)=Y for all X equal or weaker than Y. With this extra degree of freedom, we can pose questions like: is it the case that IP(X,Y) ∩ IP(Y,Z) ⊆ IP(X,Z)? That is, if X can convince Y, and Y can convince Z, does that mean that X can convince Z? [Again, maybe the answer to that is obvious, but it’s not clear to me right now].

  67. Scott Says:

    logopetria:

    Is this general question — “For which X is IP_X=X” — one that has been extensively studied already? Is there a name for classes that meet that criterion?

    Not that I know of.

    Thinking about it further, it seems that there’s a sense in which these classes are the only “useful” ones to us.

    When I try that view on for size, two caveats immediately spring to mind:

    (1) It could be that the weakest prover convincing us about X belongs to a larger class Y, in which case we have to worry about both classes. (I.e. if you want to be convinced about X problems, a Y machine is what you have to buy.)

    (2) You can also consider multiple and competing provers. For example, there’s a theorem that if you can communicate with two EXP machines — one trying to convince you that some EXP machine M accepts, the other trying to convince you that M rejects — you can play them against each other and figure out the truth in polynomial time. It’s also known that, if you can interrogate two cooperating provers who are prohibited from talking to each other, then you can verify any problem in NEXP. (For this theorem, the provers themselves need to be more powerful than NEXP — I think Σ2EXP.) So, should we allow these things?

    if X can convince Y, and Y can convince Z, does that mean that X can convince Z?

    Another great question!

    If you assume Y⊆X, then the answer is yes — for then the X machine can just simulate the interaction between Y and Z that causes Z to accept.

    On the other hand, if you don’t assume Y⊆X, then I can give you a counterexample. A BPP machine can convince a PSPACE machine of anything in PSPACE (by doing absolutely nothing); a PSPACE machine can convince a BPP machine of anything in PSPACE (by the IP=PSPACE Theorem); but combining these, it doesn’t follow that a BPP machine can convince a BPP machine of anything in PSPACE! 🙂

  68. logopetria Says:

    OK, so I was certainly overstating to say that these classes are “the only ‘useful’ ones”. Clearly a machine that’s not of this type isn’t just a doorstop. It’s just that the machine’s full potential is not accessible to us. It can convince us of the correctness of its answers to simpler questions, but, Cassandra-like, it also knows things it can’t reliably make us believe (without help, at least)!

    On your point (2): is that kind of result something you’d expect to hold for most classes of prover? That is, are multiple isolated provers of class X generally more powerful than a single one (except in the trivial case where the prover is no more powerful than the verifier)? If so, would it be more relevant to ask whether MIP_BQP=BQP (and more generally, “For which X is MIP_X=X?”)? I assume Wolfram would be convinced just as well by a demonstration that needed two or more quantum computers!

    I assume, by the way, that we have to ensure by physical means that the machines are “prohibited from talking to each other”? There’s no purely mathematical way to detect or guard against conspiracy between the provers, right?

    Finally, on the X->Y->Z question: nice counterexample! I’d been hoping there might be scope for a kind of “middleman” scenario, in which X can’t convince Z of something, but it can convince Y, who in turn can convince Z. But now I see that whatever the relative powers of X, Y and Z, at least one of X and Y will be redundant.

  69. Scott Says:

    On your point (2): is that kind of result something you’d expect to hold for most classes of prover?

    Alas, I don’t know the right “metric over complexity classes” to answer that question. (Another issue is that the power of the prover in interactive protocols has been understudied. People do comment on what prover power suffices for a given protocol; what they haven’t done as much is to start with the prover’s complexity and then ask what it can prove.)

    I assume, by the way, that we have to ensure by physical means that the machines are “prohibited from talking to each other”? There’s no purely mathematical way to detect or guard against conspiracy between the provers, right?

    Yes, that’s exactly right. In fact, a beautiful recent series of papers has studied what happens if the provers don’t communicate, but do share quantum entanglement. (Short answer: it seems that some protocols break and others don’t.)

  70. anonymous Says:

    Hmmm … seems like you are doing exactly what Wolfram wants …
    PR for Wolfram Research on this rather holy blog.

  71. anonymous Says:

    A BPP machine can convince a PSPACE machine of anything in PSPACE (by doing absolutely nothing); a PSPACE machine can convince a BPP machine of anything in PSPACE (by the IP=PSPACE Theorem); but combining these, it doesn’t follow that a BPP machine can convince a BPP machine of anything in PSPACE!

    Of course, you’re implicitly assuming BPP ≠ PSPACE when you say that’s a counterexample…;)

  72. Vladimir Levin Says:

    I am just a layperson, but I am curious if anyone can comment on the following: Is a Universal Turing Machine (UTM) that starts with an empty tape the “most” powerful kind of UTM? In other words, these other machines which require a “push”, as it were, in the form of either a repeating or non-repeating sequence of symbols on the tape: how much “less” powerful are they? Or maybe the right way to put it: how can we measure the impact of the “push”?

  73. Aaron Denney Says:

    Kovas Boguta: In mathematics “almost all” traditionally means “all, except for possibly a set of measure zero”. Greg Kuperberg is asking what sort of natural measure / probability distribution would make that true.

  74. Clive Says:

    Re Vladimir Levin’s question (#72): if you are not particularly concerned with minimising the number of states in your Turing machine, then you can always arrange for the machine itself to create any repetitive background, and in fact even to write the initial “input” data itself. You may need to add (many) new states and rules, though, so you have a “bigger” machine at the end of this and thus probably won’t win $25,000 for your efforts!

    In other words, if you aren’t trying to work with a specific machine (such as the one in the Wolfram Prize), then you can arrange for the machine itself to do all the required tape initialisation “from scratch”. Of course that means that machine is only good for one thing – because we’ve effectively “hardcoded” everything.

    Hopefully this helps make it clear that just being able to start with a “blank” background or repetitive pattern on your tape doesn’t give your machine any more “power” at all in terms of what it can ultimately do.

    However, if you’re forced to work with a particular machine and/or limited numbers of states/symbols/rules, then having the tape initialised for you in some particular way might just be enough to allow things to work.