My diavlog with Anthony Aguirre

Bloggingheads has just posted an hour-long diavlog between the cosmologist Anthony Aguirre and your humble blogger.  Topics discussed include: the anthropic principle; how to do quantum mechanics if the universe is so large that there could be multiple copies of you; Nick Bostrom’s “God’s Coin Toss” thought experiment; the cosmological constant; the total amount of computation in the observable universe; whether it’s reasonable to restrict cosmology to our observable region and ignore everything beyond that; whether the universe “is” a computer; whether, when we ask the preceding question, we’re no better than those Renaissance folks who asked whether the universe “is” a clockwork mechanism; and other questions that neither Anthony, myself, nor anyone else is really qualified to address.

There was one point that was sort of implicit in the discussion, but I noticed afterward that I never said explicitly, so let me do it now.  The question of whether the universe “is” a computer, I see as almost too meaningless to deserve discussion.  The reason is that the notion of “computation” is so broad that pretty much any system, following any sort of rules whatsoever (yes, even non-Turing-computable rules), could be regarded as some sort of computation.  So the right question to ask is not whether the universe is a computer, but rather what kind of computer it is.  How many bits can it store?  How many operations can it perform?  What’s the class of problems that it can solve in polynomial time?
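
For calibration, here is the back-of-the-envelope arithmetic behind the first two of those questions.  What follows is a rough sketch in Python, not part of the original argument: the constants are approximate, the bit count uses the holographic bound, and the operation count is just a Planck-time clock (Seth Lloyd’s more careful 2002 estimate is ~10^120 operations).

    import math

    # Approximate physical inputs (SI units).
    l_p = 1.616e-35   # Planck length, meters
    t_p = 5.39e-44    # Planck time, seconds
    R   = 1.6e26      # de Sitter horizon radius set by the cosmological constant, meters
    age = 4.35e17     # age of the universe, seconds

    # "How many bits can it store?"  Holographic bound: A / (4 * l_p^2 * ln 2).
    area = 4 * math.pi * R**2
    bits = area / (4 * l_p**2 * math.log(2))
    print("bit bound: ~10^%d" % math.floor(math.log10(bits)))       # ~10^122

    # "How many operations can it perform?"  A crude upper clock: the number
    # of Planck times elapsed so far.
    print("ops clock: ~10^%d" % math.floor(math.log10(age / t_p)))  # ~10^60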

62 Responses to “My diavlog with Anthony Aguirre”

  1. Dániel Says:

    How many bits can it store? How many operations can it perform? What’s the class of problems that it can solve in polynomial time?

    Let me highlight a possible source of confusion about the interpretation of these questions. (I am sure Scott is not confused, but others may be.) The Space-time Continuum does not compute anything. Computation requires an inside observer/manipulator. (Inside in the sense that she is a proper part of the Universe, like Tegmark’s frog. Tegmark’s bird does not compute, it just knows.) The inside observer must obey the laws of physics when setting up an initial configuration and when reading out the results of the computation. And she must have freedom to set up specific initial configurations. Where does this freedom come from? The source of this freedom is a lack of information. (Tegmark’s frog has free will because it does not have access to all the information Tegmark’s bird has access to.)

    When we talk about the day-to-day activities of humans, this sort of nitpicking is not very useful. We do have free will, period. But when we talk about such insanely abstract things as Universes-as-Computers, we have to be very careful. We have to specify what we assume about the inside observer. We have to say something about the source of her perceived free will. We have to specify the proper amount of her free will. We have to deal with every possible case, even with weird edge-cases like when the (complexity of the) Universe is only slightly larger than the (complexity of the) observer herself.

    A stupid, possibly misleading, but at least concrete example: Can I solve CLIQUE if I have a computer that can solve it for every graph I can ever imagine, but I can only imagine graphs with small Kolmogorov-complexity?
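
    To make “graphs with small Kolmogorov complexity” concrete, here is a toy sketch (an illustration of mine, not part of the original point): a graph on arbitrarily many vertices whose complete description is just this short program plus two small integers.

      import itertools
      import random

      def low_complexity_graph(n, seed=7, p=0.5):
          # An n-vertex "pseudorandom" graph fully determined by (this code, n, seed).
          rng = random.Random(seed)
          return {(u, v) for u, v in itertools.combinations(range(n), 2)
                  if rng.random() < p}

      # However large n grows, the description length stays essentially constant:
      # exactly the kind of graph the hypothetical CLIQUE-solver is limited to.
      g = low_complexity_graph(1000)
      print(len(g), "edges")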

    Before settling the questions about the inside observer, our only choice is to talk about Universe-as-Computer from an anthropocentric perspective. Scott’s questions are perfectly good questions even from an anthropocentric perspective, but they are less well-defined, less general, and less attractive than they would appear at first sight.

  2. Moshe Says:

    I think the real question about the universe as a computer is whether or not this perspective adds anything to either physics or computer science. The reason I am skeptical is that there is some tension between the models of computation on the market and the tools used in fundamental physics, namely the question of Lorentz invariance.

    As far as I can tell, any model of computation uses a countable number of states, and a countable number of possible transformations on these states. There is no problem in principle in realizing such a model by a suitable physical system, but all indications are that there are also some (many) physical systems NOT suitable for such realization, and that the universe as a whole is one such system.

    One way to phrase the problem is that the Lorentz group is non-compact, which implies that all its representations are uncountable. This is why you need completely different mathematical machinery when describing high-energy physics (namely quantum field theory). There is no natural way within this formalism to distinguish states from operations, or to make either one of them a countable set.
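
    To see the non-compactness concretely, here is a minimal numerical sketch (an added illustration, in units with c = 1 and for a unit-mass particle): every real momentum is reachable by some boost, since γv is unbounded as v → 1.

      import math

      # Boost a massive particle at rest: p' = gamma * (p - v * E) grows
      # without bound as v -> 1, so the reachable momenta form a continuum.
      def boost(E, p, v):
          g = 1.0 / math.sqrt(1.0 - v * v)
          return g * (E - v * p), g * (p - v * E)

      E, p = 1.0, 0.0   # the particle's rest frame
      for v in (0.9, 0.999, 0.9999999):
          print("v = %-9g -> p' = %g" % (v, boost(E, p, v)[1]))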

    So before we discuss the whole universe, how about quantum electrodynamics, the relativistic theory of electrons and photons, whose basic ingredients are quantum fields? Is there a way to even ask the questions you ask in this post, about the total number of operations and so forth, when we discuss a mundane process like electron scattering?

  3. Scott Says:

    Dániel: You’re absolutely right that, when talking about the universe as a computation, it’d be nice to be able to answer questions like: “who supplies the input? who reads the output? and is that person herself part of the universe?”

    However, I don’t think the situation here is worse than in any other area of science. In physics, too, one can pose brain-benders like:

    “Suppose a physical theory only works in certain restricted circumstances—but it also makes the clear prediction that no one will ever have the imagination to design an experiment where the theory fails. Can the theory then be considered complete?”

    These philosophical questions are fun to think about, but in practice, we normally do physics by imagining that we (the “observers”) are external to the theory itself, and have complete freedom to prepare arbitrary initial states and make arbitrary measurements according to the rules of the theory. (Of course, as in quantum mechanics, “arbitrary measurements” allowed by the theory might not include complete measurements of the system’s state!)

    Of course, this instrumentalist approach has been pretty successful in moving science forward. One can debate about possible situations where the approach fails—but as I see it, the question of “what kind of computer is the universe?” is mostly orthogonal to that debate. For that question turns out to be extremely interesting even if we “stay inside the instrumentalist box”!

    In other words: assume you (the “user”) have the god-like power to choose any input you like. We won’t try to look inside your head and limit your free will by (say) upper-bounding the Kolmogorov complexity of your choices. Then you feed your input to the computation, you wait a while, and the computation produces an output.

    Between the input and the output, there might be a quantum computer, or a black hole, or a closed timelike curve, or anything whatsoever consistent with your favorite physical theory.

    In that sort of setup, what can you compute, and what can’t you? That’s already an extremely interesting scientific question—one that doesn’t hinge on the distinction between “birds” and “frogs” or other such abstruse matters—and it’s what I generally have in mind when I talk about the computational capacity of the universe.

    Finally, while you raised some excellent points, I beg to differ with your assessment that the questions are “less well-defined, less general, and less attractive than they would appear at first sight.” For me, a question is “attractive” to the extent it leads to actual, meaty, interesting new science—and the questions I’m talking about definitely pass that test; they pretty much led to the whole field of quantum computing and information! (And arguably theoretical computer science as a whole, if you go back further.)

  4. harsha Says:

    One point in the discussion that wasn’t clear to me was the claim that in Newtonian physics observables take values in continuous domains while in QM they don’t. I thought this happens only for special QM systems and observables (like the harmonic oscillator and the energy observable), and that in general it is not so. Indeed, for a free particle in three-space, the 3 position operators can take any value in R. Could you clarify this a bit?

  5. Scott Says:

    Moshe: As you know, I’m also extremely interested in the tension you pointed out, between Lorentz invariance and computational models that involve only countable numbers of states. But with regard to your starting question—namely, whether looking at the universe as a computer adds anything new—I see that tension as a positive rather than a negative! For it suggests that a computational perspective on physics leads to some actual falsifiable predictions. For example:

    (1) Lorentz invariance should not be an exact symmetry, only an approximate one. It should break down if space and time are probed at (say) the Planck scale.

    (2) The number of perfectly-distinguishable states in a bounded region of spacetime should be finite, not infinite. Something like the holographic principle should hold.

    (3) Spacetime should not be a perfect continuum, and any theory that models it as such (including QFT) will be found to be an approximation to a better theory.

    Maybe the simplest way of putting it is this: you notice the tension between QFT and a computational perspective on physics, and see a problem for the computational perspective. I notice the same tension and see a problem for QFT! (Of course, computational considerations are far from the only reasons for suspecting that QFT is only an approximation to a better theory, one that does away with continuous spacetime altogether. But I’d argue that they add to the other reasons.)

  6. Scott Says:

    One point in the discussion that wasn’t clear to me was the claim that in Newtonian physics observables take values in continuous domains while in QM they don’t.

    harsha: Did Anthony or I actually say that? If so, I apologize!

    Yes, you’re right that quantum mechanics is just as happy to live in an infinite-dimensional Hilbert space as in a finite-dimensional one. Indeed, traditionally physicists talked almost exclusively about infinite-dimensional Hilbert spaces, in contrast to quantum information theory, where we talk almost exclusively about finite-dimensional ones. (Purely from a pedagogical perspective, we are right and they are wrong.)

    It’s really quantum gravity (i.e., the combination of quantum mechanics with arguments about black hole entropy made by Bekenstein, Hawking, and others in the 1970s) that leads to the prediction that the Hilbert spaces accessible to us should be finite-dimensional.

  7. onymous Says:

    Scott, if you think these “predictions” might emerge from general principles, it seems worthwhile to assess to what extent they hold in the examples of quantum gravity that we know about. As far as I can tell from the way you’ve phrased things, #2 is true of string theory (the holographic principle applies, certainly in asymptotically AdS spaces, and also to black hole microstate counting and other scenarios). However, #1 and #3 are not. Certainly there’s no reason to think Lorentz invariance breaks down (and experimentally, it’s very well-tested), and while it is generally true that one can think of spacetime as emergent (from lower-dimensional field theory in AdS/CFT, or perhaps from the worldsheet), there’s no sense in which it fails to be a continuum. It’s true that it’s hard to probe sub-string (or, more generally, sub-Planck) scales in quantum gravity, because attempts to do so create black holes; but this doesn’t imply any discreteness at those scales, which is what your phrasing seems to suggest.

    The fact that we seem to live in something resembling de Sitter space seems to me to be a stronger motivation for searching for theories that involve finite-dimensional Hilbert spaces, but even that isn’t very convincing, since our de Sitter space seems likely to decay to flat space in the far future….

  8. harsha Says:

    > Did Anthony or I actually say that?
    Most probably not, but thanks for the clarification.
    Also, if you or some other reader has a good place to read up on this argument for a finite-dimensional Hilbert space in QG, that would be great.

  9. Moshe Says:

    Yeah, tension can drive a good narrative if it’s resolved the right way, and then it may lead to a catharsis (which for a Popperian would be a “falsifiable prediction”, I suppose). But, as anyone who has watched enough movies knows, more often than not it all ends up in disappointment.

    More to the point, what I wanted to say (and I have said that before) is the two step statement:

    1. The statement “X is a computer” does not have to be vacuous, if you don’t want it to be. Something like your assumptions (1) or (3) is implied by that statement, if it has any meaning at all. For example, we seem to agree now that continuum QFT, taken as a mathematical model, cannot be considered to “be a computer”.

    2. Assumption 1 (or any similar statement) is not only falsifiable, it is also falsified.

    The point is that there is no known way, in the context of the physics of our universe, to break Lorentz symmetry in a “small” way. There is an underlying assumption in your discussion (which is a common misconception), that the effects of breaking Lorentz symmetry at the Planck scale would automatically be small when we probe the world using only low energy probes. This is the assumption I disagree with.

    The reason I disagree with this assumption can be made very precise and technical, but I also find it very intuitive. Think about it for a moment: if your model does not have a certain symmetry, generically it will not have any trace of that symmetry at any level of description. For example, suppose you have a randomly chosen tiling of the plane: does it look spherically symmetric? Can it be made to look spherically symmetric “on average”, or “to most probes”? You can see, at the very least, that this is not an unproblematic statement; the answer depends on details and cannot be taken for granted.

    (On another point you make above: as onymous points out, there are known holographic models of quantum gravity which violate many of the statements you make as consequences of quantum gravity and holography, e.g. they have infinite-dimensional Hilbert spaces. So, those statements have counterexamples, and cannot be considered consequences of only the assumptions you specified. But, one thing at a time; let me stay focused on the Lorentz invariance issue.)

  10. Carl Says:

    I haven’t had a chance to finish listening to the Bloggingheads yet, but I’m glad the commenters here have been asking good questions about the “universe as computer” model. For my part, I have some philosophical objections to the theory.

    What is a computer? I propose the definition, “A rule-governed system used to make reliable inferences about another rule-governed system.” I think this definition covers all of the usual cases of computation, from iPads to the abacus to a model airplane in a wind tunnel.

    The problem with the definition from the point of view of the universe as computer is that it smuggles in a subject: there needs to be *someone* who *is using* the computer for a thing to be a computer. After all, if we don’t make this stipulation, then as Scott noted in the comments above we can use anything as a computer for something. In the limiting case, I can always use all the particles of the universe as an enormous abacus.

    So, “being a computer” is a subject-relative term, not an absolute term. Which means that it’s silly to ask “Is the universe a computer?” for the same reason it’s silly to ask, “Is the universe enjoyable?” Well, I find the universe enjoyable, and maybe you do or you don’t, but these are ultimately facts about us, and not facts about the universe itself.

    Of course, none of this has any bearing on whether or not it is useful to think about the universe in terms of cellular automata or whatever else. It could be that those are useful ways of describing the laws of nature. But that would only mean the universe is “like” a computer; the universe cannot be a computer unless we say that it is God’s computer.

    One final note: I find it interesting that computer science’s complexity classes only make sense with the postulate of free will. Of course, we also have the problem that in practice we never have true Turing machines with an unlimited supply of tape, just machines with an in-principle finite number of states. So, it could be that “free will” is just an assumption that tends to keep us from over-simplifying some problems while making other problems more tractable for us. Science also has a similar problem because if we want to postulate “causation” rather than mere “correlation” we need to say, “Yes, the input could have been different, and then something different would have been caused.”

    So, free will is an idea that seems to be important to certain of our intellectual investigations, yet those same investigations make it seem unlikely that free will is true…

  11. Moshe Says:

    While my previous comment awaits moderation, just a quick clarification. I know of nobody, myself included, who thinks of QFT as a good theory beyond the Planck scale. Tension between QFT and your favorite model of Planck-scale physics (computational or not) would be easy to resolve. The tension I am pointing to has more to do with Lorentz invariance, which, all evidence shows, holds well beyond the Planck scale. There are plenty of models that are perfectly Lorentz invariant, and holographic, and obey everything else you’d like from a quantum theory of gravity. I’d claim that none of them “is a computer” in the sense we discussed.

    So, my inclination is to say that the universe is probably not a quantum computer.

    http://diracseashore.wordpress.com/2009/02/01/the-universe-is-probably-not-a-quantum-computer/

  13. Scott Says:

    Moshe: Personally, I have no problem with exact Lorentz symmetry per se, but I do have a problem with some of the things that I understand to be implied by it. Maybe you can help me, though, by explaining why those things aren’t implied by Lorentz symmetry at all!

    You yourself got at the heart of what makes me uneasy in your blog entry:

      Namely, in the Lorentz invariant theory the space of states is always continuous. In technical language the Lorentz group is non-compact and all its representations are continuous. In plain English: you can boost any state to have an arbitrary momentum, which can be any real number.

    Here’s my question for you: how is the above compatible with the holographic upper bound on information content? If momentum can be an arbitrary real number, then why can’t we encode an infinite number of bits into a momentum? I’ve wondered about that, and will be grateful for any clarification—thanks!

    To pick up another thread, when you write:

      Assumption 1 (or any similar statement) is not only falsifiable, it is also falsified.

    your own statements in your blog entry seem to concede implicitly that that formulation is too strong. One could dispute—as some of the commenters on your entry did dispute—whether there are “reasonable” ideas out there for breaking Lorentz symmetry “but only by a little” (e.g., at extremely high energies or short distances). But even if we assume that there are none, that’s still a far cry from saying that violation of Lorentz symmetry has been falsified! By analogy, no one knows how to get rid of certain mathematical problems with QFT (like Landau poles), but no one seems to take that to imply that those problems must be a fundamental feature of the universe.

  14. Scott Says:

    Also, onymous and Moshe: yes, sorry, I should have clarified that the finiteness of the Hilbert space accessible to us is a consequence of quantum mechanics, plus the holographic principle, plus the positivity of the cosmological constant.

    For more about that implication, see e.g. this paper by Bousso.

  15. onymous Says:

    how is the above compatible with the holographic upper bound on information content? If momentum can be an arbitrary real number, then why can’t we encode an infinite number of bits into a momentum? I’ve wondered about that, and will be grateful for any clarification—thanks!

    Morally, the answer is governed by two key ideas: questions in physics should only depend upon invariant quantities; and the differences between QFT and quantum gravity center around black holes.

    On the first point: talking about the momentum of a single particle is not a Lorentz-invariant question, as you can boost to whatever momentum you like. It’s only through something like a collision process that you might hope to learn something interesting.

    This leads to the second point: if we are to see differences in the state space of QG vs QFT, these differences will arise in situations where we create black holes. At the level of single-particle states, there’s nothing in QG that keeps you from thinking about particles propagating with any energy. But as soon as you have a two-particle scattering process, you have the possibility of kinematics where some invariant is larger than the Planck scale, and black holes can play a role. At a rough level, this is why the density of states in QG scales like that of a lower-dimensional QFT. You could imagine writing the states of the QFT in terms of a multi-particle space; if the particles are well separated, or scattering with low energies and momentum transfers, they will look very similar in QG. But if there are many particles in a small volume, or kinematic invariants are large, you should think instead in terms of black holes. And the entropy of black holes scales like that of field theories in a lower dimension.

    Note that this is very different from, e.g., having a simple UV cutoff where space is discretized; that would lead you to expect differences already at the level of single-particle states, and multi-particle states would still be built out of single-particle states in the usual way. Holography is something much more subtle, and becomes more and more important the more particles you try to pack into a volume. It seems to imply a sort of extremely mild nonlocality. It’s this “mildness” that’s confusing you, I think.

    I might not quite be responding to your question; I’ve gestured at how the state space of QG in D dimensions can look like QFT in D-1. If you’re asking a stronger question about how to get down to a finite state space in de Sitter space, I don’t have a sharp answer, but I think morally the same rules apply. In de Sitter, we have a thermal background, so even at the level of one-particle states, it’s no longer okay to think of a free particle, because you can scatter off the vacuum of the space itself, in some sense. So there’s a much more severe reduction of the QFT Hilbert space. This is vague, I admit. I’m not very familiar with de Sitter.

  16. John Sidles Says:

    Scott, almost everyone first learns quantum mechanics as a dynamical flow on a (vector) Hilbert space. But isn’t the experimental evidence for this conjecture far from conclusive, and aren’t the mathematical arguments surprisingly flimsy?

    The older experimental evidence is reviewed in Richard Thompson’s 1989 Nature article “Is quantum mechanics linear?” Following theoretical arguments by Weinberg, and referencing experimental work at NIST by Bollinger, Heinzen, Itano, Gilbert, and Wineland, the review concludes “There is no reason at the moment to suppose that the framework of quantum mechanics shows nonlinearity at a significant level.” And here, by “significant,” Thompson means “within 4 parts in 10^27”.

    Gee … this sounds compelling … but there’s a pretty big loophole. Namely, Thompson’s conclusion is true, but if one replaces “nonlinearity” with “linearity”, then the opposite conclusion is also true.

    Here the point is that rank-one pulled-back quantum dynamics reduces to the good old Bloch equations, whose resonances are (of course) arbitrarily narrow and amplitude-independent … and thus consonant with the amplitude-independence of the narrow resonances observed in the NIST data.

    Subsequent experiments, including (for example) the 12th-order quantum correlations observed in Negrevergne et al. “Benchmarking Quantum Control Methods on a 12-Qubit System” (2006) generically exhibit the same loophole, namely, that the experimental data require many fewer state-space dimensions to simulate than is immediately obvious (see, e.g., Menicucci and Caves’ “Local Realistic Model for the Dynamics of Bulk-Ensemble NMR Information Processing”, 2002).

    So Thompson’s linearity conclusion is not just true … it’s a “Great Truth” whose opposite is also true.

    Not to belabor the point, but pretty much all the theoretical articles (known to me, anyway) that argue the case for quantum linearity, including those cited in your (admirable) course notes for Quantum Computing Since Democritus … namely, the analyses by Lucien Hardy, and by Abrams and Lloyd … well … they all fall to the pullback loophole.

    The over-arching argument is that the symplectic dynamical structure of QM naturally pulls back onto low-dimension state-spaces, as do Lindbladian stochastic potentials; and these two natural elements are (seemingly) all that is needed to explain even the most paradoxical aspects of the existing quantum experimental data. Conversely, density matrices don’t pull back and are not dynamically natural; thus theoretical arguments that rely upon density matrices are not naturally relevant to the linearity debate.

    Scott, the preceding is not intended as a criticism of your articles or your course notes, which (IMHO) are as well-reasoned as any in the literature. But unless I’m missing some key point … this praise is not particularly strong.

    So the next time you (or anyone) teaches introductory quantum mechanics, what should be said about linearity?

    It would be mighty harsh to echo the viewpoint of Ashtekar and Schilling (1999), that “the linear structure which is at the forefront in text-book treatments of quantum mechanics is only a technical convenience” … and ask that physics students study differential geometry and dynamical flow before studying quantum dynamics …

    Hmmmm … and perhaps the Ashtekar-Schilling approach to QM would be best for physics students? On the grounds that far more has been learned about the naturality of classical dynamics since (say) 1950, than has been learned about naturality of quantum dynamics?

    One cheerful aspect of pulled-back quantum dynamics is that it generically can be simulated with PTIME computational resources. That is why (for engineers anyway) a pullback quantum universe is much more natural to live in than a Hilbert quantum universe.

    In summary, Hilbert quantum universes support both the anthropic principle and physical computation in BQP. In contrast, in a pullback quantum universe there is no anthropic principle, and all physical computations are limited to PTIME.

    This leads us to a (hilariously) anthropic argument for deciding between Hilbert and pullback universes: “We exist and the universe is tuned to our evolution; therefore the quantum dynamics of Nature is Hilbert, not pullback.”

    It would be interesting (and fun) to see what MIT’s feisty undergraduates made of *that* argument. 🙂

  17. onymous Says:

    Looking back over the discussion I guess you probably were concerned mostly about the “finiteness” part, so my answer probably wasn’t that responsive, and I apologize if I’m repeating things you know well. But there are papers scattered throughout the literature by (even very good!) field theorists who seem to at some point have thought holography had to do with UV and IR cutoffs, which I think sort of misses the point of how the density of states works. I was lucky to have these things explained very clearly and forcefully to me by a first-rate physicist when I was a poor confused grad student, but I’m afraid my blog-comment version of the spiel is less clear.

  18. Scott Says:

    Carl (#10): Thanks for your thoughtful comment. A less loaded way to ask the question “What kind of computer is the universe?” is this: “What kinds of computation does the universe support?” Or: “What kinds of computers can we build within the universe, consistent with laws of physics?” Remove all the philosophical baggage you want, and the question will still be interesting!

  19. Dániel Says:

    Carl: I fully, perfectly agree with everything you wrote in your post. But are you aware that the philosophical problem of free will is solved? In my own post, I implicitly assumed a compatibilist theory of free will. If you accept this very nice idea, then you don’t have to say things like “it is unlikely that free will is true”.

  20. Dániel Says:

    Scott: Of course, this instrumentalist approach has been pretty successful in moving science forward. One can debate about possible situations where the approach fails—but as I see it, the question of “what kind of computer is the universe?” is mostly orthogonal to that debate.

    My point is that the instrumentalist approach is obviously very useful, but we have to be aware of its limitations. And these limitations are most prominent exactly when we try to reason about the whole universe as a single entity, denying ourselves the loophole of an observer-universe duality. So I must disagree about the orthogonality of the two problems.

    “Suppose a physical theory only works in certain restricted circumstances—but it also makes the clear prediction that no one will ever have the imagination to design an experiment where the theory fails. Can the theory then be considered complete?”

    I think this is an easy mind-bender. Let me attempt to solve it, because the solution rhymes very nicely with some of my other points.

    The confusion can be resolved if we note that the description of a space-time continuum in the form of rules plus boundary conditions is just a useful simplification. (Or if you ask me, a dangerous simplification leading to a huge amount of bad philosophy.) The best description does not necessarily have this two-component form. When you consider this, it becomes clear that your physical theory should be phrased as a statement about the whole space-time continuum instead of as a statement only about the rules. If your statement is true, then yes, the theory can be considered complete. Of course, we prefer true statements if they have low complexity _and_ high predictive power. But in the case of your contrived example, it is not certain we will get both.

    Remove all the philosophical baggage you want, and the question will still be interesting!

    Despite all my emphasis on the philosophical baggage here, I fully agree with that.

  21. Dániel Says:

    Moshe: For example, we seem to agree now that continuum QFT, taken as a mathematical model, cannot be considered to “be a computer”.

    (I quote Moshe here, but mostly respond to Scott’s ideas.) I think Eliezer Yudkowsky would call this “confusing the map with the territory”. Continuum QFT is what it is. If it is a correct model, it can be harnessed to compute stuff. It is not QFT’s problem that a human inside observer has only finite access to QFT’s infinite computational power. I think it is a circular argument to say that the right model of the universe should be discrete because we humans can only harness it for discrete computations. If we try to figure out the properties of the Universe by restricting ourselves to models that incorporate our own limitations as inside observers, then we are doing it backwards. Our limitations must come out as emergent properties of the right model.

  22. Moshe Says:

    I agree with what onymous is saying, and more generally I don’t think I am saying anything particularly original or controversial, at least not in my community, as I understand things. But I should probably be more clear about which are my own opinions, and which is more or less the consensus view.

    Maybe to repeat a bit: elements of your formalism (say the momentum of a particle, or “dimension of Hilbert space accessible to an observer”) that have no direct observational consequences can probably be anything you wish them to be, discrete or continuous, finite or infinite. They are also probably ambiguous, usually there are a few different equivalent mathematical formulations for a theory, which all give the same physical answers, but differ in the type of variables used. If you phrase your question as something to do with results of (in-principle) experiments, it would be easier to get a sharp question, if not always an answer.

    (I think that to a person trained in the foundations of mathematics, the idea that there are questions and objects you can formulate in natural language (the set of all sets and all the rest of that) but that are not really well-defined should be a very familiar concept.)

    For example, this applies in my mind to the whole question of the “dimension of Hilbert space accessible to us”. As long as I don’t know of any experiment that could in principle measure this number, I cannot tell you whether it is finite or not, or whether or not such finiteness is a consequence of the assumptions you list. People differ in their gut feelings, lots of people I respect have a feeling (different than mine) that a finite Hilbert space is what is called for, but I think to say that it is an unavoidable consequence of the assumptions you list (or even whether or not this is a well-defined question) is probably too strong a statement.

    As for my own strong statement, I was actually trying to make a subtle statement, which as you can see from the discussion following my blog post, I was not able to make successfully. But let me try here: I am not saying that discrete models cannot ultimately work, I am trying to say that there is a dominant failure mode for such models. This failure mode leads to direct conflict with observation, not some mathematical unease (and the Landau pole doesn’t cause unease to too many people nowadays, incidentally). The fact is that it is hard to avoid this failure mode (impossible so far, for well-understood reasons, but as you point out you never know, so I am happy not to be using the word “falsification” which I don’t like for my own reasons). This fact leads me to believe that Lorentz invariance had better be an exact symmetry of your model. Obviously there is a judgement call there.

    Ultimately, this is a good thing: we don’t have too many clues about physics in that regime, and if any garden-variety discrete model you fancy were as good as the next one, there wouldn’t be any way to distinguish them. That should be taken as a challenge then: if the universe is a computer, it is a very special one! (Either it is exactly Lorentz invariant, or it cleverly hides the violation from observation. The first option seems more likely to me, but they are both really hard to achieve.)

  23. Carl Says:

    @Dániel,

    When I said “free will” I meant “libertarian free will” as shorthand. I agree that, philosophically speaking, compatibilist free will seems to be a much stronger position.

  24. Cranky McCrank Says:

    Do the limits on computation due to the cosmological constant imply that there is a maximum input size n for every NP problem? Does the “P vs. NP” problem still make sense if the input size of an NP problem can’t be arbitrarily large?

  25. Scott Says:

    Cranky M.: P vs. NP makes perfect sense as a mathematical question, regardless of any discoveries about cosmology. But maybe you were asking whether the question is still relevant.

    Now, notice that you could’ve asked exactly the same thing about any mathematical question that talks about all positive integers n: shouldn’t we now restrict the question to, say, n ≤ 10^122? The trouble is that from a mathematical standpoint, you’d then get an extremely unnatural statement: if you wanted actually to prove the statement instead of just philosophizing about it, then the first thing you’d do is remove the hypothesis n ≤ 10^122 anyhow.

    In other words, even though 10^122 is not infinity, we don’t know techniques for proving natural statements for n ≤ 10^122 that don’t also prove them for arbitrary n.

    Anyway, that’s a “standard” answer to your question. A “nonstandard” answer, but one I’m personally fond of, is this: we could do complexity theory taking 1/Λ (the inverse of the cosmological constant) as an input parameter! In the universe we inhabit, God or nature happened to make the choice 1/Λ ~ 10^122, but the laws of physics (and questions about the limits of computation) would clearly make sense with other values of 1/Λ as well.
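
    Here is one toy way to render that nonstandard answer in code (purely illustrative bookkeeping, nothing more): treat 1/Λ as a resource budget and ask which computations fit inside it.

      # Toy "complexity theory with 1/Lambda as an input parameter":
      # an instance is feasible if its running time fits the computational
      # budget set by the inverse cosmological constant (in natural units).
      INV_LAMBDA = 10**122   # the value our universe happens to have chosen

      def feasible(steps_required, inv_lambda=INV_LAMBDA):
          return steps_required <= inv_lambda

      print(feasible(2**100))   # True:  ~1.3 * 10^30 steps fit the budget
      print(feasible(2**500))   # False: ~3 * 10^150 steps do not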

  26. Cranky McCrank Says:

    Thanks for the quick answer. I of course meant the relevance of the question, but I also forgot the phrase “from a philosophical perspective…” 🙂

    Your last point is of course interesting, but maybe the constant will be a part of physics some day 🙂

    I would have one more outrageous question though. No need to answer it, but I promise this is the first and last “crank” question from me on this blog. I’ll try to keep it short.
    ——————————————————–
    If one abstracts the P vs. NP question and asks it for infinite-time Turing machines, P ≠ NP has actually been proved for these machines.

    I really don’t understand this proof very well, but I would like to know whether the following idea is a stupid one or not so stupid:

    1) Define an infinite SAT problem much like the classical SAT problem, but with infinitely many variables. You can still check a solution in countably infinitely many steps.

    2) Try to find an analogue of Cook’s theorem by modeling an infinite nondeterministic Turing machine with an infinite SAT formula.

    3) This would imply that infinite SAT is NP-complete, and cannot be solved on an infinite-time deterministic Turing machine.

    4) Define something like an infinite extension of an NP problem, where infinite SAT is the extension of finite SAT.

    5) Show that if a problem is in P, its infinite extension is in P as well.

    6) As infinite SAT isn’t in P, SAT isn’t in P.
    ————————————————————————————
    The idea in 5) would be that if you could adapt a polynomial-time algorithm to an infinite-length input, you would only need countably infinitely many steps to solve the problem, which would be possible on an infinite-time Turing machine.

    An example of a problem in P: given a natural number x, does the digit 2 appear y times in it? Its infinite extension: given an infinite string of digits, does the digit 2 appear infinitely many times in it? You can adapt the algorithm for the P problem to solve its infinite extension as well (see the sketch below).
    ———————————————————

    I hope at least half of it makes sense in any way 🙂
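
    For what it’s worth, the finite problem in that example really is in P; a minimal sketch (added for concreteness):

      def digit_2_appears_times(x, y):
          # Linear time in the number of digits of x.
          return str(abs(x)).count("2") == y

      print(digit_2_appears_times(1212123, 3))   # True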

  27. John Sidles Says:

    Students of mathematical history will recognize pleasing echoes of 19th century cosmology in Anthony and Scott’s dialog.

    The 19th-century narrative begins with Bowditch’s 1807 description of the practical elements of non-Euclidean geometry, continues with Gauss’s description of the intrinsic elements of geometry and Riemann’s description of the natural elements, and concludes with Einstein’s description of the dynamical elements.

    Thus a 19th-century conception of classical dynamics that began with a state-space that was linear, flat, static, and (extravagantly) infinite concluded with a state-space that is nonlinear, curved, dynamic, and finite.

    Now at the start of the 21st century, we similarly teach our students that quantum state-space is linear, flat, static, and supports (extravagantly) BQP computation. If mathematical history repeats itself, the 21st century may end with our students learning that the state-space of nature is nonlinear, curved, dynamic, and can be simulated with PTIME resources.

    That is one possible mathematical future, anyway.

  28. Scott Says:

    Moshe, Dániel, onymous: Unusually for blog discussions, I feel like we’re actually making progress! I like the way Dániel framed the issue:

    It is not QFT’s problem that a human inside observer has only finite access to QFT’s infinite computational power. I think it is a circular argument to say that the right model of the universe should be discrete because we humans can only harness it for discrete computations.

    So the question is this: provided we all agree that there are finite limits to the computations that humans can perform, should we seek a model of physics that makes it manifest why that’s true? Or more strongly, should we be unsatisfied with any model that doesn’t make it manifest?

    It won’t surprise you that I think the answer is yes. The reason is that I think of QFT, and every other physical theory, as models that human beings create, ultimately to explain reality but proximately to predict the results of experiments. So sure, the universe is what it is, but we have some freedom in our theories. And given two equally predictive theories, one of which involves quantities that are both uncomputable and unobservable and one of which doesn’t, it’s hard for me to imagine a circumstance in which I’d prefer the former.

    So then here’s a challenge: can you describe to me a possible state of affairs in which
    (1) the results of all measurements that anyone can ever perform are computable, but
    (2) the only reasonable “theory” to account for the measurements involves uncomputable quantities?

    The closest analogy that I could think of (and it’s only an analogy) is standard quantum computing. Our usual ways of describing QC suggest that Nature “invests” an exponential amount of both time and memory to compute the probability distribution over measurement outcomes. And yet we certainly don’t believe that BQP=EXP—i.e., that we get to take full advantage of that exponentiality. We know how to “harness” the exponentiality to solve a few specific problems like factoring in polynomial time, but not (e.g.) to solve NP-complete problems.

    So in the case of quantum computing, some people would say there’s a “nearly-exponential gap” between how much computational work our best theory portrays Nature as doing, and how much work the theory predicts we can extract from Nature. (Personally, I’m uncomfortable with that way of speaking, since it smacks too much of grafting our classical intuitions onto physics: if the theory keeps telling us that Nature is investing only BQP effort, why should we insist that it’s “really” investing EXP effort?)

    But even if we accept that way of speaking, a nearly-exponential gap is still extremely far from a computable vs. uncomputable gap! So I repeat my question: can you give me any example of the latter, even a hypothetical one?

  29. Scott Says:

    OK, let me now clarify the challenge in my last comment.

    Standard quantum mechanics (and classical probability theory) don’t count as examples. Sure, they involve arbitrary complex or real numbers (amplitudes or probabilities), but those numbers evolve linearly—which means that everything works out fine if we approximate the numbers to finite precision. (I’ve often described QM and probability theory as having “benign continuity”, continuity that poses no serious threat to a computational view of physics.)
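
    To illustrate that benignity with a toy calculation (my own sketch, not a rigorous argument): evolve a two-dimensional real “state” by repeated rotations, rounding the amplitudes to six digits at every step. Because the evolution is linear and norm-preserving, the rounding errors merely accumulate additively:

      import math

      def rotate(state, theta):
          # A linear, norm-preserving update (a 2x2 rotation).
          a, b = state
          c, s = math.cos(theta), math.sin(theta)
          return (c * a - s * b, s * a + c * b)

      def truncate(state, digits=6):
          # Finite-precision approximation of the "amplitudes".
          return tuple(round(x, digits) for x in state)

      exact = approx = (1.0, 0.0)
      for _ in range(10_000):
          exact = rotate(exact, 0.01)
          approx = truncate(rotate(approx, 0.01))

      err = math.hypot(exact[0] - approx[0], exact[1] - approx[1])
      print("error after 10,000 truncated steps: %.1e" % err)   # well below 1e-2

    With a nonlinear update rule in place of rotate, the same per-step rounding could in principle be amplified exponentially; that is exactly why nonlinearity would threaten this picture.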

    QFT, with its infinite-dimensional Hilbert spaces, is a better example. But it’s still not very convincing, because we’ve known since the 1930s that QFT (like GR) is a theory that “predicts its own breakdown”: i.e., it predicts the existence of singularities at which it needs to be replaced by a better theory. (Saying that most people nowadays don’t worry about Landau poles and related problems is just giving me a sociological observation: if you want me not to worry, give me the arguments for why I shouldn’t! 🙂 ) So if I’m so inclined (and I am), I can easily conjecture that the better theory that replaces QFT will make it more manifest why all the quantities that appear in the theory are computable.

  30. nick (bostrom-not) Says:

    I don’t get this use of “bit” in reference to the Universe-as-computer. I trust it’s simply meant as a metaphor for “information” per se (as they say). Binary is a human code, no more.

    (I know that Zeilinger and Brukner, out of Wheeler, talk about the fundamental relationship between bits and QM. But what they’re saying is that the limit of our comprehension is the limit of our reality, and that “our reality” is identical to reality itself for all intents and purposes howsobeit anyhow whatever and Vorwärts Kameraden until we hit the wall.)

  31. Moshe Says:

    Scott, let me answer the easy question first. People don’t worry about Landau poles for two main reasons:

    1. They were much more worrisome when it was thought that every QFT has them. Since the early ’70s we have had working examples of so-called asymptotically free QFTs, which have no Landau poles, and are therefore well-defined as mathematical models. So, I was suggesting one of these as a model which is not a quantum computer, just to have a working example.

    2. We now think about QFT as something valid only up to the Planck scale anyhow, whether or not it has Landau poles, for reasons that have to do with quantum gravity. The Landau poles appear on distance scales much, much shorter than that. If your problems with QFT are solved only at such scales, you’ll violate every bound to do with quantum gravity, which always involves the Planck length.

    (As an aside, all those bounds use non-relativistic language, but are derived from a completely relativistic theory. For example, the statement that the gravitational entropy of a region is at most X is obtained from a set of calculations in GR; by construction it is a relativistic statement, even if people are not always careful to phrase it as such.)

    But, I think this is all a distraction. Models of single particle quantum mechanics (the simple harmonic oscillator, or the free particle) involve quantities that are uncomputable (real numbers). Maybe this is not necessary, but even before worrying about that, I’d worry about these elements being unmeasurable and arbitrary. My claim is that ascribing too much reality to such elements will tend to confuse you about much simpler questions than computability. Your example involving quantum computing is a good one. I think we are stuck with theories with lots of redundancies, and we have to learn to distinguish the language from the content.

  32. Scott Says:

    Thanks, Moshe! The facts that other things kill QFT long before Landau poles do, and that not all QFTs have Landau poles, are indeed excellent reasons not to worry about them.

    As for “learning to distinguish the language from the content”: yes, I completely agree that that’s the goal! So let’s put the question this way: how can we make it apparent why the physical content of QFT is computable (at least in those regimes where QFT is valid), even though the language of QFT involves arbitrary reals?

    For extra credit, replace “computable” by “efficiently computable”; for double extra credit, replace QFT by QG.

    Now, if the presupposition of my question is wrong, and the physical content of QFT is not computable (even in those regimes where QFT is valid), that would mean one could in principle build a “QFT computer” to solve Turing-uncomputable problems. I take it that neither of us sees that as likely.

    In the case of standard QM in finite-dimensional Hilbert spaces, I feel like I completely understand the answer to the above question. In other words, the fact that QM involves uncomputable amplitudes doesn’t keep me awake at night, since I can explain to anyone why that “uncomputability” is just an artifact of language, and doesn’t “leak” into actual physical predictions.

    So, I’m asking for similar reassurance in the case of QFT. And I’m predicting that if no such reassurance can be given, then the problem is with QFT, not with the request.

    (If the reassurance is easy to provide, great! But if it can’t be supplied without a significant advance in physics, so much the better…)

  33. asdf Says:

    I just came across this via slashdot:

    http://www.technologyreview.com/blog/arxiv/25494/

    It discusses paradox-free time travel via quantum postselection.

  34. Moshe Says:

    I think we are fast converging to the point where I predict we will not agree (which is good, because the day job is calling…). Let me phrase this precise question:

    Suppose we discuss one of the cases of QFT which does NOT predict its own demise, purely as a mathematical model; forget for a minute whether it describes reality. In such a model the physical questions correspond to calculating the probability amplitudes to go from one state to another. All those states are described by collections of real numbers. So, this has input and output, and one might want to use the computer analogy, but I think we’d both agree this is not a computation in the usual sense (of being simulated by a Turing machine), or any reasonable deformation of the usual sense.

    So, there are things out there that are NOT computers, and we are discussing a non-vacuous statement when we claim that the “universe” is a computer.

    Now, such a model may or may not describe reality. We naturally have different intuition about that. From my point of view, I see no reason (including all the reasons from QG) that the theory of our universe should involve this particular human construct (Turing machines) rather than another human construct (real numbers), but I expect that is the point where we’ll differ.

  35. Mike Says:

    I suspect this will be an unsatisfactory answer for almost all concerned, but I think that David Deutsch generally has the right approach. In short, he thinks that “within each universe all observable quantities are discrete, but the multiverse as a whole is a continuum. When the equations of quantum theory describe a continuous but not-directly-observable transition between two values of a discrete quantity, what they are telling us is that the transition does not take place entirely within one universe. So perhaps the price of continuous motion is not an infinity of consecutive actions, but an infinity of concurrent actions taking place across the multiverse.”

  36. Moshe Says:

    Let me just sharpen things a bit: the state in QFT is characterized in terms of energy-momentum. Whether it is real or discrete is not the issue (since this has to do with long-distance physics); rather, the point is that each component of the energy-momentum is unbounded, unless we are willing to do violence to LI, which will likely lead to observable effects.

    So, quantum gravity tells us that there may be some modification needed when some relative momenta become large, in an invariant way (i.e E^2-p^2 bigger than some cutoff to do with the Planck scale). What it does NOT tell us is two other things:

    1. That there is a bound on the individual components of the energy-momentum separately. Note that a bound on E^2-p^2 does not imply that; this is where the non-compactness of the Lorentz group comes in.

    2. That there is any restriction whatsoever on the overall (center of mass) energy or momentum. Nothing special happens when the overall energy-momentum approaches the Planck scale, if we are talking about a single object, or the average energy-momentum of a composite system.

    So, seems to me that the data specifying any initial or final state is necessarily infinite (but countable), assuming only LI. Does that make such models necessarily non-computable, or do we need to look further at details?

  37. Dániel Says:

    So then here’s a challenge: can you describe to me a possible state of affairs in which
    (1) the results of all measurements that anyone can ever perform are computable, but
    (2) the only reasonable “theory” to account for the measurements involves uncomputable quantities?

    What do you think about the following situation:
    (1′) the rules of the universe are computable
    (2′) our only reasonable theory predicts that the total Kolmogorov-complexity of the boundary conditions of the universe is infinite

    Would this meet your challenge? Can you even imagine a situation where the only reasonable theory predicts such a thing? When should we give up trying to compress the theory? What would Einstein do? 🙂

    Here is a thought experiment that gives one possible concrete formalization of the above, but it is not pretty. I think it obeys the letter but it definitely does not obey the spirit. The loophole it exploits is basically nonuniformity.

    Fix an infinite random string. Build a digital computer that queries the random bits as an oracle. Run a simulation of an otherwise computable universe on the computer. You can visualize this as a blinking star in the night sky. The star blinks perfectly unpredictably in an otherwise orderly (say, Newtonian) universe.

    I won’t even attempt to solve your challenge the way you envisioned it. It is clear that it would require some very, very advanced math. Which, just as you suggest, strengthens your position that the universe is computable. I actually believe this position to be correct. In my previous comment all I wanted to note was that it is not a priori correct, even if we assume your point (1).
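
    A toy rendering of the blinking-star construction (illustrative only: the OS entropy source stands in for the fixed infinite random string, and the update rule is an arbitrary computable function):

      import secrets

      def universe_step(state, oracle_bit):
          # A fixed, perfectly computable update rule...
          return (1103515245 * state + 12345 + oracle_bit) % 2**31

      # ...driven by an uncomputable boundary condition: the oracle bits.
      state = 1
      for t in range(5):
          state = universe_step(state, secrets.randbits(1))   # the star blinks
          print(t, state)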

  38. wolfgang Says:

    Moshe,

    we do know how to do lattice QCD, putting a Lorentz-invariant theory on a lattice. In fact, some people would argue that lattice QCD is the only well-defined QFT we have.

    The trick includes Wick rotation to the Euclidean sector (we do understand how spherical symmetry can be recovered on a lattice which is not spherically symmetric).

    Of course, we do not know (yet) how to do quantum gravity on a lattice and one issue is that Wick rotation does not work.

    But I would think that your general statements about QFT and Lorentz invariance do not hold (if one allows Wick rotation).

  39. Baptiste Says:

    Hi,

    The Stanford Encyclopedia of Philosophy recently released an article on “Computation in physical systems”. Maybe you’ll find it interesting.

    http://plato.stanford.edu/entries/computation-physicalsystems

  40. Moshe Says:

    Wolfgang, Wick rotation and lattice regularization are very useful tools, with their known strengths and weaknesses. Since we are asking questions of principle here, I’d point out that they work for very specific field theories, and for calculating very specific properties of those field theories. For a generic (e.g., time-dependent) quantity, in a field theory complicated enough to include the standard model (with its chiral fermions, its Higgs, and its myriad of Lorentz-violating relevant operators), Wick rotations and lattice field theories are not going to help.

    In any event, this discussion had its moment in my original blog post. What I want to get out of the current discussion is a clear (possibly even quantitative) statement about whether or not, if I assume for my own reasons exact Lorentz invariance, this necessarily means that the model is not computable. I formulated the question more precisely in my previous comment, and I am really curious about the answer. We can then discuss our psychological prepossessions, but I’d like to make sure I understand the facts first.

  41. Scott Says:

    Moshe:

    What I want to get out of the current discussion is a clear (possibly even quantitative) statement about whether or not, if I assume for my own reasons exact Lorentz invariance, this necessarily means that the model is not computable.

    To answer your question, we really need to distinguish carefully between “strong” and “weak” computational claims about the universe.

    The weak claim—the one I’ll give up eating ice cream forever if falsified—says that there exists an algorithm that takes as input a description of any possible physical experiment, and that outputs the probabilities for the possible results of that experiment, to any desired accuracy. In particular, if the weak claim fails, then it ought to be possible in principle to build “hypercomputers” that solve problems that are non-Turing-computable (in some sense, the universe itself could be regarded as such a hypercomputer).

    The strong claim says that there exists a natural formulation of physics at the Planck scale in which continuous quantities don’t appear anywhere, except as amplitudes or probabilities.

    (Some people, like Ed Fredkin and Stephen Wolfram, have also advocated a “superstrong claim,” which would banish continuity even from amplitudes and probabilities, but no one has proposed any reasonable way to reconcile the superstrong claim with standard QM.)

    A huge amount of talking-past-each-other in this thread was simply due to conflating the strong and weak claims (and I bear part of the blame for that—I apologize).

    To your question: if a theory satisfies exact Lorentz invariance, then for the reasons you pointed out in your blog post, it conflicts with the strong computational claim.

    However, it’s not obvious whether it conflicts with the weak computational claim. Indeed, that’s a huge part of what I was asking you! My guess would be that even standard Lorentz-invariant theories (like QFT), to the extent that they’re well-defined, do satisfy the weak computational claim, but that nontrivial work will be needed to bring out their computability. (Basically, one would need to show rigorously that nothing goes badly wrong if all relevant quantities are approximated to finite precision: to what extent has that been done for QFTs?)

    It’s obvious, from the preceding discussion, that I have a psychological predisposition in favor of the strong claim, and that you have a psychological predisposition against it. Let me ask you point-blank: do you at least share my psychological predisposition in favor of the weak claim?

  42. wolfgang Says:

    Scott,

    your question was directed at Moshe, but still I want to point out that there is an important ambiguity in your weak claim:

    “an algorithm that takes as input a description of any possible physical experiment, and that outputs the probabilities for the possible results of that experiment”

    You can obtain the necessary input only for systems which are sufficiently isolated from the environment. But since one cannot shield gravity, the necessary input cannot be obtained in general.

    I assume your weak claim is about systems which are sufficiently isolated from the environment and thus cannot include the universe.

    PS: I am sorry for interrupting your conversation with Moshe.

  43. Scott Says:

    wolfgang: “Interrupting” is a strange concept in a blog discussion! 🙂 You’re right about the ambiguity in my weak claim — I thought about it, but then figured it was clear that I was talking about sufficiently isolated physical systems.

    I think it’s fair to say that no one really knows yet how to reason about computation in a quantum gravity context, where the universe might need to be considered in its entirety, and there might not even be such a thing as “time”. Or at least, I don’t know…

  44. John Sidles Says:

    Scott says: (Part A) The weak claim—the one I’ll give up eating ice cream forever if falsified—says that there exists an algorithm that takes as input a description of any possible physical experiment, and that outputs the probabilities for the possible results of that experiment, to any desired accuracy. (Part B) In particular, if the weak claim fails, then it ought to be possible in principle to build “hypercomputers” that solve problems that are non-Turing-computable (in some sense, the universe itself could be regarded as such a hypercomputer).

    Scott, (Part A) (I added the label) is terrific, and I hope your coming paper with Alex Arkhipov will put some rigorous meat on those lovely bones.

    (Part B), though, has a glaring loophole as-stated. It makes sense only if the phrase “physical experiment” in (Part A) is restricted to Hilbert state-spaces.

    Hmmm … gee … according to the literature, isn’t the evidence for the Hilbert space assumption far from compelling? Isn’t that why we presently are far from achieving a consensus on that question? So perhaps the assumption ought to be made explicit?

  45. Scott Says:

    John: Actually, absolutely nothing in the passage you quoted presupposed Hilbert spaces. I know you’ve had a “bug up your tuchus” lately about overthrowing Hilbert space, but try to stay on topic! 🙂

  46. Moshe Says:

    Thanks Scott, that is really a useful distinction. If you are asking about my gut feeling, the weak claim would probably be right in some sense (as Wolfgang points out, it is not necessarily defined without an appropriate context), but a lot of work needs to be done to show precisely what that sense is. Let me make a couple more comments about what kind of work that is.

    The point of my blog post, and my comments here, is that in the context of a theory which is complicated enough to include the standard model, and which is LI (which is not to say it is a QFT), the statement that “nothing goes badly wrong if all relevant quantities are approximated to finite precision” is generically wrong. If this approximation breaks the underlying LI, many results of calculations in that “approximate” theory will dramatically deviate from the correct ones, unless they are shielded by some (thus far) unknown mechanism. Maybe such a mechanism will ultimately be shown to exist, but for the sake of discussion let’s take that hint seriously and try to see what the consequences are.

    This hint leads me to believe that the correct theory will be exactly LI, but I don’t think this necessarily contradicts your weak computational claim! My gut feeling is that it doesn’t, but the two claims, exact LI and the weak computational claim, do live in tension, on the verge of contradicting each other. So either they do contradict, or they don’t, and I’d really like to know which one it is.

    My own gut feeling is that they do NOT contradict, and that fact alone narrows down dramatically the set of computational models one should consider (likely none of the current ones on the market will survive).

  47. Moshe Says:

    Maybe one more comment to clarify, or maybe to confuse further. There are ways to “discretize” without breaking Lorentz invariance (in various examples from string theory). In all of them, if we are going to use the language here, you don’t discretize the basic ingredients of your theory (which does violence to the theory), but you discuss more carefully what the “relevant” (i.e. observable) quantities are. That set of quantities is smaller (in a LI way) in quantum gravity, potentially consistent with what you’d expect from the weak computational principle.

  48. John Sidles Says:

    I take your point, Scott … it’s partly a cultural divide, since for engineers, algorithms in (say) EXP are too extravagant to count as real algorithms (since one can’t realize them on practical hardware).

    Thus, it’s the engineers who (often) make the implicit assumption that “there exists an algorithm” means “there exists a PTIME algorithm.”

    Then my point is that your comments on algorithms remain interesting—and to engineers, become more interesting—if one imposes a PTIME restriction.

  49. Scott Says:

    Thank you, Moshe, for forcing me to realize the importance of distinguishing the strong and weak claims! This is one of the more useful blog-exchanges I’ve had.

    As I said before, I have a psychological predisposition in favor of the strong claim, but it’s only that. If the evidence builds up that LI is exact, and that the ideas of (e.g.) my friends at Perimeter Institute about discreteness of spacetime at the Planck scale are wrong, then I’ll happily abandon my predisposition.

    With the weak claim, by contrast, I’m ready to stick my neck out and predict that, if a theory can’t be made consistent with it, then that’s a problem for the theory rather than for the weak claim.

    I find it ironic that you and Wolfgang jumped on the weak claim as being potentially ill-defined: my own worry was that the strong claim was ill-defined, since it talks only about the formulation of theories, not the outcomes of actual experiments. But yes, I see your point. We have to be careful to define what we mean by an “experiment”: that it takes place in a sufficiently isolated region, that we know the initial conditions, etc.

    Earlier in our exchange, confusion resulted because I assumed the weak claim as an obvious, unstated premise. And thus, when you argued against the strong claim, I asked (in “today’s” language) whether your arguments also worked against the weak claim, wrongly imagining we both agreed that if they did, then that would be a reductio ad absurdum of the arguments.

    In fact, you’re absolutely right that reconciling the weak claim with QFT—or with any Lorentz-invariant theory—is a great challenge for physics, even if one has one’s own reasons (as I do) for believing that the weak claim will ultimately prevail.

    I share your sense (expressed in comment #47) that it’s conceivable that QFT would fail to satisfy the weak claim, but that QG would nevertheless satisfy it. One thing I’ve learned from this thread is that QG might satisfy the weak claim even while being perfectly Lorentz-invariant.

    (Incidentally, and just for your amusement and to underscore the “relativity of opinion”: one of the debates I’ve had with other computer scientists has been convincing them that what I called the “superstrong claim”—i.e., a discretization of quantum amplitudes—isn’t going to work, and that at most they should hope for the strong claim!)

  50. Moshe Says:

    Thanks Scott, I was also enjoying myself. I think we have reached the point where any further progress will need some actual work, making things quantitative, which is potentially an interesting thing to do some time, but my plate is full at the moment.

  51. onymous Says:

    This has been an interesting thread. I’m fairly ignorant of complexity theory, so here’s perhaps a silly question: is it possible to imagine that physical quantities in Lorentz-invariant theories are not computable by Turing machines, but still don’t provide a mechanism for solving NP-hard problems efficiently? Naively, it seems to me like the direction in which physics is not computable — the discreteness/continuity axis, if you like — is orthogonal to the computational complexity axis that (presumably) separates P from NP.

    Maybe this is morally the same question as the

  52. onymous Says:

    Oops! Clicked “submit” accidentally. I was going to say:

    Maybe this is morally the same question as the weak conjecture, i.e. if the noncomputable things that physics does can all be approximated well by the usual model of computation, then they don’t provide a route to doing hard problems. Is it? Or are these separate questions? (Or maybe my question is incoherent — I don’t know much about computation.)

    I do think, Scott, that the evidence against discreteness of spacetime, or any violation of Lorentz invariance, at the Planck scale is extremely strong, and the onus is on those who postulate it to explain how it can be consistent with precision tests at low energies. Moshe’s old post on this gives, in detail, the argument that almost any physicist you ask will tell you, namely that there are many Lorentz-violating relevant operators and it’s hard to imagine what would forbid them aside from Lorentz symmetry.

  53. Scott Says:

    onymous: The core of the issue can be explained by a simple thought experiment. If time were really continuous, even below the Planck scale, then why couldn’t we solve the halting problem in finite time, by doing the first step of a computation in 1 second, the second step in 1/2 second, the third step in 1/4 second, and so on?
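
    (The arithmetic works out because the schedule’s total time converges:

        \sum_{k=0}^{\infty} \frac{1}{2^k}\ \text{seconds} = 2\ \text{seconds},

    so after 2 seconds infinitely many steps have elapsed, and you just read off whether the machine ever halted.)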

    Or why couldn’t you leave your computer on earth and then accelerate arbitrarily close to the speed of light, if you wanted to perform an arbitrary amount of computation in only 1 second as experienced by you? Or leave your computer in free space, then go arbitrarily close to a black hole event horizon (but not past it), to get a similar time dilation effect?
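
    (Here the bookkeeping is ordinary time dilation: if you travel at speed v, then for every second of your proper time the computer you left behind runs for

        \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}

    seconds, and \gamma \to \infty as v \to c, so there is no limit in principle.)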

    Of course, NP-complete problems are kindergarten stuff compared to what you could solve by these means (EXP, triply-exponential time … the sky’s the limit)!

    Now, as a physicist, I’m sure you can explain to me why the specific proposals I just mentioned wouldn’t work (yeah, I know: it’s because the energy needed would become so large that you’d exceed the Schwarzschild bound). But to my mind, that isn’t really a satisfying answer. I don’t want you to shoot down proposals for exploiting the continuity of spacetime to solve hard problems as I come up with them. Instead, I want a fundamental principle—something like relativity or the Second Law—that explains why the continuity of spacetime can never be used for that purpose. Is that so much to ask for? 🙂

    Of course, if the continuity of spacetime were to break down at the Planck scale, then that would serve very nicely as such a fundamental principle. And that’s one of several considerations that makes such a breakdown attractive to me.

    But as I said to Moshe, I’m not wedded to the notion of discrete spacetime. Maybe Lorentz invariance is an exact symmetry even at 10^-33 centimeters. Maybe the idea of a smooth spacetime manifold resolving itself into a complicated combinatorial network when probed at the Planck scale, much like water resolves itself into individual molecules when probed under a microscope, is a nonstarter. But if so, then as I was saying before, I think we face the burden of finding an alternative fundamental principle that convincingly explains why hypercomputing is impossible. Of course, what looks like a burden to me might look like a wonderful research opportunity to someone else.

  54. onymous Says:

    Ah, I see what you mean. It sounds true that QFT without gravity should let you solve all sorts of problems quickly by this sort of trick. But I think of computational complexity as counting how many steps it takes to solve a problem, and you haven’t changed the count of steps; you’ve just decided that each step can be much faster than the previous one. Isn’t that a bit of a cheat? Is “time” what you really care about, or is it something like the complexity of the method of solution?

    I do share your sense that in gravity there should be a principle that forbids this — a way of formalizing the case-by-case check that black holes show up and spoil your day — but it seems like the physical idea that you can’t solve arbitrarily hard problems in polynomial time is using a sense of “time” that’s wandered rather far from the CS version that you started with. But you’re the computer scientist, so I’ll believe whatever definition seems right to you 🙂

  55. Scott Says:

    onymous: What matters is whether you can really, actually solve your problem using a reasonable investment of resources (time, space, energy, and so on). Not whether you “used too many steps” according to some metric that computer scientists made up for a different model, but that might be completely irrelevant for your model.

    Yes, doing an infinite amount of computation in a finite time using exponentially-faster steps certainly does seem like a cheat to me! So the challenge for physics is to explain why it’s a cheat, by demonstrating exactly what goes wrong when you try to do it. “Black holes show up” is clearly an important part of the answer. But I want a fundamental principle that will convince even the most determined (but intellectually honest) hypercomputing enthusiast that there’s no way whatsoever around the black hole problem.

  56. wolfgang Says:

    Scott,

    >> doing an infinite amount of computation in a finite time using exponentially-faster steps

    if you consider that dt * dE > h, you can make a case that hypercomputation (which requires dt -> 0) will require infinite energy, although t is a continuous parameter in q.m.

    This argument is just handwaving and does not really prove anything yet, but it shows that continuous space-time + q.m. could provide for the limitations you are looking for.
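
    To make this slightly more concrete (with the same caveat: the energy-time uncertainty relation is heuristic, not an operator identity), if the k-th step of the Zeno schedule above takes \Delta t_k = 2^{-k} seconds, then it costs energy at least

        E_k \gtrsim \frac{h}{\Delta t_k} = h \cdot 2^k, \qquad E_{\text{total}} \gtrsim \sum_{k=0}^{\infty} h \cdot 2^k = \infty,

    so the infinite speed-up is priced at infinite energy, with no discreteness of time assumed anywhere.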

    The Bekenstein bound would be another way to look at this issue, and again it would not be necessary to assume a discrete space-time a la LQG.

    PS: I find it ironic that you wrote “I find it ironic that you and Wolfgang jumped on the weak claim as being potentially ill-defined”, because the question of whether a system can be sufficiently isolated from the environment contains the question of whether (and what kind of) a quantum computer is possible.

  57. wolfgang Says:

    I should add that Salecker and Wigner (1958) showed that a clock with precision dt -> 0 has to have a mass M -> infinity.
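
    In the form usually quoted in the later literature (e.g. by Ng and van Dam; I am quoting the bound from memory, so check the factors), a clock that resolves time intervals dt over a total running time T must satisfy

        M \gtrsim \frac{\hbar\, T}{c^2\, (dt)^2},

    so dt -> 0 at fixed T indeed forces M -> infinity.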

    A clock is not a computer, but I think their paper (and the discussion that followed, which continues to this day; see e.g. arxiv.org/abs/hep-th/0406260) is a good starting point for your issue(s).

  58. András Salamon Says:

    @John Sidles#48: I thought that the physics/engineering point of view was to ignore the worst-case complexity of problems? What matters seems to be how well algorithms work in practice. So SAT might be outside PTIME, but many interesting SAT instances can be solved in reasonable amounts of time. With verification of device driver halting, the problem might be undecidable, but there are many instances which can still be tackled. In between there are many interesting problems in databases and computer algebra where doubly exponential algorithms work just fine for commonly encountered instances.

    Of course, it might turn out that the class of instances of a hard problem encountered in practice is actually in PTIME, because of some structural feature. But this can take a long time to establish. Bounded treewidth explained easy instances of query evaluation that had been known in practice for the previous 15 years. Practitioners didn’t stop to wait for the theory…

    So I am somewhat surprised to see you focusing on EXP vs. PTIME. Or am I misinterpreting what you said?
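
    A toy version of the “worst-case hard, often easy in practice” point, as a bare-bones DPLL solver (illustrative only, and deliberately unoptimized):

        def dpll(clauses):
            # clauses: list of lists of nonzero ints (DIMACS-style literals).
            # Returns True iff the formula is satisfiable.  Worst-case
            # exponential time -- yet on many structured instances the unit
            # propagation below prunes almost the entire search tree.
            if not clauses:
                return True                       # no constraints remain
            if any(len(c) == 0 for c in clauses):
                return False                      # an empty clause: dead end
            unit = next((c[0] for c in clauses if len(c) == 1), None)
            lit = unit if unit is not None else clauses[0][0]
            choices = [lit] if unit is not None else [lit, -lit]
            for choice in choices:
                reduced = [[l for l in c if l != -choice]
                           for c in clauses if choice not in c]
                if dpll(reduced):
                    return True
            return False

        # (x1 or x2) and (not x1 or x2) and (not x2) is unsatisfiable:
        assert dpll([[1, 2], [-1, 2], [-2]]) is False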

  59. John Sidles Says:

    András, what I had in mind was to elevate the phenomenological rule “No experimental or observational data-set has ever been acquired that provably cannot be simulated with PTIME resources” to a law of nature in the class that Scott asked for, namely: “Nature’s laws are such that all experiments and observations can be simulated with PTIME resources.”

    When we try to reconcile known principles of quantum dynamics with this (postulated) law of universal PTIME simulation, while at the same time showing at least some respect for Dirac’s goal of “getting beauty in one’s equations” … well … that’s obviously a pretty tall order.

    For the sake of Dirac-style beauty and thermodynamics, the symplectic and Lindbladian structure of quantum mechanics must be preserved. For the (postulated) law of universal PTIME simulation to hold, the exponential dimensionality and linearity of Hilbert space must be abandoned. And this turns out to be not such a dire loss … no more NP-hard separability problem, for example!

    These are sufficiently tricky issues, accompanied by sufficiently rich research opportunities, that, as Scott observed, there is a huge amount of “talking past each other.”

    Heck, no one has even mentioned the (fascinating!) non-collision singularities that arise when we combine good old Newtonian gravity, Euclidean space-time, and point mass trajectories. These singularities prove that we don’t need fields *or* quantum mechanics *or* non-Euclidean geometry to get into dynamical trouble!

  60. wolfgang Says:

    Scott,

    if you think hyper-computation is cheating, then you should not worry so much about the discreteness of space-time and instead worry about people of the future creating baby universes to get around the Bekenstein bound.

    There is a little bit more about this at my blog. 😎

  61. John Sidles Says:

    By the way, it occurs to me that there is a 1961 story by Paul Linebarger (aka Cordwainer Smith), titled Alpha Ralpha Boulevard, that can be read as a startlingly prescient narrative about the social and intellectual challenges associated with what Scott calls “hypercomputation.”

    In fact, the entirety of Linebarger’s work can be read as an extended meditation on the interface between complexity theory, biology, cognition, and governance … which is a mighty interesting interface.

    This work is collected in The Rediscovery of Man, which is in-print and (IMHO) well worth reading.

  62. John Sidles Says:

    Another fun (and very accessible) article relating to the general theme of “hyperdynamical physics lurking within ordinary physics” is Donald G. Saari and Zhihong (Jeff) Xia, Off to Infinity in Finite Time (Notices of the AMS, 1995, available on-line).

    This article can be regarded as a cautionary tale about the exceeding difficulty of excluding hyperdynamical physics (and hypercomputation in particular) from even the simplest dynamical models.