This article can be regarded as a cautionary tale about the exceeding difficulty of excluding hyperdynamical physics (and hypercomputation in particular) from even the simplest dynamical models.

In fact, the entirety of Linebarger’s work can be read as an extended meditation on the interface between complexity theory, biology, cognition, and governance … which is a mighty interesting interface.

This work is collected in *The Rediscovery of Man*, which is in print and (IMHO) well worth reading.

If you think hypercomputation is cheating, then you should not worry so much about the discreteness of space-time; instead, worry about people of the future creating baby universes to get around the Bekenstein bound.

There is a little bit more about this at my blog. 😎

When we try to reconcile known principles of quantum dynamics with this (postulated) law of universal PTIME simulation, while at the same time showing at least *some* respect for Dirac’s goal of “getting beauty in one’s equations” … well … that’s obviously a pretty tall order.

For the sake of Dirac-style beauty *and* thermodynamics, the symplectic and Lindbladian structure of quantum mechanics must be preserved. For the (postulated) law of universal PTIME simulation to hold, the exponential dimensionality and linearity of Hilbert space must be abandoned. And this turns out to be not such a dire loss … no more NP-hard separability problem, for example!

These are sufficiently tricky issues, accompanied by sufficiently rich research opportunities, that as Scott observed, there is a huge amount of “talking past each other.”

Heck, no one has even mentioned the (fascinating!) non-collision singularities that arise when we combine good old Newtonian gravity, Euclidean space-time, and point mass trajectories. These singularities prove that we don’t need fields *or* quantum mechanics *or* non-Euclidean geometry to get into dynamical trouble!

Of course, it might turn out that the class of instances of a hard problem encountered in practice is actually in PTIME, because of some structural feature. But this can take a long time to establish. Bounded treewidth explained easy instances of query evaluation that had been known in practice for the previous 15 years. Practitioners didn’t stop to wait for the theory…

So I am somewhat surprised to see you focusing on EXP vs. PTIME. Or am I misinterpreting what you said?

A clock is not a computer, but I think their paper (and the discussion that followed, which continues to this day; see e.g. arxiv.org/abs/hep-th/0406260) is a good starting point for your issue(s).

>> doing an infinite amount of computation in a finite time using exponentially-faster steps

If you consider that dE * dt > h, you can make a case that hypercomputation (which requires dt -> 0) will require infinite energy, although t is a continuous parameter in q.m.

This argument is just handwaving and does not really prove anything yet, but it shows that continuous space-time + q.m. could provide the limitations you are looking for.
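The handwaving above can at least be turned into arithmetic. Here is a minimal sketch (assuming a hypothetical Zeno-style machine whose k-th step lasts 2^-k seconds) of how the relation dE * dt > h forces the machine’s energy budget to diverge:

```python
# Sketch of the energy-time argument: a Zeno machine performs its
# k-th step in dt_k = 2**-k seconds, so dE * dt > h says each step
# costs at least E_k ~ h / dt_k = h * 2**k joules.

H = 6.626e-34  # Planck constant, in J*s

def zeno_energy(n_steps):
    """Lower bound (joules) on the total energy of the first n_steps,
    where step k lasts 2**-k seconds."""
    return sum(H * 2**k for k in range(n_steps))

# The bound roughly doubles with every additional step, so an
# infinite number of steps in finite time needs unbounded energy.
for n in (10, 100, 200):
    print(n, zeno_energy(n))
```

Of course this is the same handwaving in numerical form, not a proof; it just makes vivid that the divergence is geometric in the number of steps.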

The Bekenstein bound would be another way to look at this issue, and again it would not be necessary to assume a discrete space-time à la LQG.
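To see why the Bekenstein bound does the job without any discreteness assumption, here is a small sketch that evaluates the standard form of the bound, I <= 2*pi*R*E / (hbar * c * ln 2), for a textbook example:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def bekenstein_bits(radius_m, energy_j):
    """Maximum number of bits storable in a sphere of radius radius_m
    containing total energy energy_j, per the Bekenstein bound:
    I <= 2*pi*R*E / (hbar * c * ln 2)."""
    return 2 * math.pi * radius_m * energy_j / (HBAR * C * math.log(2))

# Example: 1 kg of mass-energy (E = m * c**2) in a 1 m sphere caps
# out at roughly 2.6e43 bits: a finite number, even though the
# space-time it occupies is taken to be perfectly continuous.
print(bekenstein_bits(1.0, 1.0 * C**2))
```

The point is that the bound caps information content by radius and energy alone, so continuity of space-time buys a hypercomputer nothing.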

PS: I find it ironic that you wrote “I find it ironic that you and Wolfgang jumped on the weak claim as being potentially ill-defined”, because the question of whether a system can be sufficiently isolated from its environment contains the question of whether (and what kind of) a quantum computer is possible.

Yes, doing an infinite amount of computation in a finite time using exponentially-faster steps certainly *does* seem like a cheat to me! So the challenge for physics is to explain *why* it’s a cheat, by demonstrating exactly what goes wrong when you try to do it. “Black holes show up” is clearly an important part of the answer. But I want a fundamental principle that will convince even the most determined (but intellectually honest) hypercomputing enthusiast that there’s no way whatsoever around the black hole problem.

I do share your sense that in gravity there should be a principle that forbids this — a way of formalizing the case-by-case check that black holes show up and spoil your day — but it seems like the physical idea that you can’t solve arbitrarily hard problems in polynomial *time* is using a sense of “time” that’s wandered rather far from the CS version that you started with. But you’re the computer scientist, so I’ll believe whatever definition seems right to you 🙂

Or why couldn’t you leave your computer on earth and then accelerate *arbitrarily* close to the speed of light, if you wanted to perform an arbitrary amount of computation in only 1 second as experienced by you? Or leave your computer in free space, then go arbitrarily close to a black hole event horizon (but not past it), to get a similar time dilation effect?
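The relativistic-travel proposal can be costed out directly. The sketch below (an illustration, not a refutation; it ignores how you would ever pay the fuel bill) uses the relativistic kinetic energy formula (gamma - 1) * m * c^2 to show that every doubling of the time-dilation factor roughly doubles the energy needed:

```python
C = 2.99792458e8  # speed of light, m/s

def kinetic_energy(mass_kg, gamma):
    """Relativistic kinetic energy, (gamma - 1) * m * c**2, required
    to accelerate a mass to Lorentz factor gamma."""
    return (gamma - 1) * mass_kg * C**2

# To watch 2**n seconds of Earth-bound computation elapse in one
# second of your own proper time, you need gamma = 2**n, so the
# energy cost grows exponentially in n. E.g. for a 100 kg traveler:
for n in (10, 40, 80):
    print(n, kinetic_energy(100, 2**n), "joules")
```

So an exponential speedup via time dilation demands exponential energy, which is exactly where the black-hole / Schwarzschild trouble enters.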

Of course, NP-complete problems are kindergarten stuff compared to what you could solve by *these* means (EXP, triply-exponential time … the sky’s the limit)!

Now, as a physicist, I’m sure you can explain to me why the specific proposals I just mentioned wouldn’t work (yeah, I know: it’s because the energy needed would become so large that you’d exceed the Schwarzschild bound). But to my mind, that isn’t really a satisfying answer. I don’t want you to shoot down proposals for exploiting the continuity of spacetime to solve hard problems as I come up with them. Instead, I want a fundamental principle—something like relativity or the Second Law—that explains why the continuity of spacetime can *never* be used for that purpose. Is that so much to ask for? 🙂

Of course, if the continuity of spacetime were to *break down* at the Planck scale, then that would serve very nicely as such a fundamental principle. And that’s one of several considerations that makes such a breakdown attractive to me.

But as I said to Moshe, I’m not wedded to the notion of discrete spacetime. Maybe Lorentz invariance is an exact symmetry even at 10^{-33} centimeters. Maybe the idea of a smooth spacetime manifold resolving itself into a complicated combinatorial network when probed at the Planck scale, much like water resolves itself into individual molecules when probed under a microscope, is a nonstarter. But if so, then as I was saying before, I think we face the burden of finding an *alternative* fundamental principle that convincingly explains why hypercomputing is impossible. Of course, what looks like a burden to me might look like a wonderful research opportunity to someone else.