Nick Bostrom’s simulation argument is perhaps the most popular form of that sort of thing right now. Bostrom uses a funny argument to try to establish that there are more simulated people like us than non-simulated people like us, the idea being that if we accept the premises establishing that, we should guess that we’re among the simulated people. The argument relies on the assumption that the simulations in question are “ancestor simulations,” which is to say that each simulated universe is similar (in physics and scale) to the universe that contains it, and that the folks running the simulation are much like us.

So Bostrom-style simulationists can’t appeal to nigh-infinite godlike entities. Curiously, the “ancestor simulation” constraint makes that form of the hypothesis effectively falsifiable – all we have to do is prove that you can’t simulate a universe like our own (w.r.t. physics and scale) from within a universe like our own and the argument fails.

So one could take a sufficiently strong lower bound on the complexity of simulating certain physical interactions as a strike against the Bostrom-style simulation argument. If we’d need a universe’s worth of memory to effectively simulate a few handfuls of electrons, then universe simulations don’t scale at the rate they’d need to in order to establish that there are likely more simulated humans than non-simulated humans. Ergo, this general sort of result does potentially create problems for certain forms of the simulation argument.
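To make the scaling point concrete, here’s a toy back-of-the-envelope sketch (my own arithmetic, not from the comment): storing the full quantum state of n entangled two-level particles takes 2^n complex amplitudes, so the memory cost grows exponentially. The function name and the 16-bytes-per-amplitude figure are illustrative assumptions.

```python
BYTES_PER_AMPLITUDE = 16  # assume one double-precision complex number per amplitude

def state_vector_bytes(n_particles: int) -> int:
    # The number of amplitudes doubles with each added two-level particle,
    # so the memory needed grows as 2^n.
    return BYTES_PER_AMPLITUDE * 2 ** n_particles

for n in (10, 50, 100):
    print(f"{n} particles: {state_vector_bytes(n):.3e} bytes")
```

Already at a few hundred particles the byte count dwarfs the roughly 10^80 atoms in the observable universe, which is the sense in which “a few handfuls of electrons” could exhaust a universe’s worth of memory.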

Or maybe there’s no computation going on and the simulation is just a big stack of sheets of paper, each with a really long integer written on it, representing one possible state of the universe, and related states/sheets just “know” each other (forming consistent spacetimes).

A point that I find interesting (related to Scott’s #3) regarding matrix-type simulations of humans (even of a single human) is that they require quantum fault-tolerance. The same applies to even more “mundane” tasks such as predicting humans and teleporting humans. All these tasks seem to require quantum fault-tolerance.

(Three more points: the need for quantum fault-tolerance to emulate complex quantum systems does not require that these systems demonstrate superior (beyond-P) computation. The task of matrix-type simulation of individuals, as well as predicting and teleporting them, would be extremely difficult even with quantum fault-tolerance. And, finally, there is nothing special here about “humans”; the same applies to sufficiently interesting small quantum systems based on non-interacting bosons or qubits.)

- Do you think any continuous or infinite processes are observable in our universe? Do you think any continuous or infinite processes exist in our universe?

That depends entirely on what you mean by terms like “exist” and “are observable.” My best guess is that the history of the observable universe is well-described by quantum mechanics in a finite-dimensional Hilbert space (specifically, about exp(10^{122}) dimensions). If so, then the outcome of any measurement would be discrete; you’d never directly observe any quantity that was fundamentally continuous. But the amplitudes, which you’d need in order to calculate the probability of one measurement outcome versus another, would be complex numbers with nothing to discretize them.

- Do you think the amount of information needed to describe the history of our universe is finite?

Again, on the view above, the amount of information needed to describe *any given observer’s experience* is finite (at most ~10^{122} bits). And the amount of quantum information contained in the state of the universe (i.e., the number of qubits) is also finite. But the amount of classical information needed to describe the quantum state of the universe (something that no one directly observes) could well be infinite.
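A quick sketch of the arithmetic behind those figures (my own working, not the author’s): a Hilbert space of dimension exp(10^{122}) corresponds to log2(exp(10^{122})) qubits, which is on the order of 10^{122}, so finite. But a classical description of the state would need one continuous complex amplitude per dimension, which is where the potential infinity comes in.

```python
import math

# log2 of the dimension exp(10^122): number of qubits needed to realize
# a Hilbert space that large. Finite, and on the order of 10^122.
qubits = 1e122 * math.log2(math.e)
print(f"~{qubits:.2e} qubits")
```

Each of the exp(10^{122}) amplitudes is a complex number, so pinning down the exact quantum state classically could require infinitely many bits, even though the number of qubits is finite.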

- Is it important to consider historical quantum entanglement if it has already “collapsed”?

Is that question any different from just asking whether someone believes the Many-Worlds Interpretation? If not, then see my previous posts on the subject.

- Thoughts on retro-causality? Thoughts on a block universe?

A lot of my thoughts about such matters are in my Ghost in the Quantum Turing Machine essay. I guess some people might call the freebit picture “retrocausal,” in a certain sense, although it denies the possibility of cycles in the causal graph of the world.

In any case, the usual motivations for retrocausality—namely, to get rid of the need for quantum entanglement, and to restore the “time symmetry” that’s broken by the special initial state at the Big Bang—I regard as complete, total red herrings. Retrocausality doesn’t help anyway in explaining quantum phenomena (what’s a “retrocausal explanation” for how Shor’s algorithm works, that adds the tiniest sliver of insight to the usual explanation, and that wouldn’t if true imply that quantum computers can do much *more* than they actually can?). And I’ve never seen any reason why our universe *shouldn’t* have a special initial state but no special final state. Life is all about broken symmetries; a maximally symmetric world is also a maximally boring one.