In that case, perhaps the answer to Fermi’s paradox is that every advanced civilization figures out WAQ is true, and arranges circumstances so that all conscious entities in the civilization are obliterated in every branch of the universe except the ones with extremely unlikely but extremely beneficial properties (quantum paradise). For example, by constructing an antimatter bomb that is set to explode unless some extremely salient and complex question is answered by a giant quantum computer that “just happens” to form from random quantum fluctuations in the particles composing a nearby moon.

Since we don’t happen to live in one of those exceedingly unlikely paradise slices, we don’t see anyone “out there”. (Or, indeed, anyone stamping us out as exponential growth pushes them to expand at the speed of light).

At first I was floored by Moravec’s essay. I felt that he waved his hands at the statistical “why do our experiences follow physical laws” argument against the Dust Theory, but I also felt that he was very eloquent in describing and supporting the Dust Theory. He also thought of a few consequences that I hadn’t (and hadn’t seen before), e.g. the ethics of producing works of fiction in which conscious entities suffer.

(If you haven’t read Moravec’s essay [I know you have, Greg, I mean other people], I heartily recommend it. He really does lead into the ethics question above in a way that isn’t completely laughable, which is a pretty impressive accomplishment in itself IMO.)

However, now that I’ve slept on it, I am less impressed overall. The “ethics of fiction” section fails to point out that all such high-level “simulation summaries” are already played out in any sufficiently large arbitrary dataset. This means that the author’s/game player’s work affects these entities not a whit, which IMO makes the whole argument a mockery.

His hand-waving at the consistency of experience as an argument against the Dust Theory bothers me more, especially since he doesn’t treat the argument with the self-doubt it deserves.

That said, he certainly did a far better job illustrating the whole argument than I have.

To elaborate further on the ‘quantum computing as a prerequisite for consciousness’ argument, I find myself a little confused about the power of quantum computing versus Turing machines.

My understanding is that if you ignore measurement events, quantum theory is deterministic. Everything is just a big evolving dataset, with each successive state computable by a Turing machine from the previous one, with the caveat that the data may be continuous rather than discrete.
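To make that concrete, here is a minimal NumPy sketch (a toy single-qubit example, nothing more): between measurements the state update is a fixed unitary map that a classical program can apply step by step.

```python
import numpy as np

# Single qubit evolving under H = sigma_x (hbar = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
dt = 0.01
# Since sx @ sx = I, the propagator has the closed form
# exp(-i * sx * dt) = cos(dt) * I - i * sin(dt) * sx.
U = np.cos(dt) * np.eye(2) - 1j * np.sin(dt) * sx

psi = np.array([1, 0], dtype=complex)  # start in |0>
for _ in range(100):                   # deterministic update, step by step
    psi = U @ psi                      # evolve to t = 1.0

# Agrees with the exact solution |psi(t)> = cos(t)|0> - i sin(t)|1>.
exact = np.array([np.cos(1.0), -1j * np.sin(1.0)])
assert np.allclose(psi, exact)
assert np.isclose(np.linalg.norm(psi), 1.0)  # unitarity: norm is preserved
```

The continuous-data caveat shows up here only as floating-point truncation; the update rule itself is perfectly mechanical.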

Further, our typical experience in “the real world” is deterministic.

It also appears that the quantum world is just a composite of all possible “real worlds” (shown by Feynman’s path integral work) and that the real worlds are just a special kind of slice of the quantum world (shown by Hugh Everett’s PhD thesis).

My point with all of this is that in the pure quantum world, the most powerful computing engine seems to be a Turing machine. In the everyday real world, the same appears to be true. It is only the way the real world is a component of the quantum world that allows quantum computing, which is demonstrably more efficient than a Turing machine for some problems. (Grover’s algorithm searches an unstructured list of N items in O(N^1/2) queries, where a Turing machine requires about N/2 on average.)
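That quadratic gap is easy to check by classically simulating Grover’s amplitudes (a toy sketch, of course: the classical simulation costs O(N) work per step, and only the *query count* is quantum-small):

```python
import numpy as np

def grover_search(N, marked):
    """Simulate Grover's algorithm on an unstructured list of size N."""
    psi = np.full(N, 1 / np.sqrt(N))           # uniform superposition
    iters = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iters):
        psi[marked] *= -1                      # oracle: flip marked amplitude
        psi = 2 * psi.mean() - psi             # diffusion: inversion about mean
    return iters, abs(psi[marked]) ** 2        # queries used, success probability

N = 1 << 10                                    # N = 1024
queries, p = grover_search(N, marked=123)
print(queries, p)   # ~25 quantum queries vs ~512 expected classical queries
```

For N = 1024 this uses 25 oracle queries and finds the marked item with probability above 0.99, where a classical search expects about 512 queries.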

My further point is that if you make the rather large assumption that consciousness requires quantum computing, then perhaps you could apply the Dust Theory to the quantum world, and consciousness then falls out, but only within those thin slices (which appear to have some statistical normalcy forced on them, which invalidates the statistical argument against the Dust Theory).

OK, here’s a simulation-related technical question. Any simulation of an open quantum system is subject to a certain invariance … the question is, what is the name of that invariance?

Here I am talking about the invariance indexed by Nielsen and Chuang under the unwieldy name “Theorem: unitary freedom in the operator-sum representation.”

This is an elegant mathematical theorem, *and* a fundamental law of nature, *and* a very powerful tool for optimizing quantum simulations.
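The theorem is also easy to verify numerically. A minimal NumPy sketch (the phase-flip channel and the particular rotation are my own illustrative choices, not anything from Nielsen and Chuang): two sets of Kraus operators related by a unitary matrix implement exactly the same channel.

```python
import numpy as np

def apply_channel(kraus, rho):
    """Operator-sum representation: rho -> sum_k E_k rho E_k^dagger."""
    return sum(E @ rho @ E.conj().T for E in kraus)

# Phase-flip channel with two Kraus operators.
p = 0.25
I2 = np.eye(2)
Z = np.diag([1, -1]).astype(complex)
E = [np.sqrt(1 - p) * I2, np.sqrt(p) * Z]

# Mix them with an arbitrary unitary u: F_j = sum_k u[j, k] * E_k.
theta = 0.7
u = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
F = [sum(u[j, k] * E[k] for k in range(2)) for j in range(2)]

rho = np.array([[0.6, 0.3], [0.3, 0.4]], dtype=complex)  # arbitrary test state
assert np.allclose(apply_channel(E, rho), apply_channel(F, rho))
```

In a simulation this freedom is exactly what lets you choose whichever Kraus set is cheapest to apply, without changing the physics.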

Such a powerful concept deserves a short, cool name.

What do your professors call it? What do *you* call it?

The smallest of the spheres we made last night are about 50 nm in diameter. Each sphere is doped with about 55,000 electron spins. These individual spheres are intended to serve as imaging targets for magnetic resonance force microscopy (MRFM) experiments, where they will be placed in a large magnetic gradient (a field variation of about 500 gauss across the sphere) and subjected to complex RF pulses, while at the same time the electron spins interact with each other and with the neighboring nuclear spins.

The above parameters are not too different from a quantum computer, really!

Needless to say, we are quite passionately interested in simulating the quantum dynamics of this imaging process: such simulations are the *sine qua non* of modern system engineering.
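The full 55,000-spin problem lives in a Hilbert space of dimension 2^55,000, so any practical simulation must truncate drastically. As a toy illustration only (two spins, made-up parameters, crudely modulated RF drive), here is the kind of time-stepped propagator such a simulation is built from:

```python
import numpy as np

def propagator(H, dt):
    """exp(-i * H * dt) for Hermitian H, via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * w * dt)) @ v.conj().T

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sz = np.diag([1, -1]).astype(complex) / 2
I2 = np.eye(2)

# Two spins in a field gradient (different Zeeman shifts) plus a z-z
# coupling; the numbers are illustrative, not the experimental values.
w1, w2, J = 1.00, 1.05, 0.02
H0 = w1 * np.kron(sz, I2) + w2 * np.kron(I2, sz) + J * np.kron(sz, sz)
Hrf = 0.1 * (np.kron(sx, I2) + np.kron(I2, sx))  # transverse RF drive

psi = np.zeros(4, dtype=complex)
psi[0] = 1.0                                     # both spins "up"
dt = 0.05
for step in range(400):
    H = H0 + Hrf * np.cos(w1 * step * dt)        # crude pulse modulation
    psi = propagator(H, dt) @ psi

print(np.round(np.abs(psi) ** 2, 3))             # final spin populations
```

Scaling this honestly to thousands of coupled spins is precisely the engineering challenge.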

From Greg Egan’s point of view, if humanity ever does achieve the ability to create self-aware quantum simulations, this ability will likely not arise *de novo*, but rather, will grow incrementally out of simulations that are motivated by purely prosaic considerations of quantum system engineering.

For example, it wouldn’t be all that surprising if the TOE and Big Bang could be specified in less than 2^10 bits. Generating consciousness in other ways, such as putting in the necessary structures “by hand” in the input data, would involve many more bits.

I have no idea, though, how to justify a measure on the space of all bit strings that favours short ones. All I can say is that the version of the simulation argument that works best for me is the one where an infinite number of monkeys are typing on computer keyboards, and there’s a hot key for “Compile and execute everything that was typed so far; ignore all further input”. There is no interrupt key. That would explain our universe perfectly.
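For what it’s worth, the monkey version does pin down a concrete measure: under uniform random typing, any one fixed k-bit program is the one that gets compiled with probability 2^-k, so shorter programs are exponentially favoured. A toy Monte Carlo check (pure illustration):

```python
import random

random.seed(0)

def hit_rate(k, trials=200_000):
    """Fraction of typing sessions whose first k random keystrokes
    reproduce one fixed k-bit program (the hot key then compiles it)."""
    target = random.getrandbits(k)
    hits = sum(random.getrandbits(k) == target for _ in range(trials))
    return hits / trials

for k in (4, 8, 12):
    print(k, hit_rate(k), 2 ** -k)   # the empirical rate tracks 2^-k
```

This only pushes the question back one level, of course: it explains the 2^-length weighting, not why monkeys-plus-keyboards is the right prior in the first place.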

The folks on this thread have broad-ranging interests in physics, and in particular many will recognize Wheeler’s aphorism “Matter tells space how to curve, and curved space tells matter how to move.”

Wheeler’s aphorism sums up a huge 20th-century cognitive jump in physics, in which the static Newtonian state-space was replaced by a dynamical Riemannian state-space.

In the process, a lot of cherished conservation laws were upgraded. For example, energy and momentum are still conserved … *provided* gravitational radiation is taken into account.

Well, why shouldn’t old-fashioned Hilbert-space quantum mechanics be subject to a conceptually analogous 21st Century dynamical upgrade? In the sense that “Operators tell quantum state-space how to curve; and curved quantum state-space tells operators how to commute.”

The strategy here is not to make gravity look more like QM, but to make QM look more like gravity.

Of course, this makes for mathematics that is plenty hard. And the observable effects are presumably so small, relative to conventional QM, that definitive experiments are hard to conceive, much less accomplish.

Still, the idea that quantum state-space is perfectly linear seems as implausible as (in retrospect) the idea that Newtonian state-space is perfectly Cartesian.

Plus, the Machian principle that “everything that exists is dynamical” is IMHO very effective at identifying enjoyable opportunities in physics and mathematics.

So even if the sysop of *our* universe compiled the operating system with the flag “--Hilbert linear”, there is no reason that *we* can’t compile our own Egan-style simulations with “--Hilbert dynamical”. 🙂

I think the universe we live in provides strong empirical evidence against the “pure” Dust Theory, because it is far too orderly and obeys far simpler and more homogeneous physical laws than it would need to, merely in order to contain observers with an enduring sense of their own existence. If every arrangement of the dust that contained such observers was realised, then there would be billions of times more arrangements in which the observers were surrounded by chaotic events, than arrangements in which there were uniform physical laws.
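This counting point can be made concrete with a toy model (entirely illustrative): fix an observer’s remembered past, let the dust realise every possible continuation of it, and count how many continuations stay lawful.

```python
from itertools import product

m, n = 4, 12                     # observed past: m bits; full history: n bits
past = (1, 0, 1, 0)              # the observer's lawful-looking memory so far

# The "dust" realises every continuation of that past equally.
continuations = [past + rest for rest in product((0, 1), repeat=n - m)]

# Call a history "lawful" if the simple alternating pattern just continues.
lawful = [h for h in continuations
          if all(h[i] != h[i + 1] for i in range(n - 1))]

print(len(lawful), len(continuations))   # 1 lawful history out of 2^(n-m) = 256
```

One lawful continuation against 256 chaotic ones, and the ratio worsens exponentially with history length; yet our experience keeps landing in the lawful sliver.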

But on the other hand, given that we don’t know how to characterise the class of algorithms that yield subjective experience, I have no great confidence in these kinds of counting arguments. There are some fairly persuasive arguments that consciousness could not exist without a certain level of consistency in physical laws (in the traditional sense), so it *might* be possible to leverage that kind of argument into something that carries through to algorithms, if we could show that in order for anyone to be conscious at all, it’s most likely that the algorithm creating them would be effectively simulating something like a spacetime with a set of universal physical laws.

It’s very easy for us to *imagine* a “game-world” scenario, where the physics can be very weird and bug-ridden without annihilating the characters completely … but that might be a false impression created by the fact that all our current virtual environments don’t need to be doing anything to support consciousness. If consciousness is possible under algorithm A, I’m sure it’s always *possible* to splice in a patch that turns the environment from high-resolution quantum gravity into Super Mario Brothers, without destroying the conscious entity itself, but perhaps the statistics are actually stacked against that kind of thing after all. I just don’t know … and at this point in history I don’t think anyone else does either.


I wrote a novel about this many years ago (Permutation City)…

I apologize for the digression into philosophy, but I read Permutation City several years ago. It led to some minor panic and some deeper thought on my part. I finally did think I saw a refutation of the notion that “software can be conscious implies all possible conscious algorithms run without a substrate”.

It’s not a refutation of the logic, but rather one that demonstrates some assumption must be incorrect.

Assumptions:

1) software can be conscious

2) two bit-identical representations of a conscious entity are indistinguishable

3) the algorithm for a conscious entity is valid with a wide array of input data (at least everything that is physically possible)

4) Every possible algorithm (or at least a great many of them) is simulated, given an appropriate filtering algorithm applied to ordinary physical processes and matter arrangements.

Given those assumptions, there should be an indefinitely large number of simulations that include a conscious entity identical to me as I am now, each with wildly different input data. If you select one of those sets of input data arbitrarily, it is likely to be profoundly different from one’s normal experiences. Since one can’t distinguish between different ‘instances’ of the conscious entity, statistically speaking my next experience (and indeed my recent experiences) should be radically different from anything that falls within the norms of our universe.

Since my last few minutes have been consistent with the pretty narrow range of input data that comprises “normal experiences in our universe”, and this is fantastically unlikely statistically if you accept the assumptions, then one of the assumptions must be wrong.

To bring this back to QM, perhaps Penrose is right and there is a quantum component necessary to consciousness. Furthermore, perhaps this quantum component is not classically computable.

Without something preventing either

a) classical simulation of consciousness

b) separation of a consciousness from its surrounding input data

it seems to me that everyone’s experiences should be almost random, based on the logic played out in Permutation City.