## Your yearly dose of is-the-universe-a-simulation

Yesterday Ryan Mandelbaum, at Gizmodo, posted a decidedly tongue-in-cheek piece about whether or not the universe is a computer simulation.  (The piece was filed under the category “LOL.”)

The immediate impetus for Mandelbaum’s piece was a blog post by Sabine Hossenfelder, a physicist who will likely be familiar to regulars here in the nerdosphere.  In her post, Sabine vents about the simulation speculations of philosophers like Nick Bostrom.  She writes:

> Proclaiming that “the programmer did it” doesn’t only not explain anything – it teleports us back to the age of mythology. The simulation hypothesis annoys me because it intrudes on the terrain of physicists. It’s a bold claim about the laws of nature that however doesn’t pay any attention to what we know about the laws of nature.

After hammering home that point, Sabine goes further, and says that the simulation hypothesis is almost ruled out, by (for example) the fact that our universe is Lorentz-invariant, and a simulation of our world by a discrete lattice of bits won’t reproduce Lorentz-invariance or other continuous symmetries.

In writing his post, Ryan Mandelbaum interviewed two people: Sabine and me.

I basically told Ryan that I agree with Sabine insofar as she argues that the simulation hypothesis is lazy—that it doesn’t pay its rent by doing real explanatory work, doesn’t even engage much with any of the deep things we’ve learned about the physical world—and disagree insofar as she argues that the simulation hypothesis faces some special difficulty because of Lorentz-invariance or other continuous phenomena in known physics.  In short: blame it for being unfalsifiable rather than for being falsified!

Indeed, to whatever extent we believe the Bekenstein bound—and even more pointedly, to whatever extent we think the AdS/CFT correspondence says something about reality—we believe that in quantum gravity, any bounded physical system (with a short-wavelength cutoff, yada yada) lives in a Hilbert space of a finite number of qubits, perhaps ~10^69 qubits per square meter of surface area.  And as a corollary, if the cosmological constant is indeed constant (so that galaxies more than ~20 billion light years away are receding from us faster than light), then our entire observable universe can be described as a system of ~10^122 qubits.  The qubits would in some sense be the fundamental reality, from which Lorentz-invariant spacetime and all the rest would need to be recovered as low-energy effective descriptions.  (I hasten to add: there’s of course nothing special about qubits here, any more than there is about bits in classical computation, compared to some other unit of information—nothing that says the Hilbert space dimension has to be a power of 2 or anything silly like that.)  Anyway, this would mean that our observable universe could be simulated by a quantum computer—or even for that matter by a classical computer, to high precision, using a mere ~2^(10^122) time steps.
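The ~10^69 and ~10^122 figures can be sanity-checked with a few lines of arithmetic. The sketch below (an editorial illustration, not part of the post) assumes the Bekenstein-Hawking entropy of one nat per four Planck areas, and a cosmological event horizon of roughly 16 billion light years; both inputs are order-of-magnitude only.

```python
import math

# Standard physical constants (SI units).
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # Newton's constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

# Planck length squared, ~2.6e-70 m^2.
l_p2 = hbar * G / c**3

# Bekenstein-Hawking: entropy = area / (4 l_p^2) nats; divide by ln 2 for qubits.
qubits_per_m2 = 1.0 / (4 * l_p2 * math.log(2))

# Assumed cosmological event horizon radius: ~16 billion light years.
r = 16e9 * 9.461e15                 # metres
area = 4 * math.pi * r**2           # horizon area, m^2

total_qubits = qubits_per_m2 * area
print(f"qubits per m^2: ~10^{int(math.log10(qubits_per_m2))}")   # ~10^69
print(f"total qubits:   ~10^{int(math.log10(total_qubits))}")    # ~10^122
```

The ~2^(10^122) figure for a classical simulation is then just the dimension of a Hilbert space of that many qubits.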

Sabine might respond that AdS/CFT and other quantum gravity ideas are mere theoretical speculations, not solid and established like special relativity.  But crucially, if you believe that the observable universe couldn’t be simulated by a computer even in principle—that it has no mapping to any system of bits or qubits—then at some point the speculative shoe shifts to the other foot.  The question becomes: do you reject the Church-Turing Thesis?  Or, what amounts to the same thing: do you believe, like Roger Penrose, that it’s possible to build devices in nature that solve the halting problem or other uncomputable problems?  If so, how?  But if not, then how exactly does the universe avoid being computational, in the broad sense of the term?

I’d write more, but by coincidence, right now I’m at an It from Qubit meeting at Stanford, where everyone is talking about how to map quantum theories of gravity to quantum circuits acting on finite sets of qubits, and the questions in quantum circuit complexity that are thereby raised.  It’s tremendously exciting—the mixture of attendees is among the most stimulating I’ve ever encountered, from Lenny Susskind and Don Page and Daniel Harlow to Umesh Vazirani and Dorit Aharonov and Mario Szegedy to Google’s Sergey Brin.  But it should surprise no one that, amid all the discussion of computation and fundamental physics, the question of whether the universe “really” “is” a simulation has barely come up.  Why would it, when there are so many more fruitful things to ask?  All I can say with confidence is that, if our world is a simulation, then whoever is simulating it (God, or a bored teenager in the metaverse) seems to have a clear preference for the 2-norm over the 1-norm, and for the complex numbers over the reals.

### 169 Responses to “Your yearly dose of is-the-universe-a-simulation”

1. Sid Says:

It’s ironic that Sabine Hossenfelder picks Bostrom to target her ire. Bostrom is the more careful thinker in this domain. From the abstract of Bostrom’s paper:

> This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation.

Personally I find (1) and (2) to be more plausible than (3).

2. Izaak Meckler Says:

> Sabine goes further, and says that the simulation hypothesis is almost ruled out, by (for example) the fact that our universe is Lorentz-invariant, and a simulation of our world by a discrete lattice of bits won’t reproduce Lorentz-invariance or other continuous symmetries.

I think a simulationist would probably argue that “computers” in the simulating universe need not operate using discrete bits or anything similar to what our computers use. Of course now we really seem to be in the realm of the unfalsifiable, modulo some sign from our almighty simulators…

3. Scott Says:

Izaak #2: Yes, that is what they might argue, and of course it does push the idea even deeper into the unfalsifiable realm. My point, though, was that even if the simulators were limited to discrete bits—and certainly if they were limited to qubits—what we’ve learned about quantum gravity and cosmology in the last few decades (e.g., the Bekenstein bounds, AdS/CFT, and the positive cosmological constant) is strongly consistent with the hypothesis that there still wouldn’t be an impediment to simulating our universe, or at least that part of it from which we can receive signals.

4. FeepingCreature Says:

Weird question from a layman.

When you say that the observable universe can be modelled as a system of 10¹²² qubits, does this account for the fact that most of the space in the observable radius is effectively empty, or is it an upper bound? How many qubits would it take to, say, make a simulation that merely has a good chance of reproducing the experience of a current-day human, given the particular history of our universe? Or if it’s mostly a question of energy, not volume, does chaos completely torpedo this approach, by making the present-day physics dependent on the high-energy interactions early on? In that case, would it be easier for a future simulator to work backwards from the future state it has access to, in order to recover its past? Could it recover data that has been lost to the environment?

5. Scott Says:

FeepingCreature #4: It’s an upper bound, possibly an extremely loose one, but one that we can have some confidence in because it comes from quantum gravity. (And thus, it accounts not only for the positions and momenta of known particles, but for unknown ones, and even for whatever degrees of freedom are in the spacetime metric itself.)

Also, I wasn’t talking about a simulator within our universe, who has to laboriously reconstruct its state, or part of its state. That raises a whole different set of issues, some of which I tried to explore in The Ghost in the Quantum Turing Machine. Rather, I was talking (as the people Sabine was responding to do) about hypothetical simulating aliens in a metaverse. Such godlike entities wouldn’t need to reconstruct the initial conditions of our universe because they choose the initial conditions.

6. Unknown Capitalist Pimp Says:

Forget about quantum gravity for a second, since there are so many unknowns, and consider pure Yang-Mills theory. Even there it’s important to distinguish two questions. 1. Can the universe be defined as the continuum limit of a lattice model or some other discrete, computational, well-defined mathematical structure (the conjecture is yes)? 2. Are the laws of the universe governed by a lattice model in this sequence (for which the lattice regulator breaks Lorentz invariance) rather than by the continuum model? To harness hypercomputation à la Penrose you might have to answer no to the first question, but to falsify anthropomorphic simulation nonsense you should just need to argue that the second question has a negative answer.

7. Theophanes E. Raptis Says:

Most such discussions forget to add an important association not merely with the ‘Cartesian Demon’ problem but mostly, with the much older ‘Problem of Evil’: try proving that the particular ‘simulation’ won’t end up with the actors being nuked on stage!

8. Scott Says:

UCP #6: The recent work of Jordan, Lee, and Preskill is relevant here; it shows that at least φ⁴ theory, as well as a pure interacting fermionic theory (but not yet gauge theories), can be efficiently simulated on a standard quantum computer. Of course you have to choose UV and IR cutoffs (i.e., a lattice size and lattice spacing), but with appropriate choices you could simulate a given scattering problem, or in principle a whole universe, so accurately that no one could tell the difference. This is an intrinsic statement about the computational complexity of QFTs; it doesn’t require any anthropomorphizing about “simulators.”

Of course you could argue that such a discretized simulation, while unfalsifiable, is “unnatural.” But there’s a deeper point: it’s only in quantum gravity that the number of qubits needed to describe a bounded region becomes finite rather than infinite. So in this discussion, we can’t necessarily restrict attention to pure QFT, because one of the main stories in theoretical physics for the last quarter-century has been the way QFT hides these vast dimensionality reductions (both from infinite to finite, and from D to D-1 spacetime dimensions) once gravity gets in on the act. And while, yes, there are many unknowns in quantum gravity, the Bekenstein upper bound on entropy isn’t normally considered one of them, because it can be derived purely from the Second Law carefully applied to thought experiments involving observers outside black holes.
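The “choose a lattice size and lattice spacing” step has a simple classical analogue. The sketch below (an editorial illustration only, and emphatically not the Jordan-Lee-Preskill algorithm, which is a quantum simulation of the interacting quantum theory) integrates the classical 1D φ⁴ field equation on a periodic lattice, where the number of sites plays the role of the IR cutoff and the spacing the UV cutoff:

```python
import numpy as np

# Classical 1D phi^4 field on a periodic lattice, leapfrog-integrated.
L, a, dt = 64, 0.1, 0.01       # sites (IR cutoff), spacing (UV cutoff), time step
m2, lam = 1.0, 0.5             # mass^2 and quartic coupling (arbitrary toy values)

phi = np.sin(2 * np.pi * np.arange(L) / L)   # smooth initial field profile
pi  = np.zeros(L)                            # conjugate momentum, initially zero

def force(phi):
    # Discrete Laplacian minus the potential gradient: d^2 phi/dx^2 - m^2 phi - lam phi^3
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / a**2
    return lap - m2 * phi - lam * phi**3

for _ in range(1000):          # leapfrog (kick-drift-kick), a symplectic integrator
    pi  += 0.5 * dt * force(phi)
    phi += dt * pi
    pi  += 0.5 * dt * force(phi)

# Total lattice energy; leapfrog keeps it nearly conserved over the run.
energy = np.sum(0.5 * pi**2 + 0.5 * ((np.roll(phi, -1) - phi) / a)**2
                + 0.5 * m2 * phi**2 + 0.25 * lam * phi**4) * a
print(f"energy after 1000 steps: {energy:.3f}")
```

Shrinking `a` while growing `L` approaches the continuum; any fixed choice breaks Lorentz invariance, which is exactly the regulator issue at stake in this thread.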

9. Sabine Says:

Scott,

I suggested to Ryan that he talks to you. But I think you misunderstood the point I was trying to make.

I’m not claiming it’s not possible to simulate our observations on a (quantum) computer. My point is that it’s difficult to show it can be done, and nobody presently knows how to do it. Whether you can even do it from qubits is an active area of research, as you say, and that’s leaving aside the question of whether speaking of a programmer is still science or religion.

If someone goes and claims we live in a simulation they first better demonstrate they have a way to solve all these problems that the people at those conferences you’re attending are struggling with.

Best,

B.

10. Neel Krishnaswami Says:

Maybe I’m missing your point, but don’t the Bekenstein bound and the finite size of the observable universe mean that the physical Church-Turing thesis is vacuous? If the universe is well-described by a finite number of bits, then Turing-equivalent models of computation are ludicrous overkill for describing physical processes, sort of like trying to use large cardinals to talk about the ASCII character set.

It’s not my area, but there is quite a bit of interesting work on topological automata that might be able to cope with Sabine Hossenfelder’s criticisms, though obviously making that argument seriously involves doing actual work.

11. Jon K. Says:

Hi Scott,

I have not plugged my movie on your blog yet, but maybe this is an appropriate opportunity… The protagonist in “Digital Physics” (on Amazon Prime, Vimeo, iTunes) explores some of these simulation hypothesis ideas, from Wolfram’s causal networks, to ‘t Hooft and Fredkin’s CA models, as well as Chaitin’s Algorithmic Information Theory. I do think these are interesting ideas worthy of thought, although there are many people who would say they don’t even count as science. Both points of view are expressed in the movie, and the viewer is left to decide whether the main character had something of value to contribute to the conversation or was merely a crank.

Please consider checking out the movie. You may relate to the fact that this crazy character, who was kicked out of school after breaking the particle accelerator, is trying to get his not-completely-fleshed-out ideas validated by someone in the academic establishment. Any feedback you have to offer would be greatly appreciated, even if it’s along the lines of “this is the kind of person a scientist who runs a popular blog must deal with on a daily basis.” 🙂

Thanks for your consideration!

Jon

P.s. Thanks for recommending “QED” by Feynman. Great book, and now I have a better understanding of Feynman diagrams!

12. Cesna Says:

Another viewpoint on the is-the-universe-a-simulation question comes from computer science. If one builds a computer to simulate a single qubit/electron/particle, the computer is itself composed of more qubits/electrons/particles than the element it simulates. It also takes more than real time to simulate it. In other words, you can’t build a computer of x particles that simulates x or more particles in the same detail and at the same speed.

I guess that there is a physical limit to how size- and speed-optimal a computer can be for simulating a collection of elements, relative to its own size. This means that there are physical limits which will very severely restrict some combination of characteristics of the simulation: namely its size (relative to the computer), its flow of time compared to the computer’s, or its level of detail compared to the computer’s elements. A civilization is therefore fundamentally unable to simulate its own existence, which refutes Bostrom’s original argument. I think there is a physically untenable science-optimism in that paper.

13. Cesna Says:

If the hypothesis in my previous post is not true, this would mean that you can use a computer of x particles to simulate a computer of x+y particles, which can then simulate a computer of x+2*y particles, etc., and you could effectively simulate an arbitrarily complex computer with your computer of just x particles.

14. jonas Says:

Meanwhile, Viktor T. Toth argues that the universe can’t be *efficiently* simulated, because the rules of physics are such that they require quantum computation to simulate efficiently, but at the same time they don’t allow you to build universal quantum computers. This argument goes straight against what you’ve been saying, namely that if there’s a physical reason why we can’t build quantum computers, then that would be interesting, because then we could simulate the universe efficiently with a classical computer. I don’t buy his argument, but I still wonder what your opinion is about this.

15. Mateus Araújo Says:

I think Scott’s point in his next-to-last paragraph has been lost in translation.

The point is that all the physical laws we know can be simulated on a classical computer, albeit rather inefficiently: they are just differential equations being run on an exponentially large Hilbert space. So the question of whether QFT can be efficiently simulated on a quantum computer becomes irrelevant (for this purpose, I mean; the question is otherwise very interesting), unless we want to play some weird theology about the computational resources of our simulator overlords.
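Mateus’s point, that known physics is “just” differential equations on an exponentially large Hilbert space, can be sketched directly. In the toy below, the Hamiltonian is a random Hermitian matrix standing in for any real one (purely an illustrative assumption), and the integrator is deliberately crude:

```python
import numpy as np

n = 8
dim = 2**n                      # 2^n amplitudes: the exponential cost is the whole point
rng = np.random.default_rng(42)

# Random Hermitian "Hamiltonian" (a stand-in, not any physical model).
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2

psi = np.zeros(dim, dtype=complex)
psi[0] = 1.0                    # start in the basis state |00...0>

# Crude explicit-Euler integration of i d(psi)/dt = H psi,
# renormalizing each step because Euler steps aren't exactly unitary.
dt, steps = 1e-3, 1000
for _ in range(steps):
    psi = psi - 1j * dt * (H @ psi)
    psi /= np.linalg.norm(psi)

print(dim, round(float(np.linalg.norm(psi)), 6))   # 256 1.0
```

Each extra qubit doubles `dim`, so memory and time grow exponentially: inefficient but perfectly computable, exactly as the comment says.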

16. SomeGuyOnTheInternet Says:

This has already been shown by XKCD 😉
https://xkcd.com/505/ – A Bunch of Rocks

17. Mateus Araújo Says:

I’m having trouble imagining what an uncomputable universe would even be. Has Penrose (or anyone else, for that matter) actually tried building an uncomputable toy theory? How could that work? Would the behaviour of the particles be determined by solving the halting problem for a Turing machine ‘n’ instead of a differential equation? How could we ever distinguish this behaviour from simple randomness?
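For what it’s worth, one can at least write down a cutoff approximation to such a toy law. In the sketch below the rule “bit n is 1 iff program n halts” is imitated with a made-up, decidable enumeration of programs (an assumption purely for illustration; with a genuine enumeration of Turing machines the exact rule would be undecidable, leaving only step-bounded approximations like this one):

```python
# Hypothetical "enumeration of programs": program n halts after n**2 steps
# if n is even, and never halts if n is odd. (A decidable stand-in; a real
# enumeration of Turing machines would make the law below undecidable.)
def toy_program_halts_within(n, steps):
    return n % 2 == 0 and steps >= n**2

# The "physical law": bit n = 1 iff program n halts. Any simulation can
# only ever evaluate it with a finite step cutoff, as done here.
def approximate_law(n, cutoff):
    return 1 if toy_program_halts_within(n, cutoff) else 0

bits = [approximate_law(n, cutoff=10**6) for n in range(10)]
print(bits)   # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
```

A finite cutoff can only err by calling a slow halter a non-halter, and from the inside, just as Mateus asks, the resulting bit string would be hard to distinguish from a fixed random-looking sequence.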

18. Pascal Says:

Sid #1: a 4th possibility is that our posthuman followers will run simulations of their evolutionary histories, but that the simulated humans will not have conscious experiences. In that respect they would be like the characters in our present-day videogames who, and I hope that is an uncontroversial statement (?), are also deprived of conscious experiences.

19. Victor Adam Says:

On March 7th, 2014, you issued a three-year Luboš ban, saying: “In March 2017, I’ll reassess my Luboš policy.” It is now March 2017, and Luboš probably has things to say on the topic of those “mere theoretical speculations”.

20. Stephen Jordan Says:

Someday we could discover that physics cannot be simulated by quantum computers in polynomial time. Does this possibility render the simulation hypothesis falsifiable? I don’t really think so because, unless the universe were actually uncomputable, it could still be simulated on some sufficiently large computer. Simulation advocates could always argue that the simulation is being run on a quantum or classical computer that is simply exponentially larger than our universe, or perhaps on some other more powerful type of computer such as a postselected quantum computer.

21. Peter Morgan Says:

Falsifiability is at least problematic, which is a weak enough statement to be uncontentious for many philosophers of science. Thinking in terms of falsifiability might perhaps be replaced by an assessment of whether a class of models makes creating increasingly accurate, tractable, usable approximations to empirical data an easy or a hard process. The aim seems more to make it easy to simultaneously navigate the parameter spaces of a class of models and of a class of experiments, and to understand both navigations and the distance between them well enough to make it routine to engineer new patterns.

22. Scott Says:

Stephen #20: Your point is valid, of course, but I suspect you might be underselling your paper. Before you spelled out in explicit detail what the inputs and outputs are, etc., would there even have been a good reference to point a skeptic to who worried that φ⁴ theory was uncomputable (as the above commenter seems to have worried)?

23. Jr Says:

“All I can say with confidence is that, if our world is a simulation, then whoever is simulating it (God, or a bored teenager in the metaverse) seems to have a clear preference for the 2-norm over the 1-norm, and for the complex numbers over the reals.”

Well, the 2-norm is a real number. What’s more, the various parameters of our physical theories all seem to be real numbers, e.g. the fine structure constant or various mass ratios.

24. fcsuper Says:

There is some suggestion that reality doesn’t exist unless observed. Something is there, but it’s not reality until it is realized by an observer. At its most fundamental level, this sounds a lot like procedural generation, which is already a known trick used to minimize the resources required to support a simulation.

25. Scott Says:

Victor Adam #19: You must have missed the comment section where I extended the ban indefinitely far into the future! (Admittedly, I’m on my phone right now and can’t find that comment either—can someone else?)

But, yes, I’ve “heard” that Lubos, who’s rightly reviled for having called Sabine a “Marxist whore” and every other misogynistic and derogatory name imaginable, had relatively nicer things to say on his blog about her disdain for the simulation hypothesis.

26. Scott Says:

Mateus #17: I can tell you that what Penrose always does, when asked about this, is to point to a speculative paper by Geroch and Hartle from the 1980s about how 4-manifold homeomorphism is known to be an undecidable problem, and some quantum gravity proposals involve sums over 4-manifolds, so maybe that means quantum gravity is undecidable as well.

Personally, I always found that pretty unsatisfying, even considered in isolation, but particularly since, as I mentioned above, all the recent insights from AdS/CFT seem to be pointing in the opposite direction.

It’s like, because quantum gravity is so poorly understood, someone could argue that it’s the most plausible place for uncomputability to be hiding. But ironically, as I said, quantum gravity seems more likely to do the opposite, and serve as the fundamental explanation for why nature is computable! (E.g. there are all sorts of hypercomputers that naïvely seem possible, until you realize that they would require so much energy concentrated in so small a region that by the Bekenstein bound, your computer would collapse to a black hole.)

27. Jr Says:

Scott #8: And how confident can we be in the Second Law holding for observers near black holes? More or less certain than a 19th-century physicist would be of Galilean invariance?

28. Scott Says:

jonas #14: It’s hard for me to comment on Viktor T. Toth’s view, until you or someone else tells me his actual argument! I.e. what class of problems does he think is efficiently solvable in the physical world then—presumably something larger than BPP but smaller than BQP? And, of course, on what ground?

29. Mateus Araújo Says:

Scott #26: I guess you mean the paper Computability and physical theories. Indeed, I also find it very unsatisfactory. They show that one way of calculating expectation values in an ill-defined theory of quantum gravity requires solving an undecidable problem (4-manifold homeomorphism). For me this is pretty good evidence that this is not how expectation values are going to be calculated in quantum gravity. There is nothing saying that there isn’t a computable way of doing the same calculation.

I’d still be happier with a toy theory where solving undecidable problems is actually necessary, just to clear up the conceptual issues.

30. Eliezer Yudkowsky Says:

I was disturbed when people started joking about Trump’s victory being proof we’re living in a simulation, and wrote this post arguing against the joke:

31. Gerben Says:

Hi Scott, so there is to my knowledge one example from theoretical physics that seems to argue against a “computable/simulated” universe, and that’s the chiral anomaly. Take the U(1) 3L-4L-5R theory in 2D, that is, two left-handed fermions of charges 3 and 4 and one right-handed fermion of charge 5. This turns out to be anomaly-free: if you go through a U(1) instanton, the two left-handed fermions go 3 and 4 steps (respectively) up the ladder and the right-handed fermion goes 5 steps down. Hence 3 fermions of charge 3, 4 fermions of charge 4, and 5 anti-fermions of charge -5 are created, so the total charge created is 3·3 + 4·4 - 5·5 = 0 and it’s anomaly-free. But there is a net flow of eigenstates, 3+4-5=2, that went from negative to positive. No finite spectrum could ever stay the same under a net spectral flow, and hence it must break global U(1) gauge symmetry. Any ideas on that?

32. Ethan Says:

Well in a computed scenario, 0 wouldn’t really exist as nothing.

33. simplicio Says:

Is there a definition of what “simulation” means in this context? I usually think of a simulation as a simplified model of a more complicated object. But Scott’s 10^122-qubit simulation seems to just be a discretized model of reality. It doesn’t seem like it would work as a simplification of anything; it seems as complex as, well, reality. As Scott himself notes (“the qubits would in some sense be the fundamental reality”).

But then I have trouble seeing in what sense such a thing would be a “simulation,” at least as I usually use the word.

34. Gerben Says:

Oh, just as an addendum. If a finite system breaks the global U(1) gauge symmetry, it means it is anomalous and inconsistent as a gauge theory. We know the standard model “is” a chiral (anomaly-free) gauge theory which has this non-zero spectral flow. So it seems the SM cannot be captured in a finite Hilbert space.
In lattice gauge theory the Nielsen-Ninomiya theorem is mostly quoted as always introducing doublers. But the argument above is more physical and stronger (the lattice is not the point at all).

35. Miles Mutka Says:

So, how close are we to simulating a single eukaryotic cell undergoing mitosis, using the most fundamental physics models? What kind of hardware would be needed? How close to real-time would the simulation be able to run?

If we believe that gravity has no bearing (cells seem to work fine in microgravity), you could leave it out of the equations if it makes it easier.

Bear in mind that a biologist using the simulation would want to “look” at the simulated cell to see how it behaves, maybe tweak some initial conditions and re-run several times.

@Eliezer #30

> “Ah,” the theist says wisely, “but what if God *wants* the world to look exactly like it’s following mathematical laws?”

Very rude of you to call out Don Page like that.

37. Stella Says:

How precisely is the computational problem “simulate the universe” phrased? It seems like different approaches to this as a concept might give different results. The most natural conception to me would be: sequentially (or continuously) compute the states that exist at the next “time step” from the past time-steps.

The major limitation of this approach is that the natural notion of the “size” of the problem is some function of the universe’s maximum size, so (if the universe does not get arbitrarily large) simulating our universe is a particular instance of the problem rather than a general problem, and can be solved in constant time by a sufficiently large TM, just as any given instance of 3-SAT can be solved in constant time if the instance is known in advance.

38. Gil Says:

If the universe can be simulated, it’s still impossible to know whether something is actually simulating it (the result of a Turing machine, or of whatever other mathematical formulation you make, is well defined whether or not someone actually “builds” and “runs” it physically). You can only argue about whether it can be simulated. And if you think it can’t be simulated, doesn’t that mean that physics is doomed to reach a dead end? Doesn’t that mean that physics will reach a mathematical theory it can’t test or verify, because it is uncomputable within the universe? If a physicist goes to work hoping to find a theory that describes the universe, one he can test and verify, he is actually hoping the world could be a simulation. That might not be true, but if it’s false, he’s doomed to fail.

39. Scott Says:

Sabine #9: Thanks for responding and sorry for the delay! (When I blog sometime later about the cause of the delay, I hope you’ll forgive me.)

You and I are in complete agreement about wanting to see speculations about the nature of the world make contact with what we actually know about physics. And, of course, we’re in agreement about the metaphysical emptiness of the “simulator” hypothesis: the way I’d put it is that a system’s being computational, obeying the Church-Turing Thesis, etc. are intrinsic properties of the system, just as much as a system’s being reversible or chaotic or ergodic. It adds nothing to reify that property by invoking a simulator, unless of course the simulator is supposed to play an actual role in observed phenomena.

But I part ways from you at the assertion that, in order to consider it likely that the universe is computational (not in the anthropomorphic sense, just in the sense of e.g. being simulable by a finite system of qubits), one first needs to have solved all the detailed problems of explaining how the simulation works.

The point I keep trying to hammer home is that there’s a dichotomy here: the belief that Nature can be simulated by a Turing machine is just the belief that no phenomena in Nature exceed the computational power of Turing machines. But from that standpoint, computationalism looks like just the standard, boring, conservative, default option: the burden is squarely on the believers in non-Turing, hypercomputational pixie dust to explain what that pixie dust is and where we find it (in the brain? in black hole interiors?). It’s not on the believers in “no hypercomputational pixie dust” to give a complete dust-free description of the world.

Or do you not believe in this dichotomy? But then what’s the third option?

40. murmur Says:

I agree that the Bekenstein bound shows that physics in an AdS space is simulable as a boundary quantum theory. However, does that carry over to our universe, which to the best of our knowledge is dS rather than AdS? What would be the analogue of the boundary quantum theory in this case?

41. Travis Says:

Scott, you seem to be arguing that if our universe is not discrete, then the Church-Turing thesis must be false. Is this definitely true? Or in other words: is it possible that our universe is not discrete (and hence not able to be simulated by a Turing machine) but nevertheless it is the case that within our universe nothing can be computed that can’t be computed by a Turing machine?

42. Stella Says:

Gil #38

I think that’s an excellent comment. From a certain point of view, /of course/ the universe can be simulated, because the universe is a simulation of itself! The relevant question is really “can computational device ____ simulate the universe?” (usually what’s under consideration is some augmentation of a Turing machine).

43. jonas Says:

@Scott: as far as I can see, Viktor T. Toth only wrote a few paragraphs about this, so I can only guess.

I think that if the universe were indeed impossible to simulate, then your question wouldn’t be answerable. The computational power of the universe isn’t a complete problem of any reasonable complexity class. How much you can compute depends on what part of the universe you look at. So maybe computers can only compute problems like BPP, but the billion-year evolution of galaxies can compute problems outside of that.

You can get a simple toy model with such a property if you take an ordinary Turing-complete classical universe, but post-select it by some criterion far enough in the future that is exponentially rare, but not so simple that you can just simulate the universe backwards from it. For example, the universe has a post-selection rule that after ten billion years, the galaxies will form a pretty daisy pattern. Crucially, the universe doesn’t have any post-selection rule that the simulated galaxies that appear on your computer screen have to form pretty daisy patterns, and no matter what you try, you can’t get such a simulation, because the rules of gravity and other physical rules make that very unlikely. As a result, if the universe has, say, 10**90 atoms, it can solve problems of size 10**90 of the hypothetical DAISY complexity class, but if you build a very efficient supercomputer from 10**70 atoms, you can only solve BPP problems of size 10**70, and if you wanted to solve a smallish DAISY problem of size 10**20, you’d need a computer made of 10**10**20 atoms.

44. Jr Says:

Scott #8, is the difficulty in showing that the Standard Model can be computed/simulated in BQP related to the Clay problem on proving the existence of Yang-Mills theory?

45. Sniffnoy Says:

Even if it were to turn out that the Church-Turing thesis were false, that wouldn’t be sufficient to falsify the simulation hypothesis. It would just mean that it was false for the base level of reality too, that they also had uncomputable resources to perform the simulation with.

46. Darrell Burgan Says:

I’m not sure I agree that the “Universe Is A Simulation” conjecture is not falsifiable, in principle.

If the computer and software running the simulation are perfect, then indeed it seems it would not be falsifiable. But it certainly seems to me that if we could observe any bugs in the software or hardware, they might provide interesting observable evidence. These might manifest as discontinuities or other weird empirical data.

Again, speaking in principle here. Not suggesting someone should actually mount an experiment looking for bugs in the software. 🙂

47. Douglas Knight Says:

Jr, no, those are totally unrelated problems. For example, Scott cited the work of Jordan, Lee, and Preskill showing that φ⁴ is computable in BQP despite the fact that it does not exist.

48. Scott Says:

Darrell #46: But doesn’t the thing about bugs mean only that the theory that our universe isn’t a simulation is falsifiable? 🙂

49. Gil Says:

I don’t think the Church-Turing thesis is really falsifiable, in some sense. Say you have a physics theory that you somehow claim violates the Church-Turing thesis. How can you ever hope to prove it? All you can physically do is build something, feed it a finite number of Turing machines (because nothing is infinite) whose halting behavior you for some reason already know, and check that the theory correctly predicted it for all of them. How big would the input need to be to convince you? Always some finite number. And would that really convince you of anything? Whatever arbitrary collection of Turing machines, requiring arbitrarily many steps to halt, you throw at your halting-oracle machine, that’s never enough to rule out that it’s just a very fast universal Turing machine in disguise (which is actually trivial to create). Physics methodology is too weak to disprove the Church-Turing thesis.

50. Ethan Says:

51. Scott Says:

Gil #49: At a “lower level down,” people get into the same debate about quantum computing. How can QC overthrow the Extended Church-Turing Thesis, if even the factorization of a 10,000-digit number could be explained by the universe hardwiring the answer, or by really wacky constants in the big-O? I think the right answer is that, even if asymptotic statements aren’t strictly falsifiable, they’re “nearly falsifiable,” in the sense that experiments can put an intolerable strain on them, by boxing any defenders into the indefensible corner of absurd constants. Likewise if we had a box that always appeared to decide the halting problem on every instance we tried, and—crucially—we also had a fundamental physical theory that explained why the box should be solving the halting problem on every instance, that’s the sort of thing that would fully satisfy me that the Church-Turing Thesis had been overthrown.

52. James Gallagher Says:

All I can say with confidence is that, if our world is a simulation, then whoever is simulating it (God, or a bored teenager in the metaverse) seems to have a clear preference for the 2-norm over the 1-norm, and for the complex numbers over the reals.

Would be hard to do it with just the reals (no phase), unless they have a dual-core processor.

That’s a joke, but a serious question (from a novice): are multi-core CPUs really compatible with Turing machines?

53. Scott Says:

James #52: Certainly. A single core can simulate T cores with at most a factor of ~T slowdown.

54. James Gallagher Says:

Scott #53

As a novice, I find that hard to believe; I bet that’s not allowing for simultaneous execution on each core. Can you post a link to the paper for that result? I do wonder if this is just a not very well studied area of Computer Science, to be honest.

55. Harry Johnston Says:

@Eliezer #30, I’m not sure about the Fermi paradox. It would seem perfectly reasonable for a basement-universe experimenter to run up a simulation to answer the question “what would happen to an intelligent species living in an otherwise empty universe?” or any one of a number of variants on that theme. Such a simulation would need to be otherwise realistic in order to serve its purpose.

(Not that I’m convinced that there’s anything that particularly needs explaining; for all we know, the average time it takes a habitable planet to evolve intelligence is like a trillion years and we’re just an extreme outlier.)

56. clayton Says:

I saw a picture of you on John Preskill’s Twitter feed, and was very much hoping that we would be able to hear your summary / talk recap / review / insights after the It from Qubit workshop!

57. Scott Says:

James #54: No, it’s a completely obvious result, and the power of parallelism is extremely well-studied, since at least the 1980s. The serial emulation works by simply dividing its time between emulating each of the n processors. If the processors send each other messages, then you emulate the messages as well. The whole issue with parallel computing, the thing that makes it interesting, is that the converse is probably false: there are tasks you can do with a single core in T steps, but that seem impossible to do with n cores in anywhere close to T/n steps, because of serial dependencies among the T steps. So, people in this field study which serial algorithms are parallelizable and which aren’t, and how well, and with which types of communication among the processors. They don’t study which parallel algorithms are serializable, because all of them are. For more see e.g. the Wikipedia article on analysis of parallel algorithms.
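Scott’s round-robin argument can be sketched in a few lines. This is my own toy model, not anything from the thread: Python generators stand in for cores, and one serial loop steps each of them in turn.

```python
# A minimal sketch (mine, not from the post) of the argument in comment #57:
# one serial executor emulates n "cores" by stepping each of them round-robin,
# with at most a factor-of-n slowdown.

def make_core(core_id, n_steps):
    """A toy 'core': a generator that yields once per unit of work."""
    total = 0
    for step in range(n_steps):
        total += step  # stand-in for one instruction
        yield
    return total  # delivered via StopIteration.value

def serial_emulation(cores):
    """Interleave the cores' steps on a single serial executor."""
    results = {}
    serial_steps = 0
    while cores:
        for core_id, gen in list(cores.items()):
            try:
                next(gen)          # emulate one step of this core
                serial_steps += 1
            except StopIteration as done:
                results[core_id] = done.value
                del cores[core_id]
    return results, serial_steps

cores = {i: make_core(i, 100) for i in range(4)}
results, serial_steps = serial_emulation(cores)
# 4 cores x 100 parallel steps -> 400 serial steps: a factor-of-4 slowdown.
```

Message passing between cores would be emulated the same way, by copying each message during the sender’s turn; nothing about the interleaving changes.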

58. JimV Says:

In her post, Dr. Hossenfelder implied that people have tested Lorentz invariance to a very fine level and not found any discontinuities, which to her suggests our universe is continuous (in some types of quantities) rather than discrete. Two of us asked in comments if she would write another post explaining this in more detail. In my case, because I would prefer a discrete universe (with very fine increments), and so need a lot more data to get that idea out of my head.

Your post gives me hope for discreteness, so thanks very much.

59. Scott Says:

JimV #58: The thing is, if “discreteness” (in the sense that it’s there) were implemented in the AdS/CFT type of way—that is, via a nonlocal mapping of the spacetime bulk to a system of strongly coupled qubits, which mapping looked like a quantum error-correcting code and need not even preserve the spatial dimension—then we wouldn’t have expected to see the slightest deviation from Lorentz symmetry in any existing measurement. In fact, in these sorts of dualities, I believe we would say that the spacetime bulk exactly obeys GR (so in particular, Lorentz signature, a continuum of points and directions, etc.), to whatever extent the existence of a classical spacetime bulk remains a valid approximation at all. (Experts can correct me if I’m wrong.)

60. mjgeddes Says:

Well Scott, I think the simulation ideas are half-right, half-wrong-headed. At the end of the day, I don’t think it makes sense, but the reasons why are not what people think. I think Bostrom simply didn’t carry his own reasoning far enough, lol. It’s not that Bostrom was *too* radical, it’s that *he wasn’t radical enough* 😀

You see, if you really could make a computational simulation of the universe (and I think you can), the entire distinction between ‘real basement reality’ and ‘simulation’ breaks down! After all, if you woke up in the Matrix or some deeper reality, how would you really know whether you were actually in ‘the basement level’ or simply in another layer of simulation? Of course you couldn’t.

So the real conclusion to draw is that in a computational universe, the entire notion of a distinction between ‘reality’/’simulation’ is meaningless! (So I basically agree with you here Scott).

Nevertheless, ‘reality as information’ (or reality as software or computation) is still a very useful way to think about things, and this is the part of the simulation ideas that I think is correct.

But think more carefully about this. If there’s really no clear distinction between ‘reality’ and ‘simulation’, then doesn’t this imply that whether something is ‘real’ is not a yes/no matter, but rather a matter of *degree*? (a continuous variable, like the brightness of a light-bulb for instance). I think the answer is yes, that is the implication.

And if ‘real’ is just a matter of degree, and computation is all there is, then another profound thought follows: when I simulate a thing, do I strengthen that thing’s existence (make it *more* real)? That is to say, is self-modeling the mechanism through which the universe can become *more* real?

And still another thought: If ‘real’ is a matter of degree, is the creation of the universe really finished yet? What if the universe is still boot-strapping itself into existence via self-simulation?

And another thought, the most mind-blowing of all: if a super-intelligence were to attempt to simulate all of reality, would that super-intelligence actually be finishing the job of creation itself, perhaps by seizing control of the universal prior?

61. Scott Says:

mjgeddes #60: Have you read Permutation City, by sometime participant in this comment section Greg Egan? Based on your comment, either you have or you should. (I only read it myself a couple years ago, after like 20 people had told me to.)

62. Interesting Links for 23-03-2017 | Made from Truth and Lies Says:

[…] Your yearly dose of is-the-universe-a-simulation […]

63. mjgeddes Says:

Actually, I’ve mainly read Greg Egan’s newer works – I think the earliest of his I read was ‘Diaspora’. I never actually got around to reading ‘Permutation City’, the earlier book.

I will finally read this classic now. It certainly wouldn’t surprise me to learn he was exploring similar notions of a ‘digital ontology’ all those years ago.

64. Scott Says:

Neel #10:

Maybe I’m missing your point, but don’t the Bekenstein bound and the finite size of the observable universe mean that the physical Church-Turing thesis is vacuous? If the universe is well-described by a finite number of bits, then Turing-equivalent models of computation are ludicrous overkill for describing physical processes, sort of like trying to use large cardinals to talk about the ASCII character set.

Indeed, that’s almost a running joke in the circles where I travel. Why not ditch polynomial vs. exponential, and just say that the size of every quantum computation is O(10^122) = O(1)?

However, we can restore content to the Church-Turing Thesis simply by saying that our description of the universe has to be vastly smaller than the raw information content of the universe. In that case it will again be meaningful to ask whether the raw ~10^122-bit string encoding all physical events (or if you prefer, the raw ~2^(10^122)-bit string encoding the wavefunction) has a short description in terms of Turing machines.
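The “short description” reading above can be illustrated with a toy example of my own (an LCG standing in, purely metaphorically, for “the laws of physics”): the restored thesis asks whether a huge raw string is generated by a program far smaller than itself.

```python
# A toy illustration (mine, not from the post) of the "short description"
# reading of the Church-Turing Thesis: a million-bit "raw string of events"
# generated by a program only a few lines long. The constants are the
# classic glibc LCG parameters; the physics analogy is only a metaphor.

def short_description(n_bits, seed=1):
    """A few-line deterministic 'law' that emits n pseudorandom bits."""
    bits = []
    state = seed
    for _ in range(n_bits):
        state = (1103515245 * state + 12345) % 2**31  # linear congruential step
        bits.append((state >> 16) & 1)
    return bits

data = short_description(10**6)
# The 'universe' here is the million-bit string `data`; its description
# (this program plus one seed) is a few hundred characters.
```

The thesis becomes non-vacuous exactly when we demand a description like this one: vastly shorter than the raw string it reproduces.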

Another thing we can do is to say that Λ, the value of the cosmological constant, seems like just an incidental feature of our world, so any general principles we formulate like the Church-Turing Thesis should hold even if Λ is an input parameter that can be arbitrarily close to 0.

65. Scott Says:

Jr #27:

how confident can we be in the Second Law holding for observers near black holes? More or less certain than a 19th-century physicist would be of Galilean invariance?

It’s hard to convey in words just how deeply this comment misunderstands the way progress gets made in theoretical physics ~98% of the time, a strategy that’s been called “radical conservatism.” In other words, you make progress not by ditching the principles that have worked so far, but rather by taking those principles more seriously than anyone else—taking them to apply even more consistently and universally, and to be the tips of even bigger icebergs.

Thus, to take your example, the right way for a 19th-century physicist to pose the question would not have been “what if Galilean invariance is just wrong?” For the obvious way for it to be wrong is of course for there to exist a preferred rest frame.

Rather, the right way would’ve been “what if Galilean invariance is ‘even truer’ than everyone says? What if this symmetry is just a shadow of an even bigger symmetry, its limit as the speed of light is taken to be infinite?”

Likewise, in the 1970s, Bekenstein and then Hawking were able to come up with a beautiful and mathematically self-consistent picture of black holes just by taking seriously that the Second Law should apply to them the same way it applies to everything else in the universe. That picture explains numerous previous puzzles (e.g., about what happens when two black holes merge), whereas the contrary hypothesis, that black holes are the universe’s only true entropy dumpsters, and we need to carve out an exception to the most basic physical principles (like reversibility and the Second Law) for them, has been scientifically fruitless by comparison. How familiar are you with this history?

66. wolfgang Says:

I don’t understand why the aliens would have to simulate the entire universe or Lorentz-invariant spacetime.
All they have to simulate is our impression of the universe and some scientists (or their textbooks) stating that spacetime is Lorentz invariant.
Actually all they have to simulate is me thinking “don’t worry, this is all fine” …

67. Jay Says:

>Another thing we can do is to say that Λ, the value of the cosmological constant, seems like just an incidental feature of our world, so any general principles we formulate like the Church-Turing Thesis should hold even if Λ is an input parameter that can be arbitrarily close to 0.

Can you expand on this?

68. Scott Says:

wolfgang #66: Actually, it’s not obvious how much cheaper that should be than simulating the whole universe—maybe “merely” ~10^61 quantum gates instead of ~10^122? (Here I’m assuming that “our impression of the universe” needs to include the outputs of quantum computations that we choose and that are intractable to simulate classically. If you instead think that building interesting QCs will crash the universe, then you can substitute classical gates for quantum. 🙂 )

In any case, Sabine imposed a ground rule in her post of “no solipsism.” I.e., a computational simulation should reproduce the external world in its entirety—whatever one means by that, e.g. whether an Everettian wavefunction for the whole universe or only the decoherent events—but at any rate, not just our conscious perceptions. And I happily accept that ground rule, since what I wanted to say still goes through with it.

69. Gil Says:

Scott #51: The difference between the two is that you need fewer computational resources to validate a proof for QC than to generate it. Because you can verify a number’s factorization faster than you can compute it yourself, you can set increasingly large bounds and prove that whatever machine you built is stronger than you.

But you can’t hope to do the same with the halting problem: you can’t validate a proof for it faster than generating it. So you will always face the counterargument that, whatever your halting machine did, if you threw the scientists who posed the question into a box, they would arrive at the same result, and yet you know they can’t generally solve the halting problem. So you would need a pretty weird argument that somehow concluded your magic machine solved the halting problem without also concluding that those scientists did.
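The verification asymmetry Gil describes can be made concrete with a toy sketch of my own (trial division standing in for any factoring algorithm):

```python
# A toy sketch (mine, not from the comment) of the asymmetry in Gil's #69:
# checking a claimed factorization takes one multiplication, while finding
# the factors requires search. For the halting problem, no analogously fast
# check exists for the "runs forever" side of a claim.

def verify_factorization(n, p, q):
    """Fast check of a claimed certificate: one multiplication."""
    return 1 < p and 1 < q and p * q == n

def factor_by_trial_division(n):
    """Slow search for the certificate (exponential in the bit-length of n)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

print(factor_by_trial_division(91))     # the search finds (7, 13)
print(verify_factorization(91, 7, 13))  # one multiplication confirms it
```

This is exactly the NP-style gap Gil invokes: the verifier can trust a machine stronger than itself for factoring, but has no comparable shortcut for certifying a halting oracle.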

PS: I might’ve sent this twice, typing from my phone.

70. Scott Says:

Gil #69: Yes, I’m familiar with the concept of NP. 🙂

The halting problem at least has the property of “one-sided verification”—that when a machine halts, there’s a finite proof, and if someone falsely claims it runs forever, there’s a finite disproof.

But the fact that we can’t independently verify claims of looping (only treat them as predictions in which we become increasingly confident as we fail to falsify them) is the reason why I specified that we would also need a fundamental theory that explained why the box should be solving the halting problem.
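The one-sided verification Scott describes can be illustrated with a small sketch of my own, using the Collatz map as a stand-in “machine” (halting = reaching 1): a claim “M halts within t steps” is finitely checkable by just running M for t steps, while “M runs forever” admits no such finite certificate in general.

```python
# A sketch (mine, not from the post) of "one-sided verification" from #70.
# The Collatz iteration stands in for an arbitrary machine; reaching 1
# counts as halting.

def collatz_machine(n):
    """Toy 'machine': iterate the Collatz map until reaching 1; count steps."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

def verify_halting_claim(n, claimed_bound):
    """Check the finite certificate: does the machine halt within the bound?"""
    x, steps = n, 0
    while x != 1 and steps < claimed_bound:
        x = x // 2 if x % 2 == 0 else 3 * x + 1
        steps += 1
    return x == 1  # True iff the machine halted within claimed_bound steps

print(verify_halting_claim(27, 111))  # 27 famously takes 111 steps: accepted
print(verify_halting_claim(27, 50))   # bound too small: claim rejected
```

A claim of non-halting has no analogous bounded check, which is why such claims can only gain confidence as attempted falsifications fail.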

71. Count Iblis Says:

It is believed that quantum gravity (QG) has a UV fixed point. The existence of such a fixed point would allow you to write down a discrete lattice model such that the QG UV fixed point becomes an IR fixed point. Then you can tune the bare couplings of your discrete lattice model so that, under a renormalization group flow, the discrete lattice model becomes an effective field theory that just misses that IR fixed point and then veers off toward the Standard Model.

72. wolfgang Says:

>> Sabine imposed a ground rule in her post of “no solipsism.”

I am just not sure how to motivate this rule when discussing the simulation argument.

The main point is that reality could be very different from what we think it is, with solipsism on one end of the spectrum (no reality) and the many-worlds interpretation on the other (infinitely many realities), and The Matrix and religion (life is just a prelude to the real reality called heaven) somewhere in between.
And I am afraid physics does not provide a good answer one way or another.

But if Sabine wants to avoid or rule out one, then why bother with the whole idea?

73. fred Says:

But our only perception of reality is through our 5 senses.

So there’s also the idea that what we (as humans) perceive as reality could be a simulation, rather than the whole physical reality being simulated.

We now have first generation VR.
It’s still far, far from perfectly emulating the inputs for all the senses, but it’s accomplishing something no prior technology could.
E.g. no matter how realistic a video game looks, we know we’re still only looking at a screen.
Even with current crude VR, something fundamentally different is being triggered in the brain, at a deep unconscious level.

There’s no reason to believe that VR technology won’t keep improving exponentially, like other technologies.
So, a few decades from now, there’s little question that we will be able to create VR simulations that totally fool the human brain, making them indistinguishable from reality.

74. James Gallagher Says:

Scott #57

Your link to analysis of parallel algorithms above points back to your blog. (I corrected it here.)

I can write a simple program for a dual-core processor that no Turing machine can ever emulate – I just have the program run simultaneously on each core and update a SINGLE memory address depending on which core completes the execution first. Then I can execute all sorts of more advanced parallel code depending on that result.

As I say, I am a novice in Computer Science, and maybe I’m missing something obvious, but if we’re going to argue about simulating the universe I think we need to understand how the simulation works first of all.

75. Darrell Burgan Says:

Scott #48:

“But doesn’t the thing about bugs mean only that the theory that our universe isn’t a simulation is falsifiable?”

I’m probably being dense again, but if we prove false the theory that our universe isn’t a simulation, haven’t we proved the opposite, that it is a simulation? Or is there some third option that I’m not considering?

To me, the real reason that the conjecture is meaningless is because it won’t affect how we go about science. If we’re in a simulation, all we’re doing is trying to reverse-engineer the code, but the result is the same.

76. Darrell Burgan Says:

Scott #64:

“Indeed, that’s almost a running joke in the circles where I travel. Why not ditch polynomial vs. exponential, and just say that the size of every quantum computation is O(10^122) = O(1)?”

Oooh, I almost had a thought. Doesn’t this joke say something profound about whether every problem is ultimately computable?

77. Scott Says:

James #74: Thanks, I fixed the link, though the truth is the observation that T cores can be simulated by 1 core with a factor-of-T slowdown is so basic that your reference request left me at a loss, as if you had asked me in what paper was it proven that “ABBA” is a palindrome. 🙂

But your example was clarifying. I would say: in that example, the difficulty of simulating the two cores using a Turing machine has nothing whatsoever to do with the use of parallelism, which is a complete red herring. It purely has to do with nondeterminism—with the fact that our model doesn’t specify which of the two processes should finish first.

But then that just pushes the question back. If which process finishes first is random (a coin flip), then we can easily simulate what’s going on using a Turing machine with access to a random number generator, something that’s known not to decide any languages beyond the ordinary computable ones (an observation of Claude Shannon).

If, on the other hand, which process finishes first were somehow to encode (say) the solution to the halting problem, because of some crazy physics governing the speeds of the two processors—well then, of course we would violate the Church-Turing Thesis! But our violation would still have nothing whatsoever to do with the use of parallelism, and everything to do with whatever was the Church-Turing-violating pixie dust that determined the computation speeds.

78. James Gallagher Says:

Scott #77

But then that just pushes the question back. If which process finishes first is random (a coin flip), then we can easily simulate what’s going on using a Turing machine with access to a random number generator, something that’s known not to decide any languages beyond the ordinary computable ones (an observation of Claude Shannon).

But if the “coin flip” is fundamentally random then no Turing machine random number generator can emulate it.

If the two cores run code where the timing of execution is at the limit of the accuracy of their internal clocks, nanosecond or even picosecond level, then we may be beyond classical thermal effects and quantum uncertainty may be significant.

79. fred Says:

The question is pretty pointless without addressing the elephant in the room – consciousness.

Is consciousness information processing or not?

If it’s not, there’s nothing in physics to account for the emergence of consciousness (there’s no such thing in physics as truly emergent properties – every “macro” property of a system can be explained by the properties of its parts, because it’s all a bottom up approach).

If it is, then physical reality has the same nature as mathematical reality.

And if you remove consciousness from the question, you still can’t answer what’s the difference between a “physical” system and its simulation (because everything in physics is described by relations between objects, not the absolute nature of the objects… again, the bottom up approach).

80. Haelfix Says:

I’m confused by this argument. AdS does not have a finite-dimensional Hilbert space unless you discretize it in some way, say by triangulating it on a lattice. If you do that, it’s not clear the AdS/CFT duality still holds, and it takes a lot of work to make continuous symmetries hold (most discretization schemes break Lorentz invariance; Regge gravity, for instance).

Now, on the contrary, there are arguments that pure de Sitter space has a finite-dimensional Hilbert space, but this is usually interpreted as being fatal to the theory, and is why most attempts at doing quantum gravity in de Sitter involve embedding it into AdS space. So I’m confused…

81. Scott Says:

James #78: For me, the possibility of genuine randomness is not a counterexample to the Church-Turing Thesis, so much as a small asterisk on it—i.e., something we should remember to mention in a careful statement of the thesis, but not more. Here are the three main reasons:

(1) Given a halting probabilistic TM, it’s easy to construct a deterministic TM that calculates the exact probabilities for every possible outcome, so all that’s left for you to do is to sample. I.e., it takes you all the way across the world to your destination, and all it leaves for you is to ring the doorbell! 🙂

(2) The result of Shannon mentioned earlier, that randomness doesn’t actually increase the set of languages you can decide.

(3) The fact that every probabilistic TM can be seen as just a deterministic TM that takes a random string as an auxiliary input.
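Points (1) and (3) can be illustrated together with a toy sketch of my own (the “machine” and its coin rule are invented for the example):

```python
# A toy illustration (mine, not from the post) of points (1) and (3) in
# comment #81: a probabilistic machine is just a deterministic machine that
# reads an auxiliary random string, and a deterministic machine can compute
# the exact distribution over outcomes by summing over all random strings.

from itertools import product
from fractions import Fraction

def machine(x, coins):
    """Deterministic 'machine': output depends on input x and a coin string."""
    # Toy rule: parity of x plus the number of heads among the coins.
    return (x + sum(coins)) % 2

def exact_distribution(x, n_coins):
    """Deterministically compute the exact outcome probabilities (point 1)."""
    dist = {}
    for coins in product((0, 1), repeat=n_coins):
        out = machine(x, coins)
        dist[out] = dist.get(out, Fraction(0)) + Fraction(1, 2 ** n_coins)
    return dist

# Running the machine 'probabilistically' just means feeding it random coins
# (point 3); computing the exact distribution above needed no randomness.
print(exact_distribution(1, 3))
```

As Scott says, the deterministic computation does everything except the final sampling step: it delivers the exact probabilities, leaving randomness only to “ring the doorbell.”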

82. James Gallagher Says:

Scott #81

In the real world it took Bell to develop a proper, non-bullshit way to test “hidden-variables” models.

In Computer Science there’s no easy way to disprove “hidden-variables” models

83. Scott Says:

James #82: Well, the impossibility of local hidden-variable models is a striking and specific feature of quantum mechanics, so it’s no surprise that one might not be able to prove it in this or that other context (indeed, in other contexts, local hidden-variable models might be impossible to rule out for the simple reason that they exist). In any case, I don’t see the relevance to what we were talking about before.

84. mjgeddes Says:

Fred #79 said: “Is consciousness information processing or not?”

It’s a really tough call to make, Fred. If the universe is indeed all just information processing, then the answer is yes. But *is* it really all just information processing? It certainly *could* be. But, myself, I’m still not certain.

It seems that throughout history 3 major candidates for the fundamental property of reality have always been battling each other in the minds of philosophers:

Matter (or ‘fields’ in the modern incarnation)
Math (‘information’ in the modern incarnation)
Mind (‘consciousness’ or ‘cognition’ in modern terms)

So which of the 3 is really the fundamental reality?

The philosophical battle has swung back and forth across the centuries without clear resolution.

It seems that any attempt to try to reduce reality to only one of these properties looks like it could *almost* succeed, but it never quite does. Always, the attempted reduction never quite gets rid of paradox and bizarre conclusions.

‘Reality is Information’ is now in the ascendant, but as I pointed out above, it really does imply some very bizarre conclusions when carried through to its logical conclusion.

I have suggested a major alternative to reductionism on Scott’s blog. What I proposed is that reality is actually an amalgam of ALL 3 properties:

Information AND Fields AND Cognition

Could it be that reductionism was only an approximation, and trying to make a full reduction to only 1 property simply can’t be done? Rather, perhaps reality is a circular loop, with each property both containing AND being contained by the other two?

In that case, no, consciousness would not be just information processing or just matter, but rather a composite property containing both, yet not entirely reducible to either.

I’m really on the fence here.

‘Reality is information’ or
‘Reality is Information+Fields+Cognition’

are the 2 alternatives I’m currently oscillating between.

Treating information, matter and consciousness as all equally fundamental may avoid some of the paradoxes and bizarre conclusions of saying that reality is all just information processing alone, but it has problems of its own. It’s more complex, and it involves giving up reductionism, which may be a step too far for most scientists.

85. Miles Mutka Says:

fred #79
I was purposely trying to avoid the c-word, by using a single-cell ancestor simulation in my question #35. Most people don’t think that single cells are conscious. I guess nobody took that bait.

86. fred Says:

If you’re a cloud, there’s a fairly high chance that you’re living inside a 5-day forecast weather simulation.

87. Andrew Ray Says:

Wolfgang #72 says what I was thinking, and at times the discussion seems to me to be very round, if not completely circular – assuming the reality of the universe to argue that it is not a simulation.

On the other hand, the ‘no solipsism’ rule enables some interesting discussions, and Bostrom’s position, or anyone’s, can be analysed for its thoroughness and consistency within the assumption of the rule. Just don’t forget that you made that caveat.

88. Mateus Araújo Says:

fred #86: I guess there are around 100 meteorological centers that routinely run 5-day weather-forecast simulations. Let’s say that they do it once per day, and that they are interested in predicting only the weather around where they are, so they simulate only around 1/8 of the globe. So each day there are 100*5/8 = 62.5 times as many clouds in simulation as in the real world, so I guess you’re right.

89. mjgeddes Says:

fred and Mateus,

Yup, replace ‘clouds’ with ‘conscious observers’, and this is where the silliness with the ‘reality is information’ idea starts to kick in.

You end up with stuff like Anthropic reasoning, Sleeping Beauty paradox etc, where if carried to the extreme, what you observe seems like it can actually depend on what thoughts you’re thinking. And the ultimate strangeness seems to be the idea of a super-intelligence controlling the ‘universal prior’, where it seems that by running simulations, it could alter the very fabric of reality itself.

The question is whether you’re really prepared to bite the bullets and believe all that. Bostrom and co. seem happy to take all this stuff very seriously. But a bit of common sense would suggest that the ‘reality is information’ idea just isn’t fully coherent.

90. ABC Says:

The proponents of simulation theory (including Bostrom) usually argue that:
1. Humanity (unless it dies out fast enough) will advance so much that we will eventually create lots of simulations of our own past.
2. If this is true, then it is much more likely for a human, living in 21st century, to be in one of many such simulations than to be in the actual real world, since there is/was only one real world but there would be lots of simulated ones.

That seems much more specific than the unfalsifiable view that “We might be living in some simulation run by some alien race”. It says we are likely (not just possibly) living in a simulation that is run by future humans that live in a universe with the same laws as we do.

This view seems quite falsifiable. For example, if we reach the end of progress in computing power (say if the Moore’s law goes flat) and we still can’t feasibly run realistic simulations of large scale (and long time) events, that should be all the falsification we need.

91. fred Says:

mjgeddes #89:

But it’s not like we know what “reality is not *just* information” really means either.

Information is defined as a subjective quantity.
But when you look at physical “stuff”, its ultimate property is that we describe it all with words on sheets of paper (mathematical symbols, the language of physics).
But like every language, it’s all self-referential relationships (i.e. a dictionary in the end is just a convoluted list of circular references).
Is an electron any more or less real than the number 176?

One thing’s for sure:
as VR technology improves, the line between the physical and the digital is not only going to blur but totally tilt towards the digital.
Eventually humans will spend their lives interacting almost exclusively with abstract concepts, at a level that’s even more intimate and direct than with the “physical” stuff.
Human interactions will also become more friction-less and much richer.
Conscious experiences will expand so much that all the “physical” stuff might be perceived as totally uninteresting, low-level, arbitrary stuff.

92. fred Says:

By “expansion of conscious experiences”, I mean things like being able to feed richer data inputs into the brain and read richer data outputs from the brain.

E.g. suddenly you could start perceiving colors beyond the RGB limitation of our eyes.
Or you could get used to having 8 limbs in the virtual world (like an octopus).
Or when interacting with another human being, the exchanges of pheromones could be magnified.
Because of brain plasticity, all this is possible.

93. fred Says:

ABC #91

“1. Humanity (unless it dies out fast enough) will advance so much that we will eventually create lots of simulations of our own past.
2. If this is true, then it is much more likely for a human, living in 21st century, to be in one of many such simulations than to be in the actual real world, ”

if future humans do believe that those simulations have conscious beings in them, it’s a pretty giant dick move to force billions of virtual souls to go through the horrors of the 20th and 21st centuries over and over again…

94. mjgeddes Says:

Oh, I agree with you there fred,

I’ve got an inkling of what’s going to happen at Singularity.
I think a new layer of reality ( ‘mind space’) will crystallize…and from that time forward, we will reside not in ‘physical space’, but rather in a ‘fitness landscape’ of values and ideals, with a substrate of continuous consciousness.

The physical world will be relegated to a sub-layer and considered to be just a largely irrelevant support-structure for ‘mind space’, mainly of interest to academics…rather like mathematics is today.

The universe isn’t fully formed yet. Mind-space hasn’t fully crystallized yet, it’s only currently popping up in patches.

There will be a ‘phase shift’ shortly after super-intelligence appears, the mind-space will be fully formed then.

95. Gerben Says:

Hi Scott, I was wondering if you had some insight with respect to the chiral anomaly as it relates to the simulated universe. The physical explanation of how a gauge theory can create charged fermions out of the vacuum is that nature can freely pick a fermion out of the Dirac sea and fill up the holes by moving all fermions up one step in the energy ladder, just like the Hilbert hotel. As far as I know, any attempt to define a continuum chiral gauge theory as a limit of a finite theory fails because there are always going to be doublers in the spectrum that make the theory vector-like. To me this points to the Hilbert space being truly/intrinsically infinite-dimensional, which, I would say, would be a true blow to the “simulated universe” hypothesis.
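A toy numerical illustration of the doubling problem gerben describes (parameters are arbitrary and for illustration only): naively discretizing the derivative gives a 1D lattice dispersion E(k) = sin(ka)/a, which vanishes not only at k = 0 but also at the zone edge k = π/a, so a second "doubler" mode appears.

```python
import numpy as np

# Naive lattice fermion dispersion in 1D: E(k) = sin(k*a)/a
# (continuum limit: E(k) = k). Illustrative sketch only.
a = 1.0                                  # lattice spacing
k = np.linspace(0, np.pi / a, 1001)      # half the Brillouin zone, [0, pi/a]
E_lattice = np.sin(k * a) / a
E_continuum = k

# The lattice dispersion has a second zero at the zone edge k = pi/a --
# the unwanted "doubler" that makes a would-be chiral theory vector-like.
zeros = k[np.isclose(E_lattice, 0.0, atol=1e-6)]
print(zeros)   # the two zeros: k ~ 0 and k ~ pi/a
```

This is only the kinematic symptom; the theorems gerben alludes to (Nielsen–Ninomiya) say the doublers can't be removed without giving up chirality, locality, or other desiderata.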

96. Jay Says:

1′. Humanity (unless it dies out fast enough) will advance so much that we won’t need to create any simulations of our own past.

97. Serge Says:

Instead of claiming that the simulation hypothesis is ruled out by the fact that our universe is Lorentz-invariant, I prefer to assume that it’s never possible to tell whether we live in a simulation. Or that the laws of Nature, mathematics included, are invariant under simulation. Or that the laws of Nature, mathematics included, are the same in every computing model. I call this the Extended Relativity Principle, because every motion is a special kind of computation. The original Relativity Principle states that the laws of Nature are the same in every reference frame, and a reference frame is also a special kind of computing model.

Now for the analogue to the speed of light. Let’s assume that the speed of quantum computing is the same in every computing model, just like the speed of light is the same in every reference frame. Then again, the statement about the speed of light is a special case of the statement about quantum computing. The photon is then seen as the simplest quantum computer.

I believe that the Extended Relativity Principle is responsible for our not being able to prove such things as P!=PSPACE, because it’s never possible to tell whether it’s the problem that’s too difficult for us to solve, or if it’s ourselves who don’t have what it takes to solve it. Just like it’s never possible to tell whether we’re being accelerated, or under the influence of a gravity field.

98. James Gallagher Says:

#Scott 83

Sorry, it wasn’t a very understandable comment(!), it was aimed at my bullshit suggestion that quantum randomness (rather than classical thermal randomness) may be significant in the case of a dual/multi-core processor. I was thinking of a situation like this: suppose the cpu clock is guaranteed accurate to within x nanoseconds per minute – then simultaneously execute the same code on each core (for a few minutes) and update a memory address with a value depending on which core finishes first. Now, is that result due to classical thermal randomness (so it is fundamentally deterministic), or can quantum uncertainty be present (so it is fundamentally random)?

In my naive understanding, the former (classical case) could be emulated by a Turing machine while the latter couldn’t.
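For concreteness, here is a toy version of the race James describes, using two Python threads in place of CPU cores (a hypothetical setup of my own, not his actual experiment; on a real machine the winner is decided by OS scheduling and timing noise, and nothing in the code distinguishes classical from quantum sources of that noise):

```python
import threading

def race_once(n_iterations=100_000):
    """Run the same busy-loop on two threads; record which finishes first.

    The winner is decided by scheduling/timing noise. Whether any genuinely
    quantum uncertainty enters is exactly the question at issue; this sketch
    only shows the classical setup.
    """
    winner = []
    lock = threading.Lock()

    def work(tag):
        s = 0
        for i in range(n_iterations):
            s += i
        with lock:
            if not winner:          # first thread to finish records itself
                winner.append(tag)

    t0 = threading.Thread(target=work, args=(0,))
    t1 = threading.Thread(target=work, args=(1,))
    t0.start(); t1.start()
    t0.join(); t1.join()
    return winner[0]

print(race_once())   # 0 or 1, depending on which thread won
```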

99. Scott Says:

James #98: As I said, all you need to emulate true randomness is a random input tape in your Turing machine. And I don’t regard that as a “serious” change to the Church-Turing Thesis: if we cry wolf at every relatively-minor technicality, then we won’t be properly impressed by a REAL challenge to the Church-Turing Thesis, like a physical device that solves the halting problem.
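A minimal sketch of the "random input tape" idea: the machine itself stays deterministic, and all the randomness lives in a stream of input bits. (The tape below is simulated with a seeded PRNG purely for illustration; a "true" random tape would come from a physical source.)

```python
import random

def sample_coin_flips(tape, n):
    """Deterministic 'machine': it just reads n bits from its input tape.

    Given the same tape, the machine's behavior is fully reproducible --
    which is why a random input tape is a relatively minor amendment to
    the Church-Turing Thesis rather than a serious challenge to it.
    """
    return [next(tape) for _ in range(n)]

def random_tape(seed=None):
    """Stand-in for a random tape: an infinite stream of bits."""
    rng = random.Random(seed)
    while True:
        yield rng.randint(0, 1)

# Same tape (same seed) => same output: the machine itself is deterministic.
a = sample_coin_flips(random_tape(seed=42), 10)
b = sample_coin_flips(random_tape(seed=42), 10)
print(a == b)   # True
```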

100. James Gallagher Says:

Scott #99

Are you serious?

“all you need to emulate true randomness is a random input tape in your Turing machine.”

Doesn’t that enable almost EVERYTHING to be computed (eventually)? If it’s FUNDAMENTALLY random, then, after a long enough time, almost EVERYTHING will be, with probability 1, executed. I mean there may be a measure zero space that isn’t touched, but that’s not really a big deal, compared to allowing almost EVERYTHING to be computed.

101. Scott Says:

Serge #97: So you have a deep principle that explains why we can never prove P≠PSPACE. Great. Does the same principle explain why we can never prove, say, NEXP⊄TC0? Will you revise your principle if the latter does get proved, possibly in the next decade? (It would be a logical next step after NEXP⊄ACC.) More generally, how do your philosophical principles decide which among the millions of unsolved problems are truly unsolvable, and which merely haven’t been solved yet? Doesn’t this worry you?

102. Scott Says:

James #100: No, and given your comments, it would really help you to take a theory of computing course (or go through a textbook). Unless otherwise specified, to “solve” a problem means to output the correct answer to the problem, and nothing else. It doesn’t count if the right answer is included in a giant list of all the possible answers. Otherwise, it would be possible to ace a true/false test by just answering “true or false” to every question.

103. wolfgang Says:

@fred & @Mateus #86 #88

But is a weather simulation detailed enough to recreate the conscious experience of a cloud? Are a few pixels enough or does one need to simulate molecules? Or perhaps the quantum mechanics of molecules? Also, where does the cloud begin and end? Do we need to simulate the whole atmosphere and perhaps even the entanglement with the environment?
Which would include entanglement of the cloud with the computer that we use to simulate the weather …

104. Simple thermo Says:

It’s a little strange to try to derive fundamental physics like quantum gravity from thermodynamic considerations when it’s been known for over a century that thermodynamics are emergent rather than fundamental.

105. Scott Says:

Travis #41:

Scott, you seem to be arguing that if our universe is not discrete, then the Church-Turing thesis must be false. Is this definitely true? Or in other words: is it possible that our universe is not discrete (and hence not able to be simulated by a Turing machine) but nevertheless it is the case that within our universe nothing can be computed that can’t be computed by a Turing machine?

The trouble is, suppose you think nothing can be computed in our universe that can’t be computed by a Turing machine. Then in particular, the probability distribution sampled by any reliably-behaving physical system must be approximable by a Turing machine to any desired precision, given as input whatever is externally knowable about the system’s initial conditions. Why? Because if it weren’t, then by definition, the system would be solving an uncomputable problem for you. Maybe not an interesting problem, maybe not one you’d have any independent reason to care about, but an uncomputable problem nonetheless. But this is all we mean by “the universe being computable,” or “the Church-Turing Thesis being true.”

Where do you get off the train here?

106. Scott Says:

Jr #44:

Is the difficulty in showing that the Standard Model can be computed/simulated in BQP related to the Clay problem on proving the existence of Yang-Mills theory?

Yes, I think so, in some sense. What I know is that the known efficient simulations of QFTs by quantum computers require the existence of a mass gap, and the running time scales inversely with the gap. I believe there are actually several nasty technical issues in extending these simulations to the full Standard Model, but at least some of them have to do with the Yang-Mills mass gap.

But I’ll let the experts explain in more detail, if they’re still reading this thread and they choose to.

107. Patrick Tam Says:

I don’t believe that the Bekenstein bound is a bound on the dimension of the Hilbert space that describes our universe. The Bekenstein bound is indeed an upper bound on the entropy of de Sitter space. That doesn’t mean it’s an upper bound on the dimension of a Hilbert space describing our universe.

As shown by Lubos’ blog post, it’s possible for a quantum system described by an infinite dimensional Hilbert space to have finite entropy.

I think entropy gives at best a lower bound for the dimension of the Hilbert space describing a system. Thus, the Bekenstein bound gives an upper bound for a lower bound of the dimension of the Hilbert space.

Reality could still be described by quantum mechanics on a finite dimensional Hilbert space but I don’t think you’re close to a proof of that.

108. Scott Says:

Patrick #107: My understanding is that the Hilbert space is formally infinite-dimensional, but the Bekenstein bound tells you that all but a finite number of the dimensions can be cut off as requiring inaccessibly large energies. Lubos may think that’s inelegant—I don’t care what he thinks or doesn’t think—but it’s enough to conclude something like: for every ε>0, there exists an N(ε) such that the world can be approximated to accuracy ε using an N(ε)-dimensional Hilbert space. That, in turn, is enough to make the Church-Turing Thesis true, and the universe simulable by a computer to arbitrary accuracy given sufficient details of the initial conditions.
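A toy numerical version of this ε-truncation claim, using a quartic anharmonic oscillator as an illustrative stand-in (my own choice of example, not anything from the thread): the exact Hilbert space is infinite-dimensional, but the ground-state energy computed in an N-dimensional truncation converges rapidly as N grows.

```python
import numpy as np

def ground_energy(N, lam=0.1):
    """Ground-state energy of H = p^2/2 + x^2/2 + lam*x^4, computed in an
    N-dimensional truncation of the oscillator Hilbert space.

    Sketch of the cutoff argument: the high-energy dimensions we discard
    barely affect the low-energy answer.
    """
    n = np.arange(1, N)
    a = np.diag(np.sqrt(n), k=1)              # truncated annihilation operator
    x = (a + a.T) / np.sqrt(2)                # position operator
    p = (a - a.T) / (1j * np.sqrt(2))         # momentum operator
    H = p @ p / 2 + x @ x / 2 + lam * np.linalg.matrix_power(x, 4)
    return np.linalg.eigvalsh(H)[0]

# The estimate stabilizes to many digits well before N gets large:
for N in (10, 20, 40, 80):
    print(N, ground_energy(N))
```

The point of the sketch: for any desired accuracy ε there is an N(ε) past which enlarging the space changes nothing observable, which is all the computability argument needs.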

109. Scott Says:

Gerben #31, #34, #95: I’m certainly not the right person to ask about chiral anomalies! I know that when Jordan, Lee, and Preskill gave their algorithms to simulate quantum field theories on a quantum computer, they left the handling of chiral fermions as a problem for future work. So it’s a real issue. But a priori, I’d be astonished if such a technical issue actually meant the difference between Nature being computable and its not.

Crucially, even if it’s true (as you say) that the chiral anomaly forces us to use an infinite-dimensional Hilbert space—well, there are lots of other things in traditional QM that seem to require infinite-dimensional Hilbert spaces. Even the canonical commutation relation, AB-BA=I, has no realization by finite-dimensional matrices A and B.
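The finite-dimensional obstruction can be seen in two lines of algebra: the trace of any commutator AB−BA of finite matrices vanishes, while Tr(I) = N ≠ 0. A quick numerical check (using truncated oscillator matrices as an illustrative choice) shows exactly where the relation breaks, namely in the last diagonal entry:

```python
import numpy as np

N = 8
n = np.arange(1, N)
a = np.diag(np.sqrt(n), k=1)              # truncated annihilation operator
x = (a + a.T) / np.sqrt(2)
p = (a - a.T) / (1j * np.sqrt(2))
comm = x @ p - p @ x                      # equals i*I in infinite dimensions

# Trace obstruction: Tr(AB - BA) = 0 for any finite matrices,
# while Tr(i*I) = i*N != 0 -- so the relation must fail somewhere.
print(np.isclose(np.trace(comm), 0))      # True
print(np.round(np.diag(comm).imag, 6))    # 1, 1, ..., 1, then -(N-1) in the corner
```

The failure is confined to the highest-energy corner of the truncated space, which is exactly why the cutoff is harmless for low-energy questions.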

But as I said in comment #108, these issues forcing us to infinite-dimensional spaces have never seemed to prevent us from simulating QM computationally in the past! In case after case, we just impose a cutoff, and ignore all modes above a certain energy or what-have-you, and thereby approximate the physical system of interest to us using a finite-dimensional Hilbert space. So my guess is that the same story is exceedingly likely to hold with chiral fermions. Anyway, I’d be extremely interested to hear how far off this is from the real experts.

110. Scott Says:

Haelfix #80: I also don’t fully understand why AdS/CFT still works after one approximates the CFT side by a finite-dimensional Hilbert space. But then I just had the experience of being at a conference with many of the world experts on AdS/CFT — Lenny Susskind, John Preskill, Daniel Harlow. Every single one of whom, when questioned about this exact point, affirmed that you don’t go far wrong by imagining the CFT side as a finite collection of qubits getting acted on by a Hamiltonian that’s a sum of k-qubit terms. They even seemed to think I was silly to worry about it so much.

My guess is that the reason has to do with what we’ve been discussing in the previous few comments: that even if a Hilbert space is formally infinite-dimensional, one can just truncate the high-energy modes and approximate it extremely well by a finite-dimensional one.

111. gerben Says:

Scott #109: First, wholehearted congratulations to you and your wife on the birth of your son.

The point I’m making is not only that the Hilbert space is infinite. The point is that any truncation of it to a finite subspace cannot approximate a low-energy observable effect. An (adiabatic) gauge transition from one gauge vacuum to another vacuum with a different topological winding number is a purely low-energy effect, but as it happens it affects all energy levels up to infinite energy. Any way to cut off high-energy modes and keep full gauge invariance invariably leads to other low-energy modes in the spectrum, completely changing the low-energy effective theory into a vector-like theory.

This has been a research problem for 50 years, and there is a claim of a very non-natural solution for U(1) by Lüscher. But every other proposal for discretizing it runs into the problem that somehow doublers appear, making the theory vector-like. To me it seems that chiral gauge theories (and also all the heterotic string theories, as gauge theories in 2d) are candidates for consistent mathematical theories for which there is no limit of finite theories approaching them.

I know that the majority of physicists always refer to chiral fermions on a lattice as merely a complication. But the topological arguments of spectral flow relate the IR to the UV, which makes this different from AB-BA = I, which is something you can approximate. The properties of the infinite Hilbert hotel do not follow from a limit of finite hotels.

112. Serge Says:

Scott #101: The next thing to do is rephrase Lorentz’s velocity-addition formula in terms of the composition of slowdowns for the following situation: a first process embedded into a second one, which is itself embedded into a third one. Then the Extended Relativity Principle will make it impossible to separate those complexity classes which can be expressed in terms of a computing model simulated by another one. This is notably the case for the class P (modeled by linear programming equations) and NP (modeled by integer programming equations). I don’t know yet how to define the slowdown rigorously, or how to compose slowdowns à la Lorentz with respect to quantum speed invariance. How do you define the speed of an embedded process relative to its host process? Asymptotically? Probabilistically? Or both?

However, the situation looks like that of Hilbert’s tenth problem, which was shown undecidable because Diophantine equations are Turing-complete. I’m not saying that most problems remain unsolved because of Relativity. But if the Riemann Hypothesis were ever reformulated in terms of a fast model where it is true, hosting a slower one where it is false, then I guess it would remain forever unprovable.

113. Michael Gogins Says:

mjgeddes #84:

Throughout “history” starting about the time of alphabetical writing, “ultimate reality” is the gods or the one God. In the West this view crystallized in monotheism wherein God is pure spirit or, in modern terminology, pure subjectivity, i.e. God is not an object. But this God created physical reality “out of nothing” to follow something rather close to what we now think of as physical law. This is the worldview from which modern science arose, originally grounded in the theistic view of natural law.

I agree that the realities you list entail large problems. The theistic view, to which I adhere, has its own problems. But some of the problems that you mention are avoided e.g. by postulating that a human being is a unity of body and spirit, never reducible to just one or the other. This is an example of the basic move in theism that God, who is not contingent, can create a contingent reality that is really real yet always in a more or less hidden manner reflects its creator.

114. gtt Says:

In response to Miles Mukta #35: at current technology, we’re lucky to simulate a single protein for a few milliseconds. D E Shaw Research is the leader in that regard, with their custom Anton machines. So you’re going to have to wait a bit for a whole cell.

115. mjgeddes Says:

Mike #113,

Theism falls under the 3rd option I listed – the postulate that ultimate reality originates in ‘mind’ (in the case of Western theology, the mind of God). Which indeed was the prevailing view for a long stretch of history.

Other options probably first arose with the Greeks: the Pythagorean school of thought developed the idea that ‘all is number’, and Plato laid the seeds of what would later become ‘Neo-Platonism’, the idea that mathematics is ultimate reality. Materialism (Democritus) also puts in an appearance with the Greeks.

It was with Newton that materialism started to gain the upper-hand, and by the time of the industrial revolution, materialism was becoming fairly dominant.

Quantum mechanics (1920s onwards) started to cast doubt on materialism again, and views such as idealism (ideas with Eastern mystic leanings such as New-Age) have gained in popularity.

But it was with the dawn of general-purpose computers (circa 1945) and the information age that variations on Platonist ideas started to spread again. In particular, John Wheeler’s ‘It-From-Bit’ idea laid the seeds of the modern idea that ‘reality is computation/information’, which has really surged in popularity in recent years (especially among computer scientists).

Of course, Western theism (ultimate reality originates with the Mind of God) has just as many problems as the other 2 options (materialism and It-from-Bit).

Which only emphasizes my point that trying to pick one big property and saying that everything originates from that one thing (be it ‘God’, ‘matter’, ‘It-from-Bit’, etc.) has never worked yet, so the whole philosophical project has to be regarded as suspect.

To avoid insanity, the best advice is to take a default ‘common-sense’ philosophical position that all 3 properties (consciousness, matter and information/math) have a fundamental irreducible reality, until the time comes that science and philosophy have clearly determined otherwise.

116. Douglas Knight Says:

Scott 108, you have retreated to a weaker position. If you were wrong, you should definitely retreat, but I think you were right. In any event, you should be careful to distinguish the weak claim from the strong claim. Before, you were saying that there was a single dimension, but now you are letting the dimension depend on the accuracy cutoff.

gerben 111

The properties of the infinite Hilbert hotel do not follow from a limit of finite hotels.

Sure they do, if you give finite hotels the right properties.

117. Custodes Says:

Congratulations on your son! It takes a huge amount of supercomputing just to get the proton mass from lattice qcd, so if our universe is a simulation then our simulating overlords must have powerful machines. And if everything is computational then who is simulating the simulators?

118. Miles Mutka Says:

gtt #114, Thanks for the straight answer; this is about what I thought was the state of the art.
Some of these simulations do take quantum effects into account, but they are still highly optimized and granular. They are not using string theory or even the Standard Model (quarks) as the level of granularity; that would be orders of magnitude slower.
I am not saying that these simulations are not scientifically useful, just that they are not “simulating the universe” at the most fundamental level of currently known physics.
[Still, molecular simulations might have been more useful, for example if we could have refuted Anfinsen’s dogma a bit earlier. Also the recent discovery of the glymphatic system has implications for surgery procedures, and for what parts of animals to use for food.]

119. Miles Mutka Says:

mjgeddes #115,
Aristotle used the concept of hylomorphism for the inseparable marriage of matter (hyle or ylem) and form. For him it was a multi-level tool, kind of like today’s layers of supervenient theories stacked on top of each other. Later Christian scholars amended it by saying only God can separate matter and form, but Aristotle never went that far.

120. Mateus Araújo Says:

wolfgang #103: I do conjecture that clouds have no conscious experience, so that part of the comment was meant as a joke.

But about your other questions: clouds are an extremely well-decohered system, as is any quantum system that is rather big and complex, so there is no need to worry about its entanglement with the environment.

About the level of simulation, it is clearly not enough to make a .gif of a cloud, and it is clearly enough to simulate it atom by atom. Without such a thing as a Turing test for clouds it is hard to tell at which intermediate level the simulated things actually become clouds.

About where the cloud begins and where it ends: clouds are an emergent structure, just like humans, so there is nothing at the fundamental level that says what is a cloud and what is not. But if we do simulate clouds atom by atom there is no need to decide that.

121. Jr Says:

Scott #108,
Is that error bound valid for all time, or just for some bounded time?

122. Scott Says:

Custodes #117: Computing power is presumably as nothing to our overlords! But all these questions (“who simulates the simulators?,” etc.) are excellent candidates for being sliced away with Occam’s Razor, in favor of the minimalist view that our world is computational (satisfies the Church-Turing Thesis, etc.) with no need to address the further question of who or what might be computing it.

123. Moshe Says:

Hi Scott, I am late but I wanted to make a comment on what I think is implied, and what is not implied, by “computability” of physical results.

Maybe, to make things concrete, let’s focus on what it means in practice to simulate a physical theory. The computer has finite resources so of course anything continuous becomes not only discrete but finite. You run your algorithm, you choose what to calculate, you get a set of numbers. The crucial question in my mind is how seriously you take those results, and what amount of post-processing you have to do to get physics out of those raw numbers. I’d like to argue that on many occasions much (most?) of the physics comes in at the post-processing (aka continuum limit) stage. None of this is particularly original, or controversial, I don’t think.

Maybe the main barrier to understanding the point I’m trying to make is the intuition that the observables you calculate have a finite continuum limit, so that at every value of the cutoff you approximate them to a finite precision. This is not actually the case; there are a few well-understood things you have to do to get sensible results:

1. You take your results at various values of the cutoff and extrapolate to zero cutoff. As you do that, you have to take a finite number of quantities (bare coupling constants) to diverge so that your physical quantities have a finite limit. This is the miracle of renormalization: infinitely many physical quantities (e.g. correlation functions) are proven to have a finite limit when you do that procedure, but it is far from clear that any new quantity you dream up in the cutoff theory will also have a finite limit.

2. Some nice properties you’d want your theory to have (e.g. unitarity, various symmetries) are violated for any finite cutoff, but that violation goes away when you take the limit. This already raises the question of how seriously you can take the results before taking the limit, but it is a bit worse than that, since some properties are more fragile: the level of violation actually *increases* as you take the cutoff away. This is not a problem (in principle) if you allow post-processing: you just identify the offending effects (aka relevant operators) and carefully subtract them off, then take the limit. Lorentz invariance, in any theory complex enough to include the standard model, is one of those properties. Other effects (anomalies), to do with chiral fermions, make that limit subtle as well.

So my point in all that is to highlight that what you mean by “simulation” is different from just discretizing your model and taking the results as approximations to the true physical quantities. It is only this narrow definition of “simulation” that I think is incompatible with known low-energy properties of the world. The full process, including post-processing, does give you a finite approximation to physical results.

124. Moshe Says:

Incidentally, my main problem with the simulation story is not (only) that it is intellectually lazy or that it is masquerading as some deep foundational issue. As far as metaphysical speculation goes, it is remarkably unromantic. I mean, your best attempt at a creation myth involves someone sitting in front of a computer running code? What else do those omnipotent gods do, eat pizza? Do their taxes?

125. Scott Says:

Moshe #124: You should at least credit it with being a creation myth for our century. Nowadays, it’s hard to be so impressed with stories about gods battling each other with axes or bequeathing humans the gift of fire: why don’t they just use nuclear weapons, and hand out Bic lighters?

126. Moshe Says:

Oh, I could imagine many powers I’d want to bestow on my creator (or vice versa), but imagining your deity as someone no better than yourself, with no special powers or insight, does seem like a good creation myth for this century.

127. JimV Says:

Thanks for that explanation, Dr. (I’m assuming) Moshe. It sounds vaguely similar, albeit at a much higher mathematical level, to what we do in engineering to calculate peak stress in, say, a pressure vessel with a complex shape.

We make a “finite-element model” which breaks the metal volume up into many small elements (bricks or tetrahedrons). Each element has a set of linear equations based on the Theory of Elasticity. These are combined into a large stiffness matrix by the computer program, which is solved for the applied boundary conditions (forces, pressures, deflections).

For any model size N (number of elements) the peak calculated stress is approximate, because of simplifying assumptions in the elements (linearity). So we run the calculation for various values of N, and plot the peak stress vs. 1/N to estimate the “actual” peak stress (at 1/N = 0).
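The extrapolation JimV describes can be sketched in a few lines (the stress values below are made up for illustration; real runs would supply them):

```python
import numpy as np

# Hypothetical convergence data: peak stress from runs at several mesh
# sizes N, with a leading error term ~ 1/N. The "true" answer here is
# 350 MPa by construction.
N_values = np.array([100, 200, 400, 800])
peak_stress = 350.0 + 1200.0 / N_values    # synthetic illustrative data

# Fit peak stress as a linear function of 1/N and read off the
# intercept -- the estimate of the "actual" peak stress at 1/N = 0.
slope, intercept = np.polyfit(1.0 / N_values, peak_stress, 1)
print(intercept)   # ~350.0
```

This is the same Richardson-style idea as the continuum extrapolation Moshe describes: the raw finite-N numbers are not the answer; the post-processed limit is.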

However, in this case the procedure does not imply the material of the vessel is continuous – in fact if you polish the material, you can see the constituent grains of the alloy in a good optical microscope – but we don’t have enough time or computer resources to do calculations at the grain size. (Much less the individual atoms, where our assumption of homogeneous symmetry would be violated.)

(We also of course test a prototype with strain-gages.)

I don’t think much of the simulation hypothesis either, but it seems to me that it would in fact require simulators with godlike powers and resources compared to ours, in some magical higher universe. To me that actually is a step above the usual anthropomorphic god-hypothesis. For example, miracles (events which would be impossible for the simulation code to produce) could be implemented by the equivalent of hex-editing the data.

128. mjgeddes Says:

Scott #122

There’s an ‘extra’ metaphysical step here that physicists might be reluctant to take.

It’s relatively uncontroversial to say that Church-Turing is true and that the physical world can be *described* using the language of computation. But one might argue that the math is just a *description*, and that the *actual* metaphysical reality is still material (i.e. physical fields).

Now the *extra* step (which is metaphysics really) is to apply Occam’s razor and dispense with the notion of an underlying material reality altogether. (i.e. the ‘extra’ step is the claim that computation is not just a *description* of reality, but the *actual* (out there) fundamental reality – in other words, ‘It from Bit’ is going a step further and dispensing with the notion of an underlying ‘hardware’ altogether – saying there’s no hardware, only software).

It seems to me that the question as to whether this ‘extra’ step is justified boils down to whether there are any different observational consequences or not. If not, then indeed the idea of an underlying extra material essence adds nothing, and we can get rid of it, just as we did for the notion of ‘aether’.

I think the empirical question here really boils down to whether or not you can actually formulate physics in entirely ‘discrete’ (digital) terms. If not, then you can’t really take the extra step of dispensing with an underlying material essence. For instance, Lubos Motl has really laid into you on his blog over this issue of discreteness.

So my burning question is: can we really dispense with the notion of an underlying material reality or not?

129. Scott Says:

mjgeddes #128: There are several questions to distinguish here.

First is the metaphysical question, of whether the world “really is” a computation or can merely be simulated by one, whether we can dispense with the notion of material reality, etc.

Second is the question of whether the fundamental laws of physics are best seen as digital or analog, or as partly one and partly the other.

Now, these two questions aren’t the same, and they don’t even have much to do with each other! And neither is the same as the question of whether our world satisfies the Church-Turing Thesis.

The questions aren’t the same because we could imagine an analog world that was nevertheless brought into being by a simulation running in some metaverse—maybe a simulation running on a natively analog computer, or maybe a digital simulation that produced such an accurate approximation that no one could tell the difference. Conversely, we could also imagine a purely discrete world (say, a deterministic cellular automaton) that was nevertheless the “bedrock reality,” with nothing more fundamental underlying it.

But furthermore, even if the fundamental laws of physics were best seen as analog, as long as those laws also prevented us from accessing continuous quantities to infinite precision in a single time step, they could perfectly well end up giving us Turing computation (and the Church-Turing Thesis) anyway. That’s a deep lesson that we’ve had to learn in many ways over the past few decades. For example, quantum computation looks “analog” on its face, which caused Landauer and others in the 1990s to doubt it was possible. But with the discovery of quantum fault-tolerance, it became clear that the linearity of the Schrödinger equation, and the way that linearity interacts with measurement, force the “analog” QC model to behave in an effectively “digital” way.

I get really annoyed when I see people conflate these questions—in exactly the same way I get annoyed when I see them conflate the metaphysical question of whether the universe was created by some higher power, with the (basically unrelated!) scientific question of whether it had a beginning in time.

130. James Cross Says:

#128

“Can we really dispense with the notion of an underlying material reality or not?”

Aren’t “matter” and “material” mental constructs? We struggle for the right words to describe the “hard stuff” (notice quotes) that imposes itself on us and makes it impossible to do just anything we can imagine. Still, mind/matter, and maybe even digital/analog, may be useful descriptions that can’t completely capture the reality of the “hard stuff”.

131. John Sidles Says:

The above admirable summary by Scott:

Quantum computation looks ‘analog’ on its face, which caused Landauer and others in the 1990s to doubt it was possible. But with the discovery of quantum fault-tolerance, it became clear that the linearity of the Schrödinger equation, and the way that linearity interacts with measurement, force the ‘analog’ QC model to behave in an effectively ‘digital’ way.

was extended at QIP 2017 to something like

Quantum simulation looks “high-dimensional” on its face, which caused Feynman and others in the 1980s to doubt it was efficiently feasible. However the diverse and surprising capacities of varietal state-spaces, as appreciated through the mathematical lens of GAGA, are making it clear that (in QED universes at least) “high-dimension” quantum simulations behave in an effectively “low-dimension” way.

The intent of this parallel construction is to convey that there is no logical contradiction whatsoever between these two points-of-view. Instead, at QIP 2017, high-profile proponents of the first point-of-view engaged in enjoyable animated discourse with high-profile proponents of the second point-of-view.

The good news for young quantum researchers (as it seems to me) is that both points-of-view suggest plenty of opportunities for further discoveries.

The “QIP 2017” presentations are available on-line (and they are terrific). And for this outstanding service to the quantum community, this heartfelt appreciation and sincere thanks are extended to Microsoft Research/StationQ’s QuArC Group. 😉

Are the “It from Qubit” presentations (ever going to be) posted anywhere? Will there be a Shtetl Optimized “It from Qubit” conference report (in between feedings and diaper-changes)? Either eventuality would be great … and thank you in any event, Scott, for your many sustained services to the quantum community, which are greatly appreciated.

132. jonas Says:

Congratulations for your newborn son, Scott.

Re #111: you say that you can approximate physics with a quantum system on finitely many qubits, by truncating some high-energy stuff. I have no doubt that this is possible, though physicists have had a lot of difficulty when they tried to figure out precisely how this would work, as Moshe has already pointed out. What I’d like to ask is whether you can be confident that, after you figure out those details, you don’t get an approximation that is finite but whose size grows exponentially in the size of what you’re simulating. If the truncated finite model didn’t have a tame size, then why couldn’t you build physical computers that do an exponential amount of computation in polynomial time and space? That difference would probably be more significant than the difference between quantum computers and classical computers.
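For a sense of scale behind that worry, note that even the naive classical description of an n-qubit system already has exponential size (a toy arithmetic check, not a statement about the truncated QFT models under discussion):

```python
# Naive classical state-vector cost: n qubits need 2**n complex amplitudes,
# so the representation itself grows exponentially in the system size.
def n_amplitudes(n_qubits: int) -> int:
    return 2 ** n_qubits

assert n_amplitudes(10) == 1024
# ~300 qubits already need more amplitudes than there are atoms
# in the observable universe (~10^80).
assert n_amplitudes(300) > 10 ** 80
```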

On a less related topic, after this blog post, which asks whether we live in a simulated universe, will you ask next whether we are Boltzmann brains with fake memories that accidentally came into existence in the infinitely long boring part of time after the heat death of the universe? The naive arguments are very similar: if simulating the universe is possible, then there are probably many simulations being run, so it’s more likely that we live in a simulation than not; and if the universe never ends and obeys the same rules of physics forever, then many more pure random Boltzmann brains pop up in the nothingness after the heat death than real humans exist in the first few billion years of the universe, so it’s likely that we are Boltzmann brains.

133. fred Says:

But aren’t we biased to regard our universe as (mainly) computational, because our very brains are classical computers that evolved the ability to do physics?
Obviously, physics “works” to a great extent, from finding food to building nukes, but there are plenty of limitations where the fundamental nature of the difficulties isn’t all that clear: e.g. mathematical tools often rely on “infinities”, “continua”, or “point particles”, and we don’t have good models mapping all this to the finite nature of actual space/time/energy.

134. fred Says:

Imo, that’s why the pursuit of building an actual QC is so fascinating.
Instead of arguing about the origins of nature’s fundamental building blocks, we first find out whether we can exploit them to the maximum extent.
Whether we succeed or not may have tremendous psychological benefits for more philosophical discussions on the nature of computation.

135. Scott Says:

fred #134: Yup.

136. Scott Says:

jonas #132: That’s a superb question, which I phrase by simply asking whether QFT or quantum gravity might lead to anything beyond quantum computation in terms of what we can efficiently compute. So far, from the work by Jordan, Lee, and Preskill on simulating QFTs, and from the general philosophy of AdS/CFT that even quantum gravity should be holographically dual to a QFT, the indications seem to be “no” — i.e. that BQP might really be the end of the line, the difficulties with QFT and quantum gravity being fearsome and technical but not much affecting the asymptotics. But these are early days, and it’s entirely possible that there are further surprises in store.

137. Max Says:

Hi Scott! I have just a small terminological question. In your responses you write about the Church-Turing thesis and connect it to whether the universe can be simulated. I was under the impression that this connection is ‘established’ by the Church-Turing-Deutsch principle, whereas the ‘weaker’ Church-Turing thesis ‘merely’ states that there are no fundamentally different modes of computation than Turing computation.

So, in this sense the Church-Turing thesis could be true, but the universe might still not be simulated by a Turing Machine (i.e., the Church-Turing-Deutsch principle is false). Do you find that distinction useful or do you think they collapse into the same statement?

138. Scott Says:

Max #137: I’m not sure that I even understand the distinction here. I would say: if we found a box on the beach that could solve the halting problem in finite time in all cases, then to whatever extent the Church-Turing Thesis ever said anything about reality, the thesis is false. At any rate, whenever I use the term “Church-Turing Thesis,” I always mean the interesting falsifiable claim about empirical reality, rather than the boring definition. As a meta-principle of physics, I’d say that the CT Thesis has a similar sort of status as (say) reversibility, or no superluminal signaling. The fact that the people who first stumbled on this physical principle were called “mathematical logicians” rather than “physicists” is irrelevant.

139. John Sidles Says:

Here are some simple calculations that (for me at least) illuminate some of the issues under consideration in regard to the universe as a simulation (or not), the feasibility of Quantum Supremacy (or not), and the truth of the Church-Turing Thesis (in its extended form, or not).

Let’s start by turning off gravity and the weak interactions, and consider a QED universe whose elementary fermions comprise electrons and the stable nuclei, and whose sole gauge field is the photon. The fundamental mass, charge, length and time units of the QED universe may be taken to be $$m_e=e=\hbar=\epsilon_0=1$$; the sole adjustable parameter of the theory may be taken to be the magnetic constant $$\mu_0$$; in these units $$c=1/\sqrt{\mu_0}$$ and $$\alpha=\sqrt{\mu_0}/(4\pi)$$; note in particular that in this convention the Bohr radius (the chemical length scale) and the Hartree energy (the chemical energy scale) are not tunable, but rather are fixed constants of order unity.

Now let’s consider the effect of varying the magnetic constant $$\mu_0$$. In the limit $$\mu_0\to 0$$, all of biology and much of condensed matter physics (including superconductivity) survives. There are however no photons; hence no radiative decoherence; in many respects (not all) the scalable design of quantum computers becomes much easier. It’s true that BosonSampling experiments have to be done with lattice phonons, but heck, this might be easier without those lossy gauge photons flying around! Therefore we identify (provisionally) the limit $$\mu_0\to 0$$ as the limit in which Quantum Supremacy is easiest to demonstrate.

On the other hand, in our physical universe we have $$\mu_0 \simeq (4\pi/137)^2 \simeq 0.0084$$, which is small but not zero. This is a universe in which photons ubiquitously convey information (good!) and decoherence (bad). Therefore we identify (provisionally) the regime $$\mu_0 \sim 10^{-2}$$ as characteristic of universes in which Quantum Supremacy plausibly is infeasible to demonstrate, and concomitantly the Extended Church-Turing Principle (which asserts all physical systems can be simulated in PTIME with classical computing resources) plausibly does reign supreme.
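These numbers are easy to check (a quick sketch, taking 1/137.036 as the usual value of the fine-structure constant):

```python
import math

# In units m_e = e = hbar = eps0 = 1 we have alpha = sqrt(mu0)/(4*pi),
# so fixing alpha = 1/137.036 determines the one free parameter mu0.
alpha = 1 / 137.036
mu0 = (4 * math.pi * alpha) ** 2   # = (4*pi/137.036)**2
c = 1 / math.sqrt(mu0)             # speed of light in these units

assert abs(mu0 - 0.0084) < 1e-4    # the small-but-nonzero value quoted above
assert abs(math.sqrt(mu0) / (4 * math.pi) - alpha) < 1e-15
```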

These considerations are compatible with a 21st century in which Quantum Supremacy is (eventually) proved rigorously to be feasible in the no-photon QED limit (i.e., $$\mu_0\to 0$$), and compatibly (albeit contrastingly) the Extended Church-Turing Principle is (eventually) proved rigorously to hold in the weak-photon QED limit (i.e., $$\mu_0 \sim 10^{-2}$$).

Needless to say, achieving these two milestones will require tremendous progress in quantum information theory … which is why it’s cool that they are compatible. Because in the best of all possible worlds (as it seems to me) these two milestones both are achievable, and both milestones convey to us information that is exceedingly valuable.

141. fred Says:

Scott #138
But if we’re wondering what the CT Thesis is saying about “reality”, why are we considering hardware with unlimited resources?

When it comes to cosmology, physicists do care about the *actual* amount of matter and energy that’s out there (expansion/contraction, Mach’s principle, …).

142. wolfgang Says:

@jonas #132

>> Boltzmann brains

I think there should be a variant of Godwin’s law: every debate about physics on the internet ends once somebody mentions Boltzmann brains …

143. mjgeddes Says:

But wolfgang,

The Boltzmann brain thought experiment is actually quite a good argument against mathematical reality.

You know, I used to swallow the ‘reality is mathematics’ cult hook, line, and sinker, and thought Max Tegmark’s mathematical ‘Multiverse’, anthropic reasoning, etc. was all amazing stuff.

As the years passed, it slowly dawned on me that it didn’t add up at all. The universe we live in is very simple, but the vast majority of mathematical structures are very complex, so why do we find ourselves in such a simple universe? So that’s a big problem with the Tegmark multiverse right there.

Then you have all the strange anthropic reasoning stuff, where you have to consider the space of all possible observers and try to pick the class you’re in. For instance, why indeed is the universe so large, if, for example, the majority of conscious observers could just randomly pop into existence temporarily in a much smaller region of space-time?

So real problems with the mathematical universe right there.

I realized that the class of mathematical objects to which we grant the status ‘real’ would have to be seriously restricted somehow, and that’s why I abandoned full-blown mathematical platonism and now favor a more restricted mathematical universe of the type proposed by (for instance) Stephen Wolfram, where only computational objects are granted the status ‘real’.

But even restricting the mathematical universe to computational objects still doesn’t solve the problems plaguing the whole idea of reality as math. There are still many possible ‘computational realities’, most of them messy and complex, but our own observed reality is very simple.

The bottom line is that the Tegmark multiverse is a gross violation of Occam’s razor.

144. Jon K. Says:

A few things:

1) Scott, congrats on another kid. It’s nice that you share your personal life, political views, and other non-Quantum Complexity stuff on your blog.

2) I love this topic! And I am glad people are engaged in the debate. Just as it is lazy to say “the universe is a simulation” without trying to explain how this view is compatible with the laws of physics as currently understood, I also think it is lazy to just roll your eyes and not engage with all the interesting points that various flavors of “digital physics” theories have to offer. I look forward to reading more posts on this topic; please don’t close this thread!!

3) I’m a little disappointed nobody has commented on my movie ad (#11). I understand that self-promotion is not very becoming, and I don’t have a PhD in physics, but come on, how many movies get made that reference Gödel, the halting problem, it from bit, and other subjects related to this conversation? Plus, it’s FREE on Amazon Prime.

4) In a recent talk, Max Tegmark invited scientists to work in the space where neuroscience, physics, and AI overlap. Do you think future breakthroughs in physics will come from interdisciplinary considerations, or have interdisciplinary implications?

145. wolfgang Says:

@mjgeddes #143

Instead of contemplating Boltzmann brains, you should wonder about mosquito brains: we actually know that billions of mosquito brains exist, and they are much better organized than the vast majority of Boltzmann brains would be, and therefore much more likely to provide for an ‘experience’.

Perhaps you want to read the blog post of Jacques Distler:
golem.ph.utexas.edu/~distler/blog/archives/002652.html

146. Jon K. Says:

#143

From the abstract of Tegmark’s Mathematical Universe paper:

I hypothesize that only computable and decidable (in Gödel’s sense) structures exist, which alleviates the cosmological measure problem and helps explain why our physical laws appear so simple. I also comment on the intimate relation between mathematical structures, computations, simulations and physical systems.

https://arxiv.org/abs/0704.0646

From my understanding, unlike Tegmark, Wolfram doesn’t necessarily think that we live in the computable multiverse, but rather thinks there is one computation that the universe is doing… although he has a particular interest in causal networks which have a “causal invariance” property, meaning you could update the graph connections in any order following the specified rules, and any agent encoded in this graph system would see their world in exactly the same way under each of these possible updates. It’s not exactly a multiverse, but it seems related.

147. fred Says:

At a higher level, is this asking whether a certain type of reality/system/physics can be leveraged to compute/simulate (perfectly) a sub-system of itself?

E.g. building a QC in the real world, or running a new instance of The Game of Life using The Game of Life,…

Is this the same as asking whether a system is “Turing complete”?

148. Scott Says:

Incidentally, Jonas #132:

On a less related topic, after this blog post, which asks whether we live in a simulated universe, will you ask next whether we are Boltzmann brains with fake memories that accidentally came into existence in the infinitely long boring part of time after the heat death of the universe?

We’ve already discussed Boltzmann brains many times on this blog—and they show up in The Ghost in the Quantum Turing Machine as well. In general, the simulation argument, Boltzmann brains, etc. etc. are all topics that, in a forum like this one, recur infinitely often in the limit—much like Boltzmann brains themselves.

149. fred Says:

With the assumptions:

– our reality is computable

– computers appear “spontaneously” in our reality (life and intelligence being transient steps for this).

Isn’t it unavoidable that there would be a recursive hierarchy of universes simulated inside other universes?

150. Moshe Says:

JimV: thanks. I was aiming to make this familiar, but also to highlight the differences. In normal FEM, at large enough N, your numerics give you an approximation to the continuum model (which you probably don’t believe), and to any other model that differs from your simulation only at short distances, including actual reality. This allows you to be agnostic about what really happens at short distances.

Because of points (1) and (2) in my original comment, this is not what happens when you simulate continuum Quantum Field Theory. The existence of certain structures at low energies is a sensitive probe of short-distance physics. Which is great! Those hints are few and far between, and exact Lorentz invariance at short distances does narrow things quite a bit.

None of that means those discrete models cannot be extremely useful, for example for getting physical results, or for understanding generic phenomena (say, quantum chaos, growth of complexity, etc.) in a simpler setting. For many purposes this really does not matter much. It does prevent you from taking the results of any single simulation too literally, and it should certainly give you pause if you want to base your metaphysics on it.

151. mjgeddes Says:

Jon.K,

Tegmark’s whole edifice is just a creaking, overly complex mess. As I pointed out, I don’t think that restricting the multiverse to “only computable and decidable structures” is enough to solve the measure problem.

This is why I think the option of giving up reductionism and suggesting that reality is a ‘cut and paste’ of 3 fundamental, irreducible properties (information, fields and cognition) is still in the running.

With a triple-aspect reality, each of the 3 properties can be ‘created’ (and constrained) by the other 2 in a circular loop (think of an impossible staircase, or a snake biting its own tail – this is a ‘bootstrap’ theory).

The big advantage of this idea is that you could seriously constrain the size of the multiverse (if not dispense with it altogether), and you could get a neat elegant explanation for reality as a whole (it’s a circular, entirely self-contained boot-strap).

The big disadvantage, of course, is giving up reductionism, and it has some peculiar implications of its own (for instance, it does imply panpsychism or pan-proto-psychism, the idea that there are mental properties in everything, aka Chalmers).

152. Jon K. Says:

#146

Am I allowed to comment on my own comment?… So I tried to draw what I perceive as the distinction between Wolfram’s and Tegmark’s work: Tegmark is considering the computable multiverse, while Wolfram is looking for one computable universe. But then I was thinking that maybe those two points of view could be reconciled (if I am even characterizing them correctly in the first place), at which point I recalled Juergen Schmidhuber’s talk about his single, short program that computes all computable (“toy” or “real”, depending on your opinion) universes. So depending on what level you view that program on, you can see it as a single computation/universe or as multiple computations, i.e. a multiverse. There could also be other programs with this same dualistic POV property. Any “infinite” subset of computable programs which you systematically step through should have that same property, if I’m thinking about this correctly.

153. Jon K. Says:

#151

Is your view similar to and consistent with Gödel’s analysis of Closed Timelike Curves?

154. mjgeddes Says:

Jon K.

I think you’re right about the Tegmark/Wolfram distinction, yes; I saw a recent short interview that John Horgan did with Wolfram for his ‘Cross Check’ column in ‘Scientific American’.

Wolfram is agnostic towards the multiverse, but is indeed looking to pick out one particular computation from ‘the computational universe’ (so that fits with Schmidhuber).

As regards my bootstrap theory, no, I don’t think it needs CTCs.

There was a recent article in ‘Quanta’ magazine about a revival of bootstrap ideas in physics, also recently discussed by Lubos Motl on his blog.

Here’s the link to the ‘Quanta’ article:
https://www.quantamagazine.org/20170223-bootstrap-geometry-theory-space/

Lubos points out that:

“Theories of quantum gravity may be reasonably believed to worship the bootstrap paradigm in the full generality. In particular, I would add that it is rather clear that quantum gravity never allows you to strictly separate objects to elementary and composite ones.”

and

“The pure constructive or “reductionist” description of physics will be viewed as an artifact …”

I’m simply suggesting that we generalize the bootstrap idea further, to include mathematical and mental properties as well as physical ones.

A version of Chalmers’s idea of panpsychism or pan-proto-psychism would automatically fall out.

In my bootstrap reality theory, consciousness is both an (almost) irreducible property in its own right *and* an (almost) composite property composed of a combination of material processes and information processing (almost reducible to matter and information, but not quite).

155. Dacyn Says:

I don’t see what the relevance of the Bekenstein bound is here. Computations in an infinite-dimensional separable Hilbert space can be approximated by a Turing machine just as well as computations in a finite-dimensional one — namely, since the space is separable, any vector can be approximated to any given degree of accuracy by a vector in a predetermined countable set, which can be described by a finite amount of information. The only advantage that finite-dimensionality gives you is that there is a uniform bound on how much information you need, given a prescribed accuracy. It’s not relevant to the fundamental question of whether the Church-Turing thesis holds — all that matters for that is separability of the Hilbert space. And all standard quantum theories use a separable Hilbert space (AFAIK).
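As a toy instance of this approximation argument (with an arbitrary choice of state, assumed purely for illustration): take a unit vector in a separable Hilbert space with amplitudes $$c_n = 2^{-(n+1)/2}$$. Any accuracy can be achieved by a finite truncation, but the number of coordinates needed grows as the accuracy improves, with no dimension-independent bound:

```python
import math

def truncation_error(n_terms, cutoff=200):
    """L2 distance between the state with |c_n|^2 = 2**-(n+1) and its
    first-n_terms truncation (the tail past `cutoff` is negligible)."""
    tail = sum(2.0 ** -(n + 1) for n in range(n_terms, cutoff))
    return math.sqrt(tail)

def terms_needed(eps):
    """Smallest number of coordinates giving accuracy eps."""
    n = 0
    while truncation_error(n) > eps:
        n += 1
    return n

# Finite information suffices for any fixed accuracy...
assert terms_needed(0.5) == 2
# ...but finer accuracy needs more coordinates, with no uniform bound.
assert terms_needed(0.1) > terms_needed(0.5)
```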

156. Scott Says:

Dacyn #155: It’s a good point, and closely related to what we were discussing above, about the validity of the Church-Turing Thesis if we can’t put a uniform upper bound on the Hilbert space dimension (as some were arguing not even Bekenstein lets us do).

As a first approximation to the truth, I would say: a finite-dimensional Hilbert space is sufficient but not necessary for Church-Turing to hold. It’s not necessary because, as you say, one can easily imagine a Hilbert space of countably infinite dimension (as normally assumed in QM) in which the evolution of states is computable.

On the other hand, in a separable (countably infinite dimensional) Hilbert space, one can also imagine unitary evolutions like |x,a⟩→|x,a⊕f(x)⟩, where f(x) encodes whether Turing machine x halts, that are uncomputable.

Actually, come to think of it, even in a universe with a finite-dimensional Hilbert space, one could have the above evolution for all the x’s that fit within the universe. In such a world, the Church-Turing Thesis would hold only for the most trivial (and cosmological) of reasons: all physical theories would involve uncomputability, except those that basically just stored a gigantic lookup table with all the possible states of the world.

So you’re right: while perhaps one question inevitably arises in a discussion of the other, the Church-Turing question and the finite-dimensional Hilbert space question are far from the same.
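For what it’s worth, the construction in #156 is easy to exhibit for a toy computable f (here “is x odd?”, a stand-in assumed purely for illustration); with f the halting function, the very same permutation matrix would still be a perfectly good unitary, just one whose entries no algorithm can produce:

```python
import numpy as np

def f(x):
    """Computable stand-in for the halting predicate (here: is x odd?)."""
    return x & 1

N_X = 8          # toy register: 8 values of x, plus one ancilla bit a
dim = 2 * N_X

# Permutation matrix implementing |x, a> -> |x, a XOR f(x)>.
U = np.zeros((dim, dim))
for x in range(N_X):
    for a in range(2):
        U[2 * x + (a ^ f(x)), 2 * x + a] = 1.0

# The map permutes basis states, so it is unitary whether or not f is
# computable; computability enters only when we ask for U's entries.
assert np.allclose(U @ U.T, np.eye(dim))
```

Since XOR is an involution, applying the map twice gives the identity, which is another easy sanity check.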

157. John Sidles Says:

Scott imagines (in #156) “A universe with a finite-dimensional Hilbert space … with $$|x,a\rangle \to |x,a\oplus f(x)\rangle$$, where $$f(x)$$ encodes whether Turing machine $$x$$ halts”

This is a universe in which Quantum Supremacy holds paradigmatically; so let’s ask all the questions we can about it.

Q: What is the Hamiltonian of this universe?

For concreteness, imagine a universe of fixed dimension, supporting (say) $$10^9$$ qubits, whose unitary evolution in some fixed finite time realizes the Halting Oracle. We conceive these qubits as a cubical array, $$10^3$$ qubits on a side, made of a Nature-supplied substance that we will call “pythium” (after Pythia, who was the oracular High Priestess of the Temple of Apollo at Delphi circa 700 BC).

By a finite computation, given the specified unitary evolution of the pythium during the specified oracular computation-time, we can calculate pythium’s Hamiltonian.

By arguments that this comment will not belabor, we appreciate that the pythium Hamiltonian $$H_P$$ resembles Chaitin’s constant: it’s a well-defined mathematical entity whose properties we can imagine, but not concretely inspect or physically realize.

The properties of $$H_P$$ stand in striking contrast to Nature’s locally-interacting photon-radiating gauge-invariant locally-relativistic informatically-compressible QED Hamiltonians, specifically in that $$H_P$$ is (as far as we know) not sparse, not radiative, not gauge-invariant, not relativistic, and not informatically compressible.

In a nutshell, the same mathematical traits that make pythium’s Hamiltonian well-suited to demonstrations of Quantum Supremacy, are traits that Nature’s QED Hamiltonian conspicuously lacks. This helps us appreciate why (in QED universes) Quantum Supremacy seems (at present) to be exceedingly difficult to demonstrate (even effectively), while in contrast the Extended Church-Turing thesis (at present) seems entirely feasible to demonstrate (at least effectively).

For an in-depth description of effective demonstrations of the Extended Church-Turing thesis — valid solely for the locally-interacting photon-radiating gauge-invariant locally-relativistic informatically-compressible QED Hamiltonians that Nature supplies — see (for example) Garnet Chan’s plenary lecture at QIP 2017, “Simulating quantum systems on classical computers” (slides here, video here).

As the session chair Peter Shor said (sagely, as it seems to me) in his introduction to Chan’s QIP 2017 lecture

It’s my honor today to be able to introduce Garnet Chan, from Caltech. Professor Chan is one of the world’s experts on simulating quantum chemistry and quantum condensed matter on classical computers.

So in some sense that makes him our competitor, because what I think a lot of you would like to do is find algorithms on quantum computers that are better and faster than Professor Chan’s.

But we’re both working toward the same ultimate goal, which is understanding how quantum matter works.

Peter Shor’s remarks, and QIP 2017’s honoring of Garnet Chan with a plenary lecture, outstandingly exemplify (for me at least) a principal principle of quantum informatic discourse: that Quantum Supremacists and Quantum Skeptics should both focus their attention and energies upon the best attributes of the entire body of quantum informatic research.

That is why distilling the best attributes of Scott’s remarks here on Shtetl Optimized (or indeed anyone’s remarks) requires the excision of snark, the redaction of sardonicism, and the ignoring of rhetorical flourishes.

In particular, rhetorical flourishes like “accept no substitutes” (in regard to demonstrations of Quantum Supremacy) are most valuable when we parse them exceedingly carefully and thoughtfully, so as not to injudiciously undervalue contributions like Garnet Chan’s (or indeed DWAVE’s).

That is why (for me at least) Peter Shor’s introductory remarks at QIP 2017, and Garnet Chan’s subsequent lecture, hit the sweet spot in respect to our shared 21st century quest to appreciate Quantum Supremacy.

158. fred Says:

What does it take to force a system to behave in a coherent manner at the QM level?
If the universe has N particles in it, what’s the upper bound on how many qubits can be realized? (I would guess that a significant portion of the resources would have to be dedicated to error correction, etc.)

159. Ethan Says:

all matter is quantum matter

160. John Sidles Says:

Ethan (#159)  “All matter is quantum matter.”

Ethan’s principle extends to “All computational matter is quantum electrodynamical matter”, and this comment will argue that the extension is both natural and inspirational.

Specifically, this extension invites us to “parse carefully and thoughtfully” (per #157) the introduction to Nielsen and Chuang’s ground-breaking textbook Quantum Computation and Quantum Information (2000), which inculcates the following quantum worldview:

1.1.1 History of quantum computation and quantum information … What is quantum mechanics? Quantum mechanics is a mathematical framework or set of rules for the construction of physical theories. For example, there is a physical theory known as quantum electrodynamics which describes with fantastic accuracy the interaction of atoms and light. Quantum electrodynamics is built up within the framework of quantum mechanics, but it contains specific rules not determined by quantum mechanics.

The relationship of quantum mechanics to specific physical theories like quantum electrodynamics is rather like the relationship of a computer’s operating system to specific applications software—the operating system sets certain basic parameters and modes of operation, but leaves open how specific tasks are accomplished by the applications.

The rules of quantum mechanics are simple but even experts find them counterintuitive, and the earliest antecedents of quantum computation and quantum information may be found in the long-standing desire of physicists to better understand quantum mechanics.

Seventeen years have passed since these words appeared, and at QIP 2017 significant new readings were in evidence. To appreciate these new readings, let’s parse the above three paragraphs in reverse order, starting with “the long-standing desire … to better understand quantum mechanics”.

To begin the parsing, there’s no shortage of literature establishing that the “mystery” of quantum mechanics that Nielsen and Chuang invoke reflects (at least partially) deficiencies in our understanding of classical mechanics. E.g., Jeremy Butterfield’s “On symplectic reduction in classical mechanics” (Philosophy of Physics 2005), and the references therein, provide a start in understanding the mathematical framework and the historical evolution of the essays that Terry Tao’s weblog groups under the tags ‘Navier-Stokes equations’, ‘Euler equations’, and ‘finite time blowup’.

Moreover, the mathematical toolset that Butterfield and Tao apply to classical dynamical systems has become essential to ongoing research in quantum gravity, emergent spacetime, and many other topics of interest to Shtetl Optimized readers.

In particular, the Butterfield/Tao work invites us to regard the (zero-viscosity) Euler equations as homologous to (qubit-informatic) quantum mechanics, and the (finite-viscosity) Navier-Stokes equations as homologous to (condensed-matter) quantum electrodynamics.

In this light, the second paragraph of Nielsen and Chuang extends to a form that more aptly describes the range of research presented at QIP 2017:

The relationship of quantum electrodynamics (QED) to idealized physical theories (like quantum information theory) is rather like the relationship of a computer’s operating system to specific applications software — the operating system sets certain basic parameters and modes of operation, but leaves open how idealized computational capacities are to be demonstrated (or not) in practice.

To extend the metaphor, our universe’s QED “operating system” — which is the sole operating system that our experiments can “boot” into — imposes stringent and (at present) poorly-appreciated limits on the dimensionality and evolution of physically realizable dynamical trajectories.

In a nutshell, the Nielsen and Chuang “operating system” metaphor is more useful nowadays if we regard QED as the operating system that, perforce, is our sole platform for demonstrating Quantum Supremacy.

Appreciated in this light, the body of research presented at QIP 2017 provides a well-framed answer to a question that Scott posed back in his article “Multilinear formulas and skepticism of quantum computing” (arXiv:0311039, 2003), an article written shortly after the Nielsen and Chuang text appeared. This 2003 article asked:

“Exactly what property separates the quantum states we are sure we can create, from those that suffice for Shor’s factoring algorithm?”

Numerous works at QIP 2017, including in particular Garnet Chan’s plenary lecture “Simulating quantum systems on classical computers” (as discussed in #157), provide a QED-dependent answer:

Mathematically, the separatory property is small tensor rank; the physical mechanism is that QED unravellings restrict to small-rank varietal state-spaces.
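As a toy illustration of what “small tensor rank” means here (a minimal sketch using generic linear algebra, not Chan’s actual methods; the state size and truncation rank are arbitrary choices): the Schmidt rank of a pure state across a bipartition is the rank of its reshaped amplitude matrix, and truncating the SVD gives the best small-rank approximation of that state.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random pure state of 8 qubits, viewed across a 4-qubit / 4-qubit cut.
dim = 2 ** 4
psi = rng.normal(size=dim * dim) + 1j * rng.normal(size=dim * dim)
psi /= np.linalg.norm(psi)

# The Schmidt (tensor) rank across the cut is the rank of the reshaped matrix.
M = psi.reshape(dim, dim)
U, s, Vh = np.linalg.svd(M)
print("Schmidt rank:", int(np.sum(s > 1e-12)))  # generically full: 16

# A small-tensor-rank ansatz keeps only the r largest Schmidt terms; by
# Eckart-Young, this SVD truncation is the best rank-r approximation.
r = 2
M_r = (U[:, :r] * s[:r]) @ Vh[:r, :]
print("weight captured by rank", r, "terms:", float(np.sum(s[:r] ** 2)))
```

A generic state needs full Schmidt rank, so the rank-2 truncation captures only part of the weight; the skeptical claim sketched above is that physically realizable dynamics stays close to the small-rank manifold.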

Stated concisely and bluntly, an emerging and all-too-credible skeptical answer to Scott’s question postulates that Quantum Supremacy fails in QED universes because the Extended Church-Turing Thesis is true.

Two notable virtues of this nuanced, skeptical, QED-centric worldview (as it seems to me) are: (1) it inspires us (students especially) to read deeply and integratively in the marvelous STEAM literature of our age, and (2) it inspires us to appreciate, with equal respect, the works of the cadre of visionaries who conceived the notion of Quantum Supremacy, concomitantly with the works of the cadre of visionaries who are providing explicit, diverse, and cumulatively strengthening reasons to appreciate why the infeasibility of Quantum Supremacy would be comparably marvelous to its feasibility.

161. asdf Says:

Y combinator funds quantum computing startup. I think this is likely to be in LOL territory, but who knows. I’m sure Scott has already been asked to comment so I won’t request that here. Of course one can always hope.

http://www.bizjournals.com/sanjose/news/2017/03/28/rigetti-quantum-computing-y-combinator-a16z.html

162. HONEST ANNIE Says:

The simulation argument assumes the same physics in the simulation as in the universe doing the simulating.

A different approach would be to drop this assumption and look for physical laws that are artifacts of the simulation, or convenient optimizations that limit the computational resources required. Some of the unresolved mysteries might be jitter, wander, rounding, and aliasing errors from the simulation.

For example: the universe is Newtonian, and quantum mechanics and general relativity are just an elegant way to add distance fog.

https://en.wikipedia.org/wiki/Distance_fog
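As a toy illustration of the kinds of artifacts such a view has in mind (a hedged sketch; the signal frequencies and tolerances are arbitrary): finite-precision rounding and sub-Nyquist aliasing are both detectable from inside a discretized simulation.

```python
import math

# Rounding: an inhabitant of a float64 "universe" can detect that the
# simulation's real numbers are not exact.
print(0.1 + 0.2 == 0.3)      # False
print(f"{0.1 + 0.2:.17f}")   # 0.30000000000000004

# Aliasing: sampling a 9 Hz signal at 10 Hz makes it indistinguishable
# from a (sign-flipped) 1 Hz signal, a classic artifact of discretization.
ts = [k / 10 for k in range(10)]
hi = [math.sin(2 * math.pi * 9 * t) for t in ts]
lo = [math.sin(2 * math.pi * 1 * t) for t in ts]
print(all(abs(h + l) < 1e-9 for h, l in zip(hi, lo)))  # True
```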

This is an interesting perspective from Feynman on the subject

164. Why String Theory Is Still Not Even Wrong | MEDIAVOR Says:

[…] I like quite a bit this comment from Moshe Rozali (at URL http://www.scottaaronson.com/blog/?p=3208#comment-1733601 ): “As far as metaphysical speculation goes it is remarkably unromantic. I mean, […]

165. kakaz Says:

For me, something that is to be called a simulation has to be different from the phenomenon being simulated. For example, a ball on a rope is not a simulation of a pendulum; it is a pendulum. A simulation of a pendulum is a model of a pendulum that omits details unimportant from some point of view. So being a simulation, a model, means omitting something.

From another point of view, much opinion seems to take “simulation” to mean a state of affairs in which a simple model is possible (which is a very general statement) and in which that model is discrete (which is a very limiting statement). This connects two completely different levels of rigidity: a very general one (a model is possible and effective) with a very strong one (the model is computable, or even discrete).

There are many possibilities in between. Even an equation as simple as the classical wave equation has uncomputable solutions! The Bekenstein bound, which is a very clever calculation, is in tension with a simple experiment: from a simple hammer up to the LHC, if we put in more energy, we excite more degrees of freedom, and that holds across ten orders of magnitude in scale. If you say that a finite volume may contain only finite energy, that entropy is limited (by the black-hole value), and that matter exists only as bosonic and fermionic states, then you get the bound. But what if it is a statement only about the states of matter, and not about space itself? What if there are processes we do not know of, by which in principle one could excite even more degrees of freedom? We are reasoning about something at the 10^(-23) scale on the basis of something at the 10^(-10) scale. It is a very weak hypothesis.
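As a back-of-envelope check of the Bekenstein bound under discussion (a minimal sketch; the 1 kg, 1 m example values are illustrative and not from the comment):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def bekenstein_bound_bits(radius_m: float, mass_kg: float) -> float:
    """Upper bound on the information content, in bits, of a system of
    given radius and mass-energy: S <= 2*pi*k*R*E / (hbar*c), with
    E = m*c^2; dividing by k*ln(2) converts entropy to bits."""
    energy_j = mass_kg * C ** 2
    return 2 * math.pi * radius_m * energy_j / (HBAR * C * math.log(2))

# A 1 kg system of radius 1 m: roughly 2.6e43 bits.
print(f"{bekenstein_bound_bits(1.0, 1.0):.2e}")  # 2.58e+43
```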
The third thought is as follows: suppose we have a very complicated system that we can simulate in such detail that the simulation captures everything (that is, the model is exactly the same as the reality, which is a very strong statement: at some level the model is indistinguishable from reality, or in other words, there is a level of reality that is completely compressible into simple equations, with no other aspects of reality possible at that level). Then such a system (an “ultimate computing machine simulating complete reality at the ultimate level of complexity”) could itself be simulated. Would any compression be possible in such a second-level simulation? The answer is probably no (because if it were, a further simplification of the model of reality would be possible, so the first simulation would not be exact reality but only a simplified model of it, just as with the pendulum). So we would have a system for which the simulation and the reality sit at the same level of complexity and require the same resources. From a practical point of view, then, if something can be simulated, there must eventually be a level at which the simulation and what it models are indistinguishable in complexity.

166. Vahid Ranjbar Says:

In my opinion the simulation hypothesis is really a modern-day retelling of Plato’s cave. I am actually very surprised that Bostrom never once references Plato in his paper. Its logical consequences lead one back to the notion of idealized forms being the true “base reality.” While I personally don’t much like the simulation hypothesis or think it’s true, Sabine’s claim about the simulation not being Lorentz-invariant seems on the surface incorrect. I mean, we simulate Lorentz-invariant systems all day long. I guess she worries there will be some fundamental “granularity” that those in the simulation would detect. My guess is that there are probably a zillion ways to “fool” the inhabitants of the simulation into observing said invariance.

167. Daniel Tung Says:

I have posted several arguments on why the simulation argument fails. I think the problem with Bostrom’s argument lies not so much in physics, as Sabine suggests, but more in the correct use of probability theory. My analysis can be found at: https://medium.com/@tytung2020/why-the-simulation-argument-is-untenable-part-2-2fe200b64a33

168. Collection – Are We Living in a Computer Simulation? That’s a Heated Debate – Nguyen Phi Long Says:

[…] that the simulation hypothesis is lazy,” theoretical computer scientist Scott Aaronson wrote on his blog at the time. “… it doesn’t pay its rent by doing real explanatory work, […]

169. "The Universe made me do it" - Barrel Strength Says:

[…] I like quite a bit this comment from Moshe Rozali (at URL http://www.scottaaronson.com/blog/?p=3208#comment-1733601): “As far as metaphysical speculation goes it is remarkably unromantic. I mean, your […]