Because you asked: the Simulation Hypothesis has not been falsified; remains unfalsifiable

By email, by Twitter, even as the world was convulsed by tragedy, the inquiries poured in yesterday about a different topic entirely: Scott, did physicists really just prove that the universe is not a computer simulation—that we can’t be living in the Matrix?

What prompted this was a rash of popular articles like this one (“Researchers claim to have found proof we are NOT living in a simulation”).  The articles were all spurred by a recent paper in Science Advances: Quantized gravitational responses, the sign problem, and quantum complexity, by Zohar Ringel of Hebrew University and Dmitry L. Kovrizhin of Oxford.

I’ll tell you what: before I comment, why don’t I just paste the paper’s abstract here.  I invite you to read it—not the whole paper, just the abstract, but paying special attention to the sentences—and then make up your own mind about whether it supports the interpretation that all the popular articles put on it.

Ready?  Set?

Abstract: It is believed that not all quantum systems can be simulated efficiently using classical computational resources.  This notion is supported by the fact that it is not known how to express the partition function in a sign-free manner in quantum Monte Carlo (QMC) simulations for a large number of important problems.  The answer to the question—whether there is a fundamental obstruction to such a sign-free representation in generic quantum systems—remains unclear.  Focusing on systems with bosonic degrees of freedom, we show that quantized gravitational responses appear as obstructions to local sign-free QMC.  In condensed matter physics settings, these responses, such as thermal Hall conductance, are associated with fractional quantum Hall effects.  We show that similar arguments also hold in the case of spontaneously broken time-reversal (TR) symmetry such as in the chiral phase of a perturbed quantum Kagome antiferromagnet.  The connection between quantized gravitational responses and the sign problem is also manifested in certain vertex models, where TR symmetry is preserved.

For those tuning in from home, the “sign problem” is an issue that arises when, for example, you’re trying to use the clever trick known as Quantum Monte Carlo (QMC) to learn about the ground state of a quantum system using a classical computer—but where you needed probabilities, which are real numbers from 0 to 1, your procedure instead spits out numbers some of which are negative, and which you can therefore no longer use to define a sensible sampling process.  (In some sense, it’s no surprise that this would happen when you’re trying to simulate quantum mechanics, which of course is all about generalizing the rules of probability in a way that involves negative and even complex numbers!  The surprise, rather, is that QMC lets you avoid the sign problem as often as it does.)
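To make that concrete, here is a toy sketch of my own (not anything from the Ringel-Kovrizhin paper, which concerns bosonic systems and gravitational responses). The textbook minimal example is a frustrated triangle of three states: in a Trotterized path integral for exp(-βH), each closed path gets a weight equal to a product of matrix elements of the Trotter slice, and when the off-diagonal elements of H are positive, every closed path with an odd number of hops picks up a negative weight, so the "probabilities" stop being probabilities:

```python
import numpy as np
from itertools import product

# Toy illustration of the sign problem (my sketch, not from the paper).
# Trotterize exp(-beta*H) into slices T = I - dt*H; a closed path
# (s_0, ..., s_{L-1}, s_0) gets weight
#     w(path) = prod_k T[s_k, s_{k+1 mod L}].
# If H is "stoquastic" (off-diagonal entries <= 0), every entry of T is
# nonnegative, so the weights define a valid sampling distribution.

def path_weights(H, n_slices, dt):
    T = np.eye(len(H)) - dt * H  # first-order Trotter slice
    weights = []
    for path in product(range(len(H)), repeat=n_slices):
        w = 1.0
        for k in range(n_slices):
            w *= T[path[k], path[(k + 1) % n_slices]]
        weights.append(w)
    return weights

# Three states coupled pairwise; only the sign of the coupling differs.
stoquastic = np.array([[ 1.0, -0.5, -0.5],
                       [-0.5,  1.0, -0.5],
                       [-0.5, -0.5,  1.0]])
frustrated = np.array([[ 1.0,  0.5,  0.5],
                       [ 0.5,  1.0,  0.5],
                       [ 0.5,  0.5,  1.0]])

print(min(path_weights(stoquastic, 3, 0.1)))  # all weights >= 0
print(min(path_weights(frustrated, 3, 0.1)))  # some weights < 0: sign problem
```

In the frustrated case, no diagonal sign change of the basis states can make all three off-diagonal couplings negative at once (the product of signs around the loop is fixed), which is the simplest instance of the kind of obstruction to local sign-curing transformations that the paper studies in a much richer setting.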

Anyway, this is all somewhat far from my expertise, but insofar as I understand the paper, it looks like a serious contribution to our understanding of the sign problem, and why local changes of basis can fail to get rid of it when QMC is used to simulate certain bosonic systems.  It will surely interest QMC experts.

OK, but does any of this prove that the universe isn’t a computer simulation, as the popular articles claim (and as the original paper does not)?

It seems to me that, to get from here to there, you’d need to overcome four huge difficulties, any one of which would be fatal by itself, and which are logically independent of each other.

  1. As a computer scientist, one thing that leapt out at me is that Ringel and Kovrizhin’s paper is fundamentally about computational complexity—specifically, about which quantum systems can and can’t be simulated in polynomial time on a classical computer—yet it’s entirely innocent of the language and tools of complexity theory.  There’s no BQP, no QMA, no reduction-based hardness argument anywhere in sight, and no clearly-formulated request for one either.  Instead, everything is phrased in terms of the failure of one specific algorithmic framework (namely QMC)—and within that framework, only “local” transformations of the physical degrees of freedom are considered, not nonlocal ones that could still be accessible to polynomial-time algorithms.  Of course, one does whatever one needs to do to get a result.
    To their credit, the authors do seem aware that a language for discussing all possible efficient algorithms exists.  They write, for example, of a “common understanding related to computational complexity classes” that some quantum systems are hard to simulate, and specifically of the existence of systems that support universal quantum computation.  So rather than criticize the authors for this limitation of their work, I view their paper as a welcome invitation for closer collaboration between the quantum complexity theory and quantum Monte Carlo communities, which approach many of the same questions from extremely different angles.  As official ambassador between the two communities, I nominate Matt Hastings.
  2. OK, but even if the paper did address computational complexity head-on, about the most it could’ve said is that computer scientists generally believe that BPP≠BQP (i.e., that quantum computers can solve more decision problems in polynomial time than classical probabilistic ones); and that such separations are provable in the query complexity and communication complexity worlds; and that at any rate, quantum computers can solve exact sampling problems that are classically hard unless the polynomial hierarchy collapses (as pointed out in the BosonSampling paper, and independently by Bremner, Jozsa, Shepherd).  Alas, until someone proves P≠PSPACE, there’s no hope for an unconditional proof that quantum computers can’t be efficiently simulated by classical ones.
    (Incidentally, the paper comments, “Establishing an obstruction to a classical simulation is a rather ill-defined task.”  I beg to differ: it’s not ill-defined; it’s just ridiculously hard!)
  3. OK, but suppose it were proved that BPP≠BQP—and for good measure, suppose it were also experimentally demonstrated that scalable quantum computing is possible in our universe.  Even then, one still wouldn’t by any stretch have ruled out that the universe was a computer simulation!  For as many of the people who emailed me asked themselves (but as the popular articles did not), why not just imagine that the universe is being simulated on a quantum computer?  Like, duh?
  4. Finally: even if, for some reason, we disallowed using a quantum computer to simulate the universe, that still wouldn’t rule out the simulation hypothesis.  For why couldn’t God, using Her classical computer, spend a trillion years to simulate one second as subjectively perceived by us?  After all, what is exponential time to She for whom all eternity is but an eyeblink?

Anyway, if it weren’t for all four separate points above, then sure, physicists would have now proved that we don’t live in the Matrix.

I do have a few questions of my own, for anyone who came here looking for my reaction to the ‘news’: did you really need me to tell you all this?  How much of it would you have figured out on your own, just by comparing the headlines of the popular articles to the descriptions (however garbled) of what was actually done?  How obvious does something need to be, before it no longer requires an ‘expert’ to certify it as such?  If I write 500 posts like this one, will the 501st post basically just write itself?

Asking for a friend.


Comment Policy: I welcome discussion about the Ringel-Kovrizhin paper; what might’ve gone wrong with its popularization; QMC; the sign problem; the computational complexity of condensed-matter problems more generally; and the relevance or irrelevance of work on these topics to broader questions about the simulability of the universe.  But as a little experiment in blog moderation, I won’t allow comments that just philosophize in general about whether or not the universe is a simulation, without making further contact with the actual content of this post.  We’ve already had the latter conversation here—probably, like, every week for the last decade—and I’m ready for something new.

50 Responses to “Because you asked: the Simulation Hypothesis has not been falsified; remains unfalsifiable”

  1. Domenic Denicola Says:

    > did you really need me to tell you all this?

    No, but, it *is* great to have somewhere to point less-clueful friends/relatives/popsci enthusiasts. I think posts like this are a great public service, so thanks for writing them!

    I can understand getting tired of it, so I won’t fault you for giving up at some point, but until then I plan to take full advantage of this kind of summary.

  2. Just Says:

    for what it’s worth, Dima is at Oxford, not Cambridge.

  3. Scott Says:

    Just #2: Thanks! Fixed. Google misled me: when you search for him, his Cambridge homepage shows up first.

  4. Mitchell Porter Says:

    I wondered if the world was going to hassle you about this…

    Concerning their *technical* conclusion: They are appealing to a popular formal analogy between thermal transport in chiral systems, and gravitational anomalies. But as I understand it, the chiral anomaly for quasiparticles is an artefact of a cutoff, it only appears in effective field theory above the cutoff scale. Does this affect the validity of their argument? (That question is to the world, not to Scott.)

  5. Paul Chapman Says:

    In answer to your specific survey question: I read the Cosmos article “Physicists find we’re not living in a computer simulation” and, despite knowing next to nothing about the effect described and interpreted, I came up with your difficulty 4. I didn’t need to know about the other difficulties, because difficulty 4 stands by itself. I know that Bounded Very Large Finite Numbers are very large indeed, and yet all of them are infinitely closer to zero than they are to omega. No matter how large BVLFNs are (bounded, say, by the total quantum information content of this visible universe), a finite classical computer can run a simulation using them in *some* imagined bounded universe.

    And the required size of those BVLFNs can, if desired, quite possibly be contained in a classical computer in OUR universe (with your trillion-years-to-one-second time dilation), if all that is required is that reality need only be simulated, as you say, “as subjectively perceived by us”; the total (classical) information ever processed by all humans put together is really rather small.

    Cheers, Paul

  6. Joshua Zelinsky Says:

    So, it seems like my comment on this Slashdot thread https://slashdot.org/comments.pl?sid=11182851&cid=55293399 managed to channel you very effectively; I wrote it essentially imagining what you would have said. Hopefully I didn’t do such a terrible job (I certainly didn’t make point 4 you made).

  7. Shmi Says:

    Mildly relevant, remember the very popular https://xkcd.com/505/
    Are there any obstructions to that approach?

  8. disconcision Says:

    i wonder if the general issue here (wrt popularization) is pervasive equivocation on what a ‘computer’ is. like when it comes to the possibility of AGI you get people talking past one-another about whether they’re referring to a prospective intelligence as being ‘a computer’, which is to say something essentially similar to a concrete kind of object they experience every day, as opposed to some realization of computation in the abstract sense.

    when it comes to the simulation hypothesis we see this in popular discussions where people (non-experts) bring up quantum discreteness as a purported analogue for visual aliasing artifacts in video games. this is paradigmatic of the way these connections usually go: an incidental feature of computers-in-practice is equated to a (misunderstanding of) quantum-systems-in-principle.

    in this frame, it’s obvious that (mis)taking the paper to mean “quantum systems can’t be simulated on quantum computers” is going to translate to “the universe can’t run a computer”, because a quantum computer is, in-frame, “quantum” first and a “computer” hardly at all. the “quantum” prefix obviates whatever follows, because a computer is something concrete, and “quantum” is a proxy for irreducible otherness.

    i know at this point this probably just reads as lay-bashing, but, on the AGI side, i find it personally hard to read Penrose’s Orch-OR as anything but an extremely refined form of the same essential blockage.

  9. Sniffnoy Says:

    To your points 1-4, Scott, I’d add the following #5, which has come up here before: Even somehow finding that the Church-Turing thesis is false wouldn’t disprove the simulation hypothesis; it’d just mean that the universe simulating us had physics capable of doing uncomputable things as well.

  10. Qiaochu Yuan Says:

    Scott, the answer is absolutely yes, people do need you to tell them these things, because the act of you doing so creates common knowledge about them in a way that counters the common knowledge created by fake news. You wrote the blog post on this and everything!

  11. Scott Says:

    Qiaochu #10: Aha, you’ve solved the mystery! And indeed, I feel like an idiot for not thinking of it.

    Though maybe I repressed that solution from my consciousness, because of course it implies that I’ll keep needing to write this sort of post forever.

  12. Matthias Goergens Says:

    In response to Qiaochu #10 and Scott #11: that suggests a way to reduce Scott’s workload:

    Scott, you can just have someone else write the standard rebuttal like Joshua Zelinsky’s #6, and publish as a guest post on your blog. Less work and still almost the same common knowledge effect.

  13. jonas Says:

    I appreciate that that popular article explicitly links to the research article. (Ideally they should also give more details than just one link that might quickly become stale, but still, this is a good first step.) Some popular articles I’ve seen are much worse, as they refer to scientific articles only in ways that are difficult to track down.

  14. Māris Ozols Says:

    What might have gone wrong with this article’s popularization in The Daily Mail is best summarized by this YouTube video:

  15. Atreat Says:

    Hadn’t heard of this, but read the abstract. It is clear that it does not support the popular interpretation. If we discover some manifestly uncomputable physical behavior of the universe I’d regard that as satisfying the “universe decidedly NOT a simulation” position.

    On another subject, Scott did you hear of Vladimir Voevodsky’s passing? I’ve long been fascinated by HoTT and wonder what you think of the endeavor to change how mathematics is conducted to be more like programming with automated proof checkers? People are right now basically coding up famous mathematical proofs and assembling libraries of mathematical knowledge all checked with computer aided proofs. I would think this is right up your alley as it involves the combination of math and comp sci…

  16. Atreat Says:

    Sniffnoy #9: “the universe simulating us had physics capable of doing uncomputable things as well”

    I think in that case our commonly understood definitions of “computer” and “simulation” no longer apply. Something decidedly *other* would be going on.

  17. Scott Says:

    Atreat: Yes, I was saddened to hear of Voevodsky’s passing. I never knew him, though I knew others who did. I don’t know his work, or indeed type theory in general, well enough to say anything intelligent.

    My perspective is more like Sniffnoy’s: in some sense, the entire program of theoretical physics could be summed up as, “supposing the universe to be a simulation, what can we learn about the nature of the Simulator by examining it?” (The way Einstein put it was something like, “I want to know the Old One’s thoughts, whether He had any choice in creating the world.”)

    Modern notions like “It from Bit,” and directly studying the universe’s computational and information storage capacities, just make this particularly clear and explicit, without changing it fundamentally.

    Even if the physical Church-Turing Thesis turned out to be false, that would simply be another (especially weird and surprising) facet of the putative Simulator.

    It’s ironic, of course, that asking whether or not the world “is” a simulation is such a scientifically empty question, whereas understanding what it would take to simulate the world is (to my mind) the highest aspiration of physics.

  18. A B Says:

    Well, if we permit quantum simulations, then perhaps our universe is running the simulation of itself.

  19. Eray Özkural Says:

    If it is “unfalsifiable”, then it also falls short of a scientific hypothesis according to the approximate but flawed Popperian theory of science. However, positivism is IMO the correct foundation of science, and simulation argument does not contain a scientific hypothesis for entirely other reasons I tried to explain here:
    https://examachine.net/blog/simulation-argument-does-not-contain-a-scientific-hypothesis/

    The short summary is, I suppose, that whenever you talk about such mythological stuff with no evidence, when not even the slightest reason/observation exists for such an explanation, the explanation is bogus. There is no problem, hence no “solution” is needed.

    I (strongly) refuted the simulation argument in this blog post written several years ago, which does contain the argument from quantum simulation time complexity (iterated in this paper, though it was not needed as Scott Aaronson already explained), but the really strong rejection is the argument from a priori probability, both of which you may find here.

    https://examachine.net/blog/why-the-simulation-argument-is-invalid/

    I am preparing a journal version of these essays (hitherto unpublished except for my blog), and I would welcome any scientific criticism.

    Best Wishes,

    Eray

  20. Atreat Says:

    I don’t think any of us are in real disagreement. Should someone find the universe violating physical Church-Turing, I think that would be the discovery of the millennium! I’d be awestruck and not at all interested in wasting time redefining an ultimately vacuous question.

  21. PDV Says:

    >did you really need me to tell you all this?

    Yes. (Well, not me, I looked at the headline, thought “Bullshit.”, and moved on. But for collective-you? Yes.) Two reasons:

    1) There are always more idiots.

    2) Even for people who have seen enough to draw their own (accurate) conclusions, their word won’t carry much weight with their excitable friends. Yours will. An expert endorsement of “Yeah this is dumb.” will be much more effective at discouraging irrational exuberance.

  22. Ranbir Sigh Parmar Says:

    Love the summary – simulation of the universe by quantum computing, and by computing technologies beyond it, is an interesting endeavour on which the worldwide scientific community is already working hard, and is music to all who care about these vexing issues. Regards.

  23. TvD Says:

    Scott, I think you’re overthinking it. To get “from here to there”, I present to you the following simple steps.
    1) The first sentence of the abstract says: “It is believed that not all quantum systems can be simulated efficiently using classical computational resources.”
    2) The world is quantum, right?
    3) Then the universe is not a computer simulation!
    QED
    p.s.: Of course, I *did* enjoy your four-step breakdown.

  24. Scott Says:

    TvD #23: Yeah, well, one person’s “overthinking it” is another’s “trying not to criminally underthink it.” 😉

  25. Scott Says:

    Maris #14: That song is great; thanks! But alas, many more “reputable” outlets also ran this story.

  26. Zach Says:

    Comment #7 – Turns out that the initial placement of all those rocks was …

  27. Scott Says:

    Shmi #7: No, I see no obstructions whatsoever to the xkcd approach.

    Zach #26: …did you mean to write more?

  28. Scott Says:

    I’ve already got several comments in the moderation queue that just pontificate in general about the simulation hypothesis, in direct violation of my comment policy … please don’t!

  29. Joshua Zelinsky Says:

    One thought: most “serious” versions of the simulation hypothesis are things like Bostrom’s discussions of ancestor simulations, where a species might be interested in simulating their ancestors. In that context, if one did have sufficiently powerful complexity results, couldn’t one rule out some forms of ancestor simulations at least? For example, if BQP really does take exponential time on a classical computer then we could at least rule out an ancestor simulation that was purely classical (since an ancestor simulation would be occurring in a universe with our own base physics). Similar remarks would hold for space-bounded resources.

  30. Scott Says:

    Joshua #29: In Greg Egan’s Permutation City, and probably in various other SF works, people live in simulations running on classical computers that are perfectly good enough to render their everyday experience, and that simulate quantum mechanics (if they do) merely on a hacky, approximate, as-needed basis.  You could argue that one of the main scientific motivations for building scalable quantum computers is to test and (presumably…) refute the hypothesis that that’s the kind of world we’re living in!

    You might say that that’s too wacky a motivation to appeal to anyone who actually funds experimental QC work, but I wouldn’t be completely sure. When I met Larry Page, it’s the first motivation he wanted to talk about.

    Of course, even building scalable QCs will do nothing to rule out the possibility that our remote descendants are conjuring us (for some definition of “us”) into our present existence by simulating us on quantum computers … which, well, why wouldn’t they?

  31. Joshua Zelinsky Says:

    Scott #30,

    “In Greg Egan’s Permutation City, and probably in various other SF works, people live in simulations running on classical computers that are perfectly good enough to render their everyday experience, and that simulate quantum mechanics (if they do) merely in a hacky, approximate, and as-needed basis. You could argue that one of the main scientific motivations for building scalable quantum computers is to test and (presumably…) refute the hypothesis that that’s the kind of world we’re living in!”

    Great, so if they have systems to do more detailed computations when the humans are paying attention, we may end up using way too many resources and crash the system. Heck, that could be the Great Filter; if any given section of the universe develops intelligent life that uses too many resources, it just resets that portion of the universe to some simple initial state.

  32. JimV Says:

    No, I didn’t need you to tell me the paper was misconstrued, but I enjoyed the post anyway. (As usual.)

    Although not a falsification, it does seem to me it makes the possibility that we are in a simulation, which was infinitesimal already in my estimation, just a tad smaller.

  33. Scott Says:

    JimV #32: The issue is, we already had general arguments for why quantum mechanics should be hard to simulate classically, and this paper doesn’t really add to them. Instead it explains why particular quantum systems (bosonic systems in their ground state with such-and-such additional properties) are hard to simulate using a particular method (QMC with local transformations). That’s what makes it so weird and arbitrary to latch on to this one particular paper when discussing the simulation hypothesis: if the hardness of classically simulating quantum systems is considered relevant at all, then why not take a much, much broader view of what we know about that?

  34. Mark Gubrud Says:

    re comment policy: some questions you might ask: Is it substantive? Is it challenging? Is it new or not said enough?

  35. Lou Scheffer Says:

    Scott and PDV #21,

    Yes, it’s good to say this. This, in my mind, is perhaps the main function of prominent (usually senior) scientists in a field. Given any suggestion, good or inane, they reduce it to the core problem, then recap the history – “yes, researchers A, B, and C have looked at this, this is what they found, check these papers”, and summarize the conclusions. Or if you are lucky, “That’s an interesting suggestion – I don’t think anyone has looked at that.” The combination of thorough knowledge of a field, and the property that the answers can be trusted, makes everyone else much more productive.

    I can’t count the number of times that a short answer from an expert, with the underlying reasoning, has saved me hours to days of puzzling over what would have either been a dead end or replication of known results.

  36. Craig Says:

    On the somewhat more serious quantum computation front; have you seen this paper about better classical algorithms for BosonSampling?

    http://www.nature.com/nphys/journal/vaop/ncurrent/full/nphys4270.html

  37. Yan Says:

    I must be missing something, because the problem seems simpler. Isn’t the falsification claim a form of question begging?

    Aren’t they doing something a little like:

    Simulation Hypothesis: the perceived laws of physics might be part of a simulation, governing virtual objects and without necessary causal relationships to real physics.

    Skepticism-skeptic: but the simulation’s laws of physics would never allow that!

  38. Honest Annie Says:

    This particular simulation hypothesis should be called ‘accurate simulation hypothesis’.

    I am proposing an alternate simulation hypothesis. We may be living in a simulated reality R1 that is an attempt to simulate an underlying fundamental reality R0, but that simulation may not be accurate. Some of our physical laws may be simulation artifacts or even optimization shortcuts. The question is, can we differentiate between artifacts and the intended laws of physics and discover the true physical laws of R0?

  39. Jon K. Says:

    Scott, thank you for being a true scientist/logician.

    I know you aren’t a big fan of classically-computed simulation theories, so I appreciate that when misinformed popular press articles try to close the door on these theories, you point out their flawed reasoning. Your defense seems almost ironic, but since you’re a man of truth and reason, I guess it makes perfect sense after all.

    Viva la “Digital Physics” (the movie and the theories) ! 🙂

    Questions: Do you think any continuous or infinite processes are observable in our universe? Do you think any continuous or infinite processes exist in our universe? Do you think the amount of information needed to describe the history of our universe is finite? Is it important to consider historical quantum entanglement if it has already “collapsed”? Thoughts on retro-causality? Thoughts on a block universe?

  40. Scott Says:

    Craig #36: Yes, of course I’ve seen it, and I believe we’ve even discussed it previously on this blog. It’s a really nice result, and I’m thrilled they did it.

    But it’s important to understand that, far from overturning standard wisdom about BosonSampling or whatever, it confirms a conjecture that Alex Arkhipov and I have been making since 2013 or so: namely, that the complexity of BosonSampling should increase exponentially, but “only” with the number n of photons, and not also with the number m of optical modes.  We knew that that was true, at any rate, in the Haar-random case under probabilistic assumptions (because of rejection sampling), so that any counterexample would have to be really weird.  The new work shows that it’s true in the general case under no assumptions.

    Many of our experimental friends kept optimistically thinking that they could get more computational hardness merely by upping the number of modes, without having to deal with more photons simultaneously.  So in conversations with them, we kept having to push back on that, saying, “no, we have no evidence whatsoever that that’s true, even though we lack a rigorous proof that it’s not true.  So please continue to concentrate on getting more photons!”

    It’s nice to now have a rigorous proof to back us up. 🙂
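    For anyone who wants to see where the n-versus-m scaling comes from: the classically hard quantities in BosonSampling are permanents of n×n submatrices of the m×m interferometer unitary, and Ryser’s formula evaluates an n×n permanent in O(2^n · n^2) arithmetic operations, a cost into which m enters only through the choice of which submatrix to evaluate. A minimal sketch of my own (illustration only, not code from any of the papers discussed here):

```python
# Ryser's formula for the permanent of an n x n matrix A:
#   per(A) = (-1)^n * sum over nonempty column subsets S of
#            (-1)^{|S|} * prod_i sum_{j in S} A[i][j]
# Runtime is O(2^n * n^2): exponential in n, with no dependence on the
# mode number m beyond selecting the n x n submatrix in the first place.

def permanent(A):
    n = len(A)
    total = 0.0
    for mask in range(1, 1 << n):          # nonempty subsets of columns
        bits = bin(mask).count("1")
        prod = 1.0
        for i in range(n):
            prod *= sum(A[i][j] for j in range(n) if (mask >> j) & 1)
        total += (-1) ** bits * prod
    return ((-1) ** n) * total

# Sanity check: per([[1,2],[3,4]]) = 1*4 + 2*3 = 10
print(permanent([[1.0, 2.0], [3.0, 4.0]]))  # -> 10.0
```

    The point of the classical sampling result discussed above is that the same n-not-m scaling holds for the full sampling task, not just for evaluating individual permanents by brute force.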

  41. Eray Özkural Says:

    Honest Annie, when you try to make a simulation hypothesis scientific, you will inadvertently have to invent a plausible multiverse hypothesis of the type where our universe is a kind of subspace of a larger space, though that’s still a speculative concept. IOW, the complexity would be decreased to exclude a Boltzmann-brain-like posit. In its place, we would have to posit an evolutionary mechanism, which we might call “multiversal evolution”. I liken that scenario to a Virtual Machine. Virtual Machines are likely to evolve in an open-ended physical system with rich dynamics, because they will have self-consistent rules that preserve their bubbles. It turns out that this is not a hypothesis that assumes itself; the general hypothesis is called “self-organization” by some authors, though there are many names for it in the literature across several fields.

    Note that there is no generally agreed theory of the multiverse in physics: some physicists would point to MWI, some others would talk about each universe having its own set of laws, and some would say they exist in a different temporal direction, and/or a distant region of the cosmos. Interestingly, this theory was pondered in the great science fiction classic Star Maker (which had an unnecessary theological bent), but the basic idea was that our universe evolved from lower-dimensional manifolds. A hypothesis to ponder? Surely, computability was inherent to the very first manifold whatever it was, but it is entirely plausible that there are regions of the cosmos with different physical regimes.

    What’s more important is that any such scenario is much more plausible than an intelligent designer, which is shaven by Occam’s razor. Could it still be true? Sure, but you’d have to show some strong evidence for it, and such evidence doesn’t exist, though I’d say that we have almost strong evidence for a (quantum) computable universe. That would be computational physics, not theology.

    I have an answer to your question, though, I think that with enough data we would be able to ascertain the laws of the root manifold (the very first quantum vacuum, or whatever that is). That question is equivalent to asking whether a theory of everything is possible. “I can’t see strings, can I still find the correct theory of strings, or disprove them?”. It’s the very same question. There is nothing fundamental in the science of epistemology (AI) that tells us we cannot infer the hardest and most complete theory in science. No foundational barrier. We should be able to find it.

  42. Scott Says:

    Eray #41, and Annie: Apologies, but this is getting too close to “general philosophizing about the simulation hypothesis, which could’ve just as well gone in any post on the topic rather than this one.” So if you want to continue in this vein, please do so elsewhere (I’m fine for you to post a link).

  43. Jon K. Says:

    Just for the record, I did no philosophizing in my unpublished post. I gave a compliment, made a little movie plug, and asked a few interesting questions. Oh well…

  44. Scott Says:

    Jon K.: Sorry, your comment got caught in my spam filter for some reason. It’s now published.

      Do you think any continuous or infinite processes are observable in our universe? Do you think any continuous or infinite processes exist in our universe?

    That depends entirely on what you mean by terms like “exist” and “are observable.” My best guess is that the history of the observable universe is well-described by quantum mechanics in a finite-dimensional Hilbert space (specifically, about exp(10^122) dimensions). If so, then the outcome of any measurement would be discrete; you’d never directly observe any quantity that was fundamentally continuous. But the amplitudes, which you’d need to calculate the probabilities of one measurement outcome versus another one, would be complex numbers with nothing to discretize them.

      Do you think the amount of information needed to describe the history of our universe is finite?

    Again, on the view above, the amount of information needed to describe any given observer’s experience is finite (at most ~10^122 bits). And the amount of quantum information contained in the state of the universe (i.e., the number of qubits) is also finite. But the amount of classical information needed to describe the quantum state of the universe (something that no one directly observes) could well be infinite.
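    The asymmetry here, a finite number of qubits versus a potentially infinite classical description, comes from simple counting; a back-of-the-envelope sketch (mine, not from the thread):

```python
def classical_amplitude_count(n_qubits: int) -> int:
    """Number of complex amplitudes in a generic n-qubit pure state."""
    return 2 ** n_qubits

# n qubits hold n qubits of quantum information, but writing the state
# down classically takes 2**n amplitudes, each a continuous complex
# number with no finite exact description in general.
assert classical_amplitude_count(300) > 10 ** 80  # exceeds the atom count of the observable universe
```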

      Is it important to consider historical quantum entanglement if it has already “collapsed”?

    Is that question any different from just asking whether someone believes the Many-Worlds Interpretation? If not, then see my previous posts on the subject.

      Thoughts on retro-causality? Thoughts on a block universe?

    A lot of my thoughts about such matters are in my Ghost in the Quantum Turing Machine essay. I guess some people might call the freebit picture “retrocausal,” in a certain sense, although it denies the possibility of cycles in the causal graph of the world.

    In any case, the usual motivations for retrocausality—namely, to get rid of the need for quantum entanglement, and to restore the “time symmetry” that’s broken by the special initial state at the Big Bang—I regard as complete, total red herrings. Retrocausality doesn’t help anyway in explaining quantum phenomena (what’s a “retrocausal explanation” for how Shor’s algorithm works, that adds the tiniest sliver of insight to the usual explanation, and that wouldn’t if true imply that quantum computers can do much more than they actually can?). And I’ve never seen any reason why our universe shouldn’t have a special initial state but no special final state. Life is all about broken symmetries; a maximally symmetric world is also a maximally boring one.

  45. Gil Kalai Says:

    Scott’s Point #4 (or perhaps an extra point beyond #4) goes much further and does not really require any reference to computation. If the laws of physics, computer science, and mathematics are not really the governing rules of the universe but just the rules of the computer simulation (or dream, or whatever) that we live in, then we cannot apply scientific reasoning to refute living in a simulation, any more than we can refute other, older supernatural claims.

    A point that I find interesting (related to Scott’s #3) regarding a matrix-type simulation of humans (even a single human) is that it requires quantum fault-tolerance. The same applies to even more “mundane” tasks such as predicting humans and teleporting humans. All these tasks seem to require quantum fault-tolerance.

    (Three more points: the need for quantum fault-tolerance to emulate complex quantum systems does not require that these systems demonstrate superior (beyond-P) computation. The task of matrix-type simulation of individuals, as well as predicting and teleporting them, would be extremely difficult even with quantum fault-tolerance. And, finally, there is nothing special here about “humans”; the same applies to sufficiently interesting small quantum systems based on non-interacting bosons or qubits.)

  46. Scott Says:

    Gil #45: Yes, I tried to order the points from “specific to this paper,” to “specific to our current state of knowledge where we can’t prove P≠PSPACE,” all the way down to “general-purpose refutation that logically applies forever to any purported disproof of the simulation hypothesis.”

  47. Físicos mostram que não estamos vivendo em uma simulação de computador – Mundo Cetico Says:

    […] According to Scott Aaronson, a computer scientist who works on quantum computing, what the scientists found has no bearing on the hypothesis of the universe as a simulation …. In conclusion, the simulation hypothesis has not been falsified and remains […]

  48. fred Says:

    Sniffoy #9: “the universe simulating us had physics capable of doing uncomputable things as well”

    Or maybe there’s no computation going on and the simulation is just a big stack of sheets of paper, each with a really long integer written on it, representing one possible state of the universe, and related states/sheets just “know” each other (forming consistent spacetimes).

  49. Jon Skyler Nelson Says:

    Scott: I’m mostly in agreement with your four points, but I think you’ve missed a technical detail of the simulationist argument that makes lower-bound complexity results connect with their arguments in a way that prevents them from escaping e.g. by appealing to god-like entities.

    Nick Bostrom’s simulation argument is perhaps the most popular form of that sort of thing right now. Bostrom uses a funny argument to try to establish that there are more simulated people like us than non-simulated people like us, with the idea that if we accept the premises that establish that, we should guess that we’re among the simulated people. The argument relies on the assumption that the simulations in question are “ancestor simulations” which is to say that each universe being simulated is similar (in physics and scale) to the universe that contains it, and the folks running the simulation are much like us.

    So Bostrom-style simulationists can’t appeal to nigh-infinite godlike entities. Curiously, the “ancestor simulation” constraint makes that form of the hypothesis effectively falsifiable – all we have to do is prove that you can’t simulate a universe like our own (w.r.t. physics and scale) from within a universe like our own and the argument fails.

    So one could take a sufficiently strong lower bound on the complexity of simulating certain physical interactions as a strike against the Bostrom-style simulation argument. If we’d need a universe worth of memory to effectively simulate a few handfuls of electrons, then the universe simulations don’t scale at the rate they’d need to in order to establish that there are likely more simulated humans than non-simulated humans. Ergo, this general sort of result does potentially make problems for certain forms of the simulation argument.
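    The scaling objection in Nelson's last paragraph can be sketched with rough, purely illustrative numbers (the memory budget and per-amplitude precision below are my assumptions, not Nelson’s): if exactly simulating k fully entangled particles costs on the order of 2^k amplitudes, then even a host with the full ~10^122-bit budget of our observable universe runs out at a few hundred particles, nowhere near an ancestor simulation of its own scale.

```python
import math

HOST_BUDGET_BITS = 10 ** 122   # assumed total memory budget of the host universe
BITS_PER_AMPLITUDE = 64        # assumed finite precision per stored amplitude

def max_exact_particles(budget_bits: int) -> int:
    """Largest k such that 2**k amplitudes fit within the bit budget."""
    return int(math.log2(budget_bits / BITS_PER_AMPLITUDE))

# Under these assumptions the entire host universe's memory caps out
# at roughly 400 entangled particles -- "a few handfuls of electrons"
# already saturates it, so nested ancestor simulations cannot scale.
```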

  50. Mark Gubrud Says:

    There is no evidence for simulation other than arguments people have invented and their greater or lesser appeal to our minds. The inability of classical systems to simulate quantum systems is irrelevant to the Bostrom simulation hypothesis, which assumes the world is just CGI jiggered up in whatever is the most efficient way to fill in our experience as ancestors of the sysadmins. The whole point of this argument is that this requires many orders of magnitude less compute than a full simulation of the physics would: enough that anyone would bother with ancestor simulations (not as obvious a pursuit to me as it must be to Bostrom) and still have panoplies of crunch for every other purpose, assuming gargantuan 3D nanotech computers are possible. And no, you wouldn’t need quantum computing; all you’d have to do is make sure any experiments the human sims set up yield the right results. Your software might miss this sometimes, but then, if a QC-inconsistent result were reported, you could make sure it never happens again, and nobody would believe the original claimant.

    But seriously, this is all just bar talk. It’s very anthropocentric for a hypothesis about the fundamental nature of reality. The arguments invoke and wield concepts that we know we don’t entirely grasp or aren’t entirely sure make sense. It’s terribly suspicious that we’re discussing this at a time when people are increasingly immersed in the “virtual reality” of video games, the internet, and apps for all things, when the computer almost does loom as a deity over our lives, increasingly so, and we have all this apocalyptic mythology about the Singularity and so on. And it’s just another version of It’s All Fake (or it could be, anyway), an argument that subsumes Descartes’ Demon, conspiracy theory, and paranoia of every description, and which feeds on itself because of its psychological, emotional, and symbolic significance in the absence of any decisive refutation.

    What is the appeal of thinking, “It could be that the whole world is a big computer game”? That the game world is as real as the real world? Is this appealing to some people? Or does this sort of contemplation just distance one from everyday concerns and the demands of other people? In the case of Bostrom, it’s pretty clear that this comes out of the transhumanist idea of uploading to the cloud and becoming a big fat brain. And then there’s that clever argument.

Leave a Reply