The Ghost in the Quantum Turing Machine

I’ve been traveling this past week (in Israel and the French Riviera), heavily distracted by real life from my blogging career.  But by popular request, let me now provide a link to my very first post-tenure publication: The Ghost in the Quantum Turing Machine.

Here’s the abstract:

In honor of Alan Turing’s hundredth birthday, I unwisely set out some thoughts about one of Turing’s obsessions throughout his life, the question of physics and free will. I focus relatively narrowly on a notion that I call “Knightian freedom”: a certain kind of in-principle physical unpredictability that goes beyond probabilistic unpredictability. Other, more metaphysical aspects of free will I regard as possibly outside the scope of science. I examine a viewpoint, suggested independently by Carl Hoefer, Cristi Stoica, and even Turing himself, that tries to find scope for “freedom” in the universe’s boundary conditions rather than in the dynamical laws. Taking this viewpoint seriously leads to many interesting conceptual problems. I investigate how far one can go toward solving those problems, and along the way, encounter (among other things) the No-Cloning Theorem, the measurement problem, decoherence, chaos, the arrow of time, the holographic principle, Newcomb’s paradox, Boltzmann brains, algorithmic information theory, and the Common Prior Assumption. I also compare the viewpoint explored here to the more radical speculations of Roger Penrose. The result of all this is an unusual perspective on time, quantum mechanics, and causation, of which I myself remain skeptical, but which has several appealing features. Among other things, it suggests interesting empirical questions in neuroscience, physics, and cosmology; and takes a millennia-old philosophical debate into some underexplored territory.

See here (and also here) for interesting discussions over on Less Wrong.  I welcome further discussion in the comments section of this post, and will jump in myself after a few days to address questions (update: eh, already have).  There are three reasons for the self-imposed delay: first, general busyness.  Second, inspired by the McGeoch affair, I’m trying out a new experiment, in which I strive not to be on such an emotional hair-trigger about the comments people leave on my blog.  And third, based on past experience, I anticipate comments like the following:

“Hey Scott, I didn’t have time to read this 85-page essay that you labored over for two years.  So, can you please just summarize your argument in the space of a blog comment?  Also, based on the other comments here, I have an objection that I’m sure never occurred to you.  Oh, wait, just now scanning the table of contents…”

So, I decided to leave some time for people to RTFM (Read The Free-Will Manuscript) before I entered the fray.

For now, just one remark: some people might wonder whether this essay marks a new “research direction” for me.  While it’s difficult to predict the future (even probabilistically :-) ), I can say that my own motivations were exactly the opposite: I wanted to set out my thoughts about various mammoth philosophical issues once and for all, so that then I could get back to complexity, quantum computing, and just general complaining about the state of the world.

152 Responses to “The Ghost in the Quantum Turing Machine”

  1. Scott Says:

    I just realized that comments had been accidentally closed on this post! They’re open now. Sorry about that.

  2. wolfgang Says:

    So does my dog have free will?
    He seems mostly driven by instincts, but I also notice a fundamental uncertainty about his next move …

  3. Scott Says:

    wolfgang #2: LOL! See Section 5.3 of my essay, about “the gerbil objection.” I’m willing to ascribe Knightian unpredictability even to a gerbil—and while I’ve never met your dog, I’d hope he’s at least as unpredictable as the rodents that he might enjoy chasing.

    (As someone who’s been scared of dogs my entire life, I’ve nevertheless often found myself adopting what Dennett calls the “intentional stance” toward them. That is, I think of a dog that barks at me, chases me, or bites me as choosing to be evil, and of one that refrains from doing those things as choosing to be good. But I concede that even a dog that bites me might simply be a victim of bad genes or of an unhappy puppyhood.)

  4. wolfgang Says:

    Scott,

    >> See Section 5.3 of my essay, about “the gerbil objection.”

    I am not sure it is the same thing. The gerbil in your example is used like a “random number generator” for the Turing-AI machine.

    I am thinking about my dog as he is. So my question is in between your “weather objection” (the dog’s brain is at the same “knife-edge” situation as a human brain I assume) and your “gerbil objection”.

    I like the dog question because i) it is easier to think about the free will of a ‘wet robot’ who is not human and ii) it asks if “Knightian uncertainty” really is about intelligence (of a Turing-AI machine).

  5. johnmerryman Says:

    Scott,
    Admittedly I’m one who hasn’t had the time to read the essay (doing this on a smartphone), but free will is a pet peeve, so I thought I’d offer a comment anyway.
    Whether the source of consciousness originates from a spiritual absolute, or is emergent along the way, it is still bottom-up. The absolute is basis, not apex, so it would be the essence from which we rise, not an ideal from which we fell.
    Good and bad are not top-down judgments, but the bottom-up binary code of attraction and repulsion. To the extent we decide between options, we are part of the deterministic code. It is this very process that is our will. To the extent external options guide our choices, we affect the external.
    This necessary determinism doesn’t mean we are fated, because while the process may be governed by laws, the input cannot be predetermined, as the cone of input is not complete prior to the event.

  6. Oleg Says:

    Scott,
    Here are some questions that popped up in my head after reading your essay

    1. What are your thoughts about another source of freebits: the ones that can be traced to unknown physical laws? For example, given all prior information, a rational individual could not decide on the probability of the existence of the Higgs boson before the actual experiments were made and that particular freebit was mined. Surely, for a super-powered demon with full knowledge of the laws of the Universe, that particular source of freebits will be empty. However, it would be a strong statement that an advanced civilization a million years from now will have nothing new to learn. Moreover, while it is not obvious to see exactly how primordial freebits have frozen out as macrofacts, the situation with unknown-laws freebits that were measured in experiments is much clearer.

    2. Have you tried to formalize your definition of a macrofact? How can there be a consistent system for describing macrofacts? What does it mean that some macrofact F equals or does not equal F’? How do the macrofacts “CMB photon was registered by detector at noon”, “detector showed CMB photon had such-and-such frequency”, “detector showed CMB photon had such-and-such polarization”, and “detector showed CMB photon had another polarization” relate to each other?

    3. Is it possible to transfer the source code and current state of a predictor P to the subject of prediction S via allowable sensory input? Would it break down P? Or is there a limitation on the ability of S to understand the source code and memory snapshot of P in order to make any practical use of them (for example, to avoid being predicted)?

    4. What is your scenario for how we could detect any Knightian freedom in the brain, given our current ability to simulate molecular and macroscopic systems with standard computers? What do you think will change with the appearance of quantum computers?

  7. The Freebit Vampire Says:

    Would my Freebit story be more appropriate here? I tried to sneak it in the last blog entry, but maybe it was too out of place …

    “They found another one, John.”

    “Another one what?” John replied distractedly. A moment later he swiveled his head and fixed his gaze directly on Mark. “Oh. No. You are kidding me!?” he shouted.

    “It only took the A.I.s a few days this time,” Mark said quietly. “The simulations confirm that the victim could not have been attacked more than a week ago.”

    “Poor bastard became predictable that quickly, huh?” John exhaled slowly, eyes closed, but remained visibly shaken.

    “Yeah. The QPol threw him in a scanner minutes after the alarm went off. His predicted-action-error rating was the lowest they’d ever seen. There wasn’t a single freebit left in his brain.” Mark cracked a weak smile. “Our perp must really be starved for uncertainty.”

    John did not smile in return. “You know the press is going to have a field day with this!” he replied sternly.

    Mark tossed the e-paper he was holding down on the table. “They already are,” he said, and pointed to the headline.

    “Freebit Vampire Strikes Again!”

  8. Scott Says:

    Freebit Vampire #7: Yes, the only problem with your horrifying story was that it was off-topic in the D-Wave thread! Here in the GIQTM thread, I hope it will amuse others as much as it amused me. :-)

    (At the risk of taking your story more seriously than you did: if you believed the freebit account, then a freebit vampire—something that converted systems with “baked-in Knightian unpredictability” into nearly-identical but predictable systems—would be almost exactly as practical to construct as a “negentropy vampire,” which converted irreversible systems into nearly-identical reversible ones.)

  9. George K Says:

    That’s all well and good, but how does your perspective respond to Peter van Inwagen’s Consequence Argument?

  10. Scott Says:

    George K. #9: LOL!! Now let’s wait for the first unironic such question…

  11. Shmi Nux Says:

    Scott, I have posted my summary of your paper, along with a whole lot of quotes from it, on the Less Wrong forum, in a (possibly vain) hope that it may spark a better-quality discussion than the last one. I tried to highlight where the freebit model differs from Eliezer’s metamodel of the world. I apologize in advance if I neglected to include essential pieces while emphasizing less relevant ones.

    http://lesswrong.com/r/discussion/lw/hq7/quotes_and_notes_on_scott_aaronsons_the_ghost_in/

  12. Scott Says:

    Shmi Nux (cool name by the way): Thanks! Nice choice of quotes.

  13. Scott Says:

    BTW, John Sidles, if you’re reading this (which I assume you are): your 2-week blog ban is officially over today. Young STEM researchers in the 21st century, QIST roadmaps, Kähler manifolds, nonlinear quantum state space: bring it on, I suppose… :-)

  14. Markk Says:

    I have a, well, practical philosophical question about your fun idea. You make a distinction between essentially unknowable (Knightian) states and just rather boring probabilistic uncertainty, which is baked into our current physical models. But aside from a God’s-eye view, how would anyone ever tell, in the real world, the difference between a Knightian state and a regular one?

    Two photons are coming into a future laboratory from deep space. We have figured out the exact mechanism the brain uses to amplify the microstate to a classical state, and apply it to each photon. Is the difference detectable in principle, or is it just the history of the particle that tells us? If there is a detectable difference, you seem to be positing a new law of nature about how to detect it. If not, then what would be the difference if I just substituted regular randomness for the Knightian kind in my predictions? We have no physical means of telling them apart, so my physical models of prediction should end up comparable for both? How would the probability distributions differ?

    By current QM theory’s time independence, I have lots of uncertainty about the past history of a particle anyway unless, again, I have the entire interaction history, which to me is impossible for a physical entity in the real world (godlike).

  15. John Sidles Says:

    Scott, although you and I differ pretty significantly in both style and substance, still I admire your scientific work greatly, and appreciate too the vital service that Shtetl Optimized performs in sustaining vibrant debate. Your views are — for me — an ongoing reminder that it is neither necessary, nor feasible, nor desirable that everyone think alike!

    Your Ghost in the Quantum Turing Machine essay has precisely *one* numbered equation (the unnumbered ones are more-or-less trivial corollaries) and in this respect the essay is *effectively* a homage to Stephen Hawking’s A Brief History of Time. Is this homage accidental? or deliberate?

    As for substantive comments regarding your essay, Shtetl Optimized‘s lack of equation capabilities makes them pretty difficult … in this regard Not Even Wrong is a better venue. As for why medical researchers care about these arcane quantum issues — and we do care, passionately — that’s discussed on Dick Lipton and Ken Regan’s wonderful venue Gödel’s Lost Letter.

    Summary: The farther we dads (and moms and aunties and uncles) journey into parenthood, the more new reasons we discover to care about quantum dynamics. Good!

  16. Michael Gogins Says:

    I am amused by this “once and for all” notion. I have not observed anyone publishing or posting philosophy in a way that even resembles “once and for all.” Perhaps my irony detector needs adjustment.

  17. Scott Says:

    John Sidles #15: No, there was no conscious homage to Hawking’s A Brief History of Time in my essay, though I did have occasion to mention Hawking radiation as well as the Hartle-Hawking no-boundary proposal. I’d humbly suggest turning down your “homage/allusion detector”—it seems to turn up too many false positives! :-D

  18. Scott Says:

    Michael Gogins #16: Obviously, I wouldn’t dream of suggesting that I’ve “licked the millennia-old free will problem once and for all”—anyone who thinks I’m capable of such a doofus claim simply hasn’t read the essay. My goal, rather, was to get what I, personally had to say about free will and related problems permanently out of my system. Now, maybe I failed even at that more limited goal, but time will tell: I’m not ready to concede failure yet! :-)

  19. Michael Gogins Says:

    I didn’t mean to suggest you thought you had licked this one for once and for all. No, my irony detector was shorting out on the idea that, having contributed a substantial and as far as I can tell somewhat original essay on the issue of free will, you would actually be able to walk away from the issue and just leave it alone.

    Frankly, I think we’d all be better off if you would simply oscillate, more or less gently, between science and philosophy. You’d hardly be the first.

  20. Scott Says:

    Michael #19: LOL, judging from my past obsessions you might get your wish! (And of course, there are lots of other interesting puzzles in philosophy having nothing to do with free will. In fact, my 59-page Why Philosophers Should Care About Computational Complexity essay never mentioned free will at all, mostly because I don’t see how complexity theory illuminates it much.)

  21. IRTFFWM Says:

    Page 66: In order for the freebit picture to work, it’s necessary that quantum uncertainty—for example, in the opening and closing of sodium-ion channels—can not only get chaotically amplified by brain activity, but can do so “surgically” and on “reasonable” timescales.

    Though it’s not quite the same thing, consider one photon hitting the retina, to at least show the plausibility of (chaotic?) amplification done surgically on reasonable timescales.

    On the other hand, maybe it’s not so different. What’s to stop freebit reception from being mere sensory input?

  22. IRTFFWM Says:

    In Section 3.1 you (quite reasonably) dismiss the argument (*)

    (*)Any event is either determined by earlier events (like the return of Halley’s comet), or else not determined by earlier events (like the decay of a radioactive atom). If the event is determined, then clearly it isn’t “free.” But if the event is undetermined, it isn’t “free” either: it’s merely arbitrary, capricious, and random. Therefore no event can be “free.”

    by rejecting the pre-determined/arbitrary-capricious-random dichotomy. You bring up the possibility of Knightian uncertainty. So far I’m fine with this. You just need enough wiggle room to evade such a dichotomy so that the argument in (*) fails to preclude free-will.

    But the specific concept of freebits seems to me to be merely one way of doing this. If the laws of physics are just effective theories, that model an extremely, but not absolutely, mathematical world (i.e. if the universe is not literally identical to a piece of mathematics, but instead merely bears a very close resemblance to a piece of mathematics almost everywhere and almost always), then that should give enough wiggle room to thwart (*). You don’t need to assume that laws are time reversible, or that the validity of quantum mechanics is as extensive as you assume. It seems that you just need some form of Knightian uncertainty, not necessarily freebits.

  23. Woett Says:

    Scott, I just read the first 10 pages of your essay and I already love everything about it. Thank you very much for writing it!

  24. Andrew Foland Says:

    I don’t understand the CMB-freebit discussion on p. 38. The surface of last scattering (SLS) is classically well understood: it’s a hot thermalized plasma. So in the analogy, it seems that would mean the CMB is like the laser drift, with classical degrees of freedom that can be (and have been) noninvasively probed? It seems like the SLS stands between us and any early-universe freebits, rather than being a potential source of them.

    (And isn’t any degree of freedom that “pierces” the SLS also going to be essentially uncoupled from the ordinary matter that makes up our brain? So you’d have to posit a vast relic number density of them in order for them to regularly interact with us; perhaps a number density high enough that it would have falsifiable cosmological consequences. For instance, there are already limits on additional neutrino species from cosmological measurements.)

  25. Koray Says:

    I think Bostrom’s puzzle of waking clones is just a mistake.

    If he/you were to formally write out the math, you’d realize that the statement that corresponds to “P(subject waking up in a white room)” is meaningless because it doesn’t refer to a fixed subject, i.e. subject#0 or subject#1..subject#999 in the case of cloning.

    One may object that revealing their numbers to subjects already tells them the result of the coin flip. Then, consider this variant:

    There’s always cloning, and numbered subjects are randomly assigned to rooms 0 through 999 in a way that favors assigning higher-numbered subjects to higher-numbered rooms. Subjects don’t know their own numbers, but they do know their room numbers. A subject wakes up in room#987. How would he calculate his odds of waking up in that room?

    He simply could not because it would depend on his subject number, which he doesn’t know. He can’t just say that “I could be subject this or that”. There’s no such event as “a” subject waking up in room#5; there’s “always” a subject waking up in room#5. Therefore, no subject waking up in room#5 can tell whether they are waking up in a room against very strong odds.

    Similarly, lacking this identity information, the person that wakes up in a white room in either variant of the original puzzle cannot write the precise name of the event he witnessed in order to calculate its odds.

  26. Scott Says:

    IRTFFWM #22:

      But the specific concept of freebits seems to me to be merely one way of doing this. If the laws of physics are just effective theories, that model an extremely, but not absolutely, mathematical world (i.e. if the universe is not literally identical to a piece of mathematics, but instead merely bears a very close resemblance to a piece of mathematics almost everywhere and almost always), then that should give enough wiggle room to thwart (*).

    First of all, I’d regard math as a universal language, capable of describing anything that’s describable at all. (And if something isn’t describable at all, then why are we leaving blog comments about it? Whereof one cannot speak, etc.) So the real question—and maybe the question you meant—is just whether something is describable by simple, elegant, or nontrivial mathematics.

    Even there, however, for me it’s not a satisfying answer to say: “sure, the universe bears a very close resemblance to a piece of mathematics M governed by simple, elegant rules, almost everywhere and almost always—but it’s not literally identical to M.”

    I want to push further, and ask: where, exactly, does the correspondence between the universe and M break down? And can we understand, from the study of M itself, why that would be a “natural” place for it to break down?

    I completely agree that “the freebit concept” isn’t the only logically possible way of answering the above questions: it’s just the only way I was able to think of, based on physical laws I understand.

  27. Scott Says:

    Andrew Foland #24:

      The surface of last scatter (SLS) is classically well-understood, it’s a hot thermalized plasma. So in the analogy, it seems that would mean the CMB is like the laser drift with classical degrees of freedom that can be (and have been) noninvasively probed? It seems like the CMB SLS stand between us and any early-universe freebits, rather than being a potential source of them.

    It’s an excellent question, and something I worried about a great deal.

    The best I can tell you is that, in this sort of discussion, I’m leery of any argument of the form, “X is thermalized, therefore we fully understand X, and see that it screens off any interesting information.” After all, Hawking made an analogous argument about Hawking radiation, but there’s now evidence from AdS/CFT that he was wrong: it seems like Hawking radiation really can encode all the data about the infalling matter, if you probe it more carefully (even though, of course, piecing together the original information would be ridiculously hard).

    In the case of the CMB, we know that there are anisotropies, and moreover that the anisotropies persist at whatever scale you look. And people continue to examine those anisotropies for deviations from pure Gaussianity (though claims to have found such deviations have tended to get quickly shot down). Now, we know that the anisotropies at the largest scale (whether purely Gaussian or not) played a central role in structure formation in the early universe. It would certainly be amusing if the anisotropies at the smallest scale played a role in what you’ll eat for breakfast tomorrow! Maybe such speculations can be completely ruled out through more detailed consideration of the SLS; and if they can, then I’d say that the idea of the CMB as a freebit carrier is dead. I don’t know, and would be grateful for thoughts.

      And isn’t any degree of freedom that “pierces” the SLS also going to be essentially uncoupled from the ordinary matter that makes up our brain? So you’d have to posit a vast relic number density of them in order for them to regularly interact with us; perhaps a number density high enough that it would have falsifiable cosmological consequences. For instance, there are already limits on additional neutrino species from cosmological measurements.

    Yes, for exactly the reasons you give, I’d be leery of positing additional neutrino species, dark matter, or anything similarly exotic as freebit carriers. For as you point out, it wouldn’t be enough for the hypothesized new species to exist at high enough density: it would also have to interact with ordinary matter at a high enough rate, in which case why haven’t we detected it already?

  28. mkatkov Says:

    Scott #26. “First of all, I’d regard math as a universal language, capable of describing anything that’s describable at all. ”

    There is a logical possibility that you are wrong. Math is a convenient language for human society to comprehend the world and reach consensus. Whether it is universal, in the sense of being independent of human beings, is an open question.

    One of the places where the link between U and M may be broken is countable/uncountable infinity. Most of math, including complexity theory, is based on Cantor’s diagonal argument, which may be convincing for human beings but does not necessarily reflect physical reality.

    As a consequence, probability, measure, etc. require a lot of inelegant constructions (measurable sets, “almost sure”, open sets, etc.), which “require” a true infinity in physical reality in order to neglect “measure 0”.

    In the end there may be more math than you think, e.g. “quantum math”, where the real statements M is publishing are reflected in the “measurement” of a “free will” agent, in particular by the human brain.

  29. Scott Says:

    mkatkov #28: Well, a simple “proof” of math’s descriptive universality would be to consider any other possible language—call it L—and then note that, provided sentences in L can be reliably transmitted from one physical entity to another, those sentences can also be encoded as positive integers. ;-)

    That sounds stupid, but I think it does point to the general difficulty with even imagining a “non-mathematical” way of understanding the world. The issue is, what does one even mean by “non-mathematical” in this context? Whatever new objects (or relationships between objects) were required, why wouldn’t our mathematical understanding simply grow to let us discuss those objects and relationships, as happened many times in the past?

    Of course, nothing in this argument demands that the math will be simple or elegant—or that the concepts involved (complex numbers, group representations, Riemannian geometry…) will be the same ones that mathematicians found to be fundamental for their own separate reasons. That that’s so often turned out to be the case I think is the real mystery (or miracle, or puzzle, or whatever) that Wigner was talking about in his famous essay.
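    The encoding move in the first paragraph of #29 is just standard Gödel numbering, and can be made concrete in a few lines. (A toy sketch; the function names and the particular byte-level encoding are mine, purely illustrative, and not from the essay.)

    ```python
    # Toy illustration of comment #29: any sentence in a language L that
    # can be reliably transmitted as a finite string of symbols can be
    # reversibly encoded as a positive integer.

    def sentence_to_int(sentence: str) -> int:
        # Prepend a 0x01 byte so that leading zero bytes survive the round trip.
        return int.from_bytes(b"\x01" + sentence.encode("utf-8"), "big")

    def int_to_sentence(n: int) -> str:
        data = n.to_bytes((n.bit_length() + 7) // 8, "big")
        return data[1:].decode("utf-8")

    s = "Whereof one cannot speak, thereof one must be silent."
    n = sentence_to_int(s)
    assert n > 0
    assert int_to_sentence(n) == s
    ```

    Nothing about the construction requires the language to be “mathematical” in advance; reliable transmissibility as a finite symbol string is the only assumption.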

  30. Anonymous Programmer Says:

    I think MIT is kind of a bubble of disbelief in free will, because physicists and AI gurus don’t like the idea of free will. If you were to walk over to Harvard, belief in free will would go up, because business, law, and economics are more prominent there.

    I think it is much easier to convince the average person that there is no god than that there is no free will, because they know from first-hand experience that there is.

    Mathematicians, including TCS researchers, shouldn’t have to kowtow to physicists; they are to the right of them on the xkcd purity scale: http://xkcd.com/435/

    TCS researchers can explore what real free will might mean by inventing complexity zoo animals all without having to commit to a belief in free will one way or another.

    This seems to have happened already: Alice and Bob are presumed to have free will. Nondeterministic Turing machines even seem to have God consulted at each important step about whether to go left or right. MA and AM have a wizard. IP has two interacting free-will agents.

    Maybe someone could invent a well-defined complexity class that could see a large 2D grid of colors, have strong emotional reactions to some patterns, and use its free will, reacting to those emotions, to move its virtual head or hands to change its programming or data. Have any of the AI gurus made a complexity class? Maybe too complex for a complexity class?

  31. Scott Says:

    Anonymous Programmer #30: Well, I’ve noticed that, when I’ve talked to (non-quantum-information) physicists about these ideas, I’ve often had to give a lengthy explanation of what “Knightian uncertainty” means, and how it’s even logically possible that something could be neither deterministic nor probabilistic. For theoretical computer scientists, on the other hand, I just say “it’s nondeterminism” and we’re done! :-)

    In fairness, though, math and CS folks will sometimes display an a la carte attitude to established physics that leaves even me aghast. E.g., “if standard, linear, textbook QM is too hard to reconcile with our favorite conjectures about how the world should work, then why not just assume standard QM is wrong? It never made much sense to me anyhow.”

    (Also, however much some “physicists and AI gurus” might disagree with my essay, I’d say I’m still much closer to them in attitude than to anyone who claims to “believe in free will” on emotional or philosophical grounds, and isn’t even troubled about how to reconcile that belief with a scientific worldview.)

  32. wolfgang Says:

    Scott #18

    >> what I, personally had to say about free will

    I think this statement smells of free will 8-)

  33. Gil Kalai Says:

    Scott, why is “probabilistic determinism” inconsistent with “free will”, just as (full) “determinism” is? Suppose the dinosaurs could have predicted long ago that, conditioned on you existing now (and they could specify “you” as “a blogger and TCS researcher with initials ‘SA,’ in the Boston area at the beginning of the 21st century”), for moral choices there is a 50% chance you make the right choice and a 50% chance you make the wrong choice (at your free will). Why is such probabilistic indeterminism inconsistent with belief in free will?

  34. John Sidles Says:

    Scott asks (rhetorically) “Math and CS folks will sometimes display an a la carte attitude to established physics that leaves even me aghast: ‘If standard, linear, textbook QM is too hard to reconcile with our favorite conjectures about how the world should work, then why not just assume standard QM is wrong?'”
    —————————-
    In recent decades, mathematicians have embraced a comparably skeptical attitude toward thermodynamics — with great success!

    Shtetl Optimized readers might perhaps enjoy two well-respected articles in this regard: David Ruelle’s Is Our Mathematics Natural? The Case Of Equilibrium Statistical Mechanics (1988) and Vladimir Arnold’s Contact geometry: the geometrical method of Gibbs’s thermodynamics (1989).

    The latter article begins with Arnold’s celebrated aphorism:

    ———————-
    “Every mathematician knows that it is impossible to understand any elementary course in thermodynamics.”
    ———————-

    Is quantum mechanics (as physicists explain it) comparably “impossible for mathematicians to understand?” Yes, and for well-founded mathematical reasons! :)

  35. Scott Says:

    Gil #33: All I claim is that, if you want to hold that sort of probabilistic determinism to be compatible with free will, then you might as well go “hardcore compatibilist,” and hold even deterministic determinism to be compatible with free will. Otherwise, you’re forced to some strange conclusions. For example, a robot with a built-in quantum random number generator might have free will, but as soon as the quantum RNG was swapped out for a pseudo-random generator, the robot’s free will would be lost. (There was actually a recent Futurama episode based around that premise—but they treated it as a joke!) Likewise, suppose there were a million copies of me; then the dinosaurs could be essentially certain that 500,000 copies (plus or minus a few thousand) would make one choice and 500,000 would make the other. Moreover, the million-bit string encoding which copies of me made the first choice would almost certainly be Kolmogorov-random, with no discernible pattern. And still I’m supposed to accept that each individual copy has “free will”?
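    The “500,000 plus or minus a few thousand” claim in #35 is ordinary binomial concentration; a minimal check of the arithmetic (my sketch, following the comment’s setup of a million independent fair choices):

    ```python
    # Arithmetic behind comment #35: n = 1,000,000 independent copies,
    # each choosing one of two options with probability 1/2. The count
    # of first-choosers concentrates at n/2 with standard deviation
    # sqrt(n/4), so "plus or minus a few thousand" is many sigma of slack.
    import math

    n = 1_000_000
    mean = n / 2                 # 500,000
    std = math.sqrt(n / 4)       # 500

    assert mean == 500_000
    assert std == 500.0

    # Even crude Chebyshev: a deviation beyond 5,000 (ten standard
    # deviations) has probability at most (500/5000)^2 = 1%.
    assert abs((std / 5_000) ** 2 - 0.01) < 1e-12
    ```

    So the dinosaurs’ aggregate prediction is on extremely safe ground, which is exactly why that aggregate certainty seems to say so little about any individual copy.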

  36. ppnl Says:

    Have not looked at the essay yet so apologies if I’m covering old ground…

    The free will debate seems to me to miss the real mystery. As has been pointed out, it seemingly cannot even be defined coherently.

    The real mystery is the fact of experience itself. The problem with mindless zombies isn’t their lack of free will but their lack of experiences. If we figure out how to give them experiences, then maybe we can ask coherent questions about free will.

    I should note that the existence of experience does not require free will. But the lack of experience would seem to make free will moot. And the existence of experience is a mystery even if free will does not exist.

  37. Boris Says:

    I read your blog regularly, but I don’t normally comment. Posts here frequently attract some rather harsh criticism, while supporters sometimes stay silent, so I’d like to mention my appreciation.

    For various reasons, I won’t comment on any of the content of your paper, but know that I did tirelessly read through it all.

    In any case, it’s hard to convey sincerity even in person, let alone on the Internet in general or your blog specifically, but please understand that I mean this very directly:

    Thank you for writing and posting this paper. I disagree very strongly with many of the ideas espoused in the paper, and I can’t imagine myself having the courage to post a similar paper in such a public forum. However, I believe that I (and the world) derive value from your having done so, so again, as sincerely as you’ll take it: thanks.

  38. Scott Says:

    ppnl #36: Well, yes, “the fact of experience” is a toughie! :-) In fact, I regard the “hard problem of consciousness” as so far beyond us, that it’s not even clear that science or rational argument give us any sort of toehold. By contrast, my claim is that, while the “will” part of “free will” might be just as ethereally inaccessible as consciousness is, we can make progress on understanding the “free” part, by studying the actual predictability of human choices (both in principle and in practice). What would such progress look like, and why would it have anything to do with “free will”? Well, read the essay!

  39. Scott Says:

    Boris #37: Thanks! But it turns out that it’s easy to have “courage” when you also have tenure. ;-) And anyway, I already caught a thousand times more flak over D-Wave than I seem to be catching over this—I suppose because there’s no actual money at stake here, but only the nature of time, causation, and human freedom.

  40. Shmi Nux Says:

    Scott, thanks! For the record, while I find the idea of freebits fascinating, and the notion of fundamental unpredictability based on them even more so, I am very skeptical of the idea that the free part is related to free will at all. I would be more interested in someone setting up an experiment proving that some reasonably microscopic events, like, say, spontaneous emission in the CMB frequency range, have some Knightian uncertainty in them.

    I also hope that this tenure thing will not cause you to go off the deep end and into the murky waters of untestability, like what happened to Don Page with his Boltzmann brains or to David Deutsch with his MWI apologia.

  41. Mike Says:

    Yeah, Page has made no recent contributions, and what can one say about that crackpot Deutsch? I mean just read his books. Not one intelligent, clear and thoughtful thing to say. Nothing new or interesting. A real disappointment. Why, I’ll bet he doesn’t even know what the word “apologia” means! ;)

  42. Scott Says:

    Shmi #40: I did often question my sanity in agreeing to write this essay, but there are two amusing ironies in what you wrote. First, whatever else one says about it, this essay probably has more “testable predictions” than any other paper I’ve written! (Unless you count, e.g., “we conjecture that there exists an oracle separating SZK from QMA” as a testable prediction.) Second, if “going off the deep end” means becoming like David Deutsch, then I guess I should ask where to jump… :-)

    (Just realized I crossed comments with Mike there.)

  43. Jay Says:

    First thank you Scott for this paper. ;-)

    Reading the discussions on “Less Wrong” I found myself agreeing both with Shmi Nux:

    “I find the arguments completely unconvincing, but Aaronson is always thought-provoking and fun to read (…) the time spent reading it is not wasted at all.”

    and with Daniel Varga:

    “If I think up what seems like an obvious objection, I will resist assuming that I have found a Weaksauce Weakness in the experts’ logic. Instead I may ask politely whether my argument is a valid one, and if not, where the flaw lies.”

    Ok, so please let me politely check my understanding of, first, Knightian Uncertainty, using an example.

    Suppose I play a game in which I must guess the probability that a bag contains more blue balls than red balls. Using Bayesian reasoning this is quite simple. I start with prior 0.5 for each… blabla… then posterior probability… blablabla.

    In the context of your essay, these probabilities are standard, non-Knightian, quite boring actually.

    After I had picked up 14375 blue balls and 14429 red balls, the next three balls I collected were white, yellow, and green. Shit.

    In the context of your essay, the probabilities that I’d get these three last balls are of the Knightian sort. My lack of information was not only about the red and blue balls I was expecting, something I could easily deal with, but also about which Bayesian model I should have used in the first place.

    This is Knightian because it’s something I couldn’t guess, no more than I can guess my probability of getting a black ball next time. Should I say the probability of getting a new color is bounded? Or that all colors are allowed? Or all colors that I can recognize? Should I change my priors if I happen to get a blue cube instead of a blue ball?

    So, my polite question is, is this situation a correct description of what you called Knightian Uncertainty in your essay?
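Jay’s elided Bayesian steps can be made concrete with a minimal two-hypothesis sketch (an illustration only; the 60/40 bag compositions are invented for the example):

```python
# Minimal Bayesian update for the bag game (illustrative assumption:
# the bag is either 60% blue or 60% red, with prior 0.5 each).
def posterior_majority_blue(draws):
    """P(majority blue | sequence of 'b'/'r' draws), starting from prior 0.5."""
    p = 0.6                    # chance of blue under the majority-blue hypothesis
    like_blue = like_red = 1.0
    for ball in draws:
        if ball == 'b':
            like_blue *= p
            like_red *= 1 - p
        else:
            like_blue *= 1 - p
            like_red *= p
    # Equal priors cancel in Bayes' rule, leaving a likelihood ratio.
    return like_blue / (like_blue + like_red)

print(posterior_majority_blue('bbbrb'))  # blue-heavy draws push this above 0.5
```

A white, yellow, or green ball has likelihood zero under both hypotheses, which is exactly the model failure Jay describes: the uncertainty concerns which model to use, not a probability within the model.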

  44. Shmi Nux Says:

    Scott: oh, I agree that your essay stands out in terms of trying hard to make testable predictions out of something as nebulous as free will :) As for David Deutsch, I simply argued that Deutsch – MWI advocacy > Deutsch. But I certainly agree that Deutsch > most of the rest of the field, though you might still have a fighting chance, who knows :)

    Mike: consider reading http://en.wikipedia.org/wiki/Principle_of_charity.

  45. Jay Says:

    [PS: sorry I inverted the names of Daniel Varga and Shmi Nux in #43]

  46. Mike Says:

    Shmi Nux@44,

    What a wonderfully obscure and eccentric link — are you sure you’re not John Sidles in disguise? :)

    Of course, I always begin with charity in my heart, but as a good Bayesian you well know it’s important to update based on the evidence.

    Anyway, as an avowed radical instrumentalist you shouldn’t take what I have to say as something that actually happened in the “objective” world, should you?

  47. Rahul Says:

    Scott #42:

    First, whatever else one says about it, this essay probably has more “testable predictions” than any other paper I’ve written!

    Which ones, for example? Can someone list a few? I haven’t finished reading the paper but haven’t bumped into any testable predictions yet.

  48. Austin Frisch Says:

    Scott, thanks for the essay. I was expecting reasonably sound arguments for a worldview that I would still be able to reject for some reason, probably aesthetic. What I’m finding is that the freebit idea does have some ring of truth, enough that my own favorite pet theories are facing new competition in my head. Specifically, I’ve always had a very firm belief that free will is a natural and useful concept to humans, but not something that could be literally true. At the same time, I saw very serious problems with assuming the initial conditions of the universe to be drawn from a “normal” probability distribution. For example, how can we ever rule out the possibility that two candidate universes are isomorphic in some sense, à la AdS/CFT, or that one is embedded in another? Even drawing up a formal definition of what should count as an isomorphism/embedding seems very hard to me, maybe impossible… And then of course you have to decide what the consequences would be, prior-wise, etc.

    Anyway, the contradiction seems glaring in retrospect, but honestly I don’t believe it had ever crossed my mind before reading your essay, and I think about this type of thing a lot. Makes me wonder what other incompatible notions I’ve cooked up/stolen…

    Question, has writing this relieved your mind of any “philosophical burden” yet? ;)

  49. ppnl Says:

    Ok I’m about halfway through the thing now and I have a question about freebits. Sorry if it is silly as I’m struggling to grok this thing.

    Say you measure the polarization of some photon from deep space. This seems like ordinary probability since it is just the collapse of a local wave function. But say unknown to you the photon is one of an entangled pair of photons. Unknown to you the other half of the photon pair was measured by an alien scientist at a time and place outside your light cone. Unknown to you the probability of getting the result you got was 100% rather than the 50% you expected. And because the other measurement was made outside your light cone there is no way even in principle you could have known this.

    Is this a freebit and if not why not? There is a PMD of a sort but it is unobtainable before your prediction. Well since you can’t really define which measurement you made first maybe we should just call it an MD.

    Pushing your past macroscopic determinant back to the beginning of the universe seems arbitrary and unnecessary. Pushing it past your light-cone seems a good alternate definition that would serve your purpose.

  50. Gil Kalai Says:

    A few thoughts related to Scott’s paper.

    1) The “free bits” sound a little similar to “coinbits”. (Of course, free bits are a resource of Knightian uncertainty, while coinbits are a resource for universal money, but still :).)

    2) An idea which may support the existence of plenty of free bits and Knightian uncertainty is the “two quantum computers solution”, according to which our universe can be described by two (or several) non-interacting, or, more precisely, extremely weakly interacting, quantum computers, based on two distinct tensor product structures on the same Hilbert space. We can have on the same Hilbert space two different independent tensor product structures, and assume that every state is a superposition of two states, each described by one of two quantum computers acting on these distinct tensor product structures. In this case, states achievable by one quantum computer will be nearly orthogonal to states achieved by the other. If we and all of nature known to us interact only with one of these tensor-product structures, the qubits in the other serve as free bits.

    3) The Game of Life is mentioned in the first footnote, so let me refer to a recent question that I asked on TCS overflow, which I am curious about: does a noisy Game of Life still support computation?

  51. LZ Says:

    Scott #38:
    > In fact, I regard the “hard problem of consciousness” as so far beyond us, that it’s not even clear that science or rational argument give us any sort of toehold.

    I don’t agree with this statement: I find that research in neuroscience has already shed some light on the problem of consciousness and it will continue to do so. That doesn’t mean it will settle the problem, but I think it can push us, step after step, beyond the boundaries of our current understanding.
    Are you familiar with Patricia Churchland’s writings? I think she explains pretty well why progress in (neuro)science can matter in the consciousness debate.

  52. luca turin Says:

    Albrecht and Phillips address the question of quantum effects on neuron function and —in my opinion— make a convincing case for quantum fluctuations making a difference. http://arxiv.org/abs/1212.0953

  53. Alexander Vlasov Says:

    Gil Kalai #50, the Game of Life has some inherent noise suppression due to irreversibility. It is interesting that the usual Game of Life, after inversion of the arrow of time, may produce the illusion of a system with free will, because the inversion of each step is not deterministic.
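That the inverse step is non-deterministic follows from the forward step not being injective, which a minimal sketch can check (my own illustration; the grid size and cell positions are arbitrary):

```python
def life_step(cells, size=8):
    """One Game of Life step on a size x size grid; cells is a set of (x, y)."""
    nxt = set()
    for x in range(size):
        for y in range(size):
            # Count the live cells among the 8 neighbors of (x, y).
            n = sum((nx, ny) in cells
                    for nx in (x - 1, x, x + 1)
                    for ny in (y - 1, y, y + 1)
                    if (nx, ny) != (x, y))
            # Birth on exactly 3 neighbors; survival on 2 or 3.
            if n == 3 or (n == 2 and (x, y) in cells):
                nxt.add((x, y))
    return nxt

# Two different states with the same successor, so the inverse step is
# not deterministic: a lone cell dies, and an empty grid stays empty.
print(life_step({(4, 4)}) == life_step(set()))  # prints True
```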

  54. Scott Says:

    Rahul #47:

      Which ones, for example? Can someone list a few? I haven’t finished reading the paper but haven’t bumped into any testable predictions yet.

    Err, you might want to check out Section 9 (“Is the Freebit Picture Falsifiable?”). :-) See also Sections 3.3 and 3.4. Or for that matter, see comment #24 above by Andrew Foland (and my response in comment #27). Foland is asking about precisely the sort of thing (i.e., whether the surface of last scattering “screens off” all quantum information from the Big Bang) that, if demonstrated in detail, could kill the freebit picture.

    The tl;dr summary is that, for the freebit picture to work, at least three empirical conjectures have to be true:

    (1) Undecohered freebits have to be able to reach us all the way from the Big Bang, via one causal pathway or another (without being “screened off” by classical intermediaries, which is what Foland was talking about).

    (2) Once those freebits reach a human brain, they have to be able to influence neural events “surgically” (i.e., with few or no side-effects) and on humanly relevant timescales.

    (3) A quantum theory of gravity, once we have one, must not prohibit the initial state from containing freebits (as certain proposals, like Hartle-Hawking, probably would).

  55. Scott Says:

    Jay #43:

      So, my polite question is, is this situation a correct description of what you called Knightian Uncertainty in your essay?

    Any time you have a problem where you’re not given enough information even to calculate the probability of some event: yes, you could call that “Knightian uncertainty.” (The concept itself is neither deep, nor complicated, nor in any way novel; see Wikipedia for example.)

    Having said that, I confess that I didn’t follow the details of your example. Regardless of whether the number of balls of each color is known or unknown, if the balls are well-mixed, then it’s always going to be exceedingly unlikely that you draw tens of thousands of blue and red balls only, but then white, yellow, and green balls in immediate succession. So no, that’s not Knightian uncertainty, that’s just a freak coincidence! :-) Knightian uncertainty would just mean that you don’t know how many balls of a given color are in the bin, and aren’t even told a probability distribution over the number (though you might have upper and lower bounds or other partial information).
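The contrast Scott draws can be sketched as an interval computation (an illustration with made-up numbers, not from the essay):

```python
from fractions import Fraction

# Ordinary (probabilistic) uncertainty: the composition is known, so
# the chance of drawing red is a single number.
def p_red_known(red, total):
    return Fraction(red, total)

# Knightian uncertainty (illustrative numbers): you know only that the
# bin holds `total` balls, of which between lo_red and hi_red are red,
# with no distribution over the possibilities -- only an interval.
def p_red_bounds(lo_red, hi_red, total):
    return Fraction(lo_red, total), Fraction(hi_red, total)

lo, hi = p_red_bounds(20, 70, 100)
print(p_red_known(30, 100))  # a single probability, 3/10
print(lo, hi)                # only bounds: 1/5 and 7/10
```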

  56. Scott Says:

    luca turin #52: Thanks so much for that interesting pointer! It was reassuring to read a paper that makes such a mammoth claim for the role of QM in human life (i.e., that all probability is ultimately quantum in origin) that what I say looks timid and conservative by comparison. :-)

    Seriously: I don’t yet have an opinion about their argument for the neural sodium-ion channels being sensitive to quantum noise, and would be interested if any experts wanted to weigh in.

    But while it’s irrelevant to my essay, I have to say that I strongly disagree with Albrecht and Phillips’s larger thesis, that even classical probability is only justified in a quantum world. For me, the crux is just that we can easily imagine a universe governed by classical physics alone—and we can tell several compelling stories about why, even in that hypothetical world, rational agents (if there were any) would want to quantify their epistemic uncertainty using probabilities whenever they could. (A major gap in the paper is that it never grapples with the standard “derivations” of probability theory due to Savage, Cox, etc.)

    It’s as if someone argued that, because you need quantum mechanics to get stable solid objects in our universe, therefore the very concept of solidity makes no sense without QM. The fallacy is that we could easily make up other laws of physics that allowed solid objects for “classical” reasons having nothing to do with QM.

  57. Scott Says:

    LZ #51:

      I find that research in neuroscience has already shed some light on the problem of consciousness and it will continue to do so. That doesn’t mean it will settle the problem, but I think it can be push us, step after step, beyond the boundaries of our current understanding.

    I suspect that we don’t actually disagree, but just mean different things by “the problem of consciousness.” As an outsider, I love reading about neuroscience; it’s obviously told us an incredible amount about the physical correlates of consciousness (through the studies of split-brain patients and amnesiacs, fMRI scans, the “global workspace” model, etc. etc.). I hope and expect that it will tell us more in the coming decades.

    But then there’s the logical point—eloquently driven home not only by philosophers like Chalmers, McGinn, and Rebecca Goldstein, but also by scientists like Steven Pinker and Christof Koch. Gather up the entire staggering output of the neuroscience of 500,000 years from now. Then why couldn’t exactly the same corpus have been written in a “zombie universe,” which fulfilled all the same causal relations but where no one “really” experienced anything?

    If you’ve spent any time arguing with mystics or (intellectually-minded) religious people, you’ll know that they keep bringing up this explanatory gap over and over and over, in order to prove that there are aspects of reality that science can’t touch by its very nature. Personally, I like to answer the mystics with a left hook: I admit the reality of a vast conceptual chasm between neural firings and “the smell of daffodils.” I refuse to play the “scoffing materialist” role in their script. I even assure them that I, too, have interiority (really, I’m not just saying it! ;-) ), even though the laws of physics (or any other laws I can possibly imagine) would seem just as happy to have me be a functionally-identical zombie.

    Then I flip the question around: “so, what do you know about this mind/body chasm that science doesn’t? which physical systems are conscious, and which aren’t? how do you know? why are your mystical ideas right, and this other person’s totally different mystical ideas wrong? how would you convince a skeptic less mystically gifted than yourself? I might not have all the answers, but at least I don’t claim to have them—and at least I have rationally-justifiable answers to some questions!”

  58. ppnl Says:

    LZ #51

    >I don’t agree with this statement: I find that research in neuroscience has already shed some light on the problem of consciousness and it will continue to do so. That doesn’t mean it will settle the problem, but I think it can push us, step after step, beyond the boundaries of our current understanding.

    In what way? Careful, you are on the verge of failing my Turing test!

  59. Jay Says:

    Thx Scott, the details were not important and your answer is as clear as one could expect. As you will have a lot to read, here are the shortest answers I could make.

    The flowers:

    1) to replace one question with another: this is a bright strategy!

    2) the structure of the writing is imho so entertaining and clear that it redefines what it is to write this kind of essay

    3) this may well be the last significant philosophical contribution challenging the possibility of uploading our brains before it becomes an experimental question.

    The pot:

    1) the replacement question you have chosen makes little sense:
    -If you’re forced for the rest of your life to write poetry, few will say you’re free, even if the quality of your production is Knightian-unpredictable.
    -If someone could inform you of the exact consequences of all your future decisions, and you could decide this is exactly the future you want, you would probably consider yourself free even though your decisions had been predicted.

    2) you did not provide any proof that a classical computer equipped with boring noise can’t exhibit Knightian Uncertainty. Actually it seems easy to provide counter-examples.

    3) your predictions are far-fetched where they could have been simple and easy. Notably, you should have predicted that whatever the mechanism that protects the freebits and allows them to act on the macroscopic state of the brain, it could not have evolved to protect against the strong and unnatural scrambling of the microstates provided by an fMRI session.

    4) all questions that you argue hard to answer in case copies are possible, have a counterpart equally hard without copies.
    -just one example to clarify: someone either makes 10 green balls or 1 red ball. You’re color-blind and you happen to collect one ball. What is the probability that it is red? This is a difficult question for the same reason your original question is difficult. We can call that “Knightian Uncertainty”, or we can call that an ill-defined question.

    Best,

  60. Scott Says:

    ppnl #49: You ask an excellent question. Personally, I’d say that, if a PMD is outside our cosmological horizon—so that it could never be measured, even in principle, by any Predictor later able to interact with us—then that PMD doesn’t “count,” and whatever photon polarization it theoretically “determines” can still be a freebit for us. Indeed, this view is pretty much demanded by the general philosophy of my essay: that “causal determination” doesn’t count for beans, unless there’s some way, consistent with the laws of physics, to “cash it out” into an actual prediction. For otherwise, this whole discussion never even makes it past the Calvinist theologian who claims that there can’t be any freebits, since everything that happens is “causally determined” by the unknowable Mind of God.

    As explained in comment #58, I’m not a “cartoon positivist,” who declares that anything that can’t be measured doesn’t exist. But I do want to hew as closely as possible to what can be measured when building up my ontology—even if that desire puts me at odds not only with the “mystics,” but also with many scientific rationalists. And yes, that approach makes me rather noncommittal about the “reality” of whatever might be outside our ~10^122-Planck-area cosmological horizon. Just like with parallel universes, give that stuff an explanatory role to play in something I can observe, and then we’ll talk! :-)

  61. Mike Says:

    Well, doesn’t the theory of parallel universes provide an “explanation” for single photon interference? I don’t mean simply “predicting” it — any old interpretive brand of standard QM does that. Let’s talk!! ;)

  62. LZ Says:

    @Scott #58: okay, now that you have clarified what you meant I only partially disagree with you :)

    I totally agree that “mystical” theories about the nature of consciousness are useless not because they are wrong (even if they just are) but because they have no explanatory or predictive power whatsoever and they are not falsifiable (even if they can be falsified, when it happens the mystics will probably include a new clause in their “theory”…)

    On what you call “the problem of consciousness” I’m instead inclined to disagree, not so much because personally I think that science can still make progress on this question, but because I think we are not in a position to rule out an advance in, say, “evolutionary neuroscience” (I don’t know if it exists yet, but it should!) that can give us some hints of why we experience things. Maybe I’m wrong and science cannot help us with this question, but I don’t see convincing evidence that it must be so. I don’t see evidence of the contrary either, but I think history is full of examples where science was thought not to be able to shed light on some phenomenon and in the end it has. We cannot blame science for our lack of imagination.

  63. John Sidles Says:

    Scott proclaims “The general philosophy of my essay [is] that “causal determination” doesn’t count for beans, unless there’s some way, consistent with the laws of physics, to “cash it out” into an actual prediction.”
    —————————–
    Walter Kohn’s Nobel lecture “Electronic Structure of Matter” (1999) forcefully yet amusingly argues for what amounts (pragmatically) to this same point. These views are deserving of respect if for no other reason than Kohn’s unique status as the most-cited author in the history of quantum physics.

    Kohn’s key proposition (which includes equations) is quoted on Less Wrong, accompanied by an encomium for Eliezer Yudkowsky’s hilariously thought-provoking and well-reasoned (as it seems to me) “Harry Potter and the Methods of Rationality.”

    Without doubt, Yudkowsky’s wonderful Harry Potter would whole-heartedly approve of Walter Kohn’s iconoclastic methods, reasoning, and conclusions!

  64. Scott Says:

    Austin Frisch #48:

      I was expecting reasonably sound arguments for a worldview that I would still be able to reject for some reason, probably aesthetic. What I’m finding is that the freebit idea does have some ring of truth, enough that my own favorite pet theories are facing new competition in my head.

    Fantastic, you might just be my first customer! ;-)

    Both here and on LW, a common reaction has been some variant of: “thank you, Scott, for setting out these lunatic, egregiously-wrong ideas in such a fun and thought-provoking manner!”

    Which is a perfectly fine reaction in my book, in fact the best reaction that I realistically expected. But for at least one reader to “[find] … that the freebit idea does have some ring of truth”? Wow, talk about a cherry on the sundae!

      Question, has writing this relieved your mind of any “philosophical burden” yet?

    I apologize in advance for the metaphor, but it’s as if an excruciating constipation is over. Even if no one (including me) would ever unreservedly love the final product, I felt like I had to clear it out of my system—like I didn’t have the free will not to!

  65. wolfgang Says:

    I think a good example of Knightian uncertainty is the fact that we cannot know what theorems the best mathematician(s) will be able to prove by the end of the year.

    E.g., will they be able to lower the value of H for twin primes (which has already dropped from 70 million to less than 70,000) below 1,000 by the end of this year?

  66. Jay Says:

    Scott #56 luca turin #52

    About their argument for the neural sodium-ion channels being sensitive to quantum noise: citing a review by Faisal et al., they argue that the intrinsic temporal uncertainty of a neural signal is ≈ 1 ms, a time frame they think is related to quantum fluctuations in the number of open neuron ion channels.

    I don’t understand why they think this number pleads for a role of quantum noise. However, this number is wrong. Actually Faisal et al. said that the precision is ≈ 1 ms “or below”. Indeed most of the research papers on which this claim was based did not look for better accuracy, because of technical limitations (and lack of interest).

    As far as I recall, the best (although probably not typical) precision that has been reported is 0.05 ms, in the auditory system of some big birds (the difference between the signals from both ears is used to localize potential prey, which makes the precision easy to assess).

  67. Scott Says:

    LZ #62:

      history is full of examples where science was thought not to be able to shed light on some phenomenon and in the end it has. We cannot blame science for our lack of imagination.

    Yeah, that’s the standard argument. And the standard response is that consciousness is not “some phenomenon”; it’s the theater where all the phenomena are experienced!

    So it’s as if someone argued that, because movie special effects have gotten better and better, and can now do many things once considered impossible, eventually there might be a movie where, when you click “Play” on the MPG file, actual arms reach out of the computer monitor and literally drag you into the world of the movie. Well, sure, maybe, but at the least you’ll need some massive hardware upgrades! No amount of brilliance applied to producing the film reel / MPG file—which is what all the previous advances being discussed were about—is ever going to breach the boundary of the screen.

  68. Scott Says:

    Mike #61:

      Well, doesn’t the theory of parallel universes provide an “explanation” for single photon interference … Let’s talk!!

    Yes, let’s! :-) Precisely because there is an actual case for MWI’s explanatory power (as eloquently set out by Deutsch among others), I’ve always been happy to talk about MWI, the arguments for and against thinking about QM that way, and what one even means by speaking of “parallel worlds” in this context. For the reasons why I remain unsure where I stand, see my post Why Many-Worlds Is Not Like Copernicanism, or Section 8.1 of Why Philosophers Should Care About Computational Complexity.

  69. Mike Says:

    Scott@68,

    Thanks for your response. No comment on your general analysis of the MWI, I’m merely a layman in these matters anyway. My point, which looking back now at what you originally wrote may have been your point as well, was that the “reality” of whatever might be outside our ~10^122-Planck-area cosmological horizon can be distinguished from the theory of parallel universes precisely because the MWI does have some explanatory role to play in something we can observe — and I chose single photon interference as one example. I know that brighter minds than mine still question many aspects of the theory, notwithstanding its ability to provide some helpful explanations.

  70. Nex Says:

    I also am in the camp that doesn’t find predictability particularly relevant to free will. A whole lot can be predicted about your actions anyway, from physical constraints (say, the inability to move your hand up and down at the same time) to biological ones (the fact that you won’t kill your daughter). The question of whether your actions can be predicted with 100% reliability is irrelevant, since, barring trivial always-true and always-false cases, nothing can be.

    The real killer for free will is Occam’s razor. Free will is in the same camp of popular feel-good beliefs as all the religions. It is just as (if not more) ill-defined, completely superfluous, and has zero evidence to back it up. Just as with gods, you cannot rule it out completely, but that certainly doesn’t mean anyone should take it any more seriously than the Flying Spaghetti Monster.

  71. Scott Says:

    Nex #70:

      that certainly doesn’t mean anyone should take it any more seriously then the Flying Spaghetti Monster

    To which the ancient reply can be given: what is this “should” of which you speak? :-)

  72. Scott Says:

    Mike #69: Eh, the fun thing about big philosophical issues like MWI is that anyone who studies the very basics is as entitled to an opinion as anyone else. Indeed, I personally see MWI vs. Copenhagen less as an issue of truth vs. falsehood than of what we expect a scientific theory to do for us and how we should talk about it (but of course others disagree).

  73. John Sidles Says:

    Scott “The fun thing about big philosophical issues like MWI is that anyone who studies the very basics is as entitled to an opinion as anyone else.”
    ———————————————–
    This assertion is a prototypical Shtetl Optimized “Great Truth”, and so we can be reasonably confident of finding illuminating-yet-opposite claims in the literature.

    In particular, quantum aficionados in the mold of Eliezer Yudkowsky will have fun looking up “Noether’s Theorem” in the index to Michael Spivak’s well-regarded Physics for Mathematicians: Mechanics I (2010), because near to it we notice an irresistible index entry “Muggles, 576”, which turns out to be a link to:

    ————-
    Theorem The flow of any Hamiltonian vector field consists of canonical transformations

       Proof (Hogwarts version) … <natural exposition follows>

       Proof (Muggles version) … <index-laden exposition follows>
    ————-

    Remark It is striking that standard quantum texts like Dirac’s The Principles of Quantum Mechanics (1930), Feynman’s Lectures on Physics (1965), Nielsen and Chuang’s Quantum Computation and Quantum Information (2000)—and Scott Aaronson’s essay The Ghost in the Quantum Turing Machine (2013) too—all frame their analysis exclusively in terms of (what Michael Spivak aptly calls) Muggle mathematical methods! :)

    Conclusion  Michael Spivak’s gentle mathematical jokes and Eliezer Yudkowsky’s good-hearted (yet impeccably rational) Harry Potter and the Methods of Rationality both help us to appreciate that outdated Muggle-mathematical idioms of standard textbooks and philosophical analysis are a substantial impediment to 21st Century learning and rational discourse of all varieties—including philosophical discourse.

  74. Nex Says:

    Well, it’s the same should as in “you should not believe everything you read on the internet.”

  75. Andrew Foland Says:

    So if we take the free bit conjecture, and grant that there is some meaningful rate at which these freebits interact with the brain, shouldn’t they also interact with many other kinds of systems? 

    I don’t see that the conjectured “free bit interaction rate” could be much less than a few hertz per kilogram of brain. Unless the free bits are interacting with specific molecular levels, it seems like this ought to be pretty consistent (within O(1) factors) with the rates in non-brain matter.

    But it seems this would inject Knightian uncertainty into everything? 

    So couldn’t one deduce limits on the Knightian uncertainty in well-studied macroscopic quantum systems which are known to behave according to straight up QM?  For all I know, maybe the limits aren’t that strong, but I bet someone could do a quick experiment, or calculation based on an existing experiment, that would set interesting bounds.

    (Without having thought about it carefully, transition-edge bolometers seem like a similar situation to a neuron that kicks off an amplification cascade.)

  76. haig Says:

    As it stands, our identities and decision procedures are already so heavily determined by macroscopic physiology that I don’t see how any appeal to the uncertainty of relatively insignificant microscopic quantum fluctuations would radically change our conditions of freedom or self-identity, unless those fluctuations play a causal role in consciousness (e.g., Penrose).

    Even if brains incorporate freebits such that complete prediction is impossible, I would think a predictive mechanism could still be constructed that worked well enough to determine the outcome of any high-level choice presented to the scrutinized brain-based agent. I concede that if you asked the freebits themselves a question (i.e., performed a measurement), the outcome could not be predicted, and so those particles would have free will in the way you framed it, according to Knightian uncertainty. But given the way brains work at a system level, unless there is a way for quantum mechanics to play a functional role in cognition, I don’t see how this significantly affects current positions on free will and the rest.

  77. Motivational Short Stories by Fable Fantasy Says:

    It can be that a dog has free will like us.

  78. ppnl Says:

    Scott #58:

    There may be one difference between the zombie universe and our universe. I could see complex organisms evolving in the zombie universe. I have no reason to think they couldn’t be as creative and intelligent as we are. But why would they evolve the ability to have metaphysical arguments about an internal sense of being that they do not possess? Would they really be having this discussion for example?

    So whatever gives us the ability to experience our own thoughts seems to imply a richer set of laws than the zombie universe has.

    And if you think about it, our sense of free will is created by our ability to experience our own thoughts. Our “I” is exactly those experienced thoughts.

    If you believe in free will, and nothing here necessarily implies free will, you could say whatever physics drives our experience acts as a feedback mechanism that allows richer behavior than any formal deterministic program. The mind watches itself in some way that is more powerful than the way a program can watch itself.

    I ran out of time and have not finished your essay but I’m having trouble seeing what freebits add.

  79. Silas Barta Says:

    I was surprised both at a) the extensive use of Knightian uncertainty and b) the sparsity of information on it. By b) I mean that the Wikipedia article is light, and everything else about it is either the same, or a long treatise that tangentially mentions it.

    My question is: does a (currently-used) cryptographic hash function’s output count as a case of Knightian uncertainty? In a sense, we are “truly uncertain” what global structure they (will be discovered to) have that allows you to attack it more efficiently than brute force. (Indeed, assigning probabilities to theorems seems like a canonical, intractable case of Knightian uncertainty.)

    At the same time, the hard-core Jaynes approach says you are justified in treating all outputs as equally likely.

  80. Shufflepants Says:

    Scott, in Section 5.2 page 48 where you make your case for the weather not having free will, your reasoning seems like special pleading.
    You mention that a butterfly would not have an appreciable effect on the probability of there being a storm one year in the future. But what about 100 years in the future, or 1 billion years?
    Furthermore, why does it matter if the effect is not significant? If it goes from a 23% chance to any measurable difference at all, why doesn’t that count as an exercise of free will as much as a human decision would under this freebit notion?
    Maybe it doesn’t appreciably affect the chance of a storm, but what if it slightly affected the exact amount of rain from that storm? Suppose it was a light storm and the butterfly caused just one extra drop to fall, and that drop was just enough to keep alive a seedling that would otherwise have died but instead goes on to become a large tree, which in turn has any number of effects. It is not hard to come up with scenarios that percolate up as large as you want.

    What is so special about human time scales and what humans find important?

    Perhaps a single freebit does not affect the weather very often, but if it, or the cumulative effects of all freebits interacting with the weather, affects it at all, why does this not imbue virtually everything with free will?

  81. Scott Says:

    Shufflepants #80: The doctrine of “panpsychism” holds that the reason humans are able to be conscious is that there’s a sort of proto-consciousness that pervades the entire universe — but only in the rare cases where natural selection leads to something as complicated as a human brain (or a chimp brain, etc.) is there anything particularly interesting for that proto-consciousness to do. A bit like how the Higgs field pervades the universe, but only in extremely rare circumstances (e.g., the LHC) do we ever see its boson. Personally, I think panpsychism is like democracy: a terrible idea except for all the alternatives.

    Analogously, if the freebit picture is accepted, then it seems to me that we’re forced into a sort of “pan-libertarianism.” That is, we would indeed have to say that at least the capacity for Knightian unpredictability pervades the weather and everything else in the universe, except those parts of it that are completely screened off by PMDs. It might be, however, that only in systems of sufficient complexity (dog and above? Cro-Magnon and above? :-) ) does that capacity ever manifest itself to do anything particularly interesting.

    This, of course, is related to the excellent questions of Andrew Foland #75, to which I’ll give a separate answer when I’m back at my computer (I’m on my iPhone now).

  82. Fred Says:

    Debates about AI, consciousness, free-will, often seem to miss considering the main “job” of the brain.
    From an evolutionary point of view, the job of the brain is to simulate the organism’s environment as a way to predict the future and improve chances of survival.
    There is one core issue – the brain/organism is itself part of the environment and therefore has to take itself and other brains into account to improve the predictions. This leads to a potentially infinite recursion problem.
    Is intelligence/consciousness proportional to the number of recursions?
    Maybe that fundamental issue of infinite recursion is the source of Knightian uncertainty (without even bringing up quantum mechanics, etc.)?
    In the end, since the brain’s purpose is to run a simulation, it would seem very surprising that it cannot itself be simulated.

  83. John Sidles Says:

    Fred says [with my emendations in brackets]: “In the end, since the brain’s [purpose →] function is to run an [efficient] simulation, it would seem very surprising that it cannot itself be [efficiently] simulated.”
    ———————

    Fred, your remark is thought-provoking, and with a view toward strengthening it, I have intercalated the word “efficient” in two places and replaced “purpose” with “function”.

    The strengthened remark expresses a fundamental systems-engineering principle (that is largely the work of Norbert Wiener):

    Proposition Every (real-time) optimal controller contains a (real-time) optimal estimator.

    Question Do biological brains depart substantially from this principle? How does it constrain our estimates of other people’s future behavior … and our own?

  84. Michael Gogins Says:

    Obviously there can be no infinite recursion with respect to consciousness. Many philosophers have noticed this. One well-known example is Sartre on Husserl’s transcendental ego.

    I am conscious that I am conscious without any infinite recursion. Such a recursion would be unphysical, so if it existed, I would not be physical. If the recursion does not exist, I am still conscious, so it is not required for consciousness.

    Nobody who was not conscious ever invented anything, so I am guessing that one of the main selective advantages of consciousness is our ability to invent things like fire, spears, or theoretical computer science.

    There are other species who do not invent things who are still conscious, like some of the other apes, or dolphins, or elephants. But they are social. Perhaps their societies are partly formed by invention, as ours are.

  85. Fred Says:

    #83
    You’re right, I didn’t mean strictly that an infinite recursion is necessary (and feasible), but that it’s impossible to perfectly simulate a system from the inside since the simulation would have to take itself into account (which leads to a recursion requiring more and more resources).

    Chess masters try to guess what their opponent will do, taking into account what their opponent guesses they themselves will do, etc.

    The end results can be highly non-linear based on the number of steps in the recursion (feedback loops can be stable or unstable), and look from the outside like a strong source of uncertainty.

    (Hofstadter makes interesting points about all this in I Am a Strange Loop.)

  86. Fred Says:

    As Scott points out, a lot of things about human behavior are fairly predictable, which is why the advertising business and psychology work. These would be based on “stable” recursions (the feedback loops are stable). But some feedback loops aren’t stable and are practically impossible to predict from the outside (the output is highly non-linear in the number of steps), e.g., trying to outsmart a zen or chess master :)

  87. Dan Says:

    Hi Scott, long-time reader, first-time poster.

    If the freebit picture holds, do you think it would then be possible (at some point in the future) to build a machine that can find and measure them? Could we then program this machine to behave intelligently, depending on freebit measurements in an inextricable-enough way that your response to “the gerbil objection” doesn’t apply?

    And if so, would we then consider this machine to have free will?

    Anyway, I found the paper a truly interesting read. I never thought I could find a statement like “Our decisions aren’t determined by the past, the past is determined by our decisions!” to be so plausible.

  88. Rahul Says:

    Michael Gogins says:

    There are other species who do not invent things who are still conscious, like some of the other apes, or dolphins, or elephants.

    How do scientists find which species are conscious and which aren’t? Just curious.

  89. John Sidles Says:

    Rahul asks: “How do scientists find which species are conscious and which aren’t? Just curious.”
    ————————————
    Scientists check to see if the species has a sophisticated appreciation of physics and enjoys socializing with scientists.

    That is why many (not all) medical researchers begin to experience empathic feelings — concomitant with ethical qualms — at the level of “mouse.”

  90. Scott Says:

    Dan #87: I’ll answer your question by agreeing to an even more general proposition. Whatever physical properties of the brain someone holds to be associated with “free will,” if a machine were built with those same properties, then I think the person would be obliged to regard the machine as having free will also. (And likewise for consciousness.) Otherwise the person would simply be guilty of “meat chauvinism.”

    Furthermore, it’s hard to see how there could possibly be a fundamental obstruction to producing machines with the required properties. After all, humans have known one way to produce physical entities with all the complexity, Knightian unpredictability(??) and other properties of human brains for millions of years! And if there’s one way to build something, why shouldn’t there be a completely different way?

  91. Scott Says:

    Andrew Foland #75: I’m thrilled that you share my interest in trying to look for experiments that would, if not kill freebits, then at least place bounds on how they could work!

    But for me, the real issue is this. Suppose we set up a transition-edge bolometer, of the same weight and dimensions as a human brain, to look for freebits. And suppose our device picked up all sorts of things: CMB radiation, radiation generated by passing vehicles, a few cosmic rays, etc. And suppose that—as I predict would happen—the radiation that we saw passed all the standard statistical tests for randomness that we could think to apply. The question is, what do we do with that result?

    Someone could always come back and say: “well, first of all, there could well be a subtle pseudorandom pattern, even though you didn’t find it. But more importantly, why would you expect any interesting pattern in the radiation impinging on a transition-edge bolometer? I mean, for crying out loud, it’s a transition-edge bolometer! Of course it’s not going to use its ‘capacity for Knightian freedom’ to do anything particularly interesting! The freebits impinging on it are just going to default to random values, for want of anything better to do! Sure, freebits might have been pervading the universe since the beginning of time, but it was only after Darwinian evolution produced beings of sufficient complexity that they started playing a nontrivial role.”

    Now, you might justly reply: “OK, but doesn’t the freebit hypothesis then retreat into unfalsifiability? Indeed, isn’t it suspicious if the freebits conveniently disappear—or rather, ‘collapse back down’ to something indistinguishable from random noise—when you set up an experiment specifically to detect them? Unless, I suppose, the experiment itself involved beings of human-level complexity?”

    So to me, it’s interesting—and part of the reason why I wrote the essay—that the freebit picture remains falsifiable despite that property. In line with your first comment, if one could show that there are no causal pathways by which coherent quantum information can reach us from the early universe (and influence brain processes “surgically” and on reasonable timescales), without getting screened off by PMDs, then the freebit picture would be dead. (Even the more “careful” version of the picture, which doesn’t expect to find Knightian uncertainty in any physical systems that are “too simple.”)

    Related to the above, one other thing in your comment that I should reply to:

      So couldn’t one deduce limits on the Knightian uncertainty in well-studied macroscopic quantum systems which are known to behave according to straight up QM?

    Even supposing the freebit picture were 100% right, we wouldn’t expect to see any deviation from “straight up QM.” That is, any time you know the initial state |ψ⟩, the probability of a projection onto |φ⟩ will still be exactly |⟨ψ|φ⟩|². Freebits would show up, not as deviations from QM, but simply as situations where we didn’t know and couldn’t possibly have known all the relevant features of the initial state |ψ⟩. Furthermore, any of the macroscopic quantum systems that have already been studied would probably count as, by design, having all (or as much as possible) of their potential Knightian noise screened off by PMDs!
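    [A minimal numerical sketch of this point, mine rather than the essay’s: once the initial state |ψ⟩ is fully known, the Born rule pins down the projection probability exactly, leaving no room for Knightian slack in the dynamics themselves.]

    ```python
    import numpy as np

    def projection_probability(psi, phi):
        """Born rule: probability of projecting state |psi> onto |phi>."""
        psi = psi / np.linalg.norm(psi)
        phi = phi / np.linalg.norm(phi)
        return abs(np.vdot(phi, psi)) ** 2

    # Once |psi> is known, the statistics are fixed exactly:
    psi = np.array([1.0, 1.0]) / np.sqrt(2)   # equal superposition
    phi = np.array([1.0, 0.0])                # basis state
    p = projection_probability(psi, phi)      # 1/2, up to float rounding
    ```

    On this picture, any Knightian uncertainty lives entirely in our ignorance of which |ψ⟩ to plug in, never in the |⟨ψ|φ⟩|² rule itself.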

  92. Scott Says:

    Silas Barta #79:

      I was surprised both at a) the extensive use of Knightian uncertainty and b) the sparsity of information on it.

    I was also mildly annoyed at the paucity of good accounts of Knightian uncertainty. One factor might be that, if a given uncertainty is Knightian, then by that very fact there might not be much more to say about it, at least quantitatively! :-) But maybe the more important factor is that people use many other terms for the same thing: probability intervals, Dempster-Shafer, “unquantifiable uncertainty,” etc. etc.

      My question is: does a (currently-used) cryptographic hash function’s output count as a case of Knightian uncertainty?

    I don’t understand your reasoning for why it would. If both the hash function f and the input x are known, then f(x) is completely determined, while if (say) x is random, then f(x) is just a clear-cut random variable.

    On the other hand, whether the hash function can be broken, or related questions like whether P=NP, are very plausible candidates for Knightian uncertainty (at least from our perspective—not from a fundamental physics one).
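    [A two-line sketch of the first point, my illustration: with a known function and a known input there is nothing left to be uncertain about.]

    ```python
    import hashlib

    x = b"known input"
    # With f (here SHA-256) and x both known, f(x) is a single determined value:
    d1 = hashlib.sha256(x).hexdigest()
    d2 = hashlib.sha256(x).hexdigest()
    assert d1 == d2   # no uncertainty of any kind about f(x)
    # If instead x were drawn uniformly at random, f(x) would be an
    # ordinary, clear-cut random variable -- still not Knightian.
    ```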

  93. Michael Gogins Says:

    Rahul #88:

    Consciousness is inferred by placing or painting a marker on the creature’s back, then putting it in a place with a mirror. If the creature tries to look at or feel its back to see what is there, biologists infer that the creature knows where the marker is and that the thing in the mirror is a reflection of itself. This implies at least some degree of self-consciousness. This is called “the mirror test.”

    All great apes, bottlenose dolphins, orcas, elephants, and European magpies can pass the mirror test.

    I would not be surprised if even more creatures can pass this test.

    As you probably know there are a surprising number of creatures who fashion and use tools, also.

  94. Scott Says:

    Michael Gogins #93: Yes, the mirror test is one famous way to try to “measure consciousness empirically,” in creatures that can’t talk. Anyone who’s never seen a video of a chimp or other animal learning to infer a dot on its own forehead from its reflection in the mirror, should google for one now—it’s a moving sight. (Incidentally, apparently human toddlers only start to pass the mirror test at around 18 months; I can report that a 5-month-old of my acquaintance has great fun with mirrors but doesn’t seem to differentiate them from other toys.)

    Still, there remains a huge difference between “self-consciousness” (the empirical property of an agent including itself in its model of the world), and “consciousness” in the elusive philosophical sense. And to expand on what I said in comment #58, I personally think it’s counterproductive when neuroscientists and others to try to co-opt the word “consciousness” for this or that empirically-verifiable ability—just as it would be if we tried to co-opt the word “God” for “the laws of physics.”

    By all means, say that what Chalmers and McGinn and Nagel and religious people and mystics mean by “consciousness” either doesn’t exist, or can’t be intelligibly discussed, or is outside the scope of science—whatever you believe about it—and argue that we should instead be asking different questions, like which creatures can recognize themselves in the mirror (or self-referentially simulate environments that include themselves as components), and why those abilities might have evolved. But it seems to me that it would make things easier for everyone if we used terms other than “consciousness” for the latter abilities—for basically the same reasons why I referred in my essay to “Knightian freedom” or “Knightian unpredictability” rather than “free will”!

  95. Scott Says:

    ppnl #78:

      There may be one difference between the zombie universe and our universe. I could see complex organisms evolving in the zombie universe. I have no reason to think they couldn’t be as creative and intelligent as we are. But why would they evolve the ability to have metaphysical arguments about an internal sense of being that they do not possess? Would they really be having this discussion for example?

    I think there’s a very interesting point in this paragraph (which Chalmers also discussed in his book). Namely, suppose that even humans’ penchant for getting into arguments about consciousness and free will had a purely Darwinian or other naturalistic explanation—e.g., that humans who enjoy such arguments attract more mates (a debatable proposition, to be sure ;-) ). In that case, one would have a general way to ridicule any argument for consciousness or free will being more than illusions: “hey, even if you were a zombie automaton, you’d still be saying exactly the same things!”

    Admittedly, there’s an element of question-begging here—for as you correctly point out, it’s far from obvious a priori that a zombie automaton would be likely to say the same things! But one conclusion I think we can draw is that, if we did have independent reasons to predict that zombie automatons would make all the same philosophical arguments that we make, then the “pro-free-will” position would be incredibly weak, and would be more-or-less blocked from making contact with a scientific worldview. And therefore, if you want to believe in free will in a “robust, libertarian” sense, then I think you have to believe that there are some aspects of human behavior that aren’t fully explainable in Darwinian terms.

    You can still be a huge fan of evolutionary psychology, as I am. You can still believe, as I do, that it provides the best framework ever put forward for understanding what most people do, most of the time—and that most intellectuals grossly underestimate its importance in human life. But it seems you also have to believe that, at some point in evolution, humans (or some humans? :-) ) managed to “break evolution’s shackles,” and do things not fully explicable in terms of maximizing Darwinian fitness—like, say, committing suicide, or joining the priesthood, or going to grad school.

      If you believe in free will, and nothing here necessarily implies free will, you could say whatever physics drives our experience acts as a feedback mechanism that allows richer behavior than any formal deterministic program. The mind watches itself in some way that is more powerful than the way a program can watch itself.

      I ran out of time and have not finished your essay but I’m having trouble seeing what freebits add.

    The trouble is, how on earth would a “feedback mechanism” allow “richer behavior than any formal deterministic program”? We can easily write programs that interact with their environments, engage in complicated feedback loops, etc. etc.—but they’re still programs! Of course, if you think the human mind is just a complicated program, then that observation doesn’t present any problem for you. But if you don’t think so, then it does. In other words, it seems to me that you are forced in pretty radical directions, if you want to hold out any possibility for the mind being anything other than a program. Indeed, I’d say that that’s the central point that John Searle doesn’t understand and that Roger Penrose does, much as I disagree with Penrose on any number of other issues.

  96. John Sidles Says:

    Scott asserts (lightly amended from #90, with my emendations in brackets): “I’ll answer your question by agreeing to an even more general proposition. Whatever physical properties of the [brain →] universe someone holds to be associated with “[free will →] freebits,” if a [machine →] simulation were built with those same properties, then I think the person would be obliged to regard the [machine →] simulation as having [free will →] freebits also.”
    ——————————–
    Here the natural isomorphism ⟨universe, free will⟩ ⇒ ⟨simulation, freebits⟩ shows us a straightforward path by which geometers can induce rationalists like Scott to change their minds regarding even the opinions that they cherish most dearly. The cherished opinion to be changed is stated in Scott’s Ghost in the Quantum Turing Machine (p. 46) as follows:
    ——————–
    “One obvious way to enforce a macro/micro distinction would be via a dynamical collapse theory. … I personally cannot believe that Nature would solve the problem of the ‘transition between microfacts and macrofacts’ in such a seemingly ad hoc way, a way that does so much violence to the clean rules of linear quantum mechanics.”
    ——————–
    Thus a well-posed (mathematical) path for changing Scott’s opinion is to pullback the dynamical flows of Hilbert space onto curved (and lower-dimension) varietal submanifolds, and then to (mathematically) demonstrate that the geometrically induced alterations in dynamical flow are (FAPP) indistinguishable from freebit degrees of freedom.

    The irreversible and irresistible opinion-changing power of this mathematical program shows us why the leading heretics of mathematics have historically included numerous geometers among their company … geometers who have reminded us of inconvenient truths like “not every number is rational” and “we don’t really need that Parallel Postulate” and “the Ergodic Postulate isn’t strictly true.”

    That is why the geometer Bill Thurston is right to remind us, in his essay On Proof and Progress in Mathematics (Terry Tao’s weblog sidebar has a permanent link to it)
    ——————–
    “Mathematics is one of the most intellectually gratifying of human activities. Because we have a high standard for clear and convincing thinking and because we place a high value on listening to and trying to understand each other, we don’t engage in interminable arguments and endless redoing of our mathematics. We are prepared to be convinced by others. Intellectually, mathematics moves very quickly. Entire mathematical landscapes change and change again in amazing ways during a single career.”
    ——————–
    Will coming decades lead to a 21st Century appreciation that “we don’t really need clean rules of linear quantum mechanics”? Is this wider mathematical appreciation already underway, broadly and irreversibly? These are terrific topics for young researchers to contemplate, and in this regard, please let me specially commend the emerging generation of 21st Century STEM textbooks, in which basic texts like Mikio Nakahara’s Geometry, Topology and Physics provide the requisite mathematical toolset for appreciating the vast new STEM frontiers that are surveyed in Joseph Landsberg’s Tensors: Geometry and Applications.

    Not the least of the many virtues of Scott’s essay, is that — in effect — the essay comprises a mathematically well-posed roadmap for catalyzing wider agreement that (what Scott calls) “the clean rules of linear quantum mechanics” have become (in Jaffe and Quinn’s useful phrase of arXiv:math/9307227) an “unredeemed claim [that has] become a roadblock rather than an inspiration”.

    Among the greatest virtues of Scott’s writings (as it seems to me) is that they clearly state (and even explicitly promise!) that, should the ongoing Thurston-style changes in the 21st Century’s mathematical landscapes induce increasingly many scientists, mathematicians, and (especially!) engineers to peer beyond the 20th Century’s narrow focus upon “clean rules of linear quantum mechanics,” then (without any reasonable doubt!) Scott himself will become one of the leading and most ardently enthusiastic pioneers of the 21st Century’s new (and vastly broadened!) quantum frontiers. Good!

  97. IRTFFWM Says:

    Scott, I see a couple of my comments (temporarily numbered #12 and #13) got rescued from the spam filter. However, I fear they’ll mess up the numbering by adding 2 to all the later comments, confusing people’s references to numbered comments. I happened to post the same comment again (it was numbered #21, now #23), so it’s okay to delete those duplicates. I’ll make another real comment later.

  98. Scott Says:

    IRTFFWM: I don’t actually see that the numbering was messed up! (I marked your comments as “not spam,” but seeing that you’d already posted duplicates, I didn’t approve them to actually appear.)

  99. Ben Standeven Says:

    Some things I don’t understand in the paper:

    “One might worry about the “converse” case: probabilistic uncertainty over different states of Knightian uncertainty. However, I believe this case can be “expanded out” into Knightian uncertainty about probabilistic uncertainty, like so:

    “[(A OR B) + (C OR D)]/2 = (A + C)/2 OR (A + D)/2 OR (B + C)/2 OR (B + D)/2.”

    So, what do “OR” and “+” mean here?
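    One reading that would make the identity come out true (my guess, not something the paper states explicitly): “OR” collects candidate probability distributions into a set (the Knightian part), and “+ … /2” is an even probabilistic mixture; the mixture of two sets then expands into the set of all pairwise mixtures:

    ```python
    from fractions import Fraction

    # Model a distribution as a dict mapping outcomes to probabilities,
    # and Knightian uncertainty as a *list* of candidate distributions.
    def mix(p, q):
        """Even probabilistic mixture (p + q)/2 of two distributions."""
        keys = set(p) | set(q)
        return {k: Fraction(p.get(k, 0) + q.get(k, 0), 2) for k in keys}

    def mix_of_sets(S, T):
        """[(A OR B) + (C OR D)]/2 expanded: all pairwise mixtures."""
        return [mix(p, q) for p in S for q in T]

    A = {"heads": Fraction(1)}                             # point mass on heads
    B = {"tails": Fraction(1)}                             # point mass on tails
    C = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)} # fair coin
    result = mix_of_sets([A, B], [C])
    # result holds (A+C)/2 and (B+C)/2: Knightian uncertainty about
    # which probabilistic mixture we actually face.
    ```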

    “In other words, the elapsed time between (a) the amplification of a quantum event and (b) the neural firings influenced by that amplification, must not be so long that the idea of a connection between the two retreats into absurdity.”

    Surely neurologists already have a consensus prediction for the typical error doubling rates of human brains? A quick websearch suggests that we can’t hope for precise predictions of the rate, since the rate will depend on the details of the brain wiring and the nature and parameters of the neuron model. But I presume we would only need an order of magnitude value here.

    “Closely related to that requirement, the quantum event must not affect countless other classical features of the world, separately from its effects on the brain activity.”

    I don’t understand what you mean here. If one quantum event had more than one consequence, wouldn’t that violate the No-Cloning Theorem?

    “Now, someone who accepted the freebit picture would say that the superintelligence’s inability to calculate p is no accident. For whatever quantum fluctuation separated Earth A from Earth B could perfectly well have been a freebit.”

    You mean two freebits, right? One on Earth A and one on Earth B. I suppose we could assume them to be entangled in some Bell states, though.

    “In that case, before you made the decision, the right representation of your physical state would have been a Knightian combination of you_A and you_B.”

    Huh? It looks to me as though the “right” representation of your physical state would be a tensor product of two entangled Knightian combinations of “yous,” with the two tensor factors corresponding to the two Earths. After you make a decision, the superintelligence can throw out the other Earth’s states from the calculation, since it is not causally linked to this Earth.

    “What “side effects” do the pathways produce, separate from their effects on cognition?”

    What do you mean by “side effects” here?

    Some things I unfortunately do understand:

    “Moreover, there’s also a second empirical prediction of the freebit picture, one that doesn’t involve the notion of a “reasonable timescale.” Recall, from Section 3.3, the concept of a past macroscopic determinant (PMD): a set of “classical” facts (for example, the configuration of a laser) to the causal past of a quantum state ρ that, if known, completely determine ρ. Now consider an omniscient demon, […]

    “However, now imagine that all such photons can be “grounded” in PMDs.”

    Unfortunately, for those of us who are not omniscient demons this is not an empirical prediction; there is no way to tell if a photon is grounded in a PMD, since a proponent of the freebit picture could always deny that the photon’s ground really is a PMD. And you seem to be doing exactly this with the CMBR, so this is not just a hypothetical problem.

    “Meanwhile, an array of motion sensors regularly captures information about the gerbil’s movements and transmits it across the room to the computer, which uses the information as a source of random bits for the AI. […]

    “The problem should now be obvious. By assumption, we have a system that acts with human-level intelligence (i.e., it passes the Turing test), and that’s also subject to Knightian uncertainty, arising from amplified quantum fluctuations in a mammalian nervous system.

    This won’t work. The freestates that affect the gerbil’s movement have already been used up and are no longer free. The easiest way to see this is to simply note that we can assume the motion sensors produce a recording, and this recording can then be used by another computer to exactly reproduce that AI’s behavior. So its behavior is not uncertain at all.

    “For example, it might say that a homeowner will default on a mortgage with some probability between 0.1 and 0.3, but within that interval, be unwilling to quantify its uncertainty further.”

    What difference does it make whether there’s a definite probability? Since 0 and 1 are outside the confidence range here, it apparently follows that the homeowner does not have the power to choose to default or not default; he or she can only choose whether the probability of defaulting is closer to 10% or to 30%. That doesn’t sound like free will at all!

    “But as far as I can see, the question of determinism
    versus indeterminism has almost nothing to do with what compatibilists actually believe. After all, most compatibilists happily accept quantum mechanics, with its strong indeterminist implications (see Question 2.8), but regard it as having almost no bearing on their position.
    […]
    “In this essay, I’ll simply define “compatibilism” to be the belief that free will is compatible with a broadly mechanistic worldview—that is, with a universe governed by impersonal mathematical laws of some kind.”

    That won’t work either, because there are plenty of compatibilists who accept nonmechanistic ideas like Stoicism too.

  100. IRTFFWM2 Says:

    Scott #98, never mind, ignore IRTFFWM #97, it was apparently observer-dependent. I cleared my cache/cookies, and I no longer see those two messages, and the numbering is back to how it was.

  101. Ben Standeven Says:

    “”[(A OR B) + (C OR D)]/2 = (A + C)/2 OR (A + D)/2 OR (B + C)/2 OR (B + D)/2.”

    “So, what do “OR” and “+” mean here? ”

    Now that I think about it, + is probably probability superposition, and OR is the convex hull operator.

    So:
    “Huh? It looks to me as though the “right” representation of your physical state would be a tensor product of two entangled Knightian combinations of “yous,” with the two tensor factors corresponding to the two Earths. After you make a decision, the superintelligence can throw out the other Earth’s states from the calculation, since it is not causally linked to this Earth.”

    I guess that a tensor product of entangled Knightian combinations is also a Knightian combination of tensor products of entangled pairs.
    Also, the superintelligence won’t throw out the “other” Earth, because it still doesn’t know which Earth is the “real” one. But the you-state is now a product of unentangled pairs, so from your perspective, you could throw out the other factor.
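    If that reading is right (“+” as probabilistic mixture, “OR” as convex hull), the quoted identity can be sanity-checked in the simplest scalar case, where each letter is a single probability and a hull is just an interval. This is only my illustrative sketch, not notation from the paper:

```python
def hull(*points):
    """Convex hull of scalar 'probabilities': just an interval (min, max)."""
    return (min(points), max(points))

def mix(h1, h2, w=0.5):
    """Probabilistic mixture of two intervals: the set of all
    w*x + (1-w)*y with x in h1 and y in h2 (a Minkowski average)."""
    return (w * h1[0] + (1 - w) * h2[0], w * h1[1] + (1 - w) * h2[1])

A, B, C, D = 0.1, 0.7, 0.2, 0.9

# Left side:  [(A OR B) + (C OR D)]/2
lhs = mix(hull(A, B), hull(C, D))

# Right side: (A+C)/2 OR (A+D)/2 OR (B+C)/2 OR (B+D)/2
rhs = hull((A + C) / 2, (A + D) / 2, (B + C) / 2, (B + D) / 2)

# The two sides agree (up to floating-point rounding):
assert all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs))
```

    The same identity holds with intervals replaced by convex sets of probability distributions over more than two outcomes.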

  102. Will you or will you not? | Wavewatching Says:

    […] melee on his blog,  Scott Aaronson's latest paper, "The Ghost in the Quantum Turing Machine", is a welcome change of pace.  Yet, given the subject matter I was mentally preparing for a similar experience as with […]

  103. Gil Kalai Says:

    Regarding Knightian uncertainty, it is worth noting that chaotic classical behavior does exhibit Knightian uncertainty. Namely, without full knowledge of the initial conditions (even probabilistically), we cannot give a fair prediction of the outcome (even probabilistically). The counterargument (mentioned in the paper) that the underlying theory is fully deterministic is interesting (it largely applies to the quantum setting as well), but, at best, it only says that we need to better understand the physical foundations of chaotic behavior, not that we can dismiss its relevance.

    Scott (#35):  “Gil #33: All I claim is that, if you want to hold that sort of probabilistic determinism to be compatible with free will, then you might as well go “hardcore compatibilist,” and hold even deterministic determinism to be compatible with free will.”

    This is a very strong claim.

    “Otherwise, you’re forced to some strange conclusions.”

    Scott, your conclusions are neither “forced” nor “strange.”

    For example:

    “… Likewise, suppose there were a million copies of me;”

    No, you cannot simply suppose everything you want. If having a million copies of you leads to strange conclusions, maybe we simply cannot suppose that. (But let’s suppose it anyway.)

    “…then the dinosaurs could be essentially certain that 500,000 copies (plus or minus a few thousand) would make one choice and 500,000 would make the other.”

    Nothing in what I said (#33) forces this conclusion. (But you can assume it if you wish.)


    “And still I’m supposed to accept that each individual copy has ‘free will’?”

    Yes, why not? If we can be sure, based on statistics, that in the Boston area (say) 3000 men (plus or minus some error term that we can also estimate) will be charged next year with a serious criminal offence, does this mean that individual men have no free will?

  104. Scott Says:

    Gil #103:

      If we can be sure, based on statistics, that in the Boston area (say) 3000 men (plus or minus some error term that we can also estimate) will be charged next year with a serious criminal offence, does this mean that individual men have no free will?

    That’s a very interesting question, and my personal answer to it is yes! If the fraction of men who commit crimes in a given year were as reliably predictable as the fraction of tritium atoms that decay in that year, then I’d say that the former was no more subject to “free will” than the latter.

    In reality, of course, crime rates do fluctuate unpredictably from year to year. (That’s even ignoring the extreme cases, like the ~50% crime drop in NYC in the 1990s, which essentially none of the experts foresaw.) Of course crime rates (like voting patterns and everything else) are somewhat predictable, but what isn’t? In the fluctuations, there seems to be more than enough scope for Knightian unpredictability.

    (You might say: if the fluctuations are tiny, does that mean there’s only a tiny scope for Knightian unpredictability? No, I think even a small amount of Knightian unpredictability in the aggregate statistics is compatible with plenty of Knightian unpredictability in each individual case. But that’s a point that I should develop more elsewhere.)

  105. Abalieno Says:

    Scott, I’d really, really need you to clarify a bit more one part of FAQ 2.6, especially the conclusion on page 19.

    It seems to me such a leap of faith. Why do you see that description as a “defeat of science”? Why do you say it would rule out scientific understanding?

    The fact that a closed system, like a human being seen in a deterministic way, is able to incorporate information in order to modify itself, and so to achieve huge progress as well as predict its future experiences in order to get there, seems to me an even stronger affirmation of science. Not a defeat.

    Seen this way, human beings are machines that INTEGRATE their program. They aren’t “finite.” In order to be modeled, you need the closed system as well as the environment, since the environment is then taken into the system to integrate it. Taken as a whole, this is still a mechanistic model, but it comprises BOTH the closed system and what’s outside. Science is the tool that makes this integration possible, so it’s absolutely indispensable to making all this possible.

    And because it is indispensable, it doesn’t lose any of its importance, nor does it go against determinism.

    As long as the “observer” doesn’t exit the universe, that state can be achieved by a human being. And so a human being at that stage is able to see all the paths he went through.

    These hierarchies are ever expanding, and are limited by staying in the system of physical reality. They can’t achieve “free will” since they can’t incorporate any information that actually allows them to break the rules of the system, so the hard rule is that they’ll never escape the deterministic system. But they still can achieve GREAT knowledge and progress. Including increasingly accurate self-knowledge (see the stuff about the Omega Point).

    And all this is EXTREMELY relevant, as long as an external position is not achievable (and it is not, because an external observer can only be theorized, not achieved).

  106. Scott Says:

    Ben Standeven: You had a lot of questions/objections, but let me try to take them in the order I remember.

    1. “Now that I think about it, + is probably probability superposition, and OR is the convex hull operator.”

    Yes, that’s correct.

    2. The reason I don’t take a “tensor product” of youA and youB is simple: since they’re so far away that (because of the cosmological constant) no observer could ever see both of them, I don’t even consider them to live in the same Hilbert space! Instead, you just have your Hilbert space, consisting of the 10^122 or so qubits in your cosmological horizon. And if you like, the density operator (or freestate?) that you choose for that Hilbert space can incorporate your uncertainty about whether you’re “really” youA or youB. (To put it differently, I’m implicitly thinking about an extremely large universe in the “operational” way that, e.g., Bousso and Susskind advocated.)

    3. If you have references about neural doubling times, send them to me! I’ll happily take a look and then comment.

    4. No, a single quantum event having a huge number of effects doesn’t violate the No-Cloning Theorem—in fact it happens all the time, and is precisely what we mean by “decoherence”! From the Many-Worlds perspective, all that’s happening is that a state like α|0⟩+β|1⟩ is getting mapped to a state like α|0…0⟩+β|1…1⟩ (i.e., a bunch of “Controlled-NOTs” are getting applied).

    5. By “side effects,” I meant changes to the state of the world, whose causal chains from the freebit never even pass through the change to the brain state. If there are such side effects, then the issue is that a superintelligent predictor could predict the person just by monitoring the side effects, without even looking at the person.

    6. I think it’s just straightforwardly true that, if you can exercise “free will” in deciding whether to do something or not do it, then you can also exercise “free will” in deciding whether to do it with 10% probability or with 30% probability. To see that, let’s change the numbers: suppose you get to choose whether to do it with 1% probability or with 99% probability. Well then, of course that’s “free”! It’s obvious that external factors can always come in to frustrate your will: you might decide to go to a party but then discover that your car won’t start, or decide to stay home but then get dragged to the party by goons. So, the only difference between the 10%/30% and the 1%/99% cases seems like the quantitative one of how much scope there is for will, and how much for the external frustrating factors.

    7. Yes, it can be extremely hard to tell whether something is or isn’t a PMD! (Or better, how much scope it has for being a non-PMD.) It might require detailed knowledge and understanding of the system in question—exactly the sort of knowledge I was requesting about the CMB last scattering surface. But while the question is hard, I don’t see that it’s non-empirical. If someone can give a compelling physics argument for why a qubit couldn’t possibly “pierce” the SLS while maintaining its coherence, then I’ll concede that I was wrong to speculate otherwise.

    8. No matter how I defined words like “compatibilist,” some people would have complained! I suppose I could have invented terms of art like “probabilistic-only compatibilist” or “even-deterministic compatibilist,” but that seemed too unwieldy. I did define my terms right near the beginning, which seemed like the best compromise. (Why don’t you try writing an 85-page paper that saunters onto a scorched, millennia-old philosophical battlefield—the machine guns trained on any newcomer from all sides. You’ll find it’s not as easy as it sounds! :-D )
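    As a quick illustrative sketch of point 4 (mine, not anything in the paper): a chain of Controlled-NOTs fans one qubit’s value out into “environment” qubits, taking a|0⟩+b|1⟩ to a|00…0⟩+b|11…1⟩. You get many perfectly correlated records of the bit, yet the amplitudes a, b are never copied, which is why decoherence doesn’t violate No-Cloning.

```python
import numpy as np

def cnot(state, control, target, n):
    """Apply CNOT(control -> target) to an n-qubit state vector.
    Qubit 0 is the leftmost (most significant) bit of the basis index."""
    new = np.zeros_like(state)
    for idx in range(2 ** n):
        if (idx >> (n - 1 - control)) & 1:
            new[idx ^ (1 << (n - 1 - target))] = state[idx]  # flip target bit
        else:
            new[idx] = state[idx]
    return new

a, b = 0.6, 0.8                 # arbitrary real amplitudes, a^2 + b^2 = 1
n = 3                           # one system qubit + two "environment" qubits
state = np.zeros(2 ** n)
state[0b000] = a                # a|000>
state[0b100] = b                # b|100>: system qubit in superposition

state = cnot(state, 0, 1, n)    # environment qubit 1 records the bit
state = cnot(state, 0, 2, n)    # environment qubit 2 records the bit

# Result: a|000> + b|111> -- many correlated records, no cloned state.
assert abs(state[0b000] - a) < 1e-12
assert abs(state[0b111] - b) < 1e-12
```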

  107. Scott Says:

    Abalieno #105: Firstly, I’m grateful for your mention of the Omega Point theory, and the other stuff toward the end of your comment! It’s a relief when I brace myself for another pummeling by a thoroughgoing scientific skeptic, only to find out by the end that the questioner believes things that are 50 times weirder than anything I’ve toyed with. ;-)

    More seriously, “the ultimate victory of the mechanistic worldview also being its defeat” is just a melodramatic turn of phrase, which you can feel free to ignore if you like. But it’s easy to illustrate what’s meant by it through an example.

    Suppose someone says: I want to eliminate all metaphysical weirdness from my worldview. There’s no God, no purpose to the universe, no souls, no miracles, not even any consciousness or free will. All there is, is the unfolding of an immense computation (which some call “the physical universe”), in which I’m simply one evanescent pattern of bits. There’s nothing to me beyond that pattern of bits, and the same pattern could be copied and instantiated in a robot, a virtual-reality simulation, or any other physical system, in which case you would’ve literally copied me.

    OK, so far, so good!

    But wait: in the far future, shouldn’t technology have advanced to the point where we can create enormous numbers of fully-immersive virtual realities (“Matrices”)—which would in general have further Matrices inside of those Matrices, etc. etc.? If so, then as a Bayesian, why shouldn’t you predict that you yourself almost certainly live in one of those Matrices? (See Bostrom’s Simulation Argument page for a detailed fleshing out of the argument.) In that case, presumably there would be a purpose to the universe: namely, the purpose of whoever started the simulation! But leaving that aside, what exactly counts as a simulation? Does a debug trace with pen and paper count? What about running the simulation in reverse, or in just one branch of a quantum computation, or in homomorphically-encrypted form?

    Now, nothing I’ve said does anything to show that this line of reasoning is false. Maybe the truth is that we are living in a simulation within a simulation, and we just have to live with that! But what I think it does show is that we’ve failed at our original goal: that of eliminating all “metaphysical weirdness” from our worldview. If anything, our attempt to do so has only increased the metaphysical-weirdness level!

    So, we have a few options: we could bite the simulated bullet, as (say) Nick Bostrom and Robin Hanson do. Or we could say that, given the immensity of the unknowns, it’s probably a bad idea to try to push these sorts of ideas to their logical conclusions, and leave it at that (but we’ll always remain a bit unsatisfied). Or we could go back and revisit one or more of the assumptions that led us here, like the assumption that there’s “something that it’s like” to be a copyable piece of code in a simulation.

  108. Gil Kalai Says:

    Scott (#104): “That’s a very interesting question, and my personal answer to it is yes! If the fraction of men who commit crimes in a given year were as reliably predictable as the fraction of tritium atoms that decay in that year, then I’d say that the former was no more subject to “free will” than the latter.”

    Hmm, that’s interesting. I suppose that the line you are drawing is asymptotic in nature; let me see if I’ve got you right:

    1) If the number of criminals in a population of N people is 5%, plus or minus a constant times the square root of N, then no “free will” can be involved.

    2) If the number of criminals is 5% plus or minus 0.1% (no matter what N is), then this reflects Knightian unpredictability (which is a necessary condition for “free will”), and must come from freebits.

    Do 1) and 2) correctly reflect your point of view, Scott?

    How do you treat the case when the fluctuations are, say, of order N^{2/3}? Is this a reflection of Knightian unpredictability as well? And what about fluctuations of order N^{1/3}?

  109. Scott Says:

    Gil #108: I was very careful to say, if the proportion of men who get arrested were to become as reliably predictable as the proportion of tritium atoms that decay. Now, in the latter case, we simply have the sum of a huge number of i.i.d. random variables, and we know the expectation to enormous precision. So in particular, we know not only the expectation and the variance, but all the higher moments as well (indeed, we know that the distribution is precisely binomial). To whatever extent the criminal statistics deviate from that ideal—i.e., the ideal of being able to calculate the entire distribution from first principles—to that extent I’d say there’s scope for Knightian unpredictability. (Of course, whether there’s “free will” is a much harder question: as you correctly point out, I regard Knightian unpredictability as necessary but not sufficient.)

    It would be fun to think about the quantitative aspects: e.g., if there’s such-and-such amount of Knightian unpredictability in the aggregate statistics, then how much Knightian unpredictability can that be consistent with at the individual level, and under which assumptions? But I don’t want to opine about that question until I’ve thought about it some more—and right now I have a talk to prepare! :-) So for now, let’s call it an open research problem in quantitative free-will studies.
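    To make the tritium benchmark concrete (just a back-of-the-envelope sketch, with the 12.32-year half-life as the only physics input): for N i.i.d. atoms, the yearly decay count is exactly Binomial(N, p), so all the moments are computable in advance, and the relative fluctuation falls off like 1/sqrt(N):

```python
import math

half_life = 12.32                     # tritium half-life in years
p = 1 - 2 ** (-1 / half_life)         # per-atom decay probability in one year

for N in (10 ** 6, 10 ** 12, 10 ** 18):
    mean = N * p                      # Binomial(N, p) mean...
    std = math.sqrt(N * p * (1 - p))  # ...and standard deviation
    print(f"N = 1e{round(math.log10(N))}: "
          f"mean decays = {mean:.3e}, relative fluctuation = {std / mean:.1e}")
```

    Whatever the real criminal statistics do beyond this fully calculable ideal is exactly the deviation I’m talking about.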

    (Btw, see you at the conference tomorrow! Just arrived at the Ramada an hour ago.)

  110. John Sidles Says:

    —————-
    Scott previously asserted (Proposition #90) “[I agree] to an even more general proposition. Whatever physical properties of the brain someone holds to be associated with “free will,” if a machine were built with those same properties, then I think the person would be obliged to regard the machine as having free will also.”

    Scott now asserts (Proposition #104) “If the fraction of men who commit crimes in a given year were as reliably predictable as the fraction of tritium atoms that decay in that year, then I’d say that the former was no more subject to “free will” than the latter.”
    —————-
    The above two statements are individually plausible and superficially compatible — and yet key common-sense assumptions that are known to be associated with the world-view of Proposition #90 grossly violate key common-sense assumptions that are known to be associated with the world-view of Proposition #104.

    Common-sense Proposition #001  On the time-scales that are relevant to biology, a single spin-1/2 atomic nucleus (like tritium) is wholly characterized by the two degrees of freedom associated with the Bloch sphere. In contrast, a single human being is characterized by sufficiently many degrees of freedom that even estimates of the number and/or character of these internal degrees of freedom possess Knightian uncertainty (e.g., how many neural graph-nodes, and what degrees of freedom for each graph-node, and what dynamical model of each graph-node, suffice in aggregate to characterize an individual human brain’s connectome?).

    Common-sense Proposition #002  Biological behaviors in general, and human biological behaviors in particular, and the class of human behaviors called “crimes” most particularly of all, are associated with exponentially more degrees of freedom than the elementary behavior of tritium atoms … and hence are exponentially more difficult to measure and/or predict than radioactive decay … so much so that the uncertainty associated with the determination of human guilt-versus-innocence in regard to “crimes” is Knightian (e.g., was the dynamicist Eberhard Hopf culpably wrong to accept a faculty appointment at the University of Leipzig in 1936?).

    Judge Learned Hand summarized the Knightian uncertainties that are intrinsically associated to notions of justice in a celebrated aphorism:
    ————————
    Learned Hand’s Proposition  “Justice is the tolerable accommodation of the conflicting interests of society, and I don’t believe there is any royal road to attain such accommodations concretely.”
    ————————
    Conclusion  The uncertainties associated to our understanding of radioactive decay (tritium decay in particular) and the uncertainties associated to our understanding of human behavior (criminal behavior in particular) both encompass elements of uncertainty that assuredly are Knightian … but “royal road” attempts to ascribe both kinds of Knightian uncertainty to “freebits” grossly violate key elements of our present understanding both of radioactive nuclei and of human cognition.

    Questions  Does the notion of “freebits” resolve any uncertainties, Knightian or otherwise, with regard to our understanding of cognition as a dynamic physical process? If so, what are these uncertainties? More broadly, does the notion of “freebits” provide any illumination at all, in principle or in practice, in regard to the profound Knightian uncertainties that are associated to the notion of cognition regarded as a justice-maximizing process?

  111. Gil Says:

    See you tomorrow, Scott!

  112. Abalieno Says:

    Scott #107: I personally don’t really “believe”. I simply swap a model I currently hold with another that seems to me more accurate or more convincing.

    In particular, I really don’t know much about the Omega Point; I simply think it encapsulates rather well the general idea you yourself expressed in that comment: the idea that eventually we reach far enough to be almost equivalent to a “God.”

    That’s why I mentioned the Omega Point: merely as the theoretical idea of maximum complexity and consciousness that represents the full understanding of reality. Just that.

    I think this position cannot be achieved. It’s just a direction you move toward, but one that can’t be reached, because the world you live in can’t be breached. Say that you are able for some reason to talk with “God,” and that you are then able to use what you just learned in your own life. What I’ve just described is a “violation” of the closed system. It’s the only possibility of “free will,” because by talking with God I was able to reach information that wasn’t directly available within the system. Hence the system is not closed; hence determinism is broken. The same happens if you give the brain a “black box” type of power that can tap into knowledge that exists outside the system (which seems to me the one you describe in that document). Another breach, a kind of black hole in a brain, that once again breaches the closed system and so escapes determinism. Are these black holes and breaches necessary to explain the world we live in? I think they aren’t.

    Now, what I “believe” is a kind of inverted scenario. My assumption is the mechanistic view. If you THEN want to make me believe in God, soul, free will and so on, then you have to PROVE those to me. Because it seems to me that the mechanistic view is more elegant and far more coherent.

    Back to your example: everything being computation means of course that everything can be simulated, given those absolute powers (which obviously is not trivial). If you want to read about interesting theories about how consciousness is created by its hardware (and how it’s all a big Magic Show), you can read about the Blind Brain Theory: http://rsbakker.wordpress.com/ (though there’s so much material, I could maybe link some more precise blog post, if you’re interested)

    And yes, I absolutely accept, rather than refuse, the idea of simulation within simulation. But I absolutely REJECT that this is metaphysics. The simulation idea is compatible with science and the mechanistic view. It’s not an abstraction and doesn’t rely on hand-waving.

    The simple idea is that we just don’t know whether our idea of “time” is valid, in the sense that we are now moving toward that goal of full knowledge, OR whether we’ve already been there once, or more than once, and our world is only the result of what we already achieved in our “future.” We just don’t know.

    But even if “we don’t know,” we still can’t refuse the power of science and how integrating information into our program is having an incredible impact on our lives. This science, no matter where we are in that simulation model, is EXTREMELY RELEVANT.

    In your FAQ 2.5 you seem to say that our program is “fixed” and so nothing can be achieved. This is true from the external point of view (because it simply means there’s no free will, seen from the outside), but it does not take into consideration the relativity of the point of view. For US that stuff is relevant. The program is not “fixed,” because it continuously integrates the environment. It changes. It evolves.

    If you look at this from the external point, nothing is changing. But if you consider the semi-closed system of a human being, there’s a DRAMATIC change. Because you are integrating knowledge you didn’t have, because your program is changing. They are all changes RELATIVE to the point of view, and still all changes that are “slaves” to the system. But this doesn’t make those changes any less important.

    At least if you can’t claim to be able to REALLY go there, to that external point of view. In that case, yes, we would be just puny humans to you.

  113. Nex Says:

    Scott: “But it seems you also have to believe that, at some point in evolution, humans (or some humans? ) managed to “break evolution’s shackles,” and do things not fully explicable in terms of maximizing Darwinian fitness—like, say, committing suicide, or joining the priesthood, or going to grad school.”

    But humans did not break any shackles. We exhibit no behavior that would be incompatible with evolution. One only has to realize that evolution is not perfect and doesn’t much care about the individual to see how many non-beneficial traits can arise. If some change/mechanism/behavior offers a benefit for the majority and is detrimental to the minority, it will still be selected for.

    This is true even at the most basic level of DNA mutations. Most of them are detrimental to the individuals carrying them, but there is no other way to sample the genetic landscape to isolate beneficial ones, so this is simply the cost of evolution.

    In the same way many detrimental (or neutral) traits present in humans are simply a byproduct of sampling the psychological landscape to isolate those changes which confer advantage.

    That doesn’t explain more widespread detrimental traits whose frequency implies they were selected for, but those can in turn be explained as side effects of otherwise beneficial traits, which manifest themselves in a small enough population to make the net result positive. Sickle-cell anemia is one example of such a complex trait: a single copy of the mutated gene offers malaria resistance, while two copies (which occur much more rarely) make one sick.

    Personally, I would classify discussions about consciousness, the origin of the universe, and similar metaphysics the same way. They are byproducts of the rational and inquisitive mind. Obviously rational thinking offers a great advantage in terms of evolutionary success, and even if it makes some individuals waste time on empty metaphysical discussions, that is still a very small price to pay. Given time, evolution might even stamp out this drawback.

    There is also nothing special about the fact that we humans can ponder our place in the universe. We simply have a model of reality, and this model includes our image of ourselves. Nothing profound or special about it. A Martian rover sending back a processed image of itself on the Martian landscape is doing pretty much the same thing.

    So personally I don’t see any valid difference between the zombie and our universe.

    Also, regarding your point about “the ultimate victory of the mechanistic worldview also being its defeat”: I completely subscribe to the we-are-nothing-more-than-a-pattern-of-bits view, and I don’t see any flaw in it. What the ultimate fundamental hardware is on which those bits are stored and processed is unknowable and irrelevant.

    Even if we were a simulation run by some entity for some particular purpose, that wouldn’t impart any meaningful purpose to our existence. Yes, you could say our life has a purpose for that entity, just as, say, the life of a hamster bred to test the toxicity of some substance has a purpose for the experimenter. But that purpose is meaningless to the hamster, and it’s only a “local” purpose anyway: the existence of the whole (experimenter plus hamster) still has no purpose. In other words, it’s the same exercise in futility as explaining the creation of the universe by inventing some creator; all it does is move the question to what created the creator.

  114. John Sidles Says:

    Nex asserts: Humans did not break any [evolutionary] shackles. […] This is true even at the most basic level of DNA mutations. Most of them are detrimental to individuals carrying them, but there is no other way to sample the genetic landscape to isolate beneficial ones so this is simply the cost of evolution.
    —————————–
    LOL … Nex, are you entirely certain of this general principle?

    A life-form that evaded this restriction would enjoy substantial evolutionary advantages … and who knows, such a process might even become very popular!

    Proposition  Comedy (in its many forms) is to the evolution of cognition as sex (in its many forms) is to the evolution of species: both processes commonly are undignified (even risky), yet withal both are remarkably effective.

    Corollary I  Contradictions speed the evolution of cognition as diploid genomes speed the evolution of species.

    Corollary II  The recombination of definitions brings surprising new capabilities to mathematicians as the recombination of genes brings surprising new capabilities to species.

  115. Scott Says:

    Nex #113: Well, I’m not going to argue too strongly with you. From my perspective, your view is a perfectly-reasonable local maximum in the space of worldviews. Indeed, it’s the local max that I myself occupied and enthusiastically defended for more than a decade, before I decided to head downwards in search of something I liked better, something that would give me a clearer answer to (e.g.) what I would experience if my brain state were perfectly duplicated. And while I have some ideas (set out in the essay), I certainly can’t claim to have reached that other thing, so someone still on the hilltop is perfectly justified to scoff at me.

    However, there was one thing I wanted to comment on. The idea of humans using compassion and rationality to “break the shackles of evolution” is something that I took, in part, from that notorious anti-evolution mystic, Richard Dawkins. E.g., from an interview:

      It’s perfectly consistent to say this is the way it is–natural selection is out there and it is a very unpleasant process. Nature is red in tooth and claw. But I don’t want to live in that kind of a world. I want to change the world in which I live in such a way that natural selection no longer applies.

    I’ve read other things by Dawkins where he makes even stronger statements, but I’m at a conference right now and can’t find them (maybe someone else can?).

    On a different topic, Dawkins has also said repeatedly that he regards consciousness as the biggest mystery in science, and that he has a lot of trouble believing that a computer that passed the Turing Test would actually be conscious, even though he seems forced to that position by his other intellectual commitments. He’s even referred positively to Roger Penrose’s speculations, which are at least an order of magnitude more radical than anything I’ve entertained.

    Now, I don’t want to argue from authority: it’s entirely possible that Dawkins is simply too soft and muddle-headed on these issues, and needs to be set straight. All I can say is that, when formulating my own worldview—trying to toy with crazy ideas that could take us beyond what we already know while still staying within the bounds of scientific rationality—I don’t feel an overwhelming need to out-Dawkins Dawkins! :-)

  116. Abalieno Says:

    If you want a radical take on consciousness that is very elegant and still goes against the grain, then I suggest looking again at Bakker’s blog.

    Here’s one post that is relatively short and gives a decent summary:
    http://rsbakker.wordpress.com/2013/04/17/lamps-instead-of-ladies-the-hard-problem-explained-2/

    But you can start pretty much everywhere.

  117. Scott Says:

    Abalieno #116: I read the link you provided; thanks. It seemed like a nice, well-written presentation of a 100% standard Dennett/Ryle/etc perspective: namely, that there is no hard problem of consciousness; there’s only the different problem of explaining why our brains generate the illusion of a hard problem of consciousness. If there’s something novel or radical in this account, then I missed it.

  118. Michael Gogins Says:

    Scott #115:

    Whether or not one enjoys the gore, pain, and indifference of natural selection, I don’t see how one’s attitude towards it, no matter one’s powers, exempts one from its workings. If I can save the lives of people who would otherwise die, or eat vegetables instead of meat, my offspring will still be more numerous or less numerous than some other being’s, and that’s natural selection.

    It’s clear from the sheer mass of human tissue on this planet that human beings are rather successful in evolutionary terms. Since we are eusocial, it is arguable that morality, compassion, and so on actually have a selective advantage.

    But of course, assuming for the sake of argument that there is such a selective advantage, that has absolutely nothing to do with the value of morality and compassion on their own terms.

    From the standpoint of natural selection, we do good in order that we may the more reproduce. From the standpoint of morality, we fight and reproduce — and only in certain good ways — so that we may advance the good.

    You have a nice attitude about the “hard problem” of sheer experience. “The good” is just as hard.

    There’s an unquantifiable source of variation “behind” decisions; and since the future is unknowable, it’s impossible to empirically decide between these two views of the good.

  119. Scott Says:

    Michael #118: Well, if you define “natural selection” in a broad enough way (“that which survives, survives”), then it trivially always operates. But if you define it a bit more narrowly, then I think it’s already stopped operating in much of the biosphere. At the least, we now have heavily artificial selection — where, e.g., whether the giant panda survives now depends almost entirely on whether humans find it cute enough, and whether the Japanese people survive partly depends on whether Japanese women find a work/family balance that allows them to have more kids. I.e., if the watchmaker is “blind,” it’s now “blind” in a very different sense than it was for most of the earth’s history! :-)

  120. Jr Says:

    On page 42 you say: “Of course, with the discovery of special relativity, we learned that the choice of t coordinate is no more unique than the choice of x, y, z coordinates;
    indeed, an event in a faraway galaxy that you judge as years in the future, might well be judged
    as years in the past by someone walking past you on the street.”

    You are assuming here that the two of you will be adopting different coordinate systems. Nothing forces you to do that, any more than Scott and I would be forced to disagree about the choice of origin of our coordinate systems if we were living in Euclidean space.

  121. Scott Says:

    Jr #120: Yeah, yeah, I know. Key word was “might.” ;-)

  122. Michael Gogins Says:

    Scott #118:

    But it’s still natural selection — unless we are not a part of nature.

  123. Scott Says:

    Michael #122: Well, now it’s just a relatively-boring quibble over terminology. Darwin himself distinguished between natural and artificial selection, using the latter as a central pillar of evidence for the former. I suppose one way to define natural (as opposed to artificial) selection would be the nonrandom survival of randomly varying replicators, in the absence of intentionality.

  124. Abalieno Says:

    Scott, about #117: “If there’s something novel or radical in this account, then I missed it.”

    This other Scott tells me: “He’s wrong about “Lamps,” but then we all assimilate the new to the old. Give him “Mary”: the challenge is laid out there far more specifically than anything you would find in Dennett.”

    So here’s Mary: http://rsbakker.wordpress.com/2013/05/27/the-something-about-mary/

    This time it’s a longer post, but hopefully it’s clearer about the kind of radical revolution Bakker sees (while not liking it – he’s not after theories that “please” him or that he hopes are proved true).

    P.S.
    While I’m at it, I’ll also give his own view of your argument up here in the comments, about Matrices inside Matrices and the purpose of the universe. I hope you don’t resent the quote ;)

    Why do I need to rule out any of the possibilities he poses? All I need to do is account for intentional phenomena in a manner consistent with the explanatory paradigm of the life sciences. I don’t need to ‘go quantum’ or anything of the sort. He’s the one that needs to reach, and therefore he’s the one with the burden to discharge.

    The argument you give is a perfect example: how does the possibility of our reality ‘being a simulation’ introduce the possibility of our reality ‘having a purpose’? If the base reality has the same structure as our reality, then it is mechanistic (which is not to say ‘deterministic’), and ‘purpose’ is as much an ecology-specific heuristic there as here. But this simply underscores the point: why assume that this ‘base reality’ shares anything with our ‘simulated reality’ at all? It actually seems to fall into the category of assuming ‘God’ must be anthropomorphic, only in this case we assume (on the basis of the only-game-in-town effect) that the base reality must be in some sense ‘cosmomorphic.’ The real question is ‘Why assume anything?’ The empty can rattles the loudest, I suppose.

    This guy is simply giving people who are desperately trying to naturally square the circle of intentionality another quiver of hope. The apologist always argumentatively swims downstream.

  125. Mike Says:

    Abalieno@124,

    I must say that you’ve pretty much lost me, but I’ll attribute that to my own substantial intellectual shortcomings and not to you or the point(s) you are trying to make.

    However, regarding the “Mary” challenge that kicks off your post, while I’m undoubtedly not the first person to point this out, I believe that the issue at the heart of the Mary problem is fairly straightforward and has to do with how the question is posed.

    If Mary has “all” of the physical information there is to obtain, including the physical information embodied by her brain in the act of experiencing the actual colors, then she learns nothing new on emerging from the room. If she doesn’t have the physical information embodied by her brain in the act of experiencing the colors, then she simply doesn’t have “all” of the physical information there is to obtain.

  126. Fred Says:

    It seems a lot of the discussion revolves around the effects of quantum mechanics on the classical world – whether the human brain is somehow an amplifier of such effects.
    And I’m not sure I understood the arguments against the objection that one could isolate brains from freebits by living in a closed system (like an underground mine). What happens to free-will in that case?

    Lately I’ve been wondering about taking this isolation one step further – with a VR headset like the Oculus Rift we’re able to “live” in purely deterministic worlds (less and less distinguishable from the real world), where the only sources of randomness are the users’ brains.

    Another thing is that one of the functions of the brain (to support simulation of the environment) is also to provide a memory of macrofacts. Not clear if there can be free-will without such memories.

  127. Abalieno Says:

    Mike #125:
    I don’t write for Three Pound Brain blog, I only read it.

    And in order to understand how Scott Bakker utilized the Mary experiment to introduce his own theory you’d have to read it all. It’s not a superficial thing.

    If you have no idea about Metzinger or Dennett theories you probably won’t even begin to understand those types of analysis on consciousness.

    Bakker believes that, in the same way we discovered that the Earth is peripheral to the universe rather than its center, and in the same way a human being is peripheral to evolution rather than its center, the same will prove true of consciousness: instead of being the center, it will be discovered to be peripheral to the activities of the greater brain.

    And obviously he doesn’t stop there. He explains WHY the brain developed this way, and HOW and WHY consciousness appears to us the way we “feel” it (qualia).

  128. luca turin Says:

    Scott #56:
    As a kinda sorta expert on ion channels [I was an electrophysiologist in a previous life], I would say that their argument on ion channels is entirely sound. In addition to water effects, there may be other quantum effects in ion channels as well, see for example http://iopscience.iop.org/1367-2630/12/8/085001

  129. Monoids, weighted automata and algorithmic philosophy of science | Theory, Evolution, and Games Group Says:

    […] Aaronson’s Why Philosophers Should Care About Computational Complexity (2011) and recent meditation on free-will […]

  130. Jay Says:

    luca #128

    Can you give a crude estimate on the minimal number of ion channels on which a stimulus must act so as to make a difference?

  131. Luca Turin Says:

    Jay #130 Sure: an ion channel passes, say, 10 pA of current, and membrane capacitance is 1 µF/cm². Near threshold the sensitivity of the Hodgkin-Huxley equations is very high, but say 1 mV makes the difference between firing and not firing. The change needs to be reasonably fast, otherwise channels inactivate – say within 1 ms. So that’s >1 V/s from 10 pA (reaches for HP-11C), which needs <10 pF of membrane to happen; according to my before-coffee calculations that’s a sphere ≈15 microns in radius, i.e. a decently-sized cell.
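    Luca’s back-of-envelope estimate in #131 can be redone in a few lines. This is only a sketch using the round numbers assumed in the comment (10 pA, 1 mV, 1 ms, 1 µF/cm²); it recovers the <10 pF capacitance bound, and the spherical-cell radius comes out around 9 µm, the same ballpark as the before-coffee ≈15 µm figure.

```python
# Back-of-envelope check of the single-channel estimate in comment #131.
# All numbers are the round values assumed in the comment.
import math

I = 10e-12            # single-channel current, amperes (10 pA)
dV = 1e-3             # voltage swing needed near threshold, volts (1 mV)
dt = 1e-3             # time available before channels inactivate, s (1 ms)
c_spec = 1e-6 / 1e-4  # specific membrane capacitance: 1 µF/cm² in F/m²

# dV/dt = I/C, so the largest capacitance one channel can charge in time:
C_max = I * dt / dV                        # farads
area = C_max / c_spec                      # membrane area, m²
radius = math.sqrt(area / (4 * math.pi))   # radius of a spherical cell, m

print(f"C_max  = {C_max * 1e12:.1f} pF")
print(f"radius = {radius * 1e6:.1f} µm")
```

    With these inputs the bound is 10 pF of membrane, i.e. roughly a 9 µm-radius sphere – a decently-sized cell, as the comment says.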

  132. mkatkov Says:

    Scott #29

    That is in the case of discrete input to the program. What if I give you a few charged ions frozen into glass, to have the freedom of the continuum?

    And on your side you have a computation device (QC) whose evolution depends on the particular positions of the frozen charges. And we want it to operate close to an unstable fixed point, such that small changes in the positions of the ions produce large differences in the evolution.

  133. Fred Says:

    John Sidles #83
    Thanks John, that’s a pretty interesting perspective indeed.
    Brings me back to my control theory classes.

  134. Jay Says:

    Luca #131

    Sorry, but if before-coffee calculations imply that most cells can’t trigger any action potential, coffee is required. ;-)

    The point was that, yes, description of the behavior of a single channel may well require quantum mechanics. I think this is why you said that their argument on ion channels is “entirely sound”.

    But for a freebit-like state to make a difference, one needs to postulate that its coherence survives even though it must act on several hundred or thousand voltage-gated channels – even if we consider the most sensitive part of the neuron, namely the axon hillock. This is why I disagree with “entirely”.

  135. John Sidles Says:

    Jay says: “Description of the behavior of a single channel may well require quantum mechanics.”
    ——————–
    Experiments by Hecht, Shlaer and Pirenne in 1942 established that (1) single cells in the retina indeed sense single photons, and (2) nine photons detected in near-synchrony suffice for the conscious experience of visual observation.

    Conclusion Phenomena that are broadly equivalent to the phenomena ascribed to freebits — that is, equivalent both quantum-dynamically and neuro-physiologically — have long been known.
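    The Hecht–Shlaer–Pirenne result cited above is usually modeled as Poisson counting: a dim flash is “seen” when at least n photons are absorbed in near-synchrony. Here is a minimal sketch of that model; the threshold n is a free parameter (HSP’s own fits put it roughly between 5 and 9, and n = 9 matches the figure quoted in the comment).

```python
# Poisson-counting model behind the Hecht-Shlaer-Pirenne experiment:
# P(seen) = P(Poisson(mean_photons) >= n), the probability that at least
# n photons are absorbed in near-synchrony.
import math

def p_seen(mean_photons: float, n: int = 9) -> float:
    """Probability that a flash with the given mean absorbed-photon
    count is seen, given an n-photon threshold."""
    p_below = sum(math.exp(-mean_photons) * mean_photons**k / math.factorial(k)
                  for k in range(n))
    return 1.0 - p_below

# The frequency-of-seeing curve rises steeply around the threshold count:
for m in (3, 6, 9, 15):
    print(f"mean absorbed = {m:2d}  ->  P(seen) = {p_seen(m):.3f}")
```

    The steep sigmoid this produces is exactly what HSP measured, which is how they inferred the photon threshold without single-photon instrumentation.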

  136. luca turin Says:

    Jay #134 Nope, that’s not what I meant: the way I read it, the Albrecht-Phillips paper basically asks whether a single ion channel can be a [quantum] random-number generator which decides whether or not a neuron goes over firing threshold. My decaffeinated calculation says yes. There is no need for any quantum coherence. Voltage is the controlling variable that then turns on hundreds or thousands of, say, sodium channels.

  137. Jay Says:

    Luca #136: Ok I don’t get your idea then.

    John #135: You’re right, it is known. Let me explain it again.

    All one needs to do is rest in complete darkness for, say, one hour, so that the rods inside the retina can adapt to darkness; then gently open one eye toward a dark part of the sky (don’t look at the moon!); then wait for a freebit to enter obliquely (not in a straight line – cones are not sensitive enough even after adaptation); then redo the process 50–100 times (unfortunately most freebits will never make it); and then, hurray, you will have a direct connection with the CMB!

    Please repeat every night, or you’ll lose your freedom to comment on clowns. Obviously, I’m due.

  138. Jon Lennox Says:

    Jay #137: The Cosmic Microwave Background consists of microwaves (go figure). If your rod cells are sensitive to microwaves, please let a research physiologist know – they have a Nobel Prize in Physiology or Medicine waiting…

  139. Jay Says:

    http://rationalwiki.org/wiki/Poe's_Law

  140. John Wilkinson Says:

    Michael 122, Scott 123…

    For me, taking a broad definition of natural selection doesn’t imply that natural selection is equivalent to the truism: ‘that which survives, survives’.

    Natural selection points out that that which has survived has, on occasion, done so by virtue of individual characteristics which made its survival more likely and which can, on occasion, be passed on to the next generation.

    Darwin did distinguish natural and artificial selection – but that doesn’t necessarily imply the two are mutually exclusive. For me, artificial selection is a narrow subset of natural selection – specifically, selection carried out intentionally by humans (selective breeding). I don’t think Darwin intended it to include instances such as ‘that giant panda is cute’, or indeed any other case where a member of one species (for example a hippo) decides for any reason to take pity on a member of another species (for example a deer – refer to YouTube if curious). These are all equally natural causes which will offer varying degrees of success in the future (appealing to hippos may have limited value, but appealing to humans may offer substantially more, at least in the short term). It may very well be the case that our continued existence as a species depends on a particular alien race finding us sufficiently likeable not to warrant eradication. It may therefore turn out that the single most important evolutionary advance a species on Earth can make is to be ‘cute’.

    I do not think that this discussion is a trivial debate over terminology – it’s one of the more important questions, particularly given our increasing capability to have a great impact on any number of species. I find it an extremely arrogant trait amongst humans that we consider ourselves and our actions ‘artificial’ in many cases. Our electricity pylons and nuclear power stations are no different in that respect from beavers building dams – they’re a natural progression and part of our evolution. Similarly, our choices to breed particular animals are no less natural than any other species’ choices over which animals to hunt and which to leave alone over the millennia, or any other behaviour which has an impact on other species.

    The other problem is that many people associate evolution with ‘improvement’ (instead of continued reproduction) and we have a particular perception of what constitutes ‘improvement’ which may not necessarily align with the reality or the ‘rules’ of the universe. As a result, it can become increasingly unsatisfactory for some people to perceive events such as selective breeding as a successful evolution of the species in question (for example – they evolved to be appealing to our dietary requirements, thereby ensuring their continued existence for as long as we survive). Arguably, however, it is just as evolutionary as any other progression made by any other species. In the case of the Japanese work/family balance – again it’s a perfectly legitimate evolutionary test in which the priority of the species to reproduce is tested. If particular races continue to have higher birth rates than others they will likely end up being the more numerous, and therefore more successful race in evolutionary terms.

    Back when I was younger I used to believe that evolution had stopped in many areas of the world, particularly amongst humans; to me it’s obviously an error when people describe us as becoming more intelligent as a species. We aren’t becoming more intelligent, for the simple reason that having greater intellect doesn’t currently give anyone a particular advantage when it comes to survival or reproduction (or perhaps only in a few isolated cases). In fact, it may even be the case that intellectuals have fewer children, on average, either by virtue of their priorities lying elsewhere or perhaps they are not quite as adept socially as they are intellectually. The problem with my former line of thinking is the assumption that becoming more intelligent is evolutionary. It isn’t, necessarily. From the perspective of an alien race, our intelligence could be perceived as a threat, particularly when we reach the point where we are able to increase our own intelligence. Similarly, in the case of a global nuclear war, our intelligence (which comes at the expense of a complex physical make-up) could be our downfall, where other less intelligent life forms (for example bacteria) would continue to thrive. In that sense, bacteria may well be better evolved than we are.

    The conclusion is that we make numerous assumptions when it comes to evolution – we believe we know more than the blind watchmaker – but we’re every bit as blind, as he continues to be. We have our perception of what constitutes evolution and we struggle with forms of evolution we consider ‘artificial’ or even devolution. The reality is that every characteristic and every behaviour continue and will always continue to contribute towards our evolution, and we’ll only know which are the most successful when they stand the test of time.

  141. Anon Says:

    i agree with you on the general hypothesis to make a “far” distinction between unpredictability and predictability instead of trying to state a general causation tunnel where the parameters of the field change yet remain computable or can be extrapolated to be generally structural.

    the next step would be which is a complicated mission is to check how this zero-point type GR (outside structurality) black hole type phenomena interacts with the complicated field (humans,animals). otherwise from the inside how the quantum phenomena bypasses these (limits) while maintaining them for larger collections of objects.

    the more exciting part is how this data (free-bit) intersects with more linearized roles in the brain, more baynesian, more functional probability fields in order to displace, and to inherit the spin distinctions or the locality in order to intervene intelligently with it at the most complicated levels. the quantity of free-bit and the field-mechanics of (free-bits) matter much because if there is a large quantity of them available to integrate a sort of supersymetric/asymetric, if there are available in even maybe 2% portions, depending on the particles which of course lattices and makes the function orthogonal at all intersections, then we have this sort of intelligence from the (free-bit) networks, that outputs complex boundary layers like chaos theory yet in fact carries this animorphism or more practically a basic mirroring-time of the particle interactions a moment of surrender from-to larger parts of existence of fields.

    it might raise questions about what the “soul inscription” is, is it just a highly complex deep harmonic within the being that is not taken into account except from ego cogito itself.

  142. Dave Says:

    Hi Scott – Very nice read! I know you didn’t want to discuss complexity in this essay, but I do think it would be interesting to mention in this context the distinction between determinism and predictability as it relates to computational complexity. I am thinking of Wolfram’s principle of computational equivalence – i.e. that even if a person’s will is pre-determined, it may not be predictable.

    Granted this has nothing to do with any sort of quantum effects necessarily, but I do think it would be worth mentioning, especially as it avoids some of the pitfalls of this kind of thing, like self-referential issues.

  143. Scott Says:

    Dave #142: I explained in Section 2.10 why I don’t think complexity is really relevant to what I’m talking about—because it’s simply not true that if you had a complete specification of the state of (say) a human brain, then the fastest way to predict what the brain would do would still just be to use the brain itself. Instead, you could simply load the brain’s state onto faster hardware (currently-existing computers might even suffice!), then use that to compute the predictions much faster than the brain itself.

    Therefore, while it’s true that “determined doesn’t imply predictable,” as far as I can see the only realistic way to break the link between the two is if the information needed to determine future behavior isn’t available to the would-be predictor even in principle.

  144. Jackie Says:

    Scott, you recently outlined how Knightian uncertainty could allow systems, including humans, a degree of freedom. In “Ask Me Anything” you suggested this may relate somehow to paradoxes in decision theory such as the Ellsberg Paradox. Diederik Aerts (“Quantum Structure in Cognition”, 2009) and Khrennikov (“Application of GKSL Equation in Cognitive Psychology”, 2010) appear to find that paradoxes of human cognition can be explained if the data is modeled with the quantum formalism. If a system returns data that creates a classical paradox yet is consistent with a quantum model, then shouldn’t we conclude that the system is quantum? Or are these authors making some error?
    Thanks.

  145. Dave Says:

    Scott #143: Thanks for your reply! But wouldn’t that be to disregard the elephant in the room of simulating the rest of the universe as well? You would never be able to tell me what company I am going to invest in next year without simulating everything else that is ever going to happen. Even if you could speed up the simulation of my brain (and everyone else’s even), you’d still have to figure out what the rest of the particles are going to do (including even quantum effects for the weather, dwave :), etc.).

  146. Jackie Says:

    Scott, in “GQTM” section 2.7, you call a physical system “predictable” if there’s any possible technology, consistent with the laws of physics, that would allow an observer to calculate probabilities for outcomes of all possible future measurements… This doesn’t even seem possible for a grossly classical system such as the Earth, which has an uncertain fate due to the long-term chaotic behavior of the Solar System. It would seem you need to restrict predictions to a temporal context – a necessary part of quantum thinking, right? Yet even if we restrict to a bounded region of a chaotic landscape, any transitional boundaries within that landscape will still be a source of potential surprises mapping to solutions outside the restricted landscape. How would this type of chaos be different from a freebit? Also, the brain is constantly rewiring itself: synapses connect and disconnect, neurons die, and new neurons are even born. This doesn’t even take into account possible disorders, lesions, or injury. If a machine is reprogrammed, then it is only generally the same. Since ion action is at the quantum scale, any brain copy or simulation would immediately diverge from the original – certainly before we could even ask “which one is me?”, since our awareness of something requires at least about 100 ms. Thank you for your reply.

  147. Scott Says:

    Dave #145: In Section 12, I point out that one can relatively-easily deal with the “rest of the universe problem” by assuming that the human to be predicted is hooked up to devices that transmit all her sensory inputs (from her eyes, ears, etc.) to the predictor, at the same time as the human’s brain itself receives them. Then one asks the question whether the predictor can converge on well-calibrated probabilities if given that data feed.

  148. Scott Says:

    Jackie #146: Actually, it’s not at all obvious to me that, for questions like the future trajectory of an unstable orbit (like, say, that of Saturn’s moon Hyperion), you couldn’t give well-calibrated probabilistic predictions by simply simulating the orbit a huge number of times with slightly-different initial conditions. See Section 5.2 for what I see as the relevant differences between the brain and “ordinary” chaotic systems.

    But OK, suppose classical chaotic effects did make it “obviously impossible” to duplicate the relevant information in a brain, as many people seem to think. In that case, we’d get the extremely interesting conclusion that uploading our brains to digital computers, as envisioned by the Singulatarians, should be impossible! And of course, other people have criticized me starting from the premise that brain-uploading should obviously be possible. If there’s a central thesis of my essay, it’s simply that those two groups can’t both be right, let alone obviously right. :-)
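    The ensemble-prediction idea in the first paragraph of #148 is easy to illustrate with a toy chaotic system. The sketch below uses the logistic map at r = 4 as a hypothetical stand-in for an unstable orbit (it has nothing to do with Hyperion’s actual dynamics): individual trajectories become unpredictable after a few dozen steps, yet re-simulating many times with slightly perturbed initial conditions yields a stable probability for a coarse-grained question.

```python
# Toy version of the ensemble prediction described in comment #148:
# simulate a chaotic system many times with slightly perturbed initial
# conditions and report the fraction of runs satisfying a coarse event.
import random

def trajectory(x: float, steps: int) -> float:
    """Iterate the chaotic logistic map x -> 4x(1-x)."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

def prob_upper_half(x0: float, noise: float, steps: int, runs: int) -> float:
    """Fraction of perturbed runs ending in the upper half (x > 0.5)."""
    hits = 0
    for _ in range(runs):
        # Perturb the initial condition, clamped to the open unit interval.
        x = min(max(x0 + random.gauss(0.0, noise), 1e-9), 1.0 - 1e-9)
        hits += trajectory(x, steps) > 0.5
    return hits / runs

random.seed(0)
# A 1e-6 perturbation doubles every step (Lyapunov exponent ln 2), so after
# 100 steps single trajectories are useless -- but the ensemble answer is
# stable and well-calibrated.
p = prob_upper_half(x0=0.3, noise=1e-6, steps=100, runs=20000)
print(f"P(x > 0.5 after 100 steps) = {p:.3f}")
```

    For this map the ensemble answer converges to the invariant-measure value of 1/2 regardless of the starting point – exactly the kind of well-calibrated probabilistic prediction the comment says is available for “ordinary” chaotic systems.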

  149. Dave Says:

    Scott 147 – thanks again, but I guess my point is that once the brain is exposed to any input – freebit or otherwise – it becomes unpredictable, because you can’t predict the input either.

    I guess if you cut off all the senses to the brain, you could ask: can I predict what the brain will be thinking 10 minutes from now? But the same argument could be made about the freebits.

    So again the point is that freebits are just as unpredictable as any input to the brain. Can I predict what I will be thinking in the next instant (before I get any more inputs)? Maybe, but I won’t get any freebits before then either.

  150. Anon Says:

    im not sure if im on to something here, but i have two ideas for proving general “unpredictability” thereom. since the probability distribution graphs can have exteriors lets say particular points near the upper right corner. measuring thousands of these images is an NP-hard+ problem to see if the exteriors have a correlary pattern function in the more natural un-spiked scaling distribution. this kind of leads to a predictability-unpredictability wavefunction really if there is some type of near orbital function in the background, but this can only be inductive from reasoning not totally reduced.

    the other idea which would be like a Witten-Tao-Perelman type of thereom would be first proving a totally disconnected topological space in 3+1 with “realistic” isomorphism to GR or QM equations. which means thats the space would have to converge with lets say poincare UV radiation at some frequency thats only mathematically computable as an example.
    http://en.wikipedia.org/wiki/Totally_disconnected_space

    which still leads to this structural-unpredictability duality which is the chaos-structure wavefunction, its possible when measuring past current boundries, except it cant be a transfinite space where the calculation is infinity breached, the exteriors in the probability distribution basically have to converge with the displacement of totally disconnected space for one case where the boundary (lets say 10^35) is actually proportional with the connected space isomorphism. this isnt really non-commutativity, but instead something inbetween at a particular scaling except the thing escapes in loops, recursions, twistor states.

    the last way would be to prove transfinite space from physics equations then it would be implicitly solved. which would be basically using the infinity gaps to build a structure instead.

  151. Scott Says:

    Anon #150: a “Witten-Tao-Perelman type of thereom [sic]”? Is that like a Bach-Beethoven-Beatles type of song?

  152. JohnS Says:

    In section 12 of your very intriguing paper, I find myself quite uncomfortable accepting your I(S) as a separate unanalyzed primitive concept.

    If some test of some predictor fails, how can one tell whether the predictor mightn’t just the same be a faithful model, with the failure due to inadequate precision in the measurement of one of the inputs? It seems impossible, even in principle, to separate a test of the predictor from a test of the adequacy of a proposed input set.

Leave a Reply