“Could a Quantum Computer Have Subjective Experience?”

Author’s Note: Below is the prepared version of a talk that I gave two weeks ago at the workshop Quantum Foundations of a Classical Universe, which was held at IBM’s TJ Watson Research Center in Yorktown Heights, NY.  My talk is for entertainment purposes only; it should not be taken seriously by anyone.  If you reply in a way that makes clear you did take it seriously (“I’m shocked and outraged that someone who dares to call himself a scientist would … [blah blah]”), I will log your IP address, hunt you down at night, and force you to put forward an account of consciousness and decoherence that deals with all the paradoxes discussed below—and then reply at length to all criticisms of your account.

If you’d like to see titles, abstracts, and slides for all the talks from the workshop—including talks by Charles Bennett, Sean Carroll, James Hartle, Adrian Kent, Stefan Leichenauer, Ken Olum, Don Page, Jason Pollack, Jess Riedel, Mark Srednicki, Wojciech Zurek, and Michael Zwolak—click here.  You’re also welcome to discuss these other nice talks in the comments section, though I might or might not be able to answer questions about them.  Videos of all the talks should be available before long (update: Jess Riedel has announced that they’re now available).

(Note that, as is probably true for other talks as well, the video of my talk differs substantially from the prepared version—it mostly just consists of interruptions and my responses to them!  On the other hand, I did try to work some of the more salient points from the discussion into the text below.)

Thanks so much to Charles Bennett and Jess Riedel for organizing the workshop, and to all the participants for great discussions.


I didn’t prepare slides for this talk—given the topic, what slides would I use exactly?  “Spoiler alert”: I don’t have any rigorous results about the possibility of sentient quantum computers, to state and prove on slides.  I thought of giving a technical talk on quantum computing theory, but then I realized that I don’t really have technical results that bear directly on the subject of the workshop, which is how the classical world we experience emerges from the quantum laws of physics.  So, given the choice between a technical talk that doesn’t really address the questions we’re supposed to be discussing, or a handwavy philosophical talk that at least tries to address them, I opted for the latter, so help me God.

Let me start with a story that John Preskill told me years ago.  In the far future, humans have solved not only the problem of building scalable quantum computers, but also the problem of human-level AI.  They’ve built a Turing-Test-passing quantum computer.  The first thing they do, to make sure this is actually a quantum computer, is ask it to use Shor’s algorithm to factor a 10,000-digit number.  So the quantum computer factors the number.  Then they ask it, “while you were factoring that number, what did it feel like?  did you feel yourself branching into lots of parallel copies, which then recohered?  or did you remain a single consciousness—a ‘unitary’ consciousness, as it were?  can you tell us from introspection which interpretation of quantum mechanics is the true one?”  The quantum computer ponders this for a while and then finally says, “you know, I might’ve known before, but now I just … can’t remember.”

I like to tell this story when people ask me whether the interpretation of quantum mechanics has any empirical consequences.

Look, I understand the impulse to say “let’s discuss the measure problem, or the measurement problem, or derivations of the Born rule, or Boltzmann brains, or observer-counting, or whatever, but let’s take consciousness off the table.”  (Compare: “let’s debate this state law in Nebraska that says that, before getting an abortion, a woman has to be shown pictures of cute babies.  But let’s take the question of whether or not fetuses have human consciousness—i.e., the actual thing that’s driving our disagreement about that and every other subsidiary question—off the table, since that one is too hard.”)  The problem, of course, is that even after you’ve taken the elephant off the table (to mix metaphors), it keeps climbing back onto the table, often in disguises.  So, for better or worse, my impulse tends to be the opposite: to confront the elephant directly.

Having said that, I still need to defend the claim that (a) the questions we’re discussing, centered around quantum mechanics, Many Worlds, and decoherence, and (b) the question of which physical systems should be considered “conscious,” have anything to do with each other.  Many people would say that the connection doesn’t go any deeper than: “quantum mechanics is mysterious, consciousness is also mysterious, ergo maybe they’re related somehow.”  But I’m not sure that’s entirely true.  One thing that crystallized my thinking about this was a remark made in a lecture by Peter Byrne, who wrote a biography of Hugh Everett.  Byrne was discussing the question, why did it take so many decades for Everett’s Many-Worlds Interpretation to become popular?  Of course, there are people who deny quantum mechanics itself, or who have basic misunderstandings about it, but let’s leave those people aside.  Why did people like Bohr and Heisenberg dismiss Everett?  More broadly: why wasn’t it just obvious to physicists from the beginning that “branching worlds” is a picture that the math militates toward, probably the simplest, easiest story one can tell around the Schrödinger equation?  Even if early quantum physicists rejected the Many-Worlds picture, why didn’t they at least discuss and debate it?

Here was Byrne’s answer: he said, before you can really be on board with Everett, you first need to be on board with Daniel Dennett (the philosopher).  He meant: you first need to accept that a “mind” is just some particular computational process.  At the bottom of everything is the physical state of the universe, evolving via the equations of physics, and if you want to know where consciousness is, you need to go into that state, and look for where computations are taking place that are sufficiently complicated, or globally-integrated, or self-referential, or … something, and that’s where the consciousness resides.  And crucially, if following the equations tells you that after a decoherence event, one computation splits up into two computations, in different branches of the wavefunction, that thereafter don’t interact—congratulations!  You’ve now got two consciousnesses.

And if everything above strikes you as so obvious as not to be worth stating … well, that’s a sign of how much things changed in the latter half of the 20th century.  Before then, many thinkers would’ve been more likely to say, with Descartes: no, my starting point is not the physical world.  I don’t even know a priori that there is a physical world.  My starting point is my own consciousness, which is the one thing besides math that I can be certain about.  And the point of a scientific theory is to explain features of my experience—ultimately, if you like, to predict the probability that I’m going to see X or Y if I do A or B.  (If I don’t have prescientific knowledge of myself, as a single, unified entity that persists in time, makes choices, and later observes their consequences, then I can’t even get started doing science.)  I’m happy to postulate a world external to myself, filled with unseen entities like electrons behaving in arbitrarily unfamiliar ways, if it will help me understand my experience—but postulating other versions of me is, at best, irrelevant metaphysics.  This is a viewpoint that could lead you to Copenhagenism, or to its newer variants like quantum Bayesianism.

I’m guessing that many people in this room side with Dennett, and (not coincidentally, I’d say) also with Everett.  I certainly have sympathies in that direction too.  In fact, I spent seven or eight years of my life as a Dennett/Everett hardcore believer.  But, while I don’t want to talk anyone out of the Dennett/Everett view, I’d like to take you on a tour of what I see as some of the extremely interesting questions that that view leaves unanswered.  I’m not talking about “deep questions of meaning,” but about something much more straightforward: what exactly does a computational process have to do to qualify as “conscious”?

Of course, there are already tremendous difficulties here, even if we ignore quantum mechanics entirely.  Ken Olum went over much of this ground in his talk yesterday (see here for a relevant paper by Davenport and Olum).  You’ve all heard the ones about, would you agree to be painlessly euthanized, provided that a complete description of your brain would be sent to Mars as an email attachment, and a “perfect copy” of you would be reconstituted there?  Would you demand that the copy on Mars be up and running before the original was euthanized?  But what do we mean by “before”—in whose frame of reference?

Some people say: sure, none of this is a problem!  If I’d been brought up since childhood taking family vacations where we all emailed ourselves to Mars and had our original bodies euthanized, I wouldn’t think anything of it.  But the philosophers of mind are barely getting started.

There’s this old chestnut: what if each person on earth simulated one neuron of your brain, by passing pieces of paper around?  It took them several years just to simulate a single second of your thought processes.  Would that bring your subjectivity into being?  Would you accept it as a replacement for your current body?  If so, then what if your brain were simulated, not neuron-by-neuron, but by a gigantic lookup table?  That is, what if there were a huge database, much larger than the observable universe (but let’s not worry about that), that hardwired what your brain’s response was to every sequence of stimuli that your sense-organs could possibly receive?  Would that bring about your consciousness?  Let’s keep pushing: if it would, would it make a difference if anyone actually consulted the lookup table?  Why can’t it bring about your consciousness just by sitting there doing nothing?
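To make the lookup-table version concrete, here’s a toy sketch in Python.  The “brain” below is of course a made-up stand-in function, and the table only covers stimulus sequences up to a tiny length; the point is just that, on the inputs it covers, the table is behaviorally interchangeable with the computation it replaces.

```python
from itertools import product

# A toy stand-in for "your brain": a deterministic map from a finite history
# of binary stimuli to a binary response.  (Entirely made up; it's here only
# to make the thought experiment concrete.)
def brain(history):
    return (sum(history) + len(history)) % 2

# The "giant lookup table": precompute the response to every possible
# stimulus sequence up to some length, then never compute anything again.
MAX_LEN = 12
lookup_table = {
    seq: brain(seq)
    for n in range(MAX_LEN + 1)
    for seq in product((0, 1), repeat=n)
}

# Behaviorally, the two are indistinguishable on these inputs; that's
# exactly what makes "which of them, if either, is conscious?" so slippery.
assert all(lookup_table[seq] == brain(seq) for seq in lookup_table)
print(len(lookup_table), "table entries for histories up to length", MAX_LEN)
```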

To these standard thought experiments, we can add more.  Let’s suppose that, purely for error-correction purposes, the computer that’s simulating your brain runs the code three times, and takes the majority vote of the outcomes.  Would that bring three “copies” of your consciousness into being?  Does it make a difference if the three copies are widely separated in space or time—say, on different planets, or in different centuries?  Is it possible that the massive redundancy taking place in your brain right now is bringing multiple copies of you into being?

Maybe my favorite thought experiment along these lines was invented by my former student Andy Drucker.  In the past five years, there’s been a revolution in theoretical cryptography, around something called Fully Homomorphic Encryption (FHE), which was first discovered by Craig Gentry.  What FHE lets you do is to perform arbitrary computations on encrypted data, without ever decrypting the data at any point.  So, to someone with the decryption key, you could be proving theorems, simulating planetary motions, etc.  But to someone without the key, it looks for all the world like you’re just shuffling random strings and producing other random strings as output.
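To give a flavor of what “computing on encrypted data” means, here’s a deliberately insecure toy sketch, in the spirit of the later “somewhat homomorphic over the integers” schemes rather than Gentry’s actual construction (no bootstrapping, absurdly small made-up parameters): adding ciphertexts XORs the underlying bits and multiplying them ANDs the bits, so shallow circuits can be evaluated entirely on ciphertexts, while only the holder of the secret key can read off the result.

```python
import random

# Secret key: a largish odd integer.  A ciphertext for a bit m is
# m + 2*noise + P*q, so (c mod P) mod 2 recovers m as long as the
# accumulated noise stays far below P.
P = random.randrange(10**6, 10**7) * 2 + 1

def encrypt(bit):
    noise = random.randrange(0, 11)        # small noise (kept even via the factor 2)
    q = random.randrange(10**8, 10**9)
    return bit + 2 * noise + P * q

def decrypt(c):
    return (c % P) % 2

# Homomorphic evaluation on ciphertexts: addition is XOR of the plaintext
# bits, multiplication is AND (until the noise grows too large).
a, b = 1, 0
ca, cb = encrypt(a), encrypt(b)
print(decrypt(ca + cb))        # a XOR b            -> 1
print(decrypt(ca * cb))        # a AND b            -> 0
print(decrypt(ca * cb + ca))   # (a AND b) XOR a    -> 1
```

To anyone who doesn’t know P, the ciphertexts and their sums and products are just enormous, meaningless-looking integers.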

You can probably see where this is going.  What if we homomorphically encrypted a simulation of your brain?  And what if we hid the only copy of the decryption key, let’s say in another galaxy?  Would this computation—which looks to anyone in our galaxy like a reshuffling of gobbledygook—be silently producing your consciousness?

When we consider the possibility of a conscious quantum computer, in some sense we inherit all the previous puzzles about conscious classical computers, but then also add a few new ones.  So, let’s say I run a quantum subroutine that simulates your brain, by applying some unitary transformation U.  But then, of course, I want to “uncompute” to get rid of garbage (and thereby enable interference between different branches), so I apply U⁻¹.  Question: when I apply U⁻¹, does your simulated brain experience the same thoughts and feelings a second time?  Is the second experience “the same as” the first, or does it differ somehow, by virtue of being reversed in time?  Or, since U⁻¹U is just a convoluted implementation of the identity function, are there no experiences at all here?

Here’s a better one: many of you have heard of the Vaidman bomb.  This is a famous thought experiment in quantum mechanics where there’s a package, and we’d like to “query” it to find out whether it contains a bomb—but if we query it and there is a bomb, it will explode, killing everyone in the room.  What’s the solution?  Well, suppose we could go into a superposition of querying the bomb and not querying it, with only ε amplitude on querying the bomb, and √(1-ε²) amplitude on not querying it.  And suppose we repeat this over and over—each time, moving ε amplitude onto the “query the bomb” state if there’s no bomb there, but moving ε² probability onto the “query the bomb” state if there is a bomb (since the explosion decoheres the superposition).  Then after 1/ε repetitions, we’ll have order 1 probability of being in the “query the bomb” state if there’s no bomb.  By contrast, if there is a bomb, then the total probability we’ve ever entered that state is (1/ε)×ε² = ε.  So, either way, we learn whether there’s a bomb, and the probability that we set the bomb off can be made arbitrarily small.  (Incidentally, this is extremely closely related to how Grover’s algorithm works.)
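Here’s a small numerical sketch of that amplitude bookkeeping: just two levels, “don’t query” and “query,” with the bomb modeled as a measurement of the query branch after every step.  The value of ε below is arbitrary, and this is probability bookkeeping rather than a full density-matrix simulation.

```python
import numpy as np

def vaidman(eps, bomb):
    """Repeatedly rotate a tiny amplitude eps onto the 'query the bomb' state.

    With no bomb, the rotations add up coherently, and after ~1/eps steps we
    end up querying with probability ~1.  With a bomb, the query branch gets
    measured (decohered) every step, so only ~eps of total explosion
    probability ever accumulates.
    """
    theta = np.arcsin(eps)                      # per-step rotation angle
    steps = int(round((np.pi / 2) / theta))     # ~ (pi/2)/eps repetitions
    amp = np.array([1.0, 0.0])                  # amplitudes on (no-query, query)
    p_explode, p_alive = 0.0, 1.0
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    for _ in range(steps):
        amp = rot @ amp
        if bomb:                                # the bomb "measures" the query branch
            p_query = amp[1] ** 2
            p_explode += p_alive * p_query
            p_alive *= 1.0 - p_query
            amp = np.array([1.0, 0.0])          # survived: projected back to no-query
    return amp, p_explode

eps = 0.01
amp_free, _ = vaidman(eps, bomb=False)
_, p_boom = vaidman(eps, bomb=True)
print("no bomb: P(ended up querying) =", round(amp_free[1] ** 2, 4))   # ~1.0
print("bomb:    P(ever exploded)     =", round(p_boom, 4))             # ~(pi/2)*eps
```

The printed numbers just reproduce the scaling argued above: probability ~1 of learning the answer when there’s no bomb, and only order-ε probability of ever setting it off when there is one.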

OK, now how about the Vaidman brain?  We’ve got a quantum subroutine simulating your brain, and we want to ask it a yes-or-no question.  We do so by querying that subroutine with ε amplitude 1/ε times, in such a way that if your answer is “yes,” then we’ve only ever activated the subroutine with total probability ε.  Yet you still manage to communicate your “yes” answer to the outside world.  So, should we say that you were conscious only in the ε fraction of the wavefunction where the simulation happened, or that the entire system was conscious?  (The answer could matter a lot for anthropic purposes.)

You might say, sure, maybe these questions are puzzling, but what’s the alternative?  Either we have to say that consciousness is a byproduct of any computation of the right complexity, or integration, or recursiveness (or something) happening anywhere in the wavefunction of the universe, or else we’re back to saying that beings like us are conscious, and all these other things aren’t, because God gave the souls to us, so na-na-na.  Or I suppose we could say, like the philosopher John Searle, that we’re conscious, and the lookup table and homomorphically-encrypted brain and Vaidman brain and all these other apparitions aren’t, because we alone have “biological causal powers.”  And what do those causal powers consist of?  Hey, you’re not supposed to ask that!  Just accept that we have them.  Or we could say, like Roger Penrose, that we’re conscious and the other things aren’t because we alone have microtubules that are sensitive to uncomputable effects from quantum gravity.  But neither of those two options ever struck me as much of an improvement.

Yet I submit to you that, between these extremes, there’s another position we can stake out—one that I certainly don’t know to be correct, but that would solve so many different puzzles if it were correct that, for that reason alone, it seems to me to merit more attention than it usually receives.  (In an effort to give the view that attention, a couple years ago I wrote an 85-page essay called The Ghost in the Quantum Turing Machine, which one or two people told me they actually read all the way through.)  If, after a lifetime of worrying (on weekends) about stuff like whether a giant lookup table would be conscious, I now seem to be arguing for this particular view, it’s less out of conviction in its truth than out of a sense of intellectual obligation: to whatever extent people care about these slippery questions at all, to whatever extent they think various alternative views deserve a hearing, I believe this one does as well.

The intermediate position that I’d like to explore says the following.  Yes, consciousness is a property of any suitably-organized chunk of matter.  But, in addition to performing complex computations, or passing the Turing Test, or other information-theoretic conditions that I don’t know (and don’t claim to know), there’s at least one crucial further thing that a chunk of matter has to do before we should consider it conscious.  Namely, it has to participate fully in the Arrow of Time.  More specifically, it has to produce irreversible decoherence as an intrinsic part of its operation.  It has to be continually taking microscopic fluctuations, and irreversibly amplifying them into stable, copyable, macroscopic classical records.

Before I go further, let me be extremely clear about what this view is not saying.  Firstly, it’s not saying that the brain is a quantum computer, in any interesting sense—let alone a quantum-gravitational computer, like Roger Penrose wants!  Indeed, I see no evidence, from neuroscience or any other field, that the cognitive information processing done by the brain is anything but classical.  The view I’m discussing doesn’t challenge conventional neuroscience on that account.

Secondly, this view doesn’t say that consciousness is in any sense necessary for decoherence, or for the emergence of a classical world.  I’ve never understood how one could hold such a belief, while still being a scientific realist.  After all, there are trillions of decoherence events happening every second in stars and asteroids and uninhabited planets.  Do those events not “count as real” until a human registers them?  (Or at least a frog, or an AI?)  The view I’m discussing only asserts the converse: that decoherence is necessary for consciousness.  (By analogy, presumably everyone agrees that some amount of computation is necessary for an interesting consciousness, but that doesn’t mean consciousness is necessary for computation.)

Thirdly, the view I’m discussing doesn’t say that “quantum magic” is the explanation for consciousness.  It’s silent on the explanation for consciousness (to whatever extent that question makes sense); it seeks only to draw a defensible line between the systems we want to regard as conscious and the systems we don’t—to address what I recently called the Pretty-Hard Problem.  And the (partial) answer it suggests doesn’t seem any more “magical” to me than any other proposed answer to the same question.  For example, if one said that consciousness arises from any computation that’s sufficiently “integrated” (or something), I could reply: what’s the “magical force” that imbues those particular computations with consciousness, and not other computations I can specify?  Or if one said (like Searle) that consciousness arises from the biology of the brain, I could reply: so what’s the “magic” of carbon-based biology, that could never be replicated in silicon?  Or even if one threw up one’s hands and said everything was conscious, I could reply: what’s the magical power that imbues my stapler with a mind?  Each of these views, along with the view that stresses the importance of decoherence and the arrow of time, is worth considering.  In my opinion, each should be judged according to how well it holds up under the most grueling battery of paradigm-cases, thought experiments, and reductios ad absurdum we can devise.

So, why might one conjecture that decoherence, and participation in the arrow of time, were necessary conditions for consciousness?  I suppose I could offer some argument about our subjective experience of the passage of time being a crucial component of our consciousness, and the passage of time being bound up with the Second Law.  Truthfully, though, I don’t have any a-priori argument that I find convincing.  All I can do is show you how many apparent paradoxes get resolved if you make this one speculative leap.

For starters, if you think about exactly how our chunk of matter is going to amplify microscopic fluctuations, it could depend on details like the precise spin orientations of various subatomic particles in the chunk.  But that has an interesting consequence: if you’re an outside observer who doesn’t know the chunk’s quantum state, it might be difficult or impossible for you to predict what the chunk is going to do next—even just to give decent statistical predictions, like you can for a hydrogen atom.  And of course, you can’t in general perform a measurement that will tell you the chunk’s quantum state, without violating the No-Cloning Theorem.  For the same reason, there’s in general no physical procedure that you can apply to the chunk to duplicate it exactly: that is, to produce a second chunk that you can be confident will behave identically (or almost identically) to the first, even just in a statistical sense.  (Again, this isn’t assuming any long-range quantum coherence in the chunk: only microscopic coherence that then gets amplified.)

It might be objected that there are all sorts of physical systems that “amplify microscopic fluctuations,” but that aren’t anything like what I described, at least not in any interesting sense: for example, a Geiger counter, or a photodetector, or any sort of quantum-mechanical random-number generator.  You can make, if not an exact copy of a Geiger counter, surely one that’s close enough for practical purposes.  And, even though the two counters will record different sequences of clicks when pointed at identical sources, the statistical distribution of clicks will be the same (and precisely calculable), and surely that’s all that matters.  So, what separates these examples from the sorts of examples I want to discuss?

What separates them is the undisputed existence of what I’ll call a clean digital abstraction layer.  By that, I mean a macroscopic approximation to a physical system that an external observer can produce, in principle, without destroying the system; that can be used to predict what the system will do to excellent accuracy (given knowledge of the environment); and that “sees” quantum-mechanical uncertainty—to whatever extent it does—as just a well-characterized source of random noise.  If a system has such an abstraction layer, then we can regard any quantum noise as simply part of the “environment” that the system observes, rather than part of the system itself.  I’ll take it as clear that such clean abstraction layers exist for a Geiger counter, a photodetector, or a computer with a quantum random number generator.  By contrast, for (say) an animal brain, I regard it as currently an open question whether such an abstraction layer exists or not.  If, someday, it becomes routine for nanobots to swarm through people’s brains and make exact copies of them—after which the “original” brains can be superbly predicted in all circumstances, except for some niggling differences that are traceable back to different quantum-mechanical dice rolls—at that point, perhaps educated opinion will have shifted to the point where we all agree the brain does have a clean digital abstraction layer.  But from where we stand today, it seems entirely possible to agree that the brain is a physical system obeying the laws of physics, while doubting that the nanobots would work as advertised.  It seems possible that—as speculated by Bohr, Compton, Eddington, and even Alan Turing—if you want to get it right you’ll need more than just the neural wiring graph, the synaptic strengths, and the approximate neurotransmitter levels.  Maybe you also need (e.g.) the internal states of the neurons, the configurations of sodium-ion channels, or other data that you simply can’t get without irreparably damaging the original brain—not only as a contingent matter of technology but as a fundamental matter of physics.

(As a side note, I should stress that obviously, even without invasive nanobots, our brains are constantly changing, but we normally don’t say as a result that we become completely different people at each instant!  To my way of thinking, though, this transtemporal identity is fundamentally different from a hypothetical identity between different “copies” of you, in the sense we’re talking about.  For one thing, all your transtemporal doppelgängers are connected by a single, linear chain of causation.  For another, outside movies like Bill and Ted’s Excellent Adventure, you can’t meet your transtemporal doppelgängers and have a conversation with them, nor can scientists do experiments on some of them, then apply what they learned to others that remained unaffected by their experiments.)

So, on this view, a conscious chunk of matter would be one that not only acts irreversibly, but that might well be unclonable for fundamental physical reasons.  If so, that would neatly resolve many of the puzzles that I discussed before.  So for example, there’s now a straightforward reason why you shouldn’t consent to being killed, while your copy gets recreated on Mars from an email attachment.  Namely, that copy will have a microstate with no direct causal link to your “original” microstate—so while it might behave similarly to you in many ways, you shouldn’t expect that your consciousness will “transfer” to it.  If you wanted to get your exact microstate to Mars, you could do that in principle using quantum teleportation—but as we all know, quantum teleportation inherently destroys the original copy, so there’s no longer any philosophical problem!  (Or, of course, you could just get on a spaceship bound for Mars: from a philosophical standpoint, it amounts to the same thing.)

Similarly, in the case where the simulation of your brain was run three times for error-correcting purposes: that could bring about three consciousnesses if, and only if, the three simulations were tied to different sets of decoherence events.  The giant lookup table and the Earth-sized brain simulation wouldn’t bring about any consciousness, unless they were implemented in such a way that they no longer had a clean digital abstraction layer.  What about the homomorphically-encrypted brain simulation?  That might no longer work, simply because we can’t assume that the microscopic fluctuations that get amplified are homomorphically encrypted.  Those are “in the clear,” which inevitably leaks information.  As for the quantum computer that simulates your thought processes and then perfectly reverses the simulation, or that queries you like a Vaidman bomb—in order to implement such things, we’d of course need to use quantum fault-tolerance, so that the simulation of you stayed in an encoded subspace and didn’t decohere.  But under our assumption, that would mean the simulation wasn’t conscious.

Now, it might seem to some of you like I’m suggesting something deeply immoral.  After all, the view I’m considering implies that, even if a system passed the Turing Test, and behaved identically to a human, even if it eloquently pleaded for its life, if it wasn’t irreversibly decohering microscopic events then it wouldn’t be conscious, so it would be fine to kill it, torture it, whatever you want.

But wait a minute: if a system isn’t doing anything irreversible, then what exactly does it mean to “kill” it?  If it’s a classical computation, then at least in principle, you could always just restore from backup.  You could even rewind and not only erase the memories of, but “uncompute” (“untorture”?) whatever tortures you had performed.  If it’s a quantum computation, you could always invert the unitary transformation U that corresponded to killing the thing (then reapply U and invert it again for good measure, if you wanted).  Only for irreversible systems are there moral acts with irreversible consequences.

This is related to something that’s bothered me for years in quantum foundations.  When people discuss Schrödinger’s cat, they always—always—insert some joke about, “obviously, this experiment wouldn’t pass the Ethical Review Board.  Nowadays, we try to avoid animal cruelty in our quantum gedankenexperiments.”  But actually, I claim that there’s no animal cruelty at all in the Schrödinger’s cat experiment.  And here’s why: in order to prove that the cat was ever in a coherent superposition of |Alive〉 and |Dead〉, you need to be able to measure it in a basis like {|Alive〉+|Dead〉,|Alive〉-|Dead〉}.  But if you can do that, you must have such precise control over all the cat’s degrees of freedom that you can also rotate unitarily between the |Alive〉 and |Dead〉 states.  (To see this, let U be the unitary that you applied to the |Alive〉 branch, and V the unitary that you applied to the |Dead〉 branch, to bring them into coherence with each other; then consider applying U⁻¹V.)  But if you can do that, then in what sense should we say that the cat in the |Dead〉 state was ever “dead” at all?  Normally, when we speak of “killing,” we mean doing something irreversible—not rotating to some point in a Hilbert space that we could just as easily rotate away from.
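For concreteness, here’s a toy two-dimensional check of that parenthetical argument, with made-up unitaries standing in for whatever the experimenter actually does to the cat: a random U is applied to the |Alive〉 branch, V is chosen so that V|Dead〉 coincides with U|Alive〉 (which is what letting the branches interfere requires), and then U⁻¹V is verified to rotate |Dead〉 straight back to |Alive〉.

```python
import numpy as np

rng = np.random.default_rng(1)
alive = np.array([1.0 + 0j, 0.0])   # |Alive>
dead  = np.array([0.0 + 0j, 1.0])   # |Dead>

# A random unitary U applied to the |Alive> branch during the measurement.
U, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))

# For the two branches to interfere, the unitary V applied to the |Dead>
# branch must map it onto the same state: V|Dead> = U|Alive>.
target = U @ alive
other = np.array([-np.conj(target[1]), np.conj(target[0])])  # orthonormal complement
V = np.column_stack([other, target])                         # columns: V|Alive>, V|Dead>

# The point: anyone who can implement U and V can also implement U^{-1} V,
# which rotates the "dead" cat back to "alive".
resurrected = U.conj().T @ V @ dead
print(np.allclose(resurrected, alive))   # True
```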

(There followed discussion among some audience members about the question of whether, if you destroyed all records of some terrible atrocity, like the Holocaust, everywhere in the physical world, you would thereby cause the atrocity “never to have happened.”  Many people seemed surprised by my willingness to accept that implication of what I was saying.  By way of explaining, I tried to stress just how far our everyday, intuitive notion of “destroying all records of something” falls short of what would actually be involved here: when we think of “destroying records,” we think about burning books, destroying the artifacts in museums, silencing witnesses, etc.  But even if all those things were done and many others, still the exact configurations of the air, the soil, and photons heading away from the earth at the speed of light would retain their silent testimony to the Holocaust’s reality.  “Erasing all records” in the physics sense would be something almost unimaginably more extreme: it would mean inverting the entire physical evolution in the vicinity of the earth, stopping time’s arrow and running history itself backwards.  Such ‘unhappening’ of what’s happened is something that we lack any experience of, at least outside of certain quantum interference experiments—though in the case of the Holocaust, one could be forgiven for wishing it were possible.)

OK, so much for philosophy of mind and morality; what about the interpretation of quantum mechanics?  If we think about consciousness in the way I’ve suggested, then who’s right: the Copenhagenists or the Many-Worlders?  You could make a case for either.  The Many-Worlders would be right that we could always, if we chose, think of decoherence events as “splitting” our universe into multiple branches, each with different versions of ourselves, that thereafter don’t interact.  On the other hand, the Copenhagenists would be right that, even in principle, we could never do any experiment where this “splitting” of our minds would have any empirical consequence.  On this view, if you can control a system well enough that you can actually observe interference between the different branches, then it follows that you shouldn’t regard the system as conscious, because it’s not doing anything irreversible.

In my essay, the implication that concerned me the most was the one for “free will.”  If being conscious entails amplifying microscopic events in an irreversible and unclonable way, then someone looking at a conscious system from the outside might not, in general, be able to predict what it’s going to do next, not even probabilistically.  In other words, its decisions might be subject to at least some “Knightian uncertainty”: uncertainty that we can’t even quantify in a mutually-agreed way using probabilities, in the same sense that we can quantify our uncertainty about (say) the time of a radioactive decay.  And personally, this is actually the sort of “freedom” that interests me the most.  I don’t really care if my choices are predictable by God, or by a hypothetical Laplace demon: that is, if they would be predictable (at least probabilistically), given complete knowledge of the microstate of the universe.  By definition, there’s essentially no way for my choices not to be predictable in that weak and unempirical sense!  On the other hand, I’d prefer that my choices not be completely predictable by other people.  If someone could put some sheets of paper into a sealed envelope, then I spoke extemporaneously for an hour, and then the person opened the envelope to reveal an exact transcript of everything I said, that’s the sort of thing that really would cause me to doubt in what sense “I” existed as a locus of thought.  But you’d have to actually do the experiment (or convince me that it could be done): it doesn’t count just to talk about it, or to extrapolate from fMRI experiments that predict which of two buttons a subject is going to press with 60% accuracy a few seconds in advance.

But since we’ve got some cosmologists in the house, let me now turn to discussing the implications of this view for Boltzmann brains.

(For those tuning in from home: a Boltzmann brain is a hypothetical chance fluctuation in the late universe, which would include a conscious observer with all the perceptions that a human being—say, you—is having right now, right down to false memories and false beliefs of having arisen via Darwinian evolution.  On statistical grounds, the overwhelming majority of Boltzmann brains last just long enough to have a single thought—like, say, the one you’re having right now—before they encounter the vacuum and freeze to death.  If you measured some part of the vacuum state toward which our universe seems to be heading, asking “is there a Boltzmann brain here?,” quantum mechanics predicts that the probability would be ridiculously astronomically small, but nonzero.  But, so the argument goes, if the vacuum lasts for infinite time, then as long as the probability is nonzero, it doesn’t matter how tiny it is: you’ll still get infinitely many Boltzmann brains indistinguishable from any given observer; and for that reason, any observer should consider herself infinitely likelier to be a Boltzmann brain than to be the “real,” original version.  For the record, even among the strange people at the IBM workshop, no one actually worried about being a Boltzmann brain.  The question, rather, is whether, if a cosmological model predicts Boltzmann brains, then that’s reason enough to reject the model, or whether we can live with such a prediction, since we have independent grounds for knowing that we can’t be Boltzmann brains.)

At this point, you can probably guess where this is going.  If decoherence, entropy production, full participation in the arrow of time are necessary conditions for consciousness, then it would follow, in particular, that a Boltzmann brain is not conscious.  So we certainly wouldn’t be Boltzmann brains, even under a cosmological model that predicts infinitely more of them than of us.  We can wipe our hands; the problem is solved!

I find it extremely interesting that, in their recent work, Kim Boddy, Sean Carroll, and Jason Pollack reached a similar conclusion, but from a completely different starting point.  They said: look, under reasonable assumptions, the late universe is just going to stay forever in an energy eigenstate—just sitting there doing nothing.  It’s true that, if someone came along and measured the energy eigenstate, asking “is there a Boltzmann brain here?,” then with a tiny but nonzero probability the answer would be yes.  But since no one is there measuring, what licenses us to interpret the nonzero overlap in amplitude with the Boltzmann brain state, as a nonzero probability of there being a Boltzmann brain?  I think they, too, are implicitly suggesting: if there’s no decoherence, no arrow of time, then we’re not authorized to say that anything is happening that “counts” for anthropic purposes.

Let me now mention an obvious objection.  (In fact, when I gave the talk, this objection was raised much earlier.)  You might say, “look, if you really think irreversible decoherence is a necessary condition for consciousness, then you might find yourself forced to say that there’s no consciousness, because there might not be any such thing as irreversible decoherence!  Imagine that our entire solar system were enclosed in an anti de Sitter (AdS) boundary, like in Greg Egan’s science-fiction novel Quarantine.  Inside the box, there would just be unitary evolution in some Hilbert space: maybe even a finite-dimensional Hilbert space.  In which case, all these ‘irreversible amplifications’ that you lay so much stress on wouldn’t be irreversible at all: eventually all the Everett branches would recohere; in fact they’d decohere and recohere infinitely many times.  So by your lights, how could anything be conscious inside the box?”

My response to this involves one last speculation.  I speculate that the fact that we don’t appear to live in AdS space—that we appear to live in (something evolving toward) a de Sitter space, with a positive cosmological constant—might be deep and important and relevant.  I speculate that, in our universe, “irreversible decoherence” means: the records of what you did are now heading toward our de Sitter horizon at the speed of light, and for that reason alone—even if for no others—you can’t put Humpty Dumpty back together again.  (Here I should point out, as several workshop attendees did to me, that Bousso and Susskind explored something similar in their paper The Multiverse Interpretation of Quantum Mechanics.)

Does this mean that, if cosmologists discover tomorrow that the cosmological constant is negative, or will become negative, then it will turn out that none of us were ever conscious?  No, that’s stupid.  What it would suggest is that the attempt I’m now making on the Pretty-Hard Problem had smacked into a wall (an AdS wall?), so that I, and anyone else who stressed in-principle irreversibility, should go back to the drawing board.  (By analogy, if some prescription for getting rid of Boltzmann brains fails, that doesn’t mean we are Boltzmann brains; it just means we need a new prescription.  Tempting as it is to skewer our opponents’ positions with these sorts of strawman inferences, I hope we can give each other the courtesy of presuming a bare minimum of sense.)

Another question: am I saying that, in order to be absolutely certain of whether some entity satisfied the postulated precondition for consciousness, one might, in general, need to look billions of years into the future, to see whether the “decoherence” produced by the entity was really irreversible?  Yes (pause to gulp bullet).  I am saying that.  On the other hand, I don’t think it’s nearly as bad as it sounds.  After all, the category of “consciousness” might be morally relevant, or relevant for anthropic reasoning, but presumably we all agree that it’s unlikely to play any causal role in the fundamental laws of physics.  So it’s not as if we’ve introduced any teleology into the laws of physics by this move.

Let me end by pointing out what I’ll call the “Tegmarkian slippery slope.”  It feels scientific and rational—from the perspective of many of us, even banal—to say that, if we’re conscious, then any sufficiently-accurate computer simulation of us would also be.  But I tried to convince you that this view depends, for its aura of obviousness, on our agreeing not to probe too closely exactly what would count as a “sufficiently-accurate” simulation.  E.g., does it count if the simulation is done in heavily-encrypted form, or encoded as a giant lookup table?  Does it matter if anyone actually runs the simulation, or consults the lookup table?  Now, all the way at the bottom of the slope is Max Tegmark, who asks: to produce consciousness, what does it matter if the simulation is physically instantiated at all?  Why isn’t it enough for the simulation to “exist” mathematically?  Or, better yet: if you’re worried about your infinitely-many Boltzmann brain copies, then why not worry equally about the infinitely many descriptions of your life history that are presumably encoded in the decimal expansion of π?  Why not hold workshops about how to avoid the prediction that we’re infinitely likelier to be “living in π” than to be our “real” selves?

From this extreme, even most scientific rationalists recoil.  They say, no, even if we don’t yet know exactly what’s meant by “physical instantiation,” we agree that you only get consciousness if the computer program is physically instantiated somehow.  But now I have the opening I want.  I can say: once we agree that physical existence is a prerequisite for consciousness, why not participation in the Arrow of Time?  After all, our ordinary ways of talking about sentient beings—outside of quantum mechanics, cosmology, and maybe theology—don’t even distinguish between the concepts “exists” and “exists and participates in the Arrow of Time.”  And to say we have no experience of reversible, clonable, coherently-executable, atemporal consciousnesses is a massive understatement.

Of course, we should avoid the sort of arbitrary prejudice that Turing warned against in Computing Machinery and Intelligence.  Just because we lack experience with extraterrestrial consciousnesses, doesn’t mean it would be OK to murder an intelligent extraterrestrial if we met one tomorrow.  In just the same way, just because we lack experience with clonable, atemporal consciousnesses, doesn’t mean it would be OK to … wait!  As we said before, clonability, and aloofness from time’s arrow, call severely into question what it even means to “murder” something.  So maybe this case isn’t as straightforward as the extraterrestrials after all.

At this point, I’ve probably laid out enough craziness, so let me stop and open things up for discussion.

206 Responses to ““Could a Quantum Computer Have Subjective Experience?””

  1. wolfgang Says:

    I believe I was thinking along similar lines,
    see e.g. my comment
    https://scottaaronson-production.mystagingwebsite.com/?p=1823#comment-108792

    However here is a problem I am struggling with.

    Consider a large quantum computer, fully unitary, which has no consciousness according to this line of argument.
    But then a single photon escapes and disappears into the de Sitter universe … and this single photon is all of a sudden the reason for the quantum computer to acquire consciousness?

  2. Joshua Zelinsky Says:

    I am beginning to think that the entire idea of “consciousness” may just not be coherent. It may be an intuition that made sense in the context that humans evolved in but that has no substantial meaning that can be made precise to a satisfactory degree. Unfortunately, since many issues of morality do hinge on what is or is not conscious, this isn’t really a helpful thing to say.

    I do like that this emphasis on the Arrow of Time and decoherence does seem to fit closely with at least my intuition (and I suspect the intuition of many others) on what should or should not be called conscious.

  3. Lukasz Grabowski Says:

    What are your thoughts on Gromov’s ergobrains paper?
    (http://www.ihes.fr/~gromov/PDF/ergobrain.pdf) Or first 3 lines of it?

  4. Scott Says:

    Lukasz #3: Sorry, I haven’t seen that paper before, and I’m not going to judge it based on the first 3 lines. I’ll put it on my stack to maybe read eventually. Thanks for the ref!

  5. Scott Says:

    wolfgang #1: Well, I never suggested that decoherence is a sufficient condition for consciousness: certainly it isn’t (since qubits decohere). Even decoherence combined with (say) the ability to pass the Turing Test might not be sufficient: I honestly have no idea, nor do I know what further properties might be needed. My main goal here was just to give a principled ground for classifying certain things (e.g., Boltzmann brains, giant lookup tables, the Vaidman AI) as definitely not conscious, in a way that wouldn’t fail “Turing’s Morality Test” by opening the door to arbitrary discrimination (“I don’t like the way you look, so I think I’m gonna call you non-sentient”).

    (Incidentally, while I know this wasn’t the point of your comment, in any plausible fleshing-out of what I wrote, a single escaped photon surely wouldn’t make any difference, since it carries away only a few qubits of information.)

  6. Infinity Says:

    Hi Scott,

    These are very interesting views. I will have to play with your ideas for a while before I grasp them to my satisfaction. Meanwhile can you please describe the picture of “what consciousness is” in your head? I know you are trying to formulate the problem quantitatively. But vaguely what is consciousness for you? You have addressed what is definitely not conscious here but can you give any non-trivial example of conscious things?

  7. Daniel Freeman Says:

    I like this much more as a cardinality scheme for consciousness than as a bright-line for consciousness.

    Suppose, for the argument, that I have a sufficiently large quantum computer (call it the F-Wave III) capable of simulating two people having a conversation at standard temperature and pressure in a well insulated box.

    Say I run a “closed box”, reversible simulation of these people (copies of myself, say) talking to one another.

    To the copy of myself talking to the other copy, I don’t see how its subjective experience of conversation could be phenomenally any different from my own subjective experience of conversation.

    That said, I essentially agree with all your points in this post, regarding our ethical commitments to fully reversible simulations. If I had encoded some torture-simulation instead of a conversation, I don’t see how carrying out the torture simulation on the F-Wave III could be any more morally despicable than simply writing down the initial conditions of such a torture simulation.

    Thus, these simulations of me are [F-Wave III]-Conscious.

    Now, suppose I wire a coincidence detector to the F-Wave III. When the conversation simulation runs, if the coincidence detector measures [insert your favorite quantum uncertain event here], a weak measurement is performed on a carbon atom in the left eyeball of one of my simulations.

    Or maybe the temperature of the room is raised by a degree.

    Or maybe it raises the temperature by ten degrees.

    Or maybe it’s a button that I can press to raise the temperature by ten degrees.

    Or maybe it’s a monitor and microphone by which I can communicate with the simulations of myself.

    Point being, the degree to which I’m coupled to my simulations has measurable implications for which Arrow of Time the simulations are actually beholden to. After all, the F-Wave III doesn’t have infinite computational power–it can’t simulate me talking to simulations of myself. Only the G-Wave I can do that :P.

  8. wolfgang Says:

    @Scott

    I wonder if we can turn this into a real physics problem:
    1) Assume a large-scale quantum computer is possible (thinking deep thoughts, but not really self-conscious as long as its evolution is fully unitary).
    2) Assume there is a channel which allows enough photons to escape in such a way to enable consciousness.
    3) However, at the end of this channel we place a mirror – if it is in the consciousness-OFF position the photons are reflected back into the machine and unitarity is restored, but in the consciousness-ON position the photons escape into the de Sitter universe.
    4) As you can guess we use a radioactive device to set the mirror into c-ON or c-OFF position with 50% probability.

    Will the quantum computer now experience (i) a superposition of consciousness and unconsciousness, or (ii) will it always have a “normal” conscious experience, or (iii) will it have a conscious experience in 50% of the cases?

    What does your proposal suggest?  (I would guess the last answer, iii.)

  9. Resuna Says:

    “I still need to defend the claim that (a) the questions we’re discussing, centered around quantum mechanics, Many Worlds, and decoherence, and (b) the question of which physical systems should be considered “conscious,” have anything to do with each other.”

    There is no reason to assume that they do. The whole question is probably meaningless. There’s no reason to treat the (as yet almost completely not understood) computations called “consciousness” any differently from any other computations.

    All the existential questions about parallel execution of conscious computations exist in the absence of quantum mechanics. What happens if you calculate the state of your brain N times, with very slightly different initial states, and choose the best (happiest, most correct, whatever) resulting state? What happens to the versions of you that didn’t get selected? Are they the same person? Did you die N times? Does it matter?

    What if you run the calculation in a perfectly classical sense N times, with the same initial state, so the end state of the computation is the same? Did the same identical consciousness die N times?

    The morality is difficult. It could be horrible. That’s not a problem with the theory; the objective morality of the real world is horrible. The universe we’re in is built on uncountable terrors; we wouldn’t exist without uncountable trillions of conscious beings dying horribly. But arguing that terror and torture are only real if there is evidence of their occurrence is worse. It seems cowardly to me.

    I’ve already addressed the statistical argument for Boltzmann brains in another thread, and I feel that the whole argument rests on a deep confusion between local and global entropy. You’d do better to worry about Roko’s Basilisk.

    Finally, in reply to “From this extreme, even most scientific rationalists recoil. They say, no, even if we don’t yet know exactly what’s meant by “physical instantiation,” we agree that you only get consciousness if the computer program is physically instantiated somehow.” I give you Greg Egan’s novel “Permutation City”.

  10. Joshua Zelinsky Says:

    Resuna #9,

    Do you have any evidence that Egan takes the ideas in Permutation City seriously?

  11. Scott Says:

    Infinity #6:

      Meanwhile can you please describe the picture of “what consciousness is” in your head? … You have addressed what is definitely not conscious here but can you give any non-trivial example of conscious things?

    Well, I’d like to think I’m conscious (much of the time), and I grant you the courtesy of being conscious as well. Human infants, primates, whales, other mammals, birds, and possibly some other animals also seem to most observers to have varying degrees of consciousness, though it’s hard to form clear ideas about what their consciousness is like.

    In one sentence, to say that “X is conscious” is to say that there’s something that it’s like to be X. You might (or might not) be able to find a better definition by dipping into the vast philosophical literature on the topic—for three very different perspectives, try David Chalmers, Daniel Dennett, and William James. As for “what it’s like to be conscious,” maybe the closest one can give to an answer is to refer the asker to the world’s trove of imaginative fiction. (Rebecca Goldstein, my favorite contemporary novelist, actually works the mind/body problem explicitly into her fiction, but even storytellers who don’t go to that extreme graze the problem indirectly.)

  12. Leon Says:

    Simulating consciousness seems similar to simulating intelligence/understanding, and the paper-passing simulation chestnut sounds a lot like Leibniz’s mill and the Chinese Room. A great paper on the understanding/intelligence side of the issue is

    Saygin, A.P., Cicekli, I., and Akman, V. (2000). “Turing Test: 50 Years Later.” Minds and Machines 10, 463–518,

    especially in its discussion of the “Total Turing Test”, “Total Total Turing Test”, “Truly Total Turing Test”, and “Seagull Test” (for flight).

    What do you think are the differences or similarities between (the problems of simulating) consciousness and ‘understanding’?

  13. Jamie Says:

    I’m shocked and outraged that someone who dares to call himself a scientist would write something like this!

  14. N Says:

    You say that we could make an arbitrarily-accurate classical simulation of a person, who would still not be conscious because they don’t amplify quantum effects. My problem is this: say you simulated a copy of someone who was very interested in consciousness. If you asked the simulation about it, they would say that they were conscious, they could perceive qualia, etc. So we would have to conclude that the reasons they have for talking about consciousness are totally causally disconnected from whether they are conscious or not.

  15. Stephen Says:

    I think the Tegmark-style view that you have alluded to becomes much more palatable if put in an algorithmic information theory framework. Suppose you paraphrase Tegmark’s position as “all mathematical structures are real” (and correspondingly potentially conscious). If you formalize “all mathematical structures” as “all program tapes” of a two-tape Turing machine (in which the program tape is read-only), then the programs form a prefix-free set of strings. (You don’t count the bits that never get read by the Turing machine head as part of the program string.) By choosing the bits of the program tape by independent fair coinflips, one thus obtains a legitimate probability distribution over programs: \( Pr(p) = 2^{-|p|} \) where |p| is the length of the program. The prefix-free property of the strings guarantees that these probabilities add to one. (To get a probability for an output instead of a program, sum over all programs producing it.)

    I haven’t thought through this point of view as thoroughly as I would like, but it seems to me that having a probability measure over the set of mathematical structures one is talking about improves Tegmark’s formulation a lot.

    Possibly my description is unclear. Here is a different approach, which offloads more of the descriptional work to the literature: take Leonid Levin’s refinement of Solomonoff’s universal prior and interpret it as metaphysics rather than epistemology. (I think Levin’s refinement using two-tape machines is important because it allows the \( 2^{-|p|} \) measure to fall out naturally rather than being postulated in a mysterious ad hoc manner.)

    Incidentally, in an amusing coincidence, I was recently thinking about “practical” applications of Vaidman brains. (However, the catchy name that you coined for them hadn’t occurred to me.) Suppose you wish to determine whether a course of action would yield pleasant or unpleasant outcomes. One way to find out would be to simulate the action and its perception by all the people it affects. However, if the simulation were sufficiently detailed then the simulated people would be just as conscious as real ones. (At least, one expects so, if one subscribes to the Dennett view.) Then the simulation has not fulfilled its purpose of warning you of harmful actions without actually causing the harm! To escape from this trap, use Vaidman’s quantum bomb-testing strategy instead of a classical simulation.

  16. Tom killean Says:

    Kernel indeterminacy is really a persistent problem here: as long as we have unresolved notions of causality, we leave everything on the table when it comes to consciousness. As much as Vaidman gets at a key problem, we need at least one rigorous primitive that connects consciousness to information independent of t. In terms of computability, quantum conjectures are not going to be enough to vet notions of consciousness more complex than Turing unless causality takes on more complexity than t-1, t+1 dependencies.

  17. Ken Miller Says:

    I may be in over my head here, so please forgive me if this is dumb, but: it seems to me that your postulate is that consciousness can only exist as part of a classical world. What do we mean by a classical world? We generally mean that something happened, as opposed to something else — the electron went through this slit, and not through that one. And we all know that if the two possibilities do not decohere — if they can ever converge on the same final physical state — that neither one happened, but only some quantum combination of both in which they interfere to give the probability of that final state. So a classical event happens if and only if that event decoheres from all the alternatives in just the sense you’re talking about, the arrow of time marching forward, no possibility of convergence on the same final state as any excluded alternative. So, aren’t you just hypothesizing that consciousness is a set of intrinsically classical events (which of course is not the same as saying intrinsically classical events are conscious)? And can’t we just summarize that by saying your hypothesis is that consciousness is an intrinsically classical phenomenon?

    Interested in your thoughts on this. Thanks.

  18. ppnl Says:

    I’m not sure what a system without a clean digital abstraction layer would look like. How would you construct it? How would you recognize one if you had one?

    And I’m not sure how to make sense of Knightian uncertainty. Could you program a quantum computer to display Knightian uncertainty?

  19. Hernan Eche Says:

    In the mood of the post, let’s play
    “What would Leibniz do?”

    While reading “clean digital abstraction layer” I think about this Leibniz Quote:

    “Thus the organic body of each living being is a kind of divine machine or natural automaton, which infinitely surpasses all artificial automata. For a machine made by the skill of man is not a machine in each of its parts. For instance, the tooth of a brass wheel has parts or fragments which for us are not artificial products, and which do not have the special characteristics of the machine, for they give no indication of the use for which the wheel was intended. But the machines of nature, namely, living bodies, are still machines in their smallest parts ad infinitum. It is this that constitutes the difference between nature and art, that is to say, between the divine art and ours.”

    Perhaps what Leibniz would say about our artificial automata (with their classical joysticks and predictable interfaces) is that they are toys that can’t be compared with nature’s art (nature has no classical “clean digital abstraction layer”).

    What would he say about quantum?
    -Just unknowable superdeterminism.

    What would Leibniz say about scalable quantum computers? In the sense of the quote, perhaps he would say: “Did you say it has a classical predictable abstraction layer interface? Ok, so that’s also an artificial toy.”

    Then that classical interface is the measurement, and Leibniz could always say: “it’s artificial”… unless the QC didn’t halt, or its results were unpredictable; but then it doesn’t seem to be a “computer”, in the sense that we could have power over it and classically “check” its results… (ok, it could still be a computer, but only for the universe’s purposes)

    Now I ask, “What could it mean to do a correct, repeatable factoring of a 10000-digit number, but to push that classical predictable interface (measurement prediction) deeper?”

    It seems that scalable QC would imply you could get as many bits as the “user” can/wants to see, that is, to make the quantum as classical as you want. Leibniz wouldn’t like that.

  20. Nathan Montgomery Says:

    Firstly, I think all your points were very clear and it was a great read. I haven’t read it as thoroughly as I would like yet, but I couldn’t resist typing some of my thoughts out.

    Concerning what seems (to me) to be an underlying part of the questions, I think memory is an inherent structure within ‘consciousness’, and without memory you can’t have ‘it’. Such that a quantum computer ‘doing nothing’ to somehow compute all of the pieces necessary to give any response to the outside world would at least need to remember doing that nothing and be able to recount its experience (somehow) in order to be considered conscious.

    But then we would really have to prove that (at least) memory of the experience can be reconstructed perfectly with a quantum process. The implication seems to just circle around to whether experiential memory can be perfectly replicated by a quantum computer, so we don’t get far, but maybe shifting the question can help somehow?

    Memory in some forms is an at least mildly understood portion of neuroscience. For instance, hippocampal place cells seem to map regions of space onto a set of neurons participating in hippocampal activity. These cells often show activity during sleep, and are believed to play a role in the creation of long term memory.

    So, to throw in at least something on the biological question, does that process require anything quantum? It doesn’t seem to, but models for it are not perfect so there is always room for micro-tubules (not really). Some currently working theories though do propose multiple maps are made through different regions of the brain, and that these are combined in some way, though no models stick out as being ‘that one really perfect match for how the brain makes memories’ that I know of.

    So, maybe there are multiple ‘self maps’ being generated in some part of the processes giving rise to consciousness? It doesn’t rightly matter as this could all still run on a perfectly ‘classical’ brain. In fact many of the best understood parts of the hippocampal process involve very macroscale receptor dynamics, e.g., NMDA receptor behavior. Of course we do know there is dependence on genetic processes, which could depend on something less macroscale (somehow), but still very likely not quantum scale dependent computationally.

    But does that process require something irreversible? That seems like a harder question, though intuitively I feel like it has to, else how would we be conscious of the temporal ordering of our memories? I will settle for saying it’s at least possible.

    Seems even if nothing else biological about consciousness matches your ‘irreversible consciousness’ hypothesis, memory at least could. Of course that hinges on assuming we (or at least things with brains that have memories) are conscious.

    Interestingly, those hippocampal place cells I mentioned earlier seem to replay events in reverse order during sleep. Stepping out a bit on a limb, let’s imagine they are active during dreams. Does memory and therefore consciousness then depend on dreams? I don’t know of any strong evidence for it directly, but maybe? If so, could it be that the attempt to reverse the process of our (irreversible) consciousness is what gives rise to another part of consciousness? Far out man.

    Anyways, I really like these ideas, thanks for sharing more of your musings Scott.

    PS – Apologies for no citations, but really what self respecting scientifically minded individual would ever consider participating in such a silly discussion…

  21. Scott Says:

    Resuna #9:

      arguing that terror and torture is only real if there is evidence of its occurrence is worse. It seems cowardly to me.

    It occurred to me that the above reaction, while understandable, might contain a major misconception about what I believe.

    Let’s suppose the earth is destroyed by a supernova; all that’s left of it (and the only records of its ever having existed) is some cosmic debris, and photons mostly headed toward the deSitter horizon. As a practical matter, no aliens will ever recover evidence of the human race’s atrocities, or for that matter its achievements, or there having been a human race. (In principle, the aliens could pick up radio signals that left the earth before the supernova—but let’s suppose that, by the time those signals reach where the aliens live, they’re too faint to be picked up.)

    Even then, according to the view discussed here, aliens watching the supernova from afar would have to reason as follows: “if there were inhabited planets in the vicinity of that supernova, we’ll probably never know about them—but, in some sense, ‘the universe knows.’ The records of whatever happened on those planets are, presumably, unalterably stamped into the universe’s state, written into photons too faint to be seen that will now travel like silent witnesses into the infinite void. [OK, maybe the aliens are better writers.] And because ‘the universe knows’ about such planets, if they existed, we must say that all that happened on them, really happened: their joys were real, their sufferings were real, their destruction was real.”

    The one situation where this view would lead you to say that terror and suffering “aren’t real” is where there’s not merely no evidence of its having occurred, but the existence of any evidence would violate physical law. So for example, if the suffering occurred in a single branch of a double-slit experiment that then recohered with the second branch. And in those cases, the view holds, not that there were sentient beings who suffered but it’s OK to ignore their suffering because we’ll never have evidence of it, but rather that nothing in our experience (or in the thought experiments we can devise) should cause us to say that there was suffering, as opposed to a quantum-computational simulation of suffering. In any case, if there was suffering, and if we knew enough to set up this double-slit experiment in the first place, then we can now make it up to the victims by simulating those same victims being resurrected and experiencing indescribable joy. For good measure, in both of the slits.

  22. Scott Says:

    N #14:

      My problem is this: say you simulated a copy of someone who was very interested in consciousness. If you asked the simulation about it, they would say that they were conscious, they could perceive qualia, etc. So we would have to conclude that the reasons they have for talking about consciousness are totally causally disconnected from whether they are conscious or not.

    I understand the intuitive force of that objection. Let me see if I can meet it head-on.

    Suppose I wrote a chatbot, not much more sophisticated than ELIZA or Eugene Goostman, that (when asked about the subject) said it was conscious, it could perceive qualia, etc. Presumably we’d all agree that the chatbot wasn’t conscious, and that its reasons for claiming to be conscious were causally disconnected from the question of whether it was conscious or not. But we’d also agree that this didn’t provide a very good reductio of anything. Why not? Because it’s just a chatbot, goddammit! It can be “unmasked as robotic” by simple expedients like asking it whether Mount Everest is bigger than a shoebox, or asking it the same question 3 times.

    Now, with the full brain emulation that you’re talking about, “unmasking it as robotic” would be much more nontrivial. Indeed, let’s assume that the emulation passes a strong Turing Test, so that (by definition) it could never be unmasked as robotic, by any conversation of any length in which the emulation was accessed only as a black box.

    Even then, the view of my essay stresses that unmasking the emulation as robotic would be as simple as having a second copy of its code running on another computer—so that one could see for oneself that the same questions (together with the same random numbers and other stimuli) provoked the exact same responses, and could even do a memory dump to trace exactly how those responses were produced. Much like with certain constructions in theoretical cryptography, black-box access to the “adversary” (in this case, the brain emulation) doesn’t suffice, but access to the adversary’s code is enough.
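
    A toy sketch of that point (purely illustrative, with a made-up function standing in for the emulation): two runs of the same deterministic program, given the same seed and the same stimuli, agree exactly, and that agreement is something anyone holding the code can check for themselves.

      import random

      def emulated_reply(seed, questions):
          # Stand-in for a deterministic emulation: the same seed and the same
          # stimuli always produce the same responses.
          rng = random.Random(seed)
          return [f"answer {rng.randrange(10**6)} to {q!r}" for q in questions]

      qs = ["Is Mount Everest bigger than a shoebox?"] * 3
      copy_a = emulated_reply(seed=42, questions=qs)
      copy_b = emulated_reply(seed=42, questions=qs)
      assert copy_a == copy_b  # the two copies are behaviorally indistinguishable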

    One might object that the same thing (making an exact copy and then tracing through the code) ought to be possible in principle for an organic human brain, but that’s exactly the point that this view questions. If it were possible, then I’d readily admit that this view was no longer tenable. But you wouldn’t even need to ask me; you could just run my simulation instead. 🙂

  23. Adam M Says:

    Your blog might be the most reliably interesting thing on the whole internet.

    That said, I have a question: is your suggestion — the requirement of computation + decoherence — much different or more plausible than just saying that consciousness is the result of a real physical process (just like most other things in the universe), instead of something that gets magically conjured up by performing an abstract computation?

    The nice head-scratching paradoxes you mention mostly stem from this notion of consciousness as the result of computation. But what evidence is there that consciousness has anything to do with computation in the first place?

  24. Mitchell Porter Says:

    N #14:

      So we would have to conclude that the reasons they have for talking about consciousness are totally causally disconnected from whether they are conscious or not.

    You are committing a peculiar fallacy.

    “Suppose we simulated how a particular building would burn if it caught fire, and we then used smoke machines to produce an actual column of smoke, exactly like that which would be produced if the building really was burning. In that case, the fire brigade would still show up and try to put out the fire. So we would have to conclude that whether the building emits smoke has nothing to do with whether it is actually burning.”

    The capacity of an unconscious simulation of consciousness to still reproduce the behavioral effects attributed to consciousness, does not imply that consciousness isn’t a causal factor when it is actually present.

  25. William Says:

    Scott, the argument that decoherence is a matter of entropy leads me to wonder if you’ve thought of the intersection of these arguments with Wissner-Gross’ consciousness-as-entropy-maximizing?

  26. Jochen Says:

    Whenever I encounter the ‘conscious lookup table’, I find that I can’t quite figure out how it’s supposed to work. I mean, the basic stipulation is clear: for any given length of conversation there’s a finite number of possible queries and responses, which are explicitly coded into the lookup table. But what does the table answer when I ask, ‘what was the last thing I said’? It seems to me, in order to do so, it would have to possess at least some internal state which changes based upon the conversation’s progress, and the addition of such complexities makes the intuition that the table couldn’t be conscious much less clear, to my mind.

    Plus, for any lookup table, I think it’s a mistake not to take its history into account: somebody or something must have programmed it, and, in order to make its responses resemble that of an intelligent being, that programmer must itself have been intelligent. But then, it seems to me that the conversation with a lookup table really is a conversation with its programmer, by means of stored responses—an intelligence test of the programmer by proxy, if you will.

    But still, I think the proposition that some algorithm is sufficient to explain our phenomenology-production process (i.e. whatever it is that starts with the physical substrate and ‘outputs’ phenomenological experience) seems doubtful. The reason is that you can modify the usual Gödel/Lucas/Penrose argument for such a case: take a class of experiences that we will call T-experiences, which are the experiences we have whenever we know something is true. Like if I wear red socks, and I know I wear red socks, we have a T-experience regarding the proposition ‘I wear red socks’.

    Now suppose some machine—the phenomenal machine P—instantiates the algorithm that gives rise to my phenomenology. Then, given a suitable description (i.e. Gödel number) of my epistemic situation—thus being able to determine what propositions I could recognize as being true—and some proposition x, it should be able to print out TruX(x) if and only if I have a T-experience regarding x.

    But now let G be: P does not print TruX(G). Then, if we assume for a contradiction that P does print TruX(G), we can conclude that we have a T-experience regarding G (as P instantiates the theory describing our phenomenology); hence, we also have a T-experience regarding ‘P does print TruX(G)’; but we can have a T-experience of G, if and only if P does not print TruX(G); since we can conclude that, we thus have a T-experience regarding ‘P does not print TruX(G)’. But then, we have two conflicting T-experiences, which is a contradiction.

    Thus, we can now conclude that P never prints TruX(G). Hence, we have a T-experience regarding that fact; but P never captures this T-experience. Hence, not all of our experience—in particular, not all of our T-experiences—can be captured by P. Thus, no algorithmic theory can exist that accounts for our phenomenology production.

    The only reason I propose this argument is that if it’s right, we should expect difficulties with the hard problem—if we make the supposition that our explanatory capacities essentially are algorithmic, but the process generating our phenomenology isn’t, then we should expect to be unable to solve the hard problem, but it doesn’t entail anything at all ontologically extravagant, such as e.g. dualism. There’s simply stuff our brains can do, that they can’t explain, because it doesn’t occur algorithmically. And it’s not even extravagant to suppose that there are processes in nature that occur nonalgorithmically: if quantum mechanics is genuinely random, since it’s impossible to generate randomness algorithmically, we have an example of a nonalgorithmic process, without any need for Penrose/Hameroff quantum gravitational exotica.

  27. Aspiring theorist Says:

    A practical question: are there any macroscopic objects in our physical universe that do _not_ “participate in the arrow of time” in the sense that you require as a pre-requisite for consciousness? It does not seem possible to reverse in time any action that I may make to a stapler (to take your example of a non-conscious object) in the sense that you require (i.e. removing all traces of the action from the physical universe).

    Of course I understand that you are positing this only as a necessary, not sufficient condition for consciousness, but it seems to me to be a vacuous condition, satisfied by all non-trivial physical objects.

  28. Lee Says:

    I wonder if you are being fair to your former colleague John Searle when you suggest that he forbids all discussion of what exactly the “biological causal powers” of the brain are. My impression is exactly the opposite; that that is precisely the question he thinks deserves investigation. We have this grey goo — so how exactly does it make us conscious? It is almost entirely a biological question, I believe he believes.

  29. Scott Says:

    Daniel Freeman #7:

      I like this much more as a cardinality scheme for consciousness than as a bright-line for consciousness.

    That’s a very fair reaction.

    Actually, now that you remind me, several people came up to me after my talk to say that they would agree with the view I’d outlined, if only I had “relativized” it to our epistemic situation rather than trying to make absolute statements.

    So for example, they said, for all we know our whole universe (with its apparent Arrow of Time, deSitter boundary, etc.) is a computer simulation being run by aliens in some metaverse. From the aliens’ perspective, we’re perfectly predictable, copyable, reversible automata—so according to this view, the aliens shouldn’t regard us as conscious. On the other hand, if we have no access to the metaverse, then we should still regard each other as conscious. Likewise, if we build such a simulation, we shouldn’t regard the beings in it as conscious even though they should regard each other as conscious.

    So, just like you said, on this account there are no objective facts about who’s conscious and who isn’t, only facts about who should regard whom as conscious.

    While this might surprise some people, I’m totally fine with this “relativized, Gödelian” way of thinking about the view. Indeed, I confess that it doesn’t even make much difference to me whether one thinks of it in the relativized way or not. The way I see it, there are only two possibilities:

    (a) The aliens in the metaverse can communicate with us. In that case, it would turn out that it wasn’t a “metaverse” after all, just a different part of our universe. And if the aliens can demonstrate to us their perfect abilities to copy, predict, and rewind us, then we ought to agree with them in regarding ourselves as automata. If they can’t demonstrate such abilities, then for present purposes, we’re back to the situation where they never made contact at all.

    (b) The aliens in the metaverse can’t communicate with us. In that case, their existence has no clear empirical consequence for us, much like deistic theological beliefs. So, in this case as in the other one, we’re perfectly justified to live our lives, and conduct all our reasoning—including moral reasoning, and (closely related to that) reasoning about which entities are or aren’t conscious—as if the aliens didn’t exist (as, for all we know, they don’t).

  30. Jess Riedel Says:

    By the way, the links to the individual videos are now available on the Talks page for the workshop. You can find all the videos in one place here. (We’re working on improving the resolution, but no promises.)

  31. Scott Says:

    Jess #30: Thanks!! (And thanks again for co-organizing.) I’ll add a note in the OP.

  32. Scott Says:

    Stephen #15:

      Incidentally, in an amusing coincidence, I was recently thinking about “practical” applications of Vaidman brains. (However, the catchy name that you coined for them hadn’t occurred to me.) Suppose you wish to determine whether a course of action would yield pleasant or unpleasant outcomes. One way to find out would be to simulate the action and its perception by all the people it affects. However, if the simulation were sufficiently detailed then the simulated people would be just as conscious as real ones. (At least, one expects so, if one subscribes to the Dennett view.) Then the simulation has not fulfilled its purpose of warning you of harmful actions without actually causing the harm! To escape from this trap, use Vaidman’s quantum bomb-testing strategy instead of a classical simulation.

    I like your “practical” proposal! It occurs to me that one way of reaching the view I discussed, would be to start with your proposal, and then ask questions like:

    “Do we really need to invest so much effort to run the Vaidman bomb-testing strategy? After all, it takes 1/ε iterations if we want an ε probability of anyone suffering! And besides, how are we even supposed to decide on the right value of ε—i.e., on the tradeoff between ethics and computational efficiency? Should we appoint some ethical review board to set upper bounds on the allowable ε? Also, do we need to factor in how many simulated beings will suffer if the ‘bomb’ is set off? For example, even if ε were 0.000000001, would it still be immoral to run the simulation if 10 billion conscious beings would suffer in the ‘live’ branch?

    Also, what if it turns out that the amplitude is irrelevant; all that matters is whether the simulation is invoked in some branch? In that case, running 1/ε iterations of Vaidman’s bomb strategy would be vastly worse than just running a single simulation with probability 1: our victims would be reliving their suffering 1/ε times.

    Maybe we’re thinking about this the wrong way anyway. Aren’t there various classical shortcuts we could take in the simulation, that would give us the answer we wanted without any probability of causing suffering? E.g., just have the simulation continuously monitor to see if simulated beings are suffering, and ‘zombify’ them if it turns out that they are? But wait: does ‘zombification’ have to mean removing those beings from the simulation, or otherwise changing their external behavior—thereby compromising the simulation’s accuracy from that point forward? Isn’t there something the simulation could do to keep the external behavior the same, and just remove the ‘subjective suffering’ part? But wait: if eliminating the suffering were as easy as adding some bit of code to the simulation, then how confident should we be that suffering was present before we added the code? Maybe we’re thinking about this the wrong way…”
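
    (Where the 1/ε figure comes from, in case anyone’s keeping score: in the quantum-Zeno version of the bomb test, one runs N cycles, each rotating the probe by an angle π/2N, so the total probability that the “bomb” actually goes off is roughly

    \[ N \sin^2\!\left(\frac{\pi}{2N}\right) \approx \frac{\pi^2}{4N}, \]

    and pushing that below ε requires N = O(1/ε) iterations.)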

  33. Scott Says:

    Ken Miller #17:

      So, aren’t you just hypothesizing that consciousness is a set of intrinsically classical events (which of course is not the same as saying intrinsically classical events are conscious)? And can’t we just summarize that by saying your hypothesis is that consciousness is an intrinsically classical phenomenon?

    Yes, this hypothesis implies that a fully-coherent quantum-mechanical process wouldn’t be conscious. But crucially, it also implies that a fully-decoherent classical process, one that (because of its complete classicality) could be copied and predicted extremely accurately by external observers, without much damaging the original, wouldn’t be conscious either. On this hypothesis, consciousness is something that can only arise when you have “coupling” between the two regimes, with the process amplifying microscopic fluctuations to macroscopic scale as an intrinsic part of its operation.

    So for example, existing digital computers generate plenty of decoherence and physical irreversibility (as you can check by feeling the heat from your laptop…). But their irreversibility can be “cleanly decoupled” from their information-processing, in the following sense. There exist classical, macroscopic variables (like the voltage levels in the transistors) that don’t depend at all on microscopic fluctuations (except possibly in well-characterized ways, if the computer has a quantum random number generator); and measuring those variables—which an external observer could clearly do, at least in principle, without damaging the computer—is sufficient to make a behaviorally-identical clone, one good enough to predict all the computer’s future responses to any set of stimuli.

    As I said in the essay, it’s possible that everything above is also true of the brain. And if it were ever shown to be true, I’d then say that the hardcore computationalist view had been vindicated, and that the view I’m exploring here was wrong. But I don’t think it’s obvious that the brain does have the requisite “clean digital abstraction layer”: indeed, I regard that as currently a wide-open problem, one of the biggest and most fascinating I can think of at the intersection of science and philosophy. The neural firings, changes in axon strengths, etc. obviously all take place at a macroscopic, classical scale (and Tegmark has given strong arguments that quantum coherence is extremely unlikely to be relevant there). On the other hand, those things are constantly buffeted around by intracellular and other microscopic processes in complicated, not yet fully-understood ways (which neuroscience often models as random noise, e.g. in the Hodgkin-Huxley equations), something with no obvious analog in the case of transistors. In summary, I don’t think anyone can say right now at exactly what level of detail you’d need to scan someone’s brain, in order to create a new instance of that person’s consciousness (i.e., so that the person should consent to being euthanized, if they knew the copy would be made).

  34. Douglas Knight Says:

    Even if early quantum physicists rejected the Many-Worlds picture, why didn’t they at least discuss and debate it?

    One example is Wigner’s Friend, which is slightly post-Everett, but probably not influenced by him. Why did it take 25 years to go from Schrödinger’s Cat to Wigner’s Friend?

  35. francis Says:

    I don’t think the general idea works: that decoherence decides which parts of the universe are possibly conscious and which parts aren’t. Whether decoherence occurs depends on the division of the universe into subsystems, and this division is arbitrary.

    So either we have to introduce a physical mechanism which singles out a certain division, or the idea only makes sense relative to a conscious observer who makes the division. This wouldn’t leave Dennett and Descartes on equal footing.

    Also, as you noted, we need a source of fundamental irreversibility which is not present in quantum theory so far. In the standard derivation of decoherence we make certain idealizations about the environment. We don’t get exactly irreversible decoherence from unitary dynamics.

    Still, I enjoyed your post. It is quite thought provoking and I am still unsure about whether QM tells us more than classical mechanics in these matters.

  36. Scott Says:

    ppnl #18:

      I’m not sure what a system without a clean digital abstraction layer would look like. How would you construct it? How would you recognize one if you had one?

    Unfortunately, it’s much easier to prove that a clean digital abstraction layer is present, than to prove that it’s absent! Having said that, in my GIQTM essay (Sections 3 and 9), I do talk at some length about what could conceivably be discovered in neuroscience, physics, and cosmology, that could increase our belief that systems like human brains lacked a CDAL.

    And of course, if it turned out that brains lacked a CDAL, and if we engineered artificially-intelligent computers whose components were buffeted around by microscopic accidents in more-or-less the same way neurons were, so that an external observer couldn’t even build a decent probabilistic model of a particular one of these computers without taking it apart and destroying it, then we should say that those computers lacked a CDAL as well; and the possibility would obviously arise that they were conscious. (Note that, by definition, these aren’t computers that we could build millions of functionally-identical copies of on an assembly line: each one we built would necessarily be at least somewhat different from the others, like fancy artisanal furniture.)

      And I’m not sure how to make sense of Knightian uncertainty. Could you program a quantum computer to display Knightian uncertainty?

    If you programmed something, and if, by tracing through the code, you can calculate for yourself the probability that the program (when run) will generate a given output (or can use a different computer to calculate it for you), then by definition, it isn’t Knightian uncertainty. Sections 3, 3.1, 8, and 13 of GIQTM have a lot more about how to make sense of this concept, and give examples where it at least seems to arise.

  37. Michael Bacon Says:

    Aspiring theorist@27 asks:

    “. . . [A]re there any macroscopic objects in our physical universe that do _not_ ‘participate in the arrow of time’ in the sense that you require as a pre-requisite for consciousness?”

    This was one of my first questions as well.

  38. Scott Says:

    Adam #23:

      Your blog might be the most reliably interesting thing on the whole internet.

    Shucks, thanks! 🙂 (But I’m going to assume you’re not counting the Lolcat Bible.)

      That said, I have a question: is your suggestion — the requirement of computation + decoherence — much different or more plausible than just saying that consciousness is the result of a real physical process (just like most other things in the universe), instead of something that gets magically conjured up by performing an abstract computation?

    Yes, I think it’s different, for the following reason. As soon as you instantiate an abstract computation on (say) a silicon chip, that, too, becomes a “real physical process.” So then, if you want to maintain that your brain generates consciousness while the computer doesn’t (even if, say, it passes the Turing Test), then it’s simply no longer relevant to talk about “real physical processes” versus “abstract computations.” Now we’re just comparing one physical process against another one. If you want to separate them, you’d better be prepared to point to a physical difference between the two physical processes, and explain why it matters.

    This, of course, is where John Searle starts talking about the “biological causal powers” of the brain: on his view, you’re conscious and the computer isn’t because only you have those “causal powers.” But to my mind, that’s not any kind of answer; it’s just a restatement of the question. More pointedly: if we’re going to say you’re conscious because of your “biological causal powers,” why shouldn’t we likewise say that the computer is conscious because of its “electronic causal powers”? Given the postulated indistinguishability of their behavior, we still haven’t said anything to separate the two physical systems that doesn’t amount to question-begging prejudice.

    Now, the view I’m exploring takes the above as its starting point, but then observes that there’s at least one apparent physical difference between the brain and the computer that, if it’s real, would (at the least) have enormous moral relevance. This difference is simply that the computer’s state can be perfectly scanned and copied by external observers, and (thus) the computer’s responses to all future stimuli perfectly predicted, without damaging the original computer at all. Maybe the same can be done for human brains, but we don’t know that to be the case. This has moral relevance because, as I said, it’s not obvious what it even means to “murder” or “torture” something if you can just restore it later from backup, or rewind whatever you did.

    Now the speculative leap, if you like, is to elevate this practical and moral difference to a more “fundamental” status—and to say that, if system A is perfectly copyable, predictable, superposable, and rewindable by system B, then for those very reasons, maybe B shouldn’t regard A as conscious at all.

    At this point, some readers might accuse me of breaching the firewall between “is” and “ought”: that is, of trying to learn objective facts about what sorts of systems are or aren’t conscious, from what ultimately amount to moral considerations (“if we destroyed a system of this kind, could we in principle restore it later from backup”?). But here’s something to chew on: if we divide all questions into “is-questions” (about the state of the world) and “ought-questions” (about what we should value), it seems to me one could make a strong case that questions about which entities are conscious, despite their grammatical form, belong on the “ought” side. For, on the one hand, these questions are hopelessly intertwined with all sorts of other “ought-questions” (as Joshua Zelinsky #2 pointed out); and on the other hand, it’s not clear that they have any implications whatsoever for any of the ordinary “is-questions.”

  39. Scott Says:

    William #25:

      the argument that decoherence is a matter of entropy leads me to wonder if you’ve thought of the intersection of these arguments with Wissner-Gross’ consciousness-as-entropy-maxmizing?

    No, sorry, not familiar with that. Will add it to my reading stack (which, alas, already has near-maximal entropy…).

  40. Scott Says:

    Jochen #26:

    1. The way the lookup table works is that you give it, as input, a record of the entire interaction you’ve had with the lookup table thus far: not only your current question, but also all your previous questions, and (if any randomness is involved) also the lookup table’s responses to them. I.e., you use that entire record as your index into the table, when looking up the AI’s next move.
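
    A toy sketch of that indexing scheme, in case it helps (the two-entry table is obviously a stand-in for the astronomically large real thing, and the entries themselves are made up):

      # The key is the entire transcript so far, not just the latest question.
      table = {
          ("Hello",): "Hi there.",
          ("Hello", "Hi there.", "What was the last thing I said?"): "You said 'Hello'.",
      }

      def lookup_reply(transcript):
          # transcript: everything said by both parties so far, in order
          return table[tuple(transcript)]

      history = ["Hello"]
      history.append(lookup_reply(history))                # "Hi there."
      history.append("What was the last thing I said?")
      print(lookup_reply(history))                         # "You said 'Hello'."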

    2. Yes, of course if the lookup table were meticulously curated by a human programmer (Doug Lenat?), there would also be intelligence residing in the programmer. But imagine instead that we had a recursively self-improving AI, which started out as a tiny seed (e.g., a few pages of neural-net code), and thereafter “taught itself everything it knows” by trawling the Internet and interacting with other AIs and people. Would you agree, in that case, that the AI’s intelligence was “its own,” rather than that of whoever coded the seed? (In the same sense that your intelligence is “your own,” rather than your parents’?)

    Now imagine that the AI creates a perfect copy of itself, and that the original copy is thereafter destroyed. Would you agree that the surviving copy should still be called “intelligent”? I.e., that the change in physical substrate is hardly more relevant than if the AI had moved to a new town?

    Now imagine that the surviving copy is actually a giant lookup table. Does that change things?

    3. I’ve explained my difficulties with the Penrose/Lucas argument in a bunch of other places (e.g., “Why Philosophers Should Care About Computational Complexity” and Quantum Computing Since Democritus), so I won’t retread that ground here. Suffice it to say that, when people use the word “uncomputable,” they generally mean either (a) decision problems that can’t be solved by deterministic Turing machines, or (b) more general tasks that can’t be performed even by probabilistic Turing machines. So in particular, “output a random bit” (or “output a Kolmogorov-random string”) is not a good example of an “uncomputable problem,” since it becomes “computable” as soon as the Turing machine is given the tiniest fighting chance (e.g., by giving it random bits, or by asking it to output the probability of each possible outcome). Penrose, to his credit, uses the word “uncomputable” the same way everyone else does: when he says it, he literally means that he thinks we can solve mathematical problems that aren’t solvable by a Turing machine (or even Turing machine with oracle for the halting problem), not merely that we can spit out quasi-random bits, just like any computer with a quantum random number generator could do (indeed, enormously faster and more reliably than us).

  41. Scott Says:

    Aspiring theorist #27:

      are there any macroscopic objects in our physical universe that do _not_ “participate in the arrow of time” in the sense that you require as a pre-requisite for consciousness?

    Yes. See my response to Ken Miller (comment #33). By an object’s “participating in the arrow of time,” I mean more than that it amplifies microscopic fluctuations and generates entropy; I mean that it does so in a way that can’t be “cleanly separated” from its functional behavior. In particular, there should be no way for an external agent to copy, predict, superpose, or rewind the system by focusing on its “classical abstraction layer” only, without the microscopic stuff getting in the way.

    Admittedly (as I’ve said), it’s far from obvious whether there are any objects in our physical universe for which this clean separation doesn’t exist! One can conceive of examples, and the brain might be an example, but I’d say that we simply don’t know.

    On the other hand, it’s easy to find objects for which the clean separation does exist: for example, any existing digital computer.

  42. Jay Says:

    Lots of food for thought. 🙂 Two things, please:

    1) could you rephrase “It seems possible that—as speculated by Bohr (…) sodium-ion channels, and other data (…) a fundamental matter of physics.” I’m not sure of the verb and if the “and” meant a “but”.

    2) suppose we could perform the teleportation experiment you would refuse because the copy won’t actually be you. Are you predicting the copy won’t be conscious, or that it will be conscious but able to detect he’s not the original, or that it will be conscious and unable to notice there’s a difference?

  43. Jack Sarfatti Says:

    Was there any discussion of entanglement signal nonlocality, in the sense of Antony Valentini’s “sub-quantum non-equilibrium”?
    http://arxiv.org/abs/quant-ph/0203049
    I think this is necessary for subjective experience and that this is in fact the case for us – our minds.

  44. Scott Says:

    Lee #28:

      I wonder if you are being fair to your former colleague John Searle when you suggest that he forbids all discussion of what exactly the “biological causal powers” of the brain are. My impression is exactly the opposite; that that is precisely the question he thinks deserves investigation. We have this grey goo — so how exactly does it make us conscious? It is almost entirely a biological question, I believe he believes.

    From my current vantage point, I admit that I may have been unfair to Searle in the past, and I apologize if I was. But I’m not sure that I’m still being unfair to him now. 🙂

    A central point where I part ways from him is this: for me, the question “what’s special about the grey goo?” isn’t, and can’t be, “almost entirely a biological question.” For the question has two parts: we want to know, not merely what about the grey goo makes it conscious, but also what about (e.g.) a collection of silicon chips with the same connection pattern and behavior, could make the latter fail to be conscious. This and other comparisons are the whole substance of the question. More generally, given any arrangement of matter, we’d like to know principles for deciding whether there’s any consciousness there, and if so how many, what kinds, etc. It seems to me that, by itself, no amount of dissecting grey goo can tell us the answers to such questions, for the same reason why you can’t learn the causes of war by looking only at warring societies, and never at any peaceful ones.

    If forced to choose, I’d say this isn’t a biology question but a physics question. If brains are conscious while their perfect digital simulations aren’t, then there must be some physical difference between the two that’s responsible for this, and our situation is not that we have a surfeit of good candidates for relevant physical differences and are trying to pick one: rather, it’s that most of us have trouble imagining how any of the known differences could possibly be relevant. I.e., why not simply assume that, if we have mysterious “biological causal powers” that make us conscious, then behaviorally-identical simulations of us will have equally-mysterious “electronic causal powers”? Indeed, why wouldn’t it be downright evil not to grant that assumption: like saying that Chinese people might act pretty smart, but they’re not conscious because they lack “Caucasian causal powers”?

    I give Penrose enormous credit for at least recognizing what sort of thing would constitute an acceptable answer to these questions, but of course, I don’t think his answer works. That’s why I find it of great interest to explore whether there are any possible answers that

    (a) are more scientifically conservative than Penrose’s, but
    (b) do more than just restate the question using terms like “causal power.”

  45. Jochen Says:

      Now imagine that the surviving copy is actually a giant lookup table. Does that change things?

    I don’t see why it would. I would still, effectively, be querying the intelligence of the AI that programmed the original lookup table (I don’t want to argue that computers can’t be conscious; I do believe they can be, every bit as much as we can). (Thanks for clearing up my misconception on the lookup table setup, by the way.)

    Regarding the Gödelian argument, I think there are useful functions you can perform using randomness that you can’t perform without it. Think of a cops-and-robbers game: a robber hides in either of two houses, and a cop is tasked with catching him. At each step, both cop and robber can either choose to stay, or switch houses. For any deterministic strategy, there will always be robbers that elude the cop: they just have to be where the cop is not (in this sense ‘Gödeling’ the system). However, if the cop has access to a source of true randomness, the game fundamentally changes: the robber will be caught with probability 1, even if he likewise has a source of randomness. So the injection of randomness transforms the game from always-lose (for a suitably chosen robber) to always-win from the cop’s perspective. (I take the example from Taner Edis’ “How Gödel’s Theorem Supports the Possibility of Machine Intelligence”.)
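
    A quick sketch of the game, just to make it concrete (the function and the example robber are purely illustrative):

      import random

      # Against a cop who picks a house uniformly at random each round, any
      # robber strategy is caught with probability 1 - 2^(-n) within n rounds,
      # since each round is an independent fair coin flip.
      def rounds_until_caught(robber_strategy, max_rounds=10_000):
          cop_history = []
          for t in range(1, max_rounds + 1):
              robber = robber_strategy(cop_history)  # the robber may inspect the cop's past moves
              cop = random.choice([0, 1])            # the cop's move is fresh randomness
              cop_history.append(cop)
              if cop == robber:
                  return t
          return None

      # Example robber: hide wherever the cop was NOT last round.
      evasive = lambda hist: 1 - hist[-1] if hist else 0
      print(rounds_until_caught(evasive))  # typically caught within a handful of rounds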

    In a sense, although you can explain the overall performance of the cop, for any given instance, you can’t give an explanation for her strategy: at each injection of randomness, there’s ‘no reason’ for her choice. I just wonder if our capacity for phenomenology production might incorporate something similar, though much more subtle, such that we could likewise give no account of how it happens that meat gives rise to this strange feeling of ‘what it’s like’ to be us. The Gödelian argument, by showing some part of phenomenology production that can’t be accounted for algorithmically, seems to support this. Of course, this doesn’t mean that Turing machines, say, can’t experience the same richness of, uh, experience—just that if they can, it’ll be as mysterious to them as it is to us (though ultimately there’s no ontological, but merely an epistemic gap, albeit a necessary one).

  46. Scott Says:

    francis #35:

      Whether decoherence occurs depends on the division of the universe into subsystems and this division is arbitrary.

    I don’t think it’s completely arbitrary (ironically, here I’m taking the position usually advanced by MWI’ers, while you’re taking the one usually put forward by MWI critics! 🙂 ). What makes the division non-arbitrary is that the Hamiltonian governing the evolution of the world is local in the position basis. That picks out the position basis as somewhat special, and (it seems to me) makes it reasonable to say that, if the density matrices of position-localized states are quickly losing their off-diagonal entries (because of interaction with the stuff in the surrounding positions), then “decoherence is happening.” Of course, such decoherence might be reversed sometime in the distant future; in ordinary situations, whether it will or won’t strikes me as ultimately a question of cosmology.
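
    (For concreteness, a standard toy dephasing picture behind that statement: a two-level system prepared in \( \alpha|0\rangle + \beta|1\rangle \) and coupled to such an environment keeps the diagonal entries of its density matrix while the off-diagonal ones decay,

    \[ \rho(t) = \begin{pmatrix} |\alpha|^2 & \alpha\beta^{*}\,e^{-t/\tau_D} \\ \alpha^{*}\beta\,e^{-t/\tau_D} & |\beta|^2 \end{pmatrix}, \]

    with the decoherence time \( \tau_D \) set by how strongly the system couples to its neighbors.)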

  47. Scott Says:

    Jay #42:

      1) could you rephrase “It seems possible that—as speculated by Bohr (…) sodium-ion channels, and other data (…) a fundamental matter of physics.” I’m not sure of the verb and if the “and” meant a “but”.

    Thanks; rephrased!

      2) suppose we could perform the teleportation experiment you would refuse because the copy won’t actually be you. Are you predicting the copy won’t be conscious, or that it will be conscious but able to detect he’s not the original, or that it will be conscious and unable to notice there’s a difference?

    This view would suggest that, if the relevant parts of the quantum state of my brain weren’t transported to Mars (either by quantum teleportation or just by shipping them there), then at any rate, “I” shouldn’t expect to die on Earth and then “wake up” as the copy of me on Mars. Whether the being on Mars had its own consciousness—similar to but discontinuous with mine—would depend on the details of the Mars-being. I.e., whatever general criteria we think relevant to deciding whether something is conscious (does it pass the Turing Test? does it participate fully in the arrow of time? is it perfectly predictable by outside observers?) would simply get applied to the Mars-being. That the latter was produced by using me as a “template” wouldn’t even enter into it.

  48. Scott Says:

    Jack Sarfatti #43: No, I don’t recall any discussion of Valentini, or of Bohmian mechanics more generally. I’d say that almost all the participants were different flavors of Everettians / decoherentists / consistent-historians, with the disagreements—though often heated—assuming something vaguely Everettesque as a backdrop, and certainly not assuming any change to the normal rules of QM.

  49. Shmi Nux Says:

    I have linked to this post on Less Wrong. Feel free to reply there if you find any interesting comments, say, by tomorrow.

  50. Adam M Says:

    Scott #38:

      But I’m going to assume you’re not counting the Lolcat Bible.

    Argh. Just argh.

      If you want to separate them, you’d better be prepared to point to a physical difference between the two physical processes, and explain why it matters.

    Of course. But, as I pointed out in my comment here, there are dramatic and obvious physical differences between a brain and a computer. You’ve highlighted one quasi-physical difference in your post: at the level that the consciousness would be supposed to exist (i.e., at the software level that simulates a human, not at the much messier silicon level) the computer is deterministic and reversible. And, at the more fundamental level of electrons and semiconductor gates, the computer really bears no resemblance to a brain at all.

    In general, I think that your focus on decoherence (or, to perhaps dodge some trickiness regarding what ‘decoherence’ does or doesn’t imply, I should say the ‘quantum/classical transition’ instead) is really spot on, and is a powerful way to look at where consciousness might or might not exist. The fact that this rules out deterministic processes both quantum and classical I think fits very well with my (admittedly unreliable) intuitions, as does the useful idea of a clean digital abstraction layer, which I think clarifies why (classical, deterministic) computers will probably never be substantially conscious, no matter how complex their hardware or software.

    But, this still leaves my main question unanswered: the decoherence part of your dyad of necessary conditions makes good sense to me; the computation part doesn’t. Once again: is there any evidence that consciousness has anything to do with computation (except in the completely trivial sense in which all physical processes are computations being carried out in a computer composed of the whole universe)?

    My suspicion is that the various paradoxes that emerge from this computational assumption are only tips of a massive iceberg of incoherence. It’s not clear to me that computation can even be defined without either 1) presupposing consciousness or 2) adopting a brand of platonic mysticism into which even a masterfully bullet-swallowing tenured professor would fear to tread 🙂

  51. Lee Says:

    Well, Scott, you seem to be making the same mistake that Searle has caught Ray Kurzweil making; you can’t decide if consciousness is hardware or software. A computer you suspect might be conscious will be running some program. Is it the code that is causing the consciousness, or is it the “collection of silicon chips with the same connection pattern and behavior”? If a computer were conscious, and you then rewrote its program to run as an Excel macro (a Turing-complete language) and re-compiled it to run on a totally different architecture, would the consciousness transfer with it to the new machine? How, exactly? Also, you relapse into your old habit of being unfair to Searle when you talk about “mysterious” biological causal powers, like there was something mystical or spooky about them. Finally, aren’t we confusing simulation with duplication? If you had a computer that perfectly simulated the weather, or gravity, would it also make it rain, or warp space and time?

  52. Steve Says:

    What is your take on the experience of “qualia”? The whole “My red could be your blue” debate?

    My own solution to this paradox has been that there is an automorphism of “consciousness” sending “red” to “blue”. Since both of these interpretations of the raw physics underlying my mind are consistent, I think that there are at least two distinct “consciousnesses” running on my hardware. Neither one notices the other, because these two consciousnesses make identical decisions about what my body should do.

  53. Scott Says:

    wolfgang #8:

      However, at the end of this channel we place a mirror – if it is in the consciousness-OFF position the photons are reflected back into the machine and unitarity is restored, but in the consciousness-ON position the photons escape into the deSitter universe.

    That’s a superb question—hardest one anyone’s asked so far! 🙂

    But in your thought experiment, I tend to gravitate toward an option that’s not any of the three you listed. Namely: the fact that the system is set up in such a way that we could have restored unitarity, seems like a clue that there’s no consciousness there at all—even if, as it turns out, we don’t restore unitarity.

    This answer is consistent with my treatment of other, simpler cases. For example, the view I’m exploring doesn’t assert that, if you make a perfect copy of an AI bot, then your act of copying causes the original to be unconscious. Rather, it says that the fact that you could (consistent with the laws of physics) perfectly copy the bot’s state and thereafter predict all its behavior, is an empirical clue that the bot isn’t conscious—even before you make a copy, and even if you never make a copy.

    Of course, your next question might be: does that mean that, if this view were accepted, and if technologically-advanced aliens were able to surround the earth with an AdS boundary (thereby making the earth a quantum-mechanically closed system), then we’d have to say that humans were never conscious—even if, as it happens, the aliens never show up?

    Yes, it does mean that.

    A saner-sounding but logically-equivalent version is: if it were possible for aliens to surround the earth with an AdS boundary, or otherwise put us into a unitary closed system, without doing anything invasive like slicing up our brains and imaging them, then (given that we take ourselves to be conscious) the view I’m exploring would be wrong.

    Fortunately for the view, I don’t see how the aliens could, in fact, do such a thing, without re-engineering spacetime in ways that would take us far into the realm of Greg Egan sci-fi novels. Sure, they could surround us with thick mirrors, but those mirrors would presumably disintegrate (or information would leak out from them) long before there had been any recoherences of the quantum state inside the mirrors.

    Actually, now that I write that: does anyone feel like doing a quick calculation to confirm or refute the paragraph above? 😉

  54. Scott Says:

    Adam M #50:

      this still leaves my main question unanswered: the decoherence part of your dyad of necessary conditions makes good sense to me; the computation part doesn’t. Once again: is there any evidence that consciousness has anything to do with computation (except in the completely trivial sense in which all physical processes are computations being carried out in a computer composed of the whole universe)?

    It’s a fair question. I’d give the following argument.

    It seems clear that, in ordinary life, our ascriptions of consciousness to things other than ourselves are bound up with our observations of intelligent behavior in those things. Why else would we regard dogs (for example) as conscious, but not trees?

    But then, we’ve understood since Turing that the problem of producing intelligent behavior, of the sort that typically causes us to ascribe consciousness to something, is, at bottom, a computational problem. By this I mean two things:

    (a) In principle, and even if we rule out the giant lookup table, there exist purely computational ways to solve the problem—e.g., whole-brain emulation. (Or in the extremely unlikely event that there don’t, as Penrose speculates, that itself would be a computability-theoretic phenomenon!)

    (b) In practice, there’s no way to solve the problem without doing something that could be interpreted as an interesting, nontrivial computation.

    (I don’t think I’m saying more here than is implicit in the Church-Turing Thesis.)

    If I’m right, then discussion of which entities are or aren’t conscious is always going to be bound up with discussion of where the right kinds of computations are taking place to produce intelligent behavior, and it’s perfectly understandable why. As I see it, the burden post-Turing is squarely on those (like me) who suspect something non-computational might be relevant, to explain what the something is and give arguments for why it should make any difference.

  55. Scott Says:

    Lee #51:

      aren’t we confusing simulation with duplication? If you had a computer that perfectly simulated the weather, or gravity, would it also make it rain, or warp space and time?

    The classic response to that chestnut is given in Russell and Norvig’s textbook, where they point out that it relies on a careful choice of examples. Sure, a perfect simulation of the weather doesn’t make it rain—at least, not in our world. On the other hand, a perfect simulation of multiplying two numbers does multiply the numbers: there’s no difference at all between multiplication and a “simulation of multiplication.” Likewise, a perfect simulation of a good argument is a good argument, a perfect simulation of a sidesplitting joke is a sidesplitting joke, etc.

    So the question is just: is consciousness more like the weather, or is it more like multiplication and those other examples? Because of the close link between consciousness and intelligent behavior, and our near-total reliance on the latter to infer the presence of the former in anything other than ourselves (see comment #54), one could easily argue that consciousness is more like multiplication—i.e., that a perfect simulation of consciousness is conscious. To address your other point, this would also imply that consciousness was more like software than like hardware: something that can easily transfer from one physical substrate to another, as long as its abstract form is unchanged.

    Or maybe not. Maybe the hardware substrate is relevant after all. But as I said, I think the burden is firmly on those of us who suspect so, to explain what about the hardware matters and why. Post-Turing, no one gets to treat consciousness’s dependence on particular hardware as “obvious”—especially if they never even explain what it is about that hardware that makes a difference.

  56. Jay Says:

    Scott #47,

    Thanks for the clarified sentence. If you can remember on the fly where you’ve read Bohr, Turing, etc. speculating on this possibility, please share. 🙂

    “Whether the being on Mars had its own consciousness—similar to but discontinuous with mine—would depend on the details of the Mars-being.”

    Sure, but what would you guess could be transferred as part of your own “digital template”? For example, would you guess your memory cannot be copied? Your temper? Intellectual abilities? The way you move or interact? Or do you think what cannot be copied is large enough that a template from you is likely the same as a template from most humans? Or that there is just no such template because digital information from one’s brain makes no sense outside of its “non copyable” context?

    “transtemporal identity is fundamentally different from a hypothetical identity between different “copies” of you (…) all your transtemporal doppelgängers are connected by a single, linear chain of causation.”

    Maybe that’s a related question: in what sense do you see the chain of causation as more “single and linear” for transtemporal doppelgängers than for transcopies? Is there any computation or measurement we could make to assess it?

  57. Scott Says:

    Steve #52: I hope you don’t mind if I punt on the red/blue qualia inversion thing, since we have enough on our plate as it is! But you should check out the comment thread on my review of Tegmark’s book, where we did debate that very puzzle at length (see comment #95 on).

    Speaking of which, here’s a very small philosophical puzzle, as a palate-cleanser. For the past couple weeks, I’ve been trying to teach my 19-month-old daughter Lily color words, so far with almost zero success. On the other hand, if you give her a bunch of Play-Doh containers with the lids removed, she will very reliably put the red lid on top of the red Play-Doh, the blue lid on top of the blue Play-Doh, etc. (there are ten of them). Should one say, in her current state, that she “knows what colors are” or “has the concept of redness”?

  58. Scott Says:

    Jay #56:

      If you can remember on the fly where you’ve read Bohr, Turing, etc. speculating on this possibility, please share.

    I have the quotes and references in GIQTM (indeed, they’re right in Section 1).

      For example, would you guess your memory cannot be copied? Your temper? Intellectual abilities? The way you move or interact? Or do you think what cannot be copied is large enough that a template from you is likely the same as a template from most humans? Or that there is just no such template because digital information from one’s brain makes no sense outside of its “non copyable” context?

    My guess would be that everything you mentioned can in principle be copied—everything, that is, except for the consciousness itself, and perhaps some microscopic details that, after being amplified, might tip some of my future choices one way or the other, but are so subtle that not even the people who know me best (or I, before the fact) would be able to predict them.

  59. fred Says:

    “Physical instantiation,” is at the crux of the question. The old question of subjective vs objective, matter vs soul, etc.

    There’s no escaping the simple question: if two electrons aren’t conscious, why would 3 of them be conscious, or 10,000, or 100 billion? What is the meaning of “emergent”?
    Similarly, one would say it takes “intelligence” to build cities, yet cities are nothing more than the result of the Earth’s atoms interacting with one another for billions of years according to some relatively simple “dumb” rules (of physics).

    Similarly, there is an analogy between “consciousness” and “softwareness”: consider all the computers currently running programs as we speak, consider all the body of knowledge about algorithms, mathematics, realized in those running programs… yet none of those programs are telling the hardware what to do (controlling it) – computer hardware isn’t different from any other clump of atoms, following the same rules of physics, with or without us putting a “software inside!” label on it. Since the dawn of time, it’s been the hardware (the atoms of the earth) that’s been in control, and the software is nothing but an illusion, at best a sign that the hardware has some self-similar structure.

    Another observation is that memory and time perception are crucial – which may be linked to Scott’s ideas about irreversibility, time arrow, etc.
    I don’t think there can be consciousness without either.
    And those are two important ingredients of a “physical computation”: they’re tied to the idea that a computation is a dynamic process, i.e., one that is limited in speed and size.
    The simplest example is from the Game of Life – any complex structure that grows is limited both
    1) in speed – it can’t “bloom” faster than one tick at a time.
    2) in breadth – at every given tick, the number of cells involved can only grow at its circumference, by a finite number of cells.
    This is true of any sort of “amplification” effect in general (of which computations are a special kind) – a minimal sketch of the speed limit follows below.
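    Here is a minimal sketch of that speed limit, assuming nothing beyond the standard Life rules (the grid size and the R-pentomino starting pattern below are arbitrary choices): each edge of a pattern’s bounding box can move outward by at most one cell per tick, because a newly born cell needs three live neighbors and so must touch the existing pattern.

    ```python
    import numpy as np

    def step(grid):
        """One Game of Life update on a 2D array of 0/1 cells (toroidal edges)."""
        # Count the eight neighbors of every cell by summing shifted copies.
        n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
        return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

    def bbox(grid):
        ys, xs = np.nonzero(grid)
        return ys.min(), ys.max(), xs.min(), xs.max()

    # An R-pentomino in the middle of a grid large enough that the edges never matter here.
    grid = np.zeros((80, 80), dtype=int)
    grid[39:42, 39:42] = [[0, 1, 1], [1, 1, 0], [0, 1, 0]]

    y0, y1, x0, x1 = bbox(grid)
    for t in range(30):
        grid = step(grid)
        ny0, ny1, nx0, nx1 = bbox(grid)
        # The "speed limit": each edge of the bounding box can move outward
        # by at most one cell per tick.
        assert ny0 >= y0 - 1 and ny1 <= y1 + 1 and nx0 >= x0 - 1 and nx1 <= x1 + 1
        y0, y1, x0, x1 = ny0, ny1, nx0, nx1
    print("no tick ever grew the pattern by more than one cell per side")
    ```

    The breadth limit is the same observation viewed per tick: new live cells can only appear along the current boundary of the structure.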

  60. N Says:

    Scott & Mitchell, you’re quite right that a Eugene Gootman-style ‘simulation’ of consciousness wouldn’t prove anything about whether actual humans are conscious or not. I actually phrased my original question poorly: what I had in mind when I said ‘simulation’ was a neuron-by-neuron classical simulation of a human brain. If that type of simulation still talked about consciousness, then it would mean that, in regular humans, the quantum-level details that are responsible for consciousness are not relevant structurally to whether or not we talk about consciousness.
    So, my question really was: do you think that such a ‘naive’ attempt to simulate consciousness would fail to work, fail to talk about consciousness, or what?

  61. Ron Garret Says:

    Are you familiar with Nicolas Cerf and Cris Adami’s QIT interpretation of quantum mechanics? (http://arxiv.org/abs/quant-ph/9605002) Ever since I discovered it 10 years ago QM has been completely non-mysterious to me, and the “crazy idea” that you presented here has seemed not merely plausible but at least some variation on the theme has seemed obviously true. If you haven’t read this paper you might want to, and if you have I’d be interested in knowing why you don’t seem to attach the weight to it that I did.

  62. Scott Says:

    N #60: Note that my response to your objection is different than Mitchell’s.

    I see no reason why a whole-brain emulation wouldn’t talk about its consciousness, every bit as eloquently (or as confusedly) as the original that served as a template for it. My point is that, if we examined the source code of the emulation—or ran the code on a different computer—we could see for ourselves that everything it said, on this subject as on others, was mechanistically predictable from its code. So, if we could actually do such a thing, then it seems to me that our intuitions would eventually change, to the point where we regarded the emulation as “clearly no more conscious than a book.” (Books, of course, can also have lots to say on the subject of consciousness, including transcripts of people talking about it, but we don’t normally regard that as evidence that the books are conscious.)

  63. Scott Says:

    Ron Garret #61: Sorry, haven’t read that. Will add to my stack.

  64. fred Says:

    (An aside)
    Scott, did you get a chance to check out the VR system Oculus Rift in action yet? (The dev kit 2 just came out; certainly many people at MIT must have it.) That provides some sort of variation of a Turing Test for faking the real world… we’re still at the very beginning, but it’s a stunning experience.

  65. Scott Says:

    fred #64: I’m embarrassed to say I hadn’t seen that, or even heard about it! It looks pretty awesome.

  66. Dez Akin Says:

    I like it when you talk about math more.

    I can’t say that a person is any more conscious than a book, only that a person is easier to have a conversation with. Most of these arguments about consciousness seem to involve some special pleading for a spiritualism of physical processes, as if those processes were something separate from finite mathematical objects. I don’t know why that notion is so important to people.

  67. QWERTY Says:

    Is coscience in P or in NP?

  68. Scott Says:

    Ron Garret #61: OK, I just looked at the Cerf-Adami paper, but I was unable to understand in what way the account they advocate differs from the 100% standard Everett / decoherence / consistent-histories account—the one that’s either implicitly or explicitly accepted (in some variant) by almost every physicist I know. Maybe you can help?

  69. Scott Says:

    QWERTY #67: “coscience,” I imagine, would be in coNP. 😉

  70. francis Says:

    Scott #35:

    What makes the division non-arbitrary is that the Hamiltonian governing the evolution of the world is local in the position basis.

    The question is the position of what? Given only the full Hamiltonian, what distinguishes a subspace associated with the degrees of freedom of say a particle from any other subspace?

    In order to see a preferred basis emerging you have to write down the full Hamiltonian in the form H = H_system tensor identity + identity tensor H_environment + H_interaction. Such an approach cannot derive a preferred decomposition into subsystems because it already puts one in by hand.
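    In the standard textbook form (my notation, not anything beyond what the sentence above already says), that split reads

    $$ H \;=\; H_{\mathrm{sys}} \otimes I_{\mathrm{env}} \;+\; I_{\mathrm{sys}} \otimes H_{\mathrm{env}} \;+\; H_{\mathrm{int}}, $$

    and the very act of writing H in this tensor-product form presupposes a particular factorization of the Hilbert space into “system” and “environment” factors.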

    People like Zurek and Schlosshauer acknowledge that decoherence depends on the decomposition. See section 2.14 in Schlosshauer’s book, for example. There’s also a nice paper by Schwindt (http://arxiv.org/abs/1210.8447), which, however, I think overshoots the mark a bit by claiming that the MWI is untenable because of this.

  71. francis Says:

    Scott #35:

    Of course, such decoherence might be reversed sometime in the distant future; in ordinary situations, whether it will or won’t strikes me as ultimately a question of cosmology.

    There are also lab experiments which deal with this issue. What we see is that if we get more control of the environment, we may recover coherence (see Jaynes-Cummings type experiments for example).

    Also as already mentioned, we need to make certain idealizations of the environment like a short memory (the so-called Markov approximation) in order to derive irreversible decoherence.

    Together, this suggests that in quantum theory as it stands today, irreversible decoherence isn’t fundamental but an artifact of the lack of control of the environmental degrees of freedom.

  72. Sid K Says:

    Scott #57:

    The question about your daughter is interesting. Clearly, she has some grasp of color and has some concept of redness.

    I would readily say that she knows what colors are if she can match, based on color alone, an arbitrary set of objects to another arbitrary set of objects — crayons to shirts, say. That is, I would say that she understands color if she realizes that it is not always mediated by the lid-container relation, or any other specific object-object relation.

    But I definitely wouldn’t say, and I think you will agree, that she realizes that she understands color. Or that she knows that she has a concept of redness.

  73. Dan Haffey Says:

    Scott #62:

    So, if we could actually do such a thing, then it seems to me that our intuitions would eventually change, to the point where we regarded the emulation as “clearly no more conscious than a book.”

    I’m having trouble grokking the intuition behind this statement. Unlike a book, the emulation has a neural structure sufficiently close to the original’s that the causal relationship between its brain and its behavior appears isomorphic to that of the original. It’s hard to see how to construct a similar analogy for the book’s “brain” and “behavior”.

    I assume you agree that the reason the emulation speaks as eloquently (or as confusedly) about consciousness as the original is that they share (classical) neural structure? (Otherwise why assume the em would even broach the subject?) So just to make the em/book analogy more explicit: the book contains words describing consciousness because a conscious person put them there. The em contains a neural structure describing consciousness because a conscious person had the same structure.

    The assumption that decoherence-based consciousness causes these “phantoms” of consciousness seems testable in theory: Simulate, not a copy of an existing brain, but the entire development of one (cf. Egan’s Orphanogenesis) or several in an environment conducive to language we can understand, but as devoid as possible of any references to consciousness. If “wild” classical ems can independently arrive at a notion of subjective experience, I would take it as strong evidence that our notion coincides with theirs.

    Since I find it much harder to imagine the scenario in which they didn’t arrive at a similar notion (while still engaging in recognizably-intelligent behavior), I don’t find the book analogy very compelling, nor the idea that intuitions would converge on finding it so. If anything, the opposite – if consciousness has observable impacts on classical brain structure, then so should the absence of consciousness. It may be that a “zombie” em wouldn’t show any outward signs of its lack of consciousness after running for 50 years, but I would take that as at least weak evidence that no special quality was lost to begin with. I certainly wouldn’t take it as evidence to the contrary.

  74. Ray Says:

    Hi Scott,

    I want to ask you something which is only tangentially related to your post. I want to know why you have retreated somewhat from the pure reductionist position in some of the so-called “hard problems”. You have said yourself that until a few years ago you were an MWIer and solidly in the Strong AI camp. I understand that your current views are quite nuanced and you are not exactly a dualist when it comes to consciousness. However, how can you take Searle/Chalmers seriously at all if you have seen the light (😉) once before? When someone understands that consciousness is simply an emergent biochemical phenomenon with quantum mechanics playing no interesting role, then what can possibly happen to convince him otherwise?

    In the same vein, once exposed to MWI how can anyone go back to the Copenhagen interpretation? I understand that the two interpretations are empirically indistinguishable, save for some unlikely dynamical collapse scenario. However as you so eloquently put it, the unitary evolution of the state vector militates towards MWI. It’s true that the origin of the Born rule is still unclear in MWI but I do not think that’s enough justification to jettison the whole framework of MWI.

    I would be really glad if you can share with us what led to this evolution of your beliefs.

  75. QWERTY Says:

    Hence science is in NP. Thus von Neumann’s brain is in NP. Now we just have to prove that von Neumann was human. I remain skeptical about that.

    Is/are there formal, semiformal, quasiformal, suboptiformal, or out-of-shape-but-formal definition/s of consciousness?
    I checked the wiki:Consciousness page and found nothing similar.

    On a different, easier, but related topic: does the quantum framework of physics always require the imaginary unit and complex numbers to improve over classical theories?

  76. Darrell Burgan Says:

    Not sure why there are so many caveats, provisos, and disclaimers associated with this talk. No, we don’t yet have the hard science to back up any of the speculations. But if we can’t speculate openly about things we don’t yet understand, how can we ever jump-start the scientific process of understanding them?

    I found this to be a great talk, on a fascinating subject, with a really well-thought-out analysis, if quite speculative.

  77. JG Says:

    I’d really like the ‘Arrow of Time’ to get us there, but I have to ask about relativistic effects, outside of normal time flow. Does a consciousness at 99% c, or stranded at an event horizon, fail to remain conscious?

  78. John Eastmond Says:

    Hi Scott,

    I would be interested in what you think of the following argument that a computer (either classical or quantum) cannot be conscious.

    Imagine to the contrary that we have an algorithm that produces a period of conscious awareness. Now in principle we can enclose that algorithm in a loop so that the computer experiences the same conscious experience forever. Let us further assume that each time the algorithm is repeated a counter is incremented.

    Now let us suppose that every time the computer “wakes up” it asks itself the question “what is the present value of the counter?”. I suggest that the probability that the computer finds the counter to have any particular finite value n is 1/infinity, which is zero. But if the computer finds itself in a conscious moment, then it must find that the counter has some finite value. I think this paradox implies that the computer can never “wake up” in a moment of conscious awareness.
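    (The step being leaned on here is the standard fact that there is no uniform probability distribution over the positive integers: if we demanded

    $$ P(n) = c \ \text{ for every } n \in \{1, 2, 3, \dots\}, \qquad \text{then} \qquad \sum_{n=1}^{\infty} P(n) \;=\; \begin{cases} 0, & c = 0,\\ \infty, & c > 0, \end{cases} $$

    so no choice of c makes the probabilities sum to 1.)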

  79. Luca Turin Says:

    “Indeed, I see no evidence, from neuroscience or any other field, that the cognitive information processing done by the brain is anything but classical.”

    Maybe, just maybe:

    http://www.pnas.org/content/early/2014/08/06/1404387111.abstract?tab=metrics

  80. Joshua Zelinsky Says:

    John Eastmond #78,

    There seem to be three problems with that argument. First, it relies on the assumption that there should be a reasonable probability measure over conscious observers. Since trying to construct reasonable measures for observers is already a major problem in anthropics, and many believe it cannot be done in general, the fact that it can’t be done in a specific hypothetical model isn’t a strong objection.

    Second, to carry out this hypothetical one needs an infinitely large memory, since one needs to be able to increment the counter indefinitely. Since computers have finite memory, it seems that this argument fails.

    Third, the obvious question is whether someone could do something similar with brains. Scott has a (possibly) satisfying answer here, but it takes a completely different tack from your hypothetical.

  81. fred Says:

    With about 10^24 times the resources it would take to simulate a brain, we could simulate the entire Earth as it was 4 billion years ago, and maybe watch the emergence of life, intelligence, civilizations, math, computers, and brain simulations.

  82. wolfgang Says:

    >> how can anyone go back to the Copenhagen interpretation?

    I cannot speak for Scott, but in my case it happened when I began to think about all the unsolved problems of MWI (the Born probabilities being only one of them).

  83. Scott Says:

    John Eastmond #78: Cute argument, but Joshua Zelinsky #80 already raised many of the objections against it that I was going to.

    Suppose the universe was infinite in spatial extent, and as a result, contained a countably infinite number of organic brains. And suppose further that we numbered the brains by positive integers, and that each brain had some way of learning what its number was. Since there’s no uniform probability measure over the positive integers, does the mere conceivability of this scenario imply that organic brains can’t be conscious? That seems like a strange result.

    For one thing, why isn’t a nonuniform prior acceptable? For example, either in your scenario or in mine, why can’t we weight each integer x by the Solomonoff prior (i.e., ~2^(-K(x)), where K is the prefix-free Kolmogorov complexity), rather than attempting the impossible task of weighting all integers uniformly?
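    Since K(x) itself is uncomputable, here is a toy stand-in for what such a nonuniform prior looks like (the proxy below, the Elias gamma code length, is my own illustrative choice rather than anything canonical):

    ```python
    import math

    def elias_gamma_length(x: int) -> int:
        """Bit-length of the Elias gamma code for a positive integer x -- a simple
        prefix-free code, used here as a crude computable stand-in for the
        (uncomputable) prefix Kolmogorov complexity K(x)."""
        return 2 * int(math.log2(x)) + 1

    def weight(x: int) -> float:
        # In the spirit of ~2^(-K(x)): shorter descriptions get exponentially more weight.
        return 2.0 ** (-elias_gamma_length(x))

    # These weights form an honest probability distribution over ALL positive integers:
    # the 2^k integers in [2^k, 2^(k+1)) each get weight 2^(-(2k+1)), contributing
    # 2^(-(k+1)) per block, and the blocks sum to 1.
    print(sum(weight(x) for x in range(1, 2**20)))   # ~ 0.999999 (the tail is tiny)
    print(weight(3), weight(1000), weight(10**6))    # small integers dominate
    ```

    Any prior of this shape assigns every integer a nonzero probability while still summing to 1, which is all the argument above needs.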

  84. Scott Says:

    francis #71:

      Together, this suggests that in quantum theory as it stands today, irreversible decoherence isn’t fundamental but an artifact of the lack of control of the environmental degrees of freedom.

    Yes, I completely agree with that statement. That’s why, like Bousso and Susskind did, I looked to the cosmology of deSitter space as the most plausible source of “fundamental decoherence” currently on offer.

  85. David C Says:

    Scott #29

    And if the aliens can demonstrate to us their perfect abilities to copy, predict, and rewind us, then we ought to agree with them in regarding ourselves as automata.

    If that happened, would you bite the bullet and concede that an ethical system (“ours” and the aliens’) shouldn’t have any concern for you? (Assuming, say, that the aliens show that *they* don’t have a clean digital abstraction layer?)

  86. John Eastmond Says:

    Joshua Zelinsky #80

    In answer to your second point one could imagine a coherent quantum computer with finite resources that evolves according to a unitary operator U and then perfectly reverses itself by evolving according to U^-1. It could repeat this cycle indefinitely without dissipating energy or increasing entropy.
    Assume that the system was started at time t=0. If the computer was conscious then it could repeatedly ask itself the question “What is the present time?”. This would not involve explicitly incrementing a counter. Again if such a system continued cycling forever then the probability of a conscious computer finding itself at any particular moment is zero which contradicts the assumption that it has conscious awareness in the first place.
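    (For concreteness, here is a toy numerical version of that cycle, a random unitary applied and then undone; the dimension and seed below are arbitrary choices, and the point is just that the final state carries no internal record that the cycle ever ran.)

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # A random unitary U on a small 4-qubit "computer" (16-dimensional state space),
    # obtained from the QR decomposition of a random complex matrix.
    dim = 16
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    U, _ = np.linalg.qr(A)

    psi0 = np.zeros(dim, dtype=complex)
    psi0[0] = 1.0                    # start in the all-zeros basis state

    psi_mid = U @ psi0               # run the cycle "forwards" ...
    psi_end = U.conj().T @ psi_mid   # ... then reverse it with U^{-1} = U^dagger

    print(np.allclose(psi_end, psi0))   # True: the system returns exactly to where it began
    ```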

    Notice this argument against computer consciousness only works for a coherent reversible system. Perhaps, as Scott says, the brain is conscious precisely because its action is irreversible and decoherent. Maybe the ramifications of every conscious observation are guaranteed to persist in some branch of the multiverse forever.

  87. Scott Says:

    Dan Haffey #73: You’ve stolen the crown from Wolfgang #8 for the toughest question anyone’s asked me so far. 🙂

    On reflection, I agree to the interest and relevance of your proposed experiment, in which we take a population of AI-bots, raise them from childhood without ever exposing them to the Hard Problem, qualia, etc. etc., and see whether any of them develop an interest in such topics on their own, and whether the things they say about them are indistinguishable from what various humans have said.

    However, I’d like to suggest two variations of your experiment.

    In variation #1, the AI-bots have no idea that they’re easily copyable from one physical substrate to another, or predictable by examining their code. E.g., maybe we put them in a simulated “Matrix” universe, in which they believe themselves to be humans with organic brains like ours.

    In variation #2, the AI-bots know that they’re AI-bots. We even let them experiment with making exact copies of themselves and each other. (Of course, we still don’t tell them that anyone questions whether AI-bots are “conscious,” or anything like that.)

    One can now conceive of at least three possible outcomes of the experiment:

    In outcome #1, the AI-bots in both variations never develop the slightest interest in the Hard Problem. If that happens, presumably we all agree this is evidence that whatever it is about our substrate that makes us difficult to copy, etc., also imbues us with a “special spark” that changes our behavior, if nothing else then by causing us to leave blog comments about consciousness.

    In outcome #2, the AI-bots in both variations do develop an interest in the Hard Problem and in their own consciousness, and they say pretty much the same range of things about the topic that humans have said. (So in particular, there are AI-Searles, AI-Penroses, AI-Chalmerses, AI-Dennetts, etc.) In this case, while the idea that there’s anything fundamental that separates us from them might not be refuted, I agree with you that it would be severely weakened. For any believer in human specialness would then be in the uncomfortable position of arguing that, while all the stuff they said about being conscious (in a sense that AIs aren’t) was true, nothing they said about the topic had any causal relation to the fact of their being conscious. I.e., we could “switch off the consciousness” and they’d be saying the exact same things.

    But then there’s outcome #3, the one that I speculate is the most likely. In this outcome, AIs in the first variation of the experiment (the one where they live in a Matrix and think they’re organic humans) do write books arguing about qualia and the Hard Problem, that are pretty much indistinguishable from the books humans write. However, once the AIs can see that they’re perfectly copyable, reversible, quantumly-coherable, predictable from examining their code, etc., the opinions they express about the topic change almost beyond recognition. (How, I’m not sure.)

    If outcome #3 obtained, then we could say: theorizing about consciousness and qualia, the way (some) humans do, is a reasonable response to being an intelligent entity that doesn’t know itself to be perfectly copyable and predictable—i.e., that doesn’t know there to exist any other entity in the physical world able to “unmask it as an automaton.” If the belief in one’s “unmaskability” were ever shown to be false, then everything one had said about one’s unique subjective experience would now be open to doubt or even ridicule: “sure, maybe you really mean it, but your robot doppelgänger here says the exact same thing!” On the other hand, as long as it looks like a reasonable hypothesis (even if ultimately shown to be false…) that one inhabits a universe with no one around to do the unmasking, the statements about consciousness could be reasonably defended as well.

  88. Scott Says:

    David C #85:

      If that happened, would you bite the bullet and concede that an ethical system (“ours” and the aliens’) shouldn’t have any concern for you? (Assuming, say, that the aliens show that *they* don’t have a clean digital abstraction layer?)

    Yes.

  89. John Eastmond Says:

    Scott #83

    I think that the assumption that one consciously “finds” oneself somewhere in a possibly infinite population inevitably leads to the problem of a uniform probability distribution over an infinite range of integers. Somehow the postulate of being located somewhere in an infinite ensemble due to one’s conscious awareness overstretches the bounds of rationality/probability theory/physicality. It seems that if consciousness exists (which we can assume from introspection) then it must be unphysical.

    By comparison one could imagine a random algorithm that physically places one somewhere along an infinite line. In that case one would assume a non-uniform prior for one’s position based on the complexity of the algorithm.

    But, as I said above, I think the assumption of consciousness drives us to the impossible uniform prior over an infinite range.

    By the way I came up with this idea as a way to think about the Doomsday argument which is closely related. I think the Doomsday argument fails because there is no single timeline in the multiverse (with an infinite uniform prior over it) but rather time has a continuously branching structure.

  90. David C Says:

    Re Scott #88: Admirable! Thanks. Loved the post.

    I still can’t really wrap my mind around potentially conceding that *I’m* not conscious.

    I pretty much agree with this:

    if we divide all questions into “is-questions” (about the state of the world) and “ought-questions” (about what we should value), it seems to me one could make a strong case that questions about which entities are conscious, despite their grammatical form, belong on the “ought” side.

    … with the one lingering confusion about how “Am I conscious?” could (for me) be anything other than an ‘is-question’ (for me) with a pretty obvious answer (for me).

    Even if it’s not an ‘is-question’, I can’t imagine whatever means I use to answer other kinds of questions actually concluding that I’m not conscious.

    So until I get over that, I’m left rejecting any notion of consciousness that leaves open even the possibility that I’m not conscious.

  91. Michael Bacon Says:

    “But then there’s outcome #3, the one that I speculate is the most likely. In this outcome, AIs in the first variation of the experiment (the one where they live in a Matrix and think they’re organic humans) do write books arguing about qualia and the Hard Problem, that are pretty much indistinguishable from the books humans write. However, once the AIs can see that they’re perfectly copyable, reversible, quantumly-coherable, predictable from examining their code, etc., the opinions they express about the topic change almost beyond recognition. (How, I’m not sure.)”

    So, tomorrow you learn that you’re perfectly copyable, etc. How does that change ‘your’ view of the matter?

  92. Scott Says:

    David C #90 (and Michael Bacon #91): I might tell the aliens something like this.

    “Well, despite the fact that you’ve now shown that you can perfectly predict everything I’m going to say—including what I’m saying right now—and can create as many identical copies of me as you want, and can interfere me with myself in a double-slit experiment, I still feel like a unique locus of consciousness, much as I did before. But at the very least, you’ve now shown me, a million times more convincingly than any human neuroscientist ever did, just how radically mistaken I can be about my own mind. It’s only a slight exaggeration to say that you are to me as I am to a C. elegans. So, if you find it advantageous to euthanize me as part of your intergalactic expansion project, or your conversion of the universe into paperclips, or whatever it is your superior minds find worth doing … then please, don’t let me get in the way! Although if it’s not too much trouble, maybe you could keep a backup of me lying around in RAM somewhere? You know, just in case I could be of use to you later…”

  93. Shmi Nux Says:

    Scott, I think you are pulling a Tononi here, by “biting the [consciousness] bullet with mustard”. Tononi assigns consciousness to 2D matrices; you are happy to de-assign consciousness from humans (provided our brains are reversible / have a “clean digital abstraction layer”). Let me misquote/paraphrase you in reply: “Scott valiantly tries to reprogram our intuition to make us think humans might not be conscious. But the main things we start with, in my view, are certain paradigm-cases that gesture toward what we mean: You are conscious.[…]”

  94. Scott Says:

    Shmi Nux #93: Yeah, I was waiting for the first commenter to accuse me of “pulling a Tononi.” 😉 My reply would be: yes, that you and I are conscious seems like a pretty clear paradigm-case. On the other hand, that you and I would still be conscious even if there were aliens who could perfectly copy, predict, reverse, and cohere us (very likely by first uploading us into a digital substrate), seems far from a paradigm-case. If anything, it seems to me like a paradigmatic non-paradigm-case.

  95. JG Says:

    Might one argue that the Internet is conscious by virtue of the Arrow of Time, a physical instantiation, etc.? Most likely not. It’s a manifestation of our instrumentality. Can any manifestation of instrumentality be conscious? If you create and completely control something, can it ever be said to exhibit its own self-determination? Iterated function systems and emergent behavior in code are not considered ‘conscious’, though, as they lack other prerequisites. They easily race out of control. Perhaps consciousness requires that Arrow, the Physical, Self-Determination, Self-Awareness, and the ability to create its own instrumentality, be it stick or den or String Theory?
    Well, what about dolphins and whales? I’d put forward that they have an instrumentality of social interaction because of their paralimbic brains instead of a neocortex. They are limited by their environment: you are just not going to develop electronics technology, or much else, living below the sea.
    That brings me to my last point: that consciousness is relative. Someone at 99% c or stuck on an event horizon is still conscious, just not to you and me. Aliens, if they arrive, may well decide we are not conscious by their definition, due to our environment. If they are able to transcend the limitations of our spacetime by existing between the stars, and their senses encompass the entire spectrum, and they live in awareness of superpositions of quantum states and such, might they decide we are not ‘conscious’?

  96. Dez Akin Says:

    #73

    Books and brains are isomorphic in that you can describe the entire state model of a deterministic brain in a book. I don’t see anything particularly controversial about this; a mathematical object, like a brain or an arithmetic operation, is representable in a particular medium, like a computer or a book.

    Looking for mystical explanations of subjective experience, through dualism or some uniquely physical aspect of consciousness, is something that people, so sure of the uniqueness of conscious subjective experience, have done for some time. It seems like special pleading to me.

    I don’t have any particular problem with books being conscious, because the notion of consciousness arising from finite mathematical structures doesn’t strike me as particularly troubling. But for some reason, many people take exception to the notion that subjective experience of consciousness is so small.

  97. fred Says:

    We identify a pipe with the atoms within the volume of space that we designate as the pipe.
    And if we look at a still photo of an ocean wave, it’s easy to make the mistake of identifying the wave with the specific water atoms under it.
    Once we watch a movie of an ocean wave though, it becomes clear that an ocean wave is *something* that’s not realized by specific molecules.
    We then wonder whether ocean waves are any less real than pipes – are they just an illusion? For there’s nothing more going on there than water atoms and their interactions.
    Yet ocean waves can be filmed, measured, and overturn boats.
    Maybe a consciousness is a bit like an ocean wave that’s in the shape of a pipe?

  98. Shmi Nux Says:

    Scott #94: I hope you agree that you would still think of yourself as conscious even if you knew that whoever is currently running your simulation is able to press the rewind button. If you do not, then we are clearly not using the same definition of consciousness. I agree that we would have to distinguish between “I think Scott is conscious” and “aliens with an un-Scott button think that Scott is conscious”, but I doubt that it is worth considering the case “I think Scott is conscious, provided the aliens who run his simulation are unable to run it in reverse”.

    Imagining this dialog now: “Sweetie, I misplaced my FHE key for the universe with humans in it, quick, we have to find it, or I will be responsible for having killed untold billions!” “It’s OK, honey, you never meant to run this universe in reverse, anyway.” “But it’s my ability to change my mind that keeps them non-conscious! And I lose it unless we find this key!” “Oops, what was it I stepped on?” “Don’t look down, or you might become a mass murderer!”

    In other words, the issue of irreversibility goes one level up (or down): is the Scott-simulator in itself reversible by whoever simulates it? And so it is again turtles all the way down.

  99. Scott Says:

    Ray #74:

      I want to know why you have retreated somewhat from the pure reductionist position in some of the so-called “hard problems” … I would be really glad if you can share with us what led to this evolution of your beliefs.

    There was never a “Road to Damascus” moment. Even as an undergrad, I was perplexed about many of the puzzles I’m now writing about. And even today, I’m probably closer to both an Everettian and a Dennettian than I am to the average person who could be said to have an opinion on either issue.

    Having said that, I can tell you seven things that were influential in the evolution of my thinking on these issues:

    1. Reading Nick Bostrom’s book Anthropic Bias (in draft form) in 2001. Bostrom is someone who takes extremely seriously the possibility of computer-simulated consciousnesses (as you can see from his most recent book, Superintelligence, which I’m reading right now). But in his early work on anthropics, he laid out with crystal clarity the severe empirical (never mind metaphysical) difficulties that such simulated consciousnesses would lead to. E.g., someone flips a fair coin, and starts up one copy of the simulation if the coin lands tails, or 100 copies if it lands heads. You find yourself as a copy of the simulation. What probability should you now assign to the coin having landed heads: 1/2, or 100/101? If either answer makes you fully comfortable, then read Bostrom to be cured of your certitude. 🙂 (A toy simulation of the two observer-counting conventions behind those two answers appears at the end of this list.)

    2. Realizing, also in 2001, that quantum mechanics, on an Everett-like picture, gives no unique prescription for “transition probabilities,” if we’re able to put our own brains into coherent superposition. I.e., what is the probability that you’ll see Y tomorrow, given that you saw X today? If today you’re in a state like 3/5|X>+4/5|Y>, and tomorrow you’re going to rotate unitarily to the state 4/5|X>+3/5|Y>, then quantum mechanics is silent about such questions—unless you supplement it by something like Bohmian mechanics, but then the problem is that Bohm’s rule is far from unique, and there seems to be no empirical way to choose between it and its infinitely many possible competitors. (Obviously, I’m very far from the first person to have worried about any of this, though my rediscovery of it did lead to one of my crazier papers.) In any ordinary situation, of course, we get to sweep this problem under the rug by waving our hands and saying “decoherence”! (Or: “since the unitary transformation would erase your memories anyway, it’s an invalid question!”) But it surprised me how QM gave no answer to what seemed to me, at the time, like a reasonable question about how my life history would evolve in a doable-in-principle experiment (assuming MWI is correct). This caused me to doubt that I understood “what it would be like” to exist in a coherent superposition of states, or whether it would be like anything at all.

    3. Reading Rebecca Goldstein’s novel The Mind-Body Problem in 2004. To whatever extent an attitude can be called true or false, Goldstein’s attitude toward the ancient questions of philosophy (including the titular one) struck me as the truest one I had ever seen: it was neither the snide mockery that I’d honed in my teenage militant-reductionist days, nor the forehead-banging credulity of the anti-science woo crowd, nor the tedious ism-splitting of a philosophy textbook, nor the fiery certitude of the ideologue, but a literate and intelligent sort of wonder, an urge to make the reader feel the questions, to make them doubt whatever they believed, not because any answer is as good as any other but because that’s the only possible way to make progress. I found myself wishing that I could someday write about these questions the way she did.

    4. The research I did on quantum advice, quantum copy-protection, and quantum money. While of course I knew the No-Cloning Theorem long before this, this research brought home for me just how striking it is that quantum information has a “privacy” and “uniqueness” to it, which can actually be exploited to achieve certain classically-impossible cryptographic tasks. If you’re used to the total promiscuous copyability of classical information as one of the basic features of reality, and then you encounter this, it might strike you as a “waste” if Nature were to exploit it for nothing more fundamental than giving a possible new tool against currency counterfeiting and software piracy. 🙂

    5. My contacts with the singularity movement, and especially with Robin Hanson and Eliezer Yudkowsky, who I regard as two of the most interesting thinkers now alive (Nick Bostrom is another). I give the singulatarians enormous credit for taking the computational theory of mind to its logical conclusions—for not just (like most scientifically-minded people) paying lip service to it, but trying extremely hard to think through what it will actually be like when and if we all exist as digital uploads, who can make trillions of copies of ourselves, maybe “rewind” anything that happened to us that we didn’t like, etc. What will ethics look like in such a world? What will the simulated beings value, and what should they value? At the same time, the very specificity of the scenarios that the singulatarians obsessed about left a funny taste in my mouth: when I read (for example) the lengthy discourses about the programmer in his basement clicking “Run” on a newly-created AI, which then (because of bad programming) promptly sets about converting the whole observable universe into paperclips, I was less terrified than amused: what were the chances that, out of all the possible futures, ours would so perfectly fit the mold of a dark science-fiction comedy? Whereas the singulatarians reasoned:

    “Our starting assumptions are probably right, ergo we can say with some confidence that the future will involve trillions of identical uploaded minds maximizing their utility functions, unless of course the Paperclip-Maximizer ‘clips’ it all in the bud”

    I accepted the importance and correctness of their inference, but I ran it in the opposite direction:

    “It seems obvious that we can’t say such things with any confidence, ergo the starting assumptions ought to be carefully revisited—even the ones about mind and computation that most scientifically-literate people say they agree with.”

    6. Realizing that the copyability of AIs gave a plausible answer to the moral argument for automatically granting them full human rights. This was crucial for me, because Turing’s “Computing Machinery and Intelligence” essay had played a central role in my thinking since I was 16. I reasoned that, however implausible it seemed that billions of people simulating a brain by passing around pieces of paper would bring an Earth-sized consciousness into being, I had no moral right to question the consciousness of the resulting entity, any more than I had the right to question the consciousness of a black person (“yes, yes, truly a remarkable simulation of what, in a white person, we’d call ‘understanding’ and ‘intelligence,’ even though obviously, being black and all…”). But, the more I worried about people appealing to arbitrary “meatist” prejudice to justify the torture or murder of the slips-of-paper consciousness, the more I’d ask myself things like: OK then, why not just ask the billions of people to start passing around the slips of paper in the reverse direction, and thereby undo whatever terrible thing had happened? Not quite as easy in the case of the black person… 🙂

    7. One could argue that, even if MWI and computationalism have all these caveats, loose ends, and unanswered questions, they’re still so much closer to the truth than the average person would ever imagine they could be, that one can best serve the cause of the public understanding of science by downplaying any subtleties, and cheering MWI/computationalism unreservedly. I.e., don’t let the plebes imagine that modern science gives them even the slightest opening to indulge in the cringe-inducing woo-woo they love so much, their chakras and energy crystals and creationism and whatnot!

    While that line of argument still has some sway with me, as I got older I decided there’s an even more powerful response. To wit: “I reject the woo-woo nonsense, despite feeling the same sense of vertigo as you do when I contemplate the mind-body problem, the measurement problem, or the problem of free will—despite having stayed up nights with the weight of these mysteries on my chest. I freely admit my ignorance about some of the greatest questions ever asked, while nevertheless insisting that we know the answers to some questions. Yes, consciousness is a mystery, but homeopathic water is just water.”
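    Coming back to the coin puzzle in item 1: here is a toy Monte Carlo (my framing, not Bostrom’s) of the two observer-counting conventions that yield the 1/2 and 100/101 answers. It settles nothing about which convention is right; it just makes the two answers concrete.

    ```python
    import random

    random.seed(0)
    TRIALS = 100_000

    # Convention 1 ("count runs"): every run of the experiment contains at least one
    # copy who asks the question; conditioning only on the run gives P(heads) = 1/2.
    heads_runs = sum(random.random() < 0.5 for _ in range(TRIALS))
    print(heads_runs / TRIALS)                        # ~ 0.5

    # Convention 2 ("count observers"): sample uniformly among all copies ever created,
    # so each heads-run contributes 100 observers and each tails-run contributes 1.
    observers = []
    for _ in range(TRIALS):
        heads = random.random() < 0.5
        observers.extend([heads] * (100 if heads else 1))
    print(sum(observers) / len(observers))            # ~ 100/101 ~ 0.99
    ```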

    Anyway, hope that answers your question.

  100. Adam M Says:

    Scott #54:
    Sorry for coming back late to this discussion. If I understand correctly, your argument for the relevance of computation in #54 is essentially:

    Things that seem to be conscious also tend to exhibit what we’d consider intelligent behaviour;

    Our efforts to make machines do things we interpret as intelligent involve abstract computation;

    Therefore, abstract computation (along with some possible other necessary conditions) is the cause of consciousness.

    I think there are some logical weaknesses in this argument, but I admit my reformulation of it is probably not very good, and likely it could be formalized in a much more convincing fashion, so I’ll not waste our time arguing against my own formulation.

    Instead, I’d like your comments on a question I think is much more fundamental and troubling: what do we mean by computation?

    We are accustomed to thinking about computation from our normal, conscious perspective. But, when we start talking about computation (or anything else) as the cause of consciousness, we have to be careful that the concepts we’re using still make any sense.

    For something to have an effect — such as generating consciousness — it has to exist. But in what sense does a computation exist, other than as synonymous with a particular motion of physical stuff? For example, let’s take a very simple computation — say, 2 + 2. The computational assumption says that, were this a consciousness-producing computation, it would produce the same consciousness and conscious state any time it was “done”, regardless of the physical substrate, and of course also regardless of whether there was any external observer watching it.

    So, what would that look like? Well, here’s a simple example: let’s say there are two rocks sitting at the bottom of a hill. A small earthquake happens, and two more rocks come rolling down, and now there are four rocks there: 2 + 2 = 4. This seems simple enough, but what actually just happened? The rocks are made of smaller crystals, which are made of molecules, atoms, sub-atomic particles, etc. So was that really 2 + 2, or was it 10^3 + 10^3, or 10^6 + 10^6? Or since a couple of chips busted off one rock, was it really 2 + 2 + 2 = 6? And what about where the rocks came from? There, it might have been 5 – 2 = 3. And who decides how close they have to be to count? If we look just at the place where the rocks landed, it might be 0 + 2 = 2, or, if we’re including some nearby rocks, 15 + 2. And that’s without looking at all the other, much more complicated “computations” going on simultaneously at larger and smaller scales, and in other dimensions of the system’s state.

    If we make the computation more complex, or place it in some other physical context like a brain or computer, we face exactly the same problem: the physical system doesn’t define the computation. This is the iceberg of incoherence I was worrying about in my previous comment. I think the only way to make sense of it without involving our pre-existing consciousness is by considering only the entire physical system, at which point identifying it as “computation” becomes meaningless and any hope of substrate independence disappears.

    To summarize: to reasonably hold that consciousness is (even partially) a result of abstract computation, I think we at least need a meaningful and physically grounded definition of what computation is, independent of already existing consciousness. What is that definition?

  101. Michael Bacon Says:

    Scott,

    You say that the opinions you express about the mind-body problem (“the topic”, as you call it elsewhere) will change “almost beyond recognition”, once you see that you’re perfectly copyable, reversible, quantumly-coherable and (worst of all), predictable.

    But when you assume you are those things, what you say is cogent, and purposefully and clearly connected to the basic point you’re trying to make, but still well within ‘recognizable’ limits. 😉

    I agree that there are a lot of silly questions and wild but now-threadbare premises that conventional wisdom adopts (it feels odd saying “conventional wisdom” for Everettian-associated thinking, but given your venue, I suppose that’s appropriate).

    I’m biased to think we are clever enough to begin to make some reasonable explanations for “consciousness”.

    Do you view your ‘third way’ approach as primarily a philosophical contribution? God knows we need something along those lines.

    Don’t give short shrift to engineering recoherence — even if you don’t care about this 😉 , it would be fundamentally cool.

  102. Scott Says:

    Adam M #100: I confess that at some point, my interest in this particular debate wanes, because it seems like pure semantics to me—I have trouble understanding what’s actually at issue.

    As your comment cogently points out, “abstract computation” (at least for present purposes) is simply a language, a mode of description that we can either apply or not apply to physical objects as we see fit. However, a crucial thing we learned in the 20th century is that, if you’re trying to understand, not merely an electronic computer, but any object anywhere in the physical universe that does anything you’d want to call “planning,” “thinking,” “reasoning,” etc., then abstract computation is likely to be an extraordinarily useful language for describing how that object works.

    As a prime example, I don’t see how you can intelligibly discuss what we know about the brain without using the language of computation. Do you think about the color yellow using a yellow-colored brain region? Do you store memories about pears by forming pear-shaped neurons? Do you remember that pears are called “pears” because the pear-shaped neurons have “PEAR” written across them, for the other neurons to see? Does your brain require a smaller brain inside of it to “do the thinking”—with a smaller brain inside that, and so on ad infinitum? Today anyone above the age of 5 understands the answers to such questions. But the only reason we understand, I’d say, is that we’ve by now internalized the idea that the brain functions in many respects as a computer, with “logic gates” (neurons) wired together to implement more complex functionalities, and subroutines and internal algorithms and loops and recursion and abstract schemes for representing data. It’s no accident that cognitive scientists use this language. And because of the Church-Turing thesis, it’s hard to imagine how there could be a “thinking organ” anywhere in the physical universe for which one wouldn’t want to use this language.

  103. Luke A Somers Says:

    Once you’ve asked about decoherence and irreversibility, that immediately raises the question of whether these are what we’re aiming at, or something usually very closely related – or indeed whether these are the same thing at all! Suppose we have a quantum computer with three parts, each much larger than the previous.

    Alice is a simulation of a putatively conscious entity. Suppose that the only reason we’d have not to think it’s conscious is what we’re about to do to it.
    Alice’s Room is an entropy sink that Alice will interact with in the process of its being putatively conscious.
    In order to run Alice and Alice’s Room, we also have an entropy sink we use for error correction.
    We run Alice and Alice’s Room forwards in time for a while, and Alice is doing a bunch of locally-irreversible computations, dumping the resulting entropy into Alice’s Room instead of outer space.

    At some point, we quantum-randomly either: 1) let Alice’s Room shed entropy into outer space, causing the local irreversibility to become permanent, or 2) we time-reverse the dynamics of Alice and Alice’s Room until we reach the initial state.

    Was Alice conscious in case 1? In case 2? Since the sequence of events in both cases was in fact the same exact sequence of events – not merely identical, but referring to the exact same physically realized sequence of events – up to our quantum coinflip, it’s nonsense to say that one was conscious and the other was not.

    So yes, consciousness is connected to the arrow of time, but on a local level, not necessarily on the billion-year scale.

    This lets us spit out that bullet about the Anti-deSitter space. If you’re in an AdS space, you’re going to choke on your own waste heat a zillion years before quantum billiards brings you back close to the starting point.

    So, I’d say that there’s consciousness inside this AdS trap, for a little while, until they die. When quantum billiards has again randomly lowered entropy to the point that a potentially conscious entity might have an entropy sink, then you can again have consciousness.

    So, the AdS sphere is 99.999…(insert a lot)..99% not conscious, on account of its being dead, not on account of its being quantum-reversible.

  104. Adam M Says:

    Scott #102:

      I have trouble understanding what’s actually at issue.

    What’s actually at issue is whether there can be something that:

    a) has enough physical reality to plausibly give rise to and characterize consciousness; and
    b) exists in both a brain and a digital brain simulation

    I have no objection at all to describing the operation of brains (or anything else for that matter) using the language of computation. But if, as you say,

      … “abstract computation” (at least for present purposes) is simply a language, a mode of description that we can either apply or not apply to physical objects as we see fit …

    then there is nothing in common between a brain and its simulation other than how we have seen fit to describe it. I think we would agree that how we choose to describe something doesn’t determine whether it’s conscious, so auf Wiedersehen to mind uploading, conscious brain simulation, and a wagonload of thought experiments.

    Am I missing something?

  105. Anonymous Programmer Says:

    Why not take free will seriously? How could we communicate anything at all about our conscious state if we had no free will? Not unpredictability, nor Knightian uncertainty, but good old-fashioned “I was presented with quantum possibilities and I chose one.”

    Maybe you could rename it the conscious force to avoid confusion with the term free will from religion or philosophy.

    It seems like the only way to detect other consciousnesses, no matter how alien, is to develop mathematical theories of what the nature of the conscious force might be, and to design experiments to look for it.

    You detect mass with the gravitational force, charge with the electrical force, why not consciousness with the conscious force?

    If we had a theory of the conscious force that seems to check out with experiments we could ask newly scientific questions like:

    Is a two month old fetus conscious?

    Does consciousness survive death?

    Can we move our consciousness into custom designed robot bodies specifically designed for outer space, mars or the ocean deep?

    Can we grow our consciousness to become much more powerful and aware?

    Can we speed up or slow down a consciousness that has been moved to a robot body, relative to our normal rate of time? Could we live a thousand years in a day, or a day in a thousand years?

    Can our consciousness expand to control a whole new universe — a new big bang with our consciousness as god?

    Is a large consciousness necessary to run the universe (some kind of god)?

    Might a consciousness like one that can run the universe care about us on Earth?

  106. Gil Kalai Says:

    Hi Scott,
    (also in reference to #84 and francis #71 )

    I have two questions:
    First, are the following six items consistent with / summarize your point of view?

    (1) In quantum theory as it stands today, irreversible decoherence isn’t fundamental but an artifact of the lack of control of the environmental degrees of freedom.

    (2) On top of this, there is in nature some fundamental irreversible decoherence.

    (3) This fundamental irreversible decoherence represents a breakdown of QM at some scale.

    (4) We will be able to reconstruct a burnt unknown book from its ashes and emitted photons, because this task only requires better control of non-fundamental decoherence.

    (5) We will be able to build large-scale quantum computers because this task only requires better control of non-fundamental decoherence.

    (6) But we will never be able to reverse conscious processes or reconstruct conscious entities from their ashes and emitted photons, because of the fundamental irreversible decoherence.

    Second, wouldn’t it make more sense, instead of speculating on a theory of fundamental irreversible decoherence that is largely beyond quantum mechanics, simply to recognize that we don’t have a good understanding of the fundamental properties of “ordinary decoherence,” namely of “the lack of control of the environmental degrees of freedom”?

  107. Scott Says:

    Gil #106: I don’t think that’s an accurate summary of my view, because if (as I speculate) the irreversible decoherence arises from cosmology—from photons flying out to our deSitter horizon and never coming back—then I wouldn’t call that in any sense a “breakdown of QM.” That’s just standard QM, but where (because of cosmology) our causal patch turns out to be an open quantum system, rather than a closed one.

    So in particular: while the equations suggest that it should be possible to recover a burnt book from the ashes by “just” reversing the unitary evolution, I don’t know whether or not it’s possible to do with technology that we could set up within our deSitter horizon. It might depend on details like what the book was made of, how badly it was burnt, whether the book had been surrounded by an array of mirrors and ultra-high-tech recording equipment, and—crucially—just what we mean by recovering the book. I could much more easily believe that technology of the far future might let us recover the words in the book from the smoke and ash, than that it would let us recover the original book’s exact quantum state, so that (for example) we’d see interference if we were doing a double-slit experiment with the book, and only burning it in one of the two branches.

    For the purposes of this question, I don’t think there’s much difference between books and brains.

    With a quantum computer, the obvious difference is that we get to design and control it! I.e., it indeed might not be possible to run a QC if we set all our lab equipment on fire. But what if, instead, we cool the equipment down to 0.000001 Kelvin, carefully arrange everything (unlike with brains or other physical systems we’re used to) specifically in order to minimize the decoherence, and use quantum error-correcting codes to deal with the decoherence that does occur? Much less obvious then, wouldn’t you say? 🙂

  108. tyy Says:

    Thanks. Nice, enjoyable and entertaining writing 😀

  109. Scott Says:

    Michael Bacon #101:

      Do you view your ‘third way’ approach as primarily a philosophical contribution?

    If you define “philosophy” to mean “thinking sufficiently inchoate that it can’t be classified into any other academic discipline,” then yes, I suppose you could say that all of us here are doing philosophy. 🙂

  110. Scott Says:

    Adam M #104:

      then there is nothing in common between a brain and its simulation other than how we have seen fit to describe it.

    Well, I’d say there are three pretty massive additional things in common:

    (a) identical external behavior (by assumption)

    (b) “isomorphic” functional organization—which is partly a matter of description, yes, but also partly a matter of hard physical fact (e.g., maybe the simulation contains one silicon chip, or one register, corresponding to each neuron in the brain, with an identical pattern of connections relating them)

    (c) identical external behavior because of the isomorphic functional organization

    Now, it’s conceivable—as I’ve been speculating here—that even (a)-(c) combined aren’t enough to cause the simulation to be conscious. But I certainly don’t think it’s a possibility that can be impatiently brushed aside the way people like Searle do.

  111. JG Says:

    Hurry up and come up with a definition. Some of us are trying to build these things and it would be great to know what it is.

  112. JG Says:

    I was reading about the Fermilab Holometer experiment to determine the Planck fidelity bandwidth and upper limit on the resolution of spacetime, which might have implications for complexity theorists. They mentioned Nested Causal Diamonds, which look to be analogous to light cones in Minkowski space. Could this be better than ‘Arrow of Time’ for a metric? Consciousness creates Nested Causal Diamonds or Light Cones in Minkowski Space?

  113. Richard Says:

    I think one problem here is that we too easily elide together certain “aspects” of consciousness. To me consciousness “from the inside” appears to consist of four things:

    1. My awareness of things around me.

    2. My memory of past things

    3. My “free will” and ability to choose a course of action.

    4. My (sense of?) identity (the thing that you would be worried about losing in the various teleportation thought experiments.)

    Now the first two are relatively easy to simulate classically, and the third _might_ require Quantum Mechanics – but not in any particularly difficult way.

    However the fourth is more difficult to pin down and the problem seems to be that it requires the other three to be meaningful. The idea that it can exist on its own inevitably gives rise to questions about trees or even inanimate objects being conscious. However when you put the four together it is hard to avoid being distracted by aspects of the first three.

  114. Adam M Says:

    Scott #110:

    (a) identical external behavior (by assumption)

    Then, if we decide not to hook the simulation up to anything external, would consciousness in the simulation disappear?

  115. Scott Says:

    Adam #114: That’s an excellent question, closely related to some of the things I puzzled over in the original post!

    I think many computationalists would take the position that, because we know that the simulation would have shown identical behavior to a human’s had we decided to interact with it, and because the simulation also has functional components fully isomorphic to a human brain’s (e.g., it’s not just an astronomical-sized lookup table), we should therefore continue to ascribe consciousness to the simulation, even if it’s not hooked up to anything external, but is just cogitating in isolation.

    But then, of course, one could push further, and ask: why shouldn’t we continue to ascribe consciousness to the simulation even if it’s not running, but just sitting there in memory? And what exactly counts as “running” it, anyway? Would a pen-and-paper simulation suffice? A homomorphically-encrypted one? etc.

    These are hard questions, ones that cry out for some principled dividing line. They’re very much the sort of thing that motivated me to consider a possible role for decoherence, irreversibility, and unclonability in this post.

    But in this debate, the one thing I’m absolutely firm about is that, whatever dividing line you suggest, and whatever arguments you give for it, one must not be able to use analogous arguments to justify (for example) why intelligent, Turing-Test-passing extraterrestrials who land on Earth tomorrow, or a disliked minority, are merely “simulating” consciousness rather than being “truly” conscious like we are. So in particular, your dividing line must appeal to some verifiable, objective, third-person facts (that are plausibly morally relevant), and it must never make a question-begging appeal to our own subjective experience (“unlike those things, we’re obviously conscious, because we’re us, so we just know we are”). This is the key criterion that I maintain is satisfied by an appeal to decoherence and unclonability (whatever the other strengths and weaknesses of that idea), and that’s not satisfied by Searle’s appeal to “biological causal powers.”

  116. fred Says:

    Personally I feel that neuroscience could offer some interesting insights in the near future.
    E.g. take the apparent distributed nature of consciousness – we seem to perceive multiple options and thoughts at once, and those are probably “realized” on physically separated neural pathways. Also, some people have had their left and right brain hemispheres surgically separated, yet they don’t seem to exhibit independent personalities (or maybe they do, and maybe we all do, and we just don’t realize it).
    We could also get interesting clues once we start to interface biological neurons with artificial ones (chips in the brain, etc). Patients who undergo this could be able to describe changes in consciousness.

  117. fred Says:

    Scott #115
    “But then, of course, one could push further, and ask: why shouldn’t we continue to ascribe consciousness to the simulation even if it’s not running, but just sitting there in memory? And what exactly counts as “running” it, anyway? Would a pen-and-paper simulation suffice? A homomorphically-encrypted one? etc.”

    Well, while one side of the issue, “consciousness”, seems impossibly hard to grasp, we tend to ignore that the other side of the issue, “matter”, is just as hard to grasp.
    I.e. what is the nature of consciousness vs what is the nature of an electron. We just don’t know.
    The irony is that we only perceive the physical world through our consciousness, yet we take the physical world for granted, in a bottom-up approach (where there’s no bottom, really), because we assign a mystical reality to the symbols that live in our minds – this “table” is “real” because I can punch it. As Feynman said in an interview, magnetic forces aren’t any more mysterious than chairs, it’s just that we take chairs for granted.
    The obvious way out of this is to figure that consciousness and matter must be of the same nature (let’s label it “mathematical”).

  118. Dan Haffey Says:

    Scott #87: Thanks for the thoughtful reply, I think I’d draw similar conclusions from your extended version of the experiment.

    I do have one minor quibble with outcome #2, where you say:

    nothing they said about the topic had any causal relation to the fact of their being conscious

    This appears to fall so thoroughly to Occam’s razor that if it was the only way to salvage the theory, I’d count outcome #2 as a full refutation. But I think there are more interesting reasons not to count it as such, like speculating that our methods of communication could be so intrinsically coupled to the phenomenon of our consciousness that it’s effectively impossible to do a “clean-room” experiment along these lines in the first place.

  119. Dan Haffey Says:

    Dez Akin #96: I don’t disagree with anything you said, but I was specifically referring to “trivial” books of the sort Scott mentioned in #62:

    Books, of course, can also have lots to say on the subject of consciousness, including transcripts of people talking about it, but we don’t normally regard that as evidence that the books are conscious.

  120. Ron Garret Says:

    Thanks for looking at the Cerf and Adami paper. It’s possible that it doesn’t differ substantively from the standard accounts (my quantum math-fu is not strong enough to make that assessment), but it seems different to me *rhetorically*, and I think that matters. The standard account is that independent measurements of the same particle agree because the measurements reflect an objective reality that actually exists (and in fact, not merely one, but an infinite number of such objective classical realities a.k.a. parallel universes). Cerf and Adami’s account is that measurements agree because when you (mathematically) *ignore* the thing being measured (by tracing over its degrees of freedom) what you are left with is (again, mathematically) a subset of a quantum system that behaves like (but isn’t really) a classical system. In other words, correlations are *primary*. They do not come about as a result of classical reality, they are the *reason* that classical reality (of which consciousness is a part) appears to exist. (David Mermin makes much the same point in a different way in his Ithaca Interpretation of Quantum Mechanics. The money quote: “We are correlations without correlata.”)

    Maybe I should write up a longer account of this. I thought the connection would be obvious in light of what you wrote in your original post, but apparently it’s not.

  121. Michael Bacon Says:

    An effort that’s at least endeavoring to explore possibilities:

    http://thecreatorsproject.vice.com/blog/baby-x-the-intelligent-toddler-simulation-is-getting-smarter-every-day

  122. Scott Says:

    Michael #121: I never expected that the Singularity would come from New Zealand… 🙂

  123. CS Says:

    “E.g., someone flips a fair coin, and starts up one copy of the simulation if the coin lands tails, or 100 copies if it lands heads. You find yourself as a copy of the simulation. What probability should you now assign to the coin having landed heads: 1/2, or 100/101?”

    Does denying consciousness to digital computers actually remove such questions? The various doomsday arguments and presumptuous philosopher type cases still come up. E.g. Radford Neal’s treatment of just updating on your full sensory information.

    http://arxiv.org/abs/math/0608592

    Even the Simulation Argument can be slightly modified to involve biocomputers/brains-in-vats/the Matrix/futuristic Truman shows or some other incorporation of the features you’re talking about. Even if the brains-in-vats/biocomputers aren’t identical in every way, they can be indistinguishable enough that you can’t tell which you are.

    Splitting cases could be done with multi-hemispheric brains, a souped-up version of actual split-brain patients.

    http://en.wikipedia.org/wiki/Split-brain

    Basically, I feel a twinge of suspicion at arguments of the form “it’s hard to figure out an answer to problem X, which comes up in situations A and B, so let’s postulate that A is impossible, while leaving B intact.”

    “future…trillions of…uploaded minds”

    Saying that minds running on digital computers wouldn’t be conscious in this way seems fairly distinct from the question of whether they will get produced. One would just be more glum about the possibility, no?
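
    To make the two counting conventions in that quoted puzzle explicit, here is a tiny Monte Carlo sketch in Python (purely illustrative; it doesn’t argue for either answer). Tails starts one copy of the simulation, heads starts 100, and we tally frequencies both per world and per copy-instance:

    import random

    def run(trials=100_000, copies_if_heads=100, seed=0):
        rng = random.Random(seed)
        heads_worlds = 0       # worlds (trials) in which the coin landed heads
        copies_total = 0       # copy-instances created across all worlds
        copies_in_heads = 0    # copy-instances that find themselves in a heads-world
        for _ in range(trials):
            heads = rng.random() < 0.5
            n_copies = copies_if_heads if heads else 1
            heads_worlds += heads
            copies_total += n_copies
            if heads:
                copies_in_heads += n_copies
        print("fraction of worlds that are heads:", heads_worlds / trials)           # ~ 1/2
        print("fraction of copies in heads-worlds:", copies_in_heads / copies_total) # ~ 100/101

    run()

    Whether the second number is the right betting probability for “the coin landed heads, given that I find myself as one of the copies” is exactly what’s in dispute.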

  124. Luke A Somers Says:

    > But then, of course, one could push further, and ask: why shouldn’t we continue to ascribe consciousness to the simulation even it’s not running, but just sitting there in memory? And what exactly counts as “running” it, anyway? Would a pen-and-paper simulation suffice? A homomorphically-encrypted one? etc.

    If consciousness is a process rather than a thing – which seems rather natural, frankly – then no, something which isn’t doing anything at all is not conscious.

  125. Scott Says:

    Luke #124: I agree, that is a natural position. But then which processes suffice? (E.g., what do you think of the pen-and-paper and FHE examples, or the other weird processes discussed in the post?)

  126. Ike Says:

    I didn’t read through all of this (yet), but this part

    “If someone could put some sheets of paper into a sealed envelope, then I spoke extemporaneously for an hour, and then the person opened the envelope to reveal an exact transcript of everything I said, that’s the sort of thing that really would cause me to doubt in what sense “I” existed as a locus of thought.”

    doesn’t seem skeptical enough. It doesn’t look like too much of a stretch to say that future tech might be able to incorporate a recorder/transcriber that writes whatever you say onto the paper and then destroys itself. Remember Clarke’s third law. Imagine doing this 50 years ago with an iPad. I would sooner postulate a mini-computer than a brain-scanner+predictor. It seems less complex, somehow.

    Oh, for myself I’d consider strong evidence of brain emulation the following: Walk into a room with a camera. The room has two exits, numbered. As soon as you walk in, a machine spits out a paper with one of three numbers on it: 1, if you will exit through the first door, 2, if you will exit through the second, and 3, if you will look at the paper before deciding. Take a picture of the paper before deciding which door to leave through. Repeat. Each time you do this and it is right, there are at least 2:1 odds favoring the “it’s real” hypothesis. That’s my own test scenario, which I thought of a few months ago, anyway.

  127. JG Says:

    Bacon #121, Scott #122: Haha, hardly SkyNet, is it? They aren’t going so far out on a limb as to claim a machine consciousness. They are modeling processes in the brain based on positive reinforcement from facial recognition, encoding ‘memories’ in a neural network. With a kick-ass front end, I might add. You know, it’s as capable as any baby I’ve seen. Of course, I expect a baby AI to be at least 10^5 times smarter than that by day 3, but I’m an optimist. We’ll see how fast it ‘learns’. I’m thinking it tops out at ‘Ship’ and ‘Kit’. Anyway, I smell a Eugene Goostman on this one.

  128. JG Says:

    I feel kinda bad I pooh-poohed those guys when they are actually trying to get somewhere, it seems. There just doesn’t seem to be anything really new there. I read all their stuff and watched all the vids, and man, that’s a great front end, but let’s apply our test: physical instantiation, check; arrow of time, causal diamond, Minkowski-space light cone? Bzzt. Sorry, you could leave Baby X for a hundred years and come back and it’s still ‘Kit’ and ‘Ship’… so I have to say no consciousness.

  129. Abel Says:

    Really pleasing as usual to see some exploration of the “crazy” implications of some theories when taken to their logical conclusions : ) Something that might be interesting to do for some of these philosophical posts is to have some kind of table at the beginning listing which philosophical positions are assumed, seen as possible, and considered untenable. Could be an interesting format to show changes in point of view too.

    One thing that confuses me about the Boltzmann Brain part is the following, though – it seems reasonable to assume the word “Brain” could be substituted by anything when talking about Boltzmann Brains, and their existence would still have the same truth value. Therefore, why couldn’t one consider modified Boltzmann Brains that last long enough to be “continually taking microscopic fluctuations, and irreversibly amplifying them into stable, copyable, macroscopic classical records,” and therefore pass the consciousness requirement being discussed here? Is the existence of irreversible macroscopic decoherence just impossible after some finite time, because of some cosmological reason? (Sorry if the question has been asked before in the thread.)

    Also, a nice aspect of the Digital Abstraction Layer requirement is that it seems like it could be built upon to obtain a quantitative Phi-like measure (of hardness of approximation wrt such a layer to have a certain kind/degree of consciousness).

  130. Gabriel Nivasch Says:

    […]
    if (rock.hits(me)) pain++;
    […]
    if (pain >= 5) cout << "Aaaargh";
    […]

    Q1: Does this computer program experience pain?
    Q2: What if we rename the variable “pain” to “a”?

  131. Spencer Says:

    “why not worry equally about the infinitely many descriptions of your life history that are presumably encoded in the decimal expansion of π”

    A minor nitpick here: although most people assume that pi is normal (which would imply it contains every possible finite digit string), we can’t prove it! So maybe we don’t all “live in π”, even if we do live in the Champernowne constant and almost all other real numbers.

    Mark Chu-Carroll has a fun rant about the loose use of the “X occurs in the digits of π” idiom:
    http://www.goodmath.org/blog/2014/05/22/infinite-and-non-repeating-does-not-mean-unstructured/
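
    For what it’s worth, the Champernowne case really is provable, and trivially so: any digit string without a leading zero shows up no later than the point where the corresponding integer gets concatenated in. A throwaway Python sketch (the target string is an arbitrary example):

    def champernowne_prefix(last_int):
        """Digits of 0.123456789101112... obtained by concatenating 1, 2, ..., last_int."""
        return "".join(str(k) for k in range(1, last_int + 1))

    target = "31415"                            # any digit string with no leading zero
    digits = champernowne_prefix(int(target))   # concatenating up to the integer itself is enough
    print(target in digits)                     # True, guaranteed: 31415 appears verbatim
    print(digits.index(target))                 # first occurrence (which may come even earlier)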

  132. Scott Says:

    Spencer #131: Yes, of course I was assuming π’s normality. I know that’s an open problem (that’s why I wrote “presumably”); on the other hand, my uncertainty about that question is completely negligible compared to my uncertainty about the other questions being discussed on this thread! 🙂 (And as you say, even in the freak case that π turned out not to be normal, we could then worry about living in Champernowne’s constant.)

  133. Scott Says:

    Abel #129: You ask an excellent question. But yes, I think there are strong senses in which one could say that any decoherence that appears to occur within a Boltzmann bubble (however large that bubble might be) “doesn’t count as real decoherence.”

    The first sense is the one that was pointed out by Boddy, Carroll, and Pollack. Namely, if you grant certain technical assumptions (e.g., no “horizon complementarity”), then all we mean anyway by “vacuum fluctuations” is things that would be found to occur with nonzero probability, if someone were there to make the measurement. But if no one is there making the measurement, then you can also just view the vacuum as staying forever in an energy eigenstate, from which perspective there are no Boltzmann bubbles and no decoherence, no matter how long you wait.

    Another, related answer was given in my GIQTM essay. There I explain why, when we talk about “amplifying microscopic fluctuations into stable, copyable classical records,” there are reasons why what we ought to mean by “microscopic fluctuations” is, “microscopic fluctuations that are left over from the very early universe.” (Namely: if that’s not what we mean, then we could always in principle track the fluctuations back to their source, in which case we could learn the density matrices governing the fluctuations via non-invasive measurements, and would no longer be in a state of Knightian uncertainty.) If this were so, then it would follow that a prerequisite for “participating fully in the Arrow of Time,” in the sense I’m speculating is necessary for consciousness, would be having a causal connection to the microstate of the very early universe. Boltzmann brains would lack that causal connection (since they only occur after everything has thermalized), and for that reason wouldn’t be conscious.

  134. Gil Kalai Says:

    Hi Scott,

    I see. I am certainly glad that you do not regard irreversible decoherence as something which breaks QM and I withdraw my point 3). The reason for my misunderstanding is that in an  earlier discussion you offered a dilemma (with a 50/50 subjective probability): either we can have existence in coherent superposition of two mental states or we see some violation of linearity of quantum mechanics (or something as dramatic as that). Of course, irreversible decoherence based on cosmological reasons is quite a spectacular possibility, but, in my view, it is not at all as surprising as a violation of QM linearity.   (But maybe I still miss some subtle point in your  view for why you think that cosmologically based irreversible decoherence does not refer to the issue we discussed then of superposition of two mental states.)  Of course, irreversible decoherence based on cosmological reasons can be an obstacle in principle for quantum computing.

    “That’s just standard QM, but where (because of cosmology) our causal patch turns out to be an open quantum system, rather than a closed one.”

    Yes, this has been my point of view all along, except for the parenthetical “because of cosmology”. There are two reasons to be cautious about trying to primarily base a “principle of irreversible decoherence” for natural open systems on cosmology. One reason is practical and the other is conceptual. The practical reason is that the cosmological models are fairly hypothetical and speculative at this point; we may well study conclusions regarding the behavior of open quantum systems on their own, and similar conclusions may well come from a whole variety of cosmological models. The conceptual reason is that I do not see why the behavior of open quantum systems should be an emergent conclusion of a cosmological model rather than the opposite – understanding the principles for the behavior of open quantum systems will give some guidelines/restrictions for cosmological models. But, of course, studying irreversible decoherence “because of cosmology” (within the framework of QM) is very interesting.

    “Much less obvious then, wouldn’t you say? 🙂 ”

    I suppose you refer to irreversible decoherence as an obstacle to quantum computing rather to the other two issues we considered: reversing mental processes and reconstructing an unknown book from its ashes and emitted photons.

    Nothing here is obvious, but my intuition is different from yours. What we may agree on is that the question regarding quantum computation is much more clear-cut compared to reversing mental processes and recovering a book from its remains. As you pointed out, even the question about books is a bit vague (which is why I added “unknown,” to make it slightly better). For example, I am pretty sure that if we have to decide, based on the ashes and emitted photons, whether a book was either Tim Gowers’s “A very short introduction to mathematics” or Tim Gowers’s (ed.) “Princeton companion to mathematics,” then we can do it! (Even without the emitted photons. 🙂 ) But I don’t expect future technology will let us recover the words of an unknown book.

    “With a quantum computer, the obvious difference is that we get to design and control it!…  what if, instead, we cool the equipment down to 0.000001 Kelvin, and carefully arrange everything…”

    This is a very interesting comment for two reasons. The first is that it demonstrates that ultimately it all boils down to constants! The asymptotic point of view is useful, but the constants matter a lot. We cannot take it for granted that technology will be able to drive any positive constant as close to zero as we want, and a principle of irreversible decoherence (coming from cosmology or otherwise) may well tell you that you cannot reach 0.0000001 Kelvin for your designed quantum computer. As a matter of fact, the lower limits for cooling quantum states in certain symmetry classes are very relevant to the issue (and I relate to it in a few places).

    The second is that the need and passion for control, and the illusion of control, are issues that go well beyond QC and have interesting social and philosophical connections. They have perhaps something to do with talking about “computational models that won’t be practical for a million years” (next post), or about speculating that the problems we study (but not most of those that people studied 500 years ago) have universal importance spanning millions of years, reaching even to advanced extraterrestrial civilizations.

  135. Scott Says:

    CS #123: Yes, you’re quite right that one can get extremely confused over anthropic observer-counting puzzles, even without copyable minds. On the other hand, I’d say that copyable minds make the puzzles vastly worse, for the following reason: without them, one can wriggle out of many anthropic puzzles by simply refusing to contemplate oneself being randomly drawn from a large reference class, except in a few narrow situations (e.g., being tested for genetic diseases) where that sort of reasoning has been found to work. (Cf. Lubos Motl’s “I can’t be a random observer, because almost everyone else is an idiot.” 🙂 ) One could, in particular, refuse to contemplate oneself having been born in a different era, as is needed for the Doomsday Argument and many of the other puzzles.

    But no matter how small your reference class is, presumably it can’t exclude any observers whose experiences are subjectively completely identical to yours. But those are precisely the sort of observers that we could create willy-nilly if your mind were copyable.

    Also: I agree with you that, even if it’s true that decoherence, irreversibility, unclonability, etc. were important for consciousness, that by itself could do nothing to rule out a future full of uploads or paperclip-maximizers. It could at most make us feel more glum about such a future (in Nick Bostrom’s wonderful phrase, even the best possible future of this kind would be “like Disneyland with no kids”). I apologize if, in recounting the twists and turns of my intellectual journey, I gave a misleading impression otherwise.

  136. Luke A Somers Says:

    Scott #125: which processes suffice?

    Letter-passing would be fine. Homomorphically quantum-encrypted consciousness, too, though with a caveat that will soon become apparent.

    I do think the distinction I was drawing with Alice is relevant. In particular, it acts a lot like what you said in #53:

    “A saner-sounding but logically-equivalent version is: if it were possible for aliens to surround the earth with an AdS boundary, or otherwise put us into a unitary closed system, without doing anything invasive like slicing up our brains and imaging them, then (given that we take ourselves to be conscious) the view I’m exploring would be wrong.”

    Let’s drop the whole ‘run backwards’ idea. Alice can only run forwards, and she needs an entropy sink, Alice’s Room. The purpose of the Room is, she pulls in pure 0 states from it and spits out arbitrary states. For simplicity, we can say they might be 1 or 0, and Alice operates on the subset of quantum operations that, given pure 0 and 1 states and the entropy sink of Alice’s Room, act like regular old boolean logic (for other readers – having a redundant output is the only way you can use reversible quantum operations to produce irreversible operations like AND). If she pulls a 1 out of her room, the reversible operations in her computer put something different into Alice, she ends up in the wrong state, and is dead (or at least confused).

    If you keep Alice’s Room clean by resetting the 1s to 0s periodically (before she loops around and tries to read one of the bits she’s dirtied), then Alice operates like a classical computer, and by your criterion we should have no trouble declaring her conscious.

    If you don’t empty out her room, then she eventually (but not right away) chokes on her waste ‘1’s and dies. Of course, since Alice and Alice’s Room are doing only reversible operations, eventually the forward action will bring the Alice + Alice’s Room state back to the state they were in when you last cleaned it out.

    The solution, I think, is that the relationship between Alice and her Room is the required one for consciousness. Consciousness must LOCALLY dump entropy. If it can’t, then it chokes on its waste heat and dies.

    If you’re in a quantum billiards situation, well, then, this is going to happen over and over again. So?
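
    To spell out the redundant-output trick above with a toy example (classical bits only, Python, purely illustrative): an irreversible gate like AND gets embedded in a reversible Toffoli gate by writing the answer into a fresh ancilla bit drawn from the Room. Hand it a dirty 1 instead of a clean 0 and the “AND” silently computes the wrong thing, which is the sense in which Alice chokes on her waste bits:

    def toffoli(a, b, c):
        """Reversible CCNOT on bits: (a, b, c) -> (a, b, c XOR (a AND b))."""
        return a, b, c ^ (a & b)

    def reversible_and(a, b, ancilla):
        """AND realized reversibly; correct only if the ancilla is a clean 0."""
        _, _, out = toffoli(a, b, ancilla)
        return out

    for a in (0, 1):
        for b in (0, 1):
            clean = reversible_and(a, b, 0)   # the Room supplies a fresh 0: equals a AND b
            dirty = reversible_and(a, b, 1)   # the Room supplies a leftover 1: equals NOT (a AND b)
            print(a, b, clean, dirty)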

  137. JG Says:

    Gabriel #130
    Your program only feels pain and is conscious if, after you run it once, the line
    if (rock.hits(me)) pain++;
    becomes
    if (rock.hits(me)) ++pain;
    somehow without you typing it

  138. David Pearce Says:

    Perhaps Scott’s question might be re-posed: can anything other than a quantum computer be conscious – a fleetingly unitary subject of experience? We take phenomenal binding for granted. Yet its existence would seem classically forbidden.
    (cf. David Chalmers’ “The Combination Problem for Panpsychism”: http://consc.net/papers/combination.pdf) So short of giving up on reductive physicalism – a total catastrophe for the unity of science – we’ll need to derive the properties of our bound phenomenal minds from the underlying quantum field-theoretic formalism – not invoke some mysterious classical “woo” or not-even-wrong “emergence.” Anyhow, here’s an odd-sounding question. What does it feel like to instantiate a sequence of macroscopic superpositions of neuronal edge detectors, motion detectors, colour detectors (etc.) in the CNS? An obvious reply would be “nothing at all”. As Max Tegmark has convincingly calculated, thermally-induced decoherence in the CNS destroys such macro-superpositions at sub-picosecond timescales. However, “nothing at all” _isn’t_ a possible answer if we assume Strawsonian physicalism, i.e., the conjecture that experience discloses the intrinsic nature of the physical (cf. Galen Strawson’s “Does Physicalism Entail Panpsychism?”: http://www.utsc.utoronto.ca/~seager/strawson_on_panpsychism.doc). Further, I predict that when we can experimentally probe the CNS at the timescales at which neuronal macro-superpositions must occur, we’ll discover not psychotic “noise”, but instead the structural shadows of bound phenomenal objects – a perfect match between the formal and phenomenal structures of our minds. Recall it’s the ostensible lack of any such match that pushes David Chalmers to his naturalistic dualism. Scott thinks that decoherence creates minds; I think it destroys them. Fortunately, this question will eventually be resolved by experiment.

  139. Ray Says:

    Scott #99:

    I don’t put a lot of stock in philosophical puzzles because they seem to produce outputs totally disproportionate to the inputs. Standard examples are Zeno’s paradoxes, which purport to show time and motion are illusions; the Doomsday argument; various existence proofs of God; etc. Often it’s hard to refute them because they usually go wrong in a very subtle way. It took 2000 years and the development of the theory of convergent series to show where Zeno had gone wrong. So if a philosophical paradox seems to imply that our best physical theory is wrong, and it produces something way more profound than should be possible for lazy word games, I would suspect that the logic has gone haywire at some point.

    Incidentally, I finished reading your essay GIQTM. Sorry to say, it seems to be a more sophisticated version of Deepak Chopra. Whereas Chopra et al. locate consciousness in quantum probability, you locate it in cosmological initial conditions/quantum microstates. Whereas it is clear to me that quantum mechanics and cosmology can have nothing interesting to say about the human brain, which is a classical biochemical device. This seems to be a softer version of the same woo that Chopra peddles.

    Since you put a lot of stock in copyability in your paper, consider this thought experiment. Suppose there is a hardware AI which is every bit as complicated as the human brain, so copying it directly would be every bit as invasive as with the human brain. Now suppose that you destroy all the other software and hardware copies of this AI. So in your view, this destruction somehow magically imbues this AI with consciousness which it previously lacked, even though you haven’t directly interacted with it. I hope the absurdity of your position is clear.

    Also you suggest that I oppose your argument because it would be bad for the “masses”. Let me assure you that I have no such interest. I’m only interested in the truth and I couldn’t care less what other people choose to believe.

    I’m going out on a limb here, but I think your aversion to the rationalist position on consciousness/MWI stems from your dislike of libertarian politics. In both cases the proponents take an argument to its logical conclusion. Since libertarian politics is misguided (in your view), there must be something wrong with taking things to their logical conclusions (again, in your view). I would like to know if I’m on the mark here or just shooting blind.

    Anyway I appreciate your thought provoking writings. Though I disagree with the conclusions I still enjoyed both the post and the essay enormously. Keep up the good work.

  140. Scott Says:

    Ray #139: Sorry, but I’m not sure you get to tell me to “keep up the good work” right after comparing me to Deepak Chopra. 😉

    Your “reductio ad absurdum” of my position is a misreading of what I said. Let me just quote what I said in comment #53 (I said similar things in GIQTM):

      For example, the view I’m exploring doesn’t assert that, if you make a perfect copy of an AI bot, then your act of copying causes the original to be unconscious. Rather, it says that the fact that you could (consistent with the laws of physics) perfectly copy the bot’s state and thereafter predict all its behavior, is an empirical clue that the bot isn’t conscious—even before you make a copy, and even if you never make a copy.

    Also, your idea that I’m not a hardcore MWI/computationalism believer because I’m not a libertarian, or because I don’t like taking anything to its logical conclusion, is almost too silly to deserve a response. But OK, here’s one: if I dislike taking things to their logical conclusions, then why shouldn’t I fail to take this thing to (what you claim I claim is) its logical conclusion, and just dislike libertarianism without also disliking computationalism or MWI? 😀

  141. Ben Standeven Says:

    I didn’t notice this back when I read GiQTM, but: A “clean digital abstraction layer” for a human brain is precisely a computer which passes the Turing test! So the main difference from Penrose’s position is that he explains why, on his view, computers aren’t able to pass the Turing Test.

  142. Scott Says:

    Ben #141: Err, I (unlike Penrose) never denied, anywhere, that a computer could pass the Turing Test. Reread GIQTM and see for yourself!

  143. Ben Standeven Says:

    Ok; I guess the fact that Penrose actually believes his theory is another important difference… I’m not actually sure, even if you do/did believe the theory, that you would need to argue that computers can’t pass the Turing Test; that would only be important for the free-will aspect, which as far as I can tell has nothing to do with the unclonability aspect.

    Oh, I forgot to mention that it seems that Wikipedia has an article on “Knightian probability,” under the name “Imprecise probability” (the article does point out the obvious problem with this name).

  144. wolfgang Says:

    @Ray #139

    >> libertarianism

    Scott actually wrote a blog post about “bullet swallowers”, i.e., libertarians and many-worlders.

    Did you read that previously? If not, it would somehow be confirmation that Scott was onto something then.

    I personally think that both libertarianism and MWI have in common that they are easy to understand and yet radical enough that many people find them attractive.
    But both also have in common that they actually fail to solve the problem(s) they are dealing with – and again both have in common that their proponents just don’t care too much about that failure.

  145. Worldy Says:

    “But both also have in common that they actually fail to solve the problem(s) they are dealing with . . ”

    Probably true, but at least where the MWI is concerned, the problems faced are no worse than for other explanations, which is something in my opinion that can’t be said for libertarianism.

  146. wolfgang Says:

    >> the problems faced are no worse than for other explanations
    If you click on the link that comes with my name you can read my argument against mwi.

  147. Adam M Says:

    Scott #115:
    I’m sorry to keep slowly harping on this question of computation, especially because I realize that you have moved on from the idea of computation as necessary and sufficient (NaS?) for consciousness, and into much more interesting possibilities.

    However, since I’ve gone this far I will continue a little farther, at the risk of boring you and other commenters, because I think that computation’s lack of physical reality is a serious problem for any theory that relies on computation as even a necessary condition of consciousness.

    If a computation exists primarily as a choice of description (per your #102), and it is actually the external behaviour that is primarily responsible for a simulation’s putative consciousness (per your #110), then I think we are entering some weird and unlikely territory:

    If we hook up a human brain simulation to a turtle robot, does the simulation experience turtle consciousness? If we hook it up to, say, a pipe organ, then what kind of consciousness does it have?

    Why do I not lose consciousness when I’m sitting still, with no external behaviour? Or, do we include all of the biological activity within a (motionless) body as behaviour? In that case, then, we couldn’t expect human consciousness in a simulation unless it was attached to a biological human body.

    Does waving a metal arm create as much consciousness as waving a flesh arm? What about behaviour could give it the magical power of bestowing consciousness on the thing that’s controlling it? Does it have to have some consequence in the surrounding world? Why do none of these behaviour-related issues seem to apply to our own consciousness?

    I think all this points to the possibility that computation plays no direct role in consciousness at all. Yes, it is a powerful and effective means of description, and yes, it is incredibly useful for understanding ourselves and the rest of nature, and for creating intelligent machines. But, I think there must be something physically real (not dependent on our interpretation) occurring in our brains that is responsible for the arising of consciousness. Until we figure out what this is, attempting to build a consciousness machine seems like a far-fetched endeavour.

    You’ve mentioned physical isomorphism, but I think if we examine the physical isomorphism between any currently proposed simulation and a living brain it’s pretty sketchy. Maybe you have a processor representing each neuron, so the “connectomes” are isomorphic, but the internal architecture of the processor is not isomorphic to that of a neuron. So we’re still left with a matter of interpretation, where we’d have to assign quasi-magical powers of consciousness-generation to the shape of some network, at an arbitrarily-chosen scale.

  148. JG Says:

    I postulate that:
    Consciousness is a functor. It maps the domain of cause to the codomain of effect through a mapping function of (action/time).

  149. Ben Standeven Says:

    Adam M, #147:

    How is the territory you describe “weird and unlikely”? You could ask most of these questions about a real brain as easily as a simulated one, after all.

    Also, he didn’t say that simulations are conscious because they have identical behavior; he said that we should regard them as conscious because they have identical external behavior (and “equivalent” internal behavior as well).

  150. Ben Standeven Says:

    David C. @90:

    That sounds right; questions about your consciousness are is-questions; but questions about other people’s consciousnesses are ought-questions.

  151. John Eastmond Says:

    I would be interested in what people think about the following argument that quantum computers cannot be conscious observers.

    Imagine that an atom with its spin along the +ve z-axis is sent into a Stern-Gerlach apparatus which splits its wave function into a -ve x-axis component and a +ve x-axis component.

    Imagine that we later combine these two horizontal spin components with a reverse Stern-Gerlach apparatus to reproduce the atom with a vertical spin. We can verify by observation that the atom definitely has a spin along the +ve z-axis.

    Now imagine that we couple two identical quantum computers to the left and right spin components whose unitary evolution is given by U_L and U_R. We then reverse the two computations using U_L^-1 and U_R^-1 and then combine the left and right components so that we get back an atom whose spin is definitely along the +ve z-axis.

    As I understand it QM says that we can make those quantum computers as complex as we like and still end up with an atom with its spin definitely along the +ve z-axis.

    Now imagine that the quantum computers are complex enough and have the right architecture to be conscious. In that case one could say that the left computer “measures” the left-hand spin component and the right computer “measures” the right-hand spin component. I would say that the later reversal of the two calculations should not change the fact that the spin components were observed.

    But then surely we have *two* atoms going into the reverse Stern-Gerlach apparatus; one with a definite left spin and the other with a definite right spin? This of course violates conservation of energy.

    I think quantum mechanics implies that coherent computation, however complex, cannot produce conscious awareness.
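
    For what it’s worth, here is a toy numerical version of the recombination step in Python/NumPy (a single qubit stands in for each arbitrarily complex quantum computer, and the rotation U below is an arbitrary stand-in for U_L = U_R). It only illustrates the premise of the argument, namely that after coupling each computer to its branch and then uncomputing with the inverse unitaries, the recombined atom really is again definitely spin-up along z:

    import numpy as np

    def tensor(*factors):
        """Kronecker product of the given vectors or matrices."""
        out = factors[0]
        for f in factors[1:]:
            out = np.kron(out, f)
        return out

    up = np.array([1, 0], dtype=complex)                          # atom: spin definitely +z
    zero = np.array([1, 0], dtype=complex)                        # each "computer" starts in |0>
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # toy Stern-Gerlach split/recombine
    I = np.eye(2, dtype=complex)
    P0, P1 = np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)

    theta = 0.7                                                   # arbitrary; stands in for U_L = U_R
    U = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]], dtype=complex)

    couple = tensor(P0, U, I) + tensor(P1, I, U)                  # each computer evolves only in "its" branch
    uncouple = couple.conj().T                                    # i.e., apply U_L^-1 and U_R^-1

    psi = tensor(H, I, I) @ tensor(up, zero, zero)                # split the atom into the two branches
    psi = uncouple @ (couple @ psi)                               # run the computations, then reverse them
    psi = tensor(H, I, I) @ psi                                   # reverse Stern-Gerlach recombination

    M = psi.reshape(2, 4)                                         # atom index first; trace out the computers
    print(np.round(M @ M.conj().T, 10))                           # [[1, 0], [0, 0]]: definitely +z again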

  152. JG Says:

    John #151: lovely thought experiment, but you still have failed to provide a definition of ‘conscious awareness’; therefore your implication that computation cannot produce it seems hollow.

  153. fred Says:

    How about this thought-experiment:

    Take a conscious subject, and “slice” its/his/her brain along a particular plane, putting each neuron into one part or the other (only the axons would be cut).
    The brain is separated into two groups (by “brain” we mean whatever is the seat of high-level thought):

    (I model each axon as unidirectional, so we have some that carry information left->right and some that carry information right->left)

    [left group]—-axon—>x cut x—axon–>[right group]
    [left group]<—axon—-x cut x<—axon—[right group]

    (repeated for all axons along the slicing plane)

    The groups don’t have to be of equal sizes (could just be a handful of neurons in one), but ideally we would like them of equal size.

    Whenever an axon is cut, we connect each half of it to the equivalent of an analog/digital converter (to both read data and stimulate the axon).

    Then a series of experiments:
    1) we directly (and quickly!) re-connect each half of the sliced axons with its opposite half, in what looks like a “no-op” analog-digital-analog conversion:

    [left-group]—axon—->[A/D]===>[D/A]–axon–>[right-group]
    [left-group]<–axon–[A/D]<===[D/A]<–axon–[right-group]

    (this is repeated for all the axons)

    Can distance (the physical separation between the two brain parts) be increased arbitrarily? (As long as the effect of the speed of light on the propagation of the digital signals is limited.)

    Does this “no-op” conversion have any effect on consciousness?

    2) if we imagine that we could somehow simulate an arbitrary group of neurons to an extremely great precision, using a classical computer, in real-time, then we could insert a computer in the chain.

    Here the computer simulates the left group only (but there is a corresponding system for the right group):

    [left-simulation]————–>x
    [left-group]–axon–>[A/D]—->[D/A]–axon–>[right-group]

    The simulated signal can be compared to the direct signal to make sure the simulation works as expected.
    If the signals match (say, at 99.999999% for a few minutes), then we could swap the direct digital signal for the simulated one:

    [left-simulation]———–>[D/A]–axon->[right-group]
    [left-group]–axon–>[A/D]->x

    (same with right group)

    Does this swapping have any effect on consciousness?

    The full “organic” brain has been replaced with two “half-organic, half-artificial” brains, each in the same state:

    [left-simulation]<–signals1–>[right-group]

    +

    [left-group]<–signals2–>[right-simulation]

    The signals between the two halves of each brain are the same (signals1 == signals2).

    3) as we wait longer and longer, the simulation becomes imperfect, and each brain group falls out of synch with the other one.
    The signals between the two halves of each brain have drifted apart (signals1 != signals2).
    How many “units” of consciousness do we have here?

  154. John Eastmond Says:

    JG #152

    One could define the conscious awareness that is assumed in my argument #151 as the same subjective experience of distinct mental phenomena that humans have when they act as observers. Thus the quantum computers are assumed to fully substitute for human observers.

  155. anon Says:

    Hey Scott, I wonder if you have thought of enabling comment threading in WordPress? We have lots of cross-referencing of comment numbers going on here.

  156. Luke A Somers Says:

    John Eastmond @ 151:

    “I would be interested in what people think about the following argument that quantum computers cannot be conscious observers.”

    All you have shown is that quantum computers executing code which involves reversing course and undoing their operations will not be conscious observers across the whole time period involved.

    You could say they were never conscious observers, or you could say that the part where they reversed course was an operation being executed upon their state which was not part of a consciousness process. Just like when Alice chokes on the waste ‘heat’ in Alice’s Room, she loses consciousness (at best) or dies (worse).

  157. Tony Says:

    If we could simulate (some aspects of) evolution, shouldn’t it bring us closer to simulating (some levels of) consciousness?

  158. Jay Says:

    Scott #88 [hard to keep the beat…]

    I don’t understand the logic by which outcome #3 is the most likely. Let’s accept for a moment that AI-Searle, AI-Penrose, etc. would say “Fuck me, I’m not the conscious being I was pretending to be.” Why would AI-Dennett, AI-Turing, etc. change their own opinion? They already knew they were copyable in principle; they would just have learned that it’s now technologically feasible. And what about AI-Aaronson? Why do you think it likely that he/it will accept that it/he’s not truly conscious, when your (probably biological) version thinks he would say “you’ve now shown me (…) just how radically mistaken I can be about my own mind”?

  159. Scott Says:

    Jay #158: My central contention, if you like, is that real, actual copying of the complete state of your mind would be nothing at all like agreeing intellectually that copying should be possible, and then going back to your life the way it was before. So for example, you could then perform any task whatsoever by delegating it to your copy. Other people would also know this, and for that reason, “would no longer have any need for the original you.” You could jump off a bridge just to see what it was like, “restoring yourself from backup” in the likely event that you die. If you played the role of the Chooser in Newcomb’s Paradox, you would see that every time you chose both boxes, one of the boxes ended up being empty, while every time you chose one box it had a million dollars—something that, if it happened today, we would call supernatural, or proof of backwards-in-time causal influences. I can try to make up what “I” (to whatever extent it still makes sense to use that pronoun) would say or feel in such circumstances, in a Woody Allen sort of way, but honestly I don’t know, and neither do you. The one thing I feel strongly about is that people in these discussions shouldn’t understate the world-changing implications of copyability, as a consequence of not having fully thought it through.

  160. Jay Says:

    #99 re #74

    1) flips a fair coin (…)

    If God gives $1 to every successful copy, bet 100/101. If she splits the gain among the copies, bet 1/2. Do you know of any example for which confusion remains once who gains what is clearly specified?

    2) the probability that you’ll see Y tomorrow, given that you saw X today? (…) 3/5|X>+4/5|Y>

    I don’t understand the problem. If you saw X today, then how could you treat yourself as being in a superposition? It seems that, as with Bostrom’s examples, the confusion just arises because of faulty formulation. The first part indicates we take the point of view of the one who sees X; the second part indicates we take the point of view of someone who sees you seeing X or Y. One idea is that maybe you measured yourself using some basis and now you know that your state is 3/5|X>+4/5|Y>. But then where is the problem?

  161. Jay Says:

    RE#159

    So you’re really addressing the “ought” question, namely you don’t like the implication of what seems the best picture we have of the mind, even if you agree that’s the best picture. I prefer the “is” question, but also don’t think this picture is as bad for the “ought”. If you’re a Chooser in some version of Newcomb’s paradox, you can always restore your Knightian freedom by selecting a strategy that depends on what the Predictor will do. If the Predictor never communicates with you, then is as real as any multiverse extension we can imagine. If he communicates with you, then he gives you a place to stand that can move the earth. There’s even a theorem about that, isn’t there?

  162. Scott Says:

    Jay #160:

    1) No, the gain isn’t split among the copies, but why should that determine a clear answer? Assume you’re not an “altruist”: you don’t care about maximizing the gain of any copies of yourself; you only care about maximizing your own gain. Then the question is, why should you consider yourself 100 times more likely to be in the case with 100 copies, given that “where you are” is still the outcome of a fair coin flip? Also, suppose the 100 copies differed in some trivial way, like having different hair colors, and you knew your hair color. Would you still be 100 times more likely to be in the case with 100 copies? Also, suppose that if the coin landed heads, God would make an infinite number of copies of you, while She’d only make one if it landed tails. Could you then be certain, without ever leaving your armchair, that the coin must have landed heads?

    2) The problem arises because, by assumption, a state like 3/5|X>+4/5|Y> is a superposition of you seeing X and you seeing Y. And if MWI—i.e., the thing that tells us to talk about such states in the first place—is accepted, then you must be willing to describe such a state in terms of “two copies of yourself,” one who sees X and the other who sees Y. And, provided you agree that the Born rule works, it must make sense to say that with probability (3/5)², “you’ll” find “yourself” as the copy who sees X, and with probability (4/5)², “you’ll” find “yourself” as the copy who sees Y. But then you run into the difficulty I described.
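
    (Spelling out the arithmetic for that state as a tiny exact Python check: the Born weights are the squared amplitudes 9/25 and 16/25, which sum to 1.)

    from fractions import Fraction

    amps = [Fraction(3, 5), Fraction(4, 5)]      # amplitudes for |X> and |Y>
    probs = [a**2 for a in amps]
    print(probs, sum(probs))                     # [Fraction(9, 25), Fraction(16, 25)] 1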

  163. Scott Says:

    Jay #161: “If the Predictor never communicates with you, then is as real as any multiverse extension we can imagine.” I don’t understand that sentence—you’ll need to clarify that.

  164. Jay Says:

    #162 #1

    > Assume you’re not an “altruist”

    My own gain, not that of the copies? Oh, then I’m not a copy. I bet 1/2, and the copies should bet 1. Oh, but I don’t know whether I’m one of the copies? Then you can’t say their gain is not mine: 101/100. No hard problem here. Just make the question precise and it dissolves.

    > suppose that if the coin landed heads, God would make an infinite number of copies of you

    I guess it depends on AC. 🙂

    #162 #2

    Sorry, still don’t get the problem. Probabilities conditioned on our having seen X are straightforward. Probabilities conditioned on our having seen X and some alien having quantum-reversed our measurement are not straightforward (until we know more about the alien), but that’s just not the same event.

    #163

    Sorry! A predictor with whom we can communicate cannot remain a perfect predictor. A predictor with whom we can’t communicate should have no consequence on “ought” questions.

  165. Jay Says:

    #164 re #162#1: 101/100 would be good indeed, but what I meant was 100/101…

  166. Scott Says:

    Jay #164: No, it doesn’t depend on AC, since we only need a countable number of copies. I ask again: if there’s one copy of you if the coin lands heads, and infinitely many copies if it lands tails, are you then certain—willing to bet your life—that the coin landed tails? Because that seems like the logical implication of your 100/101 answer.

    And sorry, the thing about “a predictor with whom we can communicate can’t remain a perfect predictor” is constantly repeated, but seems completely wrong to me. Why can’t the predictor simply seal its prediction in an envelope, only opening the envelope after you did the thing it predicted you would do? (Or send you its prediction in encrypted form, only decrypting it later?)
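
    To be concrete about the sealed envelope, here is a minimal sketch of one standard way to do it in software, a hash-based commitment (my illustration of the idea, not something the Newcomb setup itself specifies): the predictor publishes the commitment before you act and opens it afterwards, so you can verify the prediction was fixed in advance while learning nothing about it beforehand.

    import hashlib, os

    def commit(prediction: str):
        """Return (commitment, opening): publish the commitment now, keep the opening sealed."""
        nonce = os.urandom(32)
        opening = nonce + prediction.encode()
        return hashlib.sha256(opening).digest(), opening

    def verify(commitment: bytes, opening: bytes) -> str:
        """Check that the opening matches the earlier commitment and recover the prediction."""
        assert hashlib.sha256(opening).digest() == commitment, "the prediction was changed!"
        return opening[32:].decode()

    c, sealed = commit("you will take only one box")   # the predictor commits before you choose
    # ... you make your choice, and only then is the envelope opened:
    print(verify(c, sealed))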

  167. fred Says:

    Scott #166

    How is this different from this sort of question:

    Say that you were isolated since birth and therefore have no idea how big the earth population is.
    And you’re told that you have a rare genetic defect (e.g. you’re born with 6 fingers), which, you’re told, only happens with probability 0.0000000001… can you deduce, on your own, something about the current population of the earth (i.e. the current number of births a day)?

  168. Scott Says:

    fred #167: Yes, of course it’s related to other anthropic observer-counting puzzles. But as I explained in comment #135, it seems qualitatively worse, because if the other copies are really subjectively identical to you (or differ only in some way that’s designed to be trivial), then you don’t have the option of excluding them from your reference class.

  169. Jay Says:

    #166

    >No, it doesn’t depend on AC,

    Actually, that was a joke, but the truth is I have no idea how to treat the infinite case. Would you mind if we stick to a finite number? As large as you wish, but finite, please.

    No, the way I’ll select my guess is unchanged whether it’s one googol or one hundred, conditioned on there being zero doubt about the terms of the bet. I also have little doubt that, for any bet where the problem seems anthropic, you can find another version with no anthropic problem at all.

    Example? World population: one hundred, all different human beings. God will flip a coin and ask you to bet. If you’re wrong, everyone’s dead. If it’s heads and you can guess it, he will spare you and you only. If it’s tails and you can guess it, he will spare everyone. See, that’s the very same bet?

    >Why can’t the predictor simply seal its prediction

    Then it’s not interacting, and a predictor with whom we can’t communicate should have no bearing on “ought” questions.

    >Or send you its prediction in encrypted form, only decrypting it later?

    Better! Because you could choose to act based on this encrypted message, even if you can’t decipher it. Then, there is no way the predictor can guarantee a consistent story where your action based on the encrypted form leads to the deciphered content of this message.

  170. Nick M Says:

    Scott …

    “The intermediate position that I’d like to explore says the following. Yes, consciousness is a property of any suitably-organized chunk of matter. But, in addition to performing complex computations, or passing the Turing Test, or other information-theoretic conditions that I don’t know (and don’t claim to know), there’s at least one crucial further thing that a chunk of matter has to do before we should consider it conscious. Namely, it has to participate fully in the Arrow of Time. More specifically, it has to produce irreversible decoherence as an intrinsic part of its operation. It has to be continually taking microscopic fluctuations, and irreversibly amplifying them into stable, copyable, macroscopic classical records.”

    Does or can this occur in anything other than nonequilibrium systems?

  171. Scott Says:

    Nick #170: No, it can only occur in nonequilibrium systems—that’s part of the content of this position.

  172. Scott Says:

    Jay #169: Sorry, I still don’t understand your position, but at least it leads to an interesting technical question (see below).

    OK, I’m fine if you want to restrict to finite numbers. Again I ask: if there’s only one copy of you if the coin lands heads, or a trillion copies of you if the coin lands tails, are you essentially certain that the coin landed tails?

    Let’s not make this about betting money, or about equivalence or inequivalence to any other scenario you might name, or about any other issue. It’s just a simple question about what you expect to see.

    Regarding the predictor: let’s say that it unseals the envelope the very second after you make your decision, showing it knew exactly what you would decide. And let’s say it does this again and again, 50,000 times, every day of your life. It’s not obvious to me that this wouldn’t change your entire conception of yourself in bizarre and profound ways.

    Now, for the interesting technical question: you say that even if you couldn’t decode an encrypted message predicting what you would do, you could still do something that depended on it, and thereby foil the predictor’s prediction. However, I conjecture that in this scenario, because you can’t decode the message, it’s always possible for the predictor (in some appropriate sense) to find “fixed-points”: that is, predictions it can make for your behavior that remain valid, even allowing that you can base your behavior on an encrypted version of the prediction. Let me think about it later and then maybe write another comment.

  173. fred Says:

    Scott #168

    “because if the other copies are really subjectively identical to you (or differ only in some way that’s designed to be trivial), then[…]”

    But this entire discussion is about the impossibility of answering “are these two things subjectively identical?”, no?

    There seems to be also an assumption that a consciousness is somehow “tied” to a particular “cone” of space/time/matter.

    But maybe a “unit” of consciousness is tied to a particular mathematical structure (call it “state”, “pattern”, “information”…), which itself can be realized by many cones of space/time/matter (a consciousness also includes to a large extent the state of the surrounding environment).
    So, cloning a brain perfectly wouldn’t “create” an additional consciousness.
    In that sense, a consciousness is more like the concept of an algorithm, which can be realized in many different ways.

    As I was rambling about in a previous post, if the brains are true clones of each other (which could be achievable with programs that simulate brains and their environment, which we could then duplicate at will), then it really doesn’t matter whether there are 1, 2, or a billion of them in the same state. They all realize one subjective experience.
    All this would suggest that subjective experience is mathematical in nature, which is what you are trying to avoid.

  174. fred Says:

    Scott #172

    “And let’s say it does this again and again, 50,000 times, every day of your life. It’s not obvious to me that this wouldn’t change your entire conception of yourself in bizarre and profound ways.”

    Hmm, maybe that’s the modern equivalent of putting a caveman in front of a perfect mirror, then 10 of them, then 1,000 of them? 🙂

  175. Jay Says:

    Scott #172

    For the technical point, here’s what I’ll do. If the encrypted message forms a prime number, I’ll read the bits out loud. If it doesn’t, I’ll read them out in reverse order. Then I’ll encrypt this message, and again read the bits out if it’s a prime, or in reverse order if it’s not. And again and again…

    Intuitively it’s impossible for the predictor to find a fixed point. If you can prove it, or break that conclusion, even probabilistically, I’m very interested!

  176. fred Says:

    Scott #172
    I’m noticing that you always strongly articulate your theories on consciousness/free will around the constraint of preserving the idea that we’re all unique beautiful un-cloneable snowflakes.

    Maybe this preoccupation is more a reflection of your (well-deserved) status in society as an exceptional individual, as someone who will be leaving explicit, recognized achievements long after you’re gone 🙂
    For someone like myself, one of the very many average/unremarkable/fungible individuals, the concept of clonability isn’t that disturbing, on the contrary, it’s almost comforting somehow.

  177. John Eastmond Says:

    Luke #156

    I would say that *if* the quantum computers in comment #151, U_L and U_R, experience conscious awareness then we should treat them as “observers” so that both the spin components of the atom are now “real”. Subsequently reversing the quantum calculations should not change the reality of the observed spin components. My argument does not make any assumptions about whether the reverse computations are conscious or not.

  178. Scott Says:

    Jay #175: OK, I thought some more about the issue of fixed-points—i.e., can a predictor still predict what you’re going to do, even if you first get access to an encrypted version of the prediction? I believe the solution is that it all depends on just how much the predictor is trying to predict about your future behavior.

    To illustrate, let’s first suppose the predictor is trying to predict your answer to a single yes/no question: that is, a single bit x∈{0,1}. The predictor gives you an encrypted version Enc(x,r) (where r∈{0,1}^n is a string of random “padding” bits, unrelated to x), using some secure digital commitment scheme, which we’ll assume to exist. Then you apply some function f (which we’ll assume the predictor, being a predictor, knows), and you output f(Enc(x,r))∈{0,1}. The question is whether the predictor can find a pair (x,r) such that

    x = f(Enc(x,r)).

    In this case, I claim that the answer is clearly yes: not only can the predictor find such an (x,r), but it can do so easily. For if such a pair were hard to find, then it would almost always need to be the case that

    f(Enc(x,r)) = 1-x.

    But then taking 1-f(Enc(x,r)) would be an easy way to break the commitment scheme, contradicting the assumption of its security.

    The above argument can be generalized to where f(Enc(x,r)), the aspect of your behavior that the predictor is trying to predict, takes values not just in {0,1}, but in {1,…,M}, for any M=O(poly(n)). (In other words, to where the predictor is trying to predict any O(log(n)) bits about your behavior.) For then you’d be able to learn x, not with certainty, but with probability 1/poly(n) above random guessing, which would still violate the security assumption. Furthermore, in this case the predictor could still find a fixed-point efficiently, by simply trying a bunch of random (x,r)’s until it found one for which f(Enc(x,r))=x happened to be satisfied.
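
    To make the single-bit case concrete, here is a toy sketch in Python (my own illustration: the SHA-256-based “commitment” and the particular f are arbitrary stand-ins, not part of the argument). The predictor just samples random pairs (x,r) until it hits one with f(Enc(x,r))=x, which by the reasoning above should only take a handful of tries for any f that doesn’t break the commitment:

        import hashlib, os

        def Enc(x, r):
            # Toy stand-in for a commitment to the bit x with random padding r.
            return hashlib.sha256(bytes([x]) + r).digest()

        def find_fixed_point(f, trials=10000):
            # The predictor's search: try random (x, r) until f(Enc(x, r)) == x.
            for _ in range(trials):
                for x in (0, 1):
                    r = os.urandom(16)
                    if f(Enc(x, r)) == x:
                        return x, r
            return None

        # Example "behavior": you output the low bit of the first byte of the
        # encrypted prediction; about half of all (x, r) pairs are then fixed points.
        f = lambda c: c[0] & 1
        print(find_fixed_point(f))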

    If you’re willing to make a very strong security assumption—e.g., that it’s hard even to learn x with probability 1/c^n above random guessing, for some constant c>1—then you can even get that, for some constant β>0, the predictor can find a fixed-point allowing it to predict βn bits about your behavior. (In this case, however, finding the fixed-point might be a computationally intractable problem for the predictor.)

    By contrast, if the predictor has to predict an unlimited amount of information about your future behavior, then there’s a sense in which it’s trivially impossible for the predictor to succeed, if it first has to give you an encrypted version of its prediction. (And maybe that’s what you were trying to get at in your comment; I’m not sure.)

    To see this: suppose that, if the predictor gives you a k-bit encrypted prediction, then you respond by outputting k+1 bits. There’s no encryption scheme with unique decoding that can map a (k+1)-bit plaintext to a k-bit ciphertext; ergo, the equation x = f(Enc(x,r)) can’t possibly be satisfied.

    Even here, however, there’s a possible loophole. Namely, suppose the prediction consisted not of x itself, but of some compressed representation c(x) of x—say, a computer program that outputs x when run. The fixed-point equation would then be

    x = f(Enc(c(x),r)).

    Now can the predictor find an x that satisfies the equation? Intuitively, it seems difficult, since it seems like you could always choose f so that f(Enc(c(x),r)) had Kolmogorov complexity a little bit greater than K(x), the Kolmogorov complexity of x itself. (Either that, or else you could invert the encryption function Enc.) But maybe something is possible here.

    Even if not, as I said, everything works fine as long as the predictor is trying to predict a number of bits about your behavior that’s sufficiently less than the number of bits in the encrypted message that it sends you. And of course, the predictor could then repeat this feat over and over, in order to make you arbitrarily unsettled about being the locus of your choices.

    (Final note: even if the predictor skips encryption entirely, and just puts its prediction in an envelope, there’s still a slightly-analogous issue that arises! Namely, if you can see the envelope, know how large it is, and accordingly have an upper bound on the number of bits in it, then you could try to do something that requires more than that number of bits to describe. But again, since a single envelope can store a very large number of bits, this doesn’t seem like a serious problem in practice.)

  179. Scott Says:

    Addendum to comment #178: OK, I thought more about the case where the predictor is allowed to issue its prediction in the form of a computer program that outputs your behavior when run. And in this case, I claim, perhaps counterintuitively, that the predictor can always easily find a fixed point that lets it predict your behavior—regardless of how its program is encrypted before being handed to you, and indeed, even if its program is given to you completely in the clear!

    Why? The Recursion Theorem.

    Let Enc be the encryption function used by the predictor, if any (I’ll hardwire any random bits r as part of Enc), and let f be the function you apply to generate your behavior. Then our problem is to find a computer program M that satisfies

    f(Enc(<M>)) = M(),

    where <M> is the code of M and M() is its output. But the Recursion Theorem ensures that we can always find an M that satisfies this equation.

    More explicitly, M’s pseudocode will be as follows:

    Output f(Enc(S)), where S is the following string repeated twice, the second time in quotes.
    “Output f(Enc(S)), where S is the following string repeated twice, the second time in quotes.”
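
    For concreteness, here is one way the pseudocode above might be fleshed out as a runnable Python sketch (my own; Enc and f are arbitrary stand-ins, since any computable choices would do). Inside M, the string source reproduces M’s own definition character for character, so M’s output really is f applied to the committed/encrypted code of M:

        def Enc(s):
            # Arbitrary stand-in for the encryption/commitment applied to M's code.
            return s[::-1]

        def f(ciphertext):
            # Arbitrary stand-in for the function you apply to the encrypted prediction.
            return "prediction derived from %d committed characters" % len(ciphertext)

        def M():
            data = 'def M():\n    data = {!r}\n    source = data.format(data)\n    return f(Enc(source))\n'
            source = data.format(data)
            return f(Enc(source))

        # Inside M, 'source' is exactly the text of M's definition, so M's output
        # really is f(Enc(<M>)).
        print(M())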

  180. Jay Says:

    #172

    “still don’t understand your position”

    Sorry! My position is: specify clearly what we should maximize, and all the confusion drops away.

    “Again I ask: if there’s only one copy of you if the coin lands heads, or a trillion copies of you if the coin lands tails, are you essentially certain that the coin landed tails?”

    Yes, assuming we count each copy as another me. No, assuming we count all copies as a single me.

    “It’s not obvious to me that this wouldn’t change your entire conception of yourself in bizarre and profound ways.”

    It would certainly change my conception of the universe I live in; otherwise, no, it would just confirm one of my preferred working hypotheses about the mind.

    #178

    How interesting, thanks!

    -Things I understand:

    “suppose the predictor is trying to predict your answer to a single yes/no question (…) In this case, I claim that the answer is clearly yes”
    “The above argument can be generalized to where (…) the predictor is trying to predict any O(log(n)) bits about your behavior”
    “if the predictor has to predict an unlimited amount of [uncompressed] information (…) it’s trivially impossible for the predictor to succeed”
    “even if the predictor skips encryption entirely, and just puts its prediction in an envelope, there’s still a slightly-analogous issue that arises!”

    -Things I disagree with:

    “maybe [the trivial case]’s what you were trying to get at”

    No, that wouldn’t be fair or interesting.

    “the predictor could then repeat this feat over and over, in order to make you arbitrarily unsettled about being the locus of your choices.”

    No, the predictor would just prove that *a very tiny part* of my behavior is predictable. Guessing log(n) of n things I’ll do, my wife can do better! That said, I’m not sure I’d remain unimpressed if the predictor could choose which log(n) bits he will be able to predict. Can he?

    -Things I just don’t understand:

    “[if] it’s hard even to learn x (…) the predictor can find a fixed-point (…) finding the fixed-point might be a computationally intractable problem for the predictor”

    I read this sentence as “The predictor can find a fixed-point that might be intractable for the predictor”. I just don’t get it.

    ” the Recursion Theorem ensures that (…) the predictor can always easily find a fixed point (…) even if its program is given to you completely in the clear!”

    Really? I need to add this theorem to my pile of things to learn, but… if the program were in the clear, then my strategy would be: say 1 if the program predicts I’ll say 0, say 0 if the program predicts I’ll say 1. How could we have a fixed point?

  181. Scott Says:

    Jay #180: Aha! The Recursion Theorem is easy to prove—indeed, I already proved it for you at the end of comment #179. But it really does say that fixed-points always exist: in other words, you can always assume without loss of generality that any computer program has access to its own source code. Or more precisely, given any equation of the form

    M’s output = f(M’s source code),

    for any computable function f, you can always find a program M that satisfies the equation (namely, the one I showed at the end of #179).

    As for your question—why can’t you just output 1 if the program predicts you’ll output 0, and vice versa?—the basic answer is that you can’t simulate the program without creating an infinite loop, because part of what the program does is to simulate your behavior. Or to put it differently: if your own internal program told you to run the program that was given to you, you would presumably run it for a while, see that it still hadn’t halted, then give up and output some guess of the opposite of what the program predicted you would do. But it would inevitably turn out that, had you run the program all the way to the end, it would simulate you doing all of that—including your trying to confound its prediction!—and would end up with the correct prediction for your output.
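
    In case a toy version helps, here is a bare-bones sketch of that regress (my own illustration, with the predictor’s “simulation of you” collapsed into a plain function call): your strategy is to run the prediction program and do the opposite, the prediction program simulates you in turn, and the nested simulations never bottom out, so at some depth you have to give up and guess.

        def prediction_program():
            # The predictor's program works by simulating you.
            return you()

        def you():
            # Your strategy: simulate the prediction program, then do the opposite.
            try:
                return 1 - prediction_program()
            except RecursionError:
                # The simulation-within-a-simulation never bottoms out; at some
                # depth you have to give up and just guess.
                return 0

        print(you())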

    Thank you for this; I had never before noticed the relevance of the Recursion Theorem to free will. (Maybe I’ll turn it into a blog post…)

  182. Scott Says:

    Jay #180: To address some of your other questions:

    Yes, with the “cryptographic” fixed-point, it’s crucial that the small number of bits we choose to predict can be any bits about your behavior. That is, we can pick any yes-or-no question we want (“will you move across the world to be with the love of your life, or not?”) and send you a cryptographic commitment to your answer to that question, such that the prediction will remain valid even after you receive the message. Of course your decision might, in general, depend on details of the encrypted message, but we’ll have ensured beforehand that we found a fixed-point.

    In the case where we’re trying to predict a linear number of bits about your behavior (say, βn for some 0<β<1), I can say that at least one solution to the fixed-point equation will almost certainly exist; what I don’t know is a polynomial-time algorithm for the predictor to find such a solution (I only know how to find it using exhaustive search). Sorry if that was unclear.

  183. fred Says:

    Scott #181
    Isn’t this a result of the more basic observation that you can’t simulate a closed system from the inside? The simulation would have to take its own state into account, leading to potential infinite recursion?

  184. Scott Says:

    fred #183: No, it goes deeper than that.

    Even if you somehow knew that “you can’t simulate a closed system from the inside” (whatever exactly that means), that would still leave open two possibilities:

    (1) Maybe it’s impossible to hand someone a computer program that predicts their own future behavior (even assuming you have a computable description of the person), because if you did, then they could simulate the program and do the opposite of whatever it did.

    (2) Alternatively, maybe it is possible to hand such a person such a program, because if you did, then they couldn’t simulate it without going into an infinite loop.

    Both of these options sound pretty plausible (and consistent with the intuition that “you can’t simulate a closed system from the inside”), yet at most one of them can be correct. The Recursion Theorem tells us that (2) is correct and (1) is incorrect.

  185. fred Says:

    Scott #184,
    Oh, I see: because in (2), if the user tries to simulate the program in order to base his behavior on it, then the system simulated by the program becomes “person+simulation(person)” (not just the person alone), which is a system simulating itself.
    And when you say “infinite loop” it means that it would require infinite resources (memory and time).

  186. fred Says:

    By “system simulating itself” I mean a program simulating a closed system whose state depends on the output of the program (so the program is itself “inside” the system).
    So the program has to take its own state into account in the simulation, which is an infinite recursion, which would require an unlimited amount of resources (like memory), which goes against the notion that the simulated system is “closed” (as in finite).

  187. Michael Gogins Says:

    (1) resembles MacKay’s “logical indeterminacy of a free choice,” and (2) resembles Sartre’s critique of the transcendental ego.

    Then does (2) not also imply that one’s consciousness of being conscious cannot result from the simulation of consciousness?

  188. fred Says:

    #188
    Maybe I’m missing something but I don’t see how it’s saying anything more than:
    f simulates system g:
    f() = g()
    g tries to use f to “break” the simulation:
    g() = k(!f())

    it’s an infinite recursion, like the typical paradox “this statement is false” (which in systems theory can be “solved” by considering a solution state that keeps flipping between true and false, like an oscillator).

  189. Scott Says:

    fred: OK, I’ll try one more time. The whole bit about “if you tried to run the prediction program and do the opposite, you’d get into an infinite loop” is just some verbiage, meant to help you understand how the Recursion Theorem could be true. But it still doesn’t show that it is true!! I.e., even after we satisfy ourselves that the Recursion Theorem isn’t false for a completely trivial reason, we still need to prove that there’s actually a solution to the fixed-point equation

    M’s output = f(M’s code),

    for any computable function f. And ideally, we should explicitly describe that M in terms of f. That’s what the theorem does.

  190. fred Says:

    #189, thanks Scott,
    this is more evidence that I need (and want) to properly study computability theory – the wiki on Kleene’s recursion theorem takes me down a rabbit hole of very abstract concepts like “partial recursive functions”, “T predicates”, etc.
    (and this seems relevant to my current interest in functional programming – I’m reading “The Structure and Interpretation of Computer Programs”).

  191. Jay Says:

    Scott #181

    >Aha!

    Indeed! I was wrong on this point and I’m very glad I could understand why. Thank you thank you thank you!

    There’s one interesting line of retreat. Not sure I wish to adopt it, but at least it’s amusing:

    “Yes, the predictor can give me my code, but that’s not enough. He needs to provide the code plus, say, a hash of the result after he ran the code himself. And then, he would not prove he can predict me. He would prove he can predict me *the second time*. The first time, I was the sim, as real as any later me, by definition. What matters for my “ought” is that you can’t predict what I’ll do before I decide it, no matter if I first made my decision in the present matrix.”

  192. Jay Says:

    Scott #182

    Thx, that’s clear too. OK, now I get that we can choose the bits at will; log(n) bits in reasonable time seems enough for any moral statement we may think of.

  193. Joseph Brisendine Says:

    Hi Scott,
    First, thank you so much for posting, and after reading this post I went ahead and read GIQTM, and thank you even more for that!
    Regarding the “open neuroscience questions” related to the freebit proposal, you mention that neuroscientists are in disagreement over the role of fluctuations in determining brain states at the level of neurons firing. I’m a biophysicist who studies electron transfer mechanisms in proteins, and it occurs to me that there is a large literature on this subject that might interest you and your readers if you aren’t already familiar. In particular, charge-transfer mechanisms in DNA that involve thermal fluctuations inducing transient electronic degeneracy in nucleotides can cause signalling events like the binding of a transcription factor which will in turn make large-scale changes to the metabolic activity of the cell. As far as I can tell, this is literally a biochemical example of “amplifying a microscopic fluctuation.” The Beratan group at Duke has done a lot of good theoretical work on this issue, and have recently proposed a model they call “flickering resonance” that describes how fluctuations in macromolecules can be used to gate charge-transfer events that occur through transient intermediates, which a cell can in turn use to initiate signalling events with macroscopic consequences.

  194. Is A Simulated Brain Conscious? - Snap VRS Blog Directory Says:

    […] Dr. Scott Aaronson, a theoretical computer scientist at MIT and author of the blog Shtetl-Optimized, is one of several researchers and philosophers (and cartoonists) who’ve made a practice of dealing with these ethical sci-fi questions. While some scientists concern themselves mainly with data, these authors perform thought experiments that frequently reference space aliens, androids, and also the Divine. (Aaronson is also quick to point out the highly speculative character of the work.) […]

  195. AdamT Says:

    Hi Scott,

    I am still trying to digest this, but I had one thing I wanted to add right away… You mention in your talk the principle of a lump of matter having a ‘clean digital abstraction layer’ as one characteristic distinguishing Geiger counters that turn microfacts into macrofacts from conscious entities like ourselves. You do this in order to distinguish these systems so as not to encounter ethical questions about ‘killing’ Geiger counters 🙂 LOL

    But, I will point out, even if it turns out to be true that lumps of matter that we right now regard as ‘conscious’ (such as me or you) ALSO have ‘clean digital abstraction layers,’ I don’t think your argument falls apart. That is because the ethics of killing concerns whether a given lump of matter can suffer. I don’t think a Geiger counter, even were it to be conscious in the sense of fully participating in the arrow of time, could feel pain.

    Another way of saying this is that while “fully participating in the arrow of time” might be a necessary condition for a lump of matter to be conscious, I don’t think it follows that it is sufficient for such a lump of matter to be capable of feeling pain… and therefore not necessarily worthy of being ethically worried about.

    I think another interesting, or ‘Pretty Hard,’ problem consists of asking what kinds of physical systems are capable of feeling pain or suffering. Consciousness would be a necessary prerequisite feature of such a system, but I doubt that it is sufficient. This is where I think ideas from Buddhism could help inform what such a system might look like, or the principles on which it is based.

  196. Luke A Somers Says:

    John Eastmond @ 177: Ah, right. In that case, I completely agree. They’re both real. If they proceed to die, that doesn’t mean they never existed.

  197. nomorefoaming Says:

    I like this. On Tegmark’s Mathematical Universe, I’ve been thinking it is a trick that only works once, at the top level. If our universe is a mathematical structure with at least the four dimensions of space and time, then presumably we exist as subunits along that. Whereas if you just make a lookup table of consciousness it isn’t participating or doing anything along the time dimension.

    Maybe things like those and Boltzmann brains exist way out in the Level IV multiverse, but in our universe mere existence isn’t enough, because our universe already is its own mathematical structure. For something inside it to be conscious, running or not running makes a difference, because that determines its role as a subunit of our structure, turning it from just a set of bits into an operating function.

  198. BLANDCorporatio Says:

    Apologies if this is daft. It’s bound to be a rather trivial objection; forgive me, I’m trying to lumber to some understanding.

    The thing is, human beings are, for the most part, predictable in some sense. I ride the bus and have a good expectation of fellow riders not breaking into a knife fight or an impromptu macarena. That’s a choice, of course; we make ourselves somewhat predictable for etiquette or similar reasons. Every one of us knows (mostly) what to expect of others and they (often) know what to expect in return. I might not know what you will say or do for the entire next day, but given knowledge of past habits and such I could probably come to a good idea, which is why surveillance is useful.

    Probably, for the vast majority of our lives we make decisions that don’t even need nanobots in our brains to predict. Given this, is that one-in-who-knows-what chance of something flipping and cascading into a large effect that significant as a criterion for consciousness or moral standing?

    The sticking point for me here is robustness. It seems like your view will immediately deny consciousness to any ‘well-engineered’ system, one that takes pains to robustify its workings against random photons (whether they carry freebits or not) or small-scale glitches in the particles that make it up. And having consciousness rely on something equivalent to shoddy design is just unsatisfying to me.

    “Wishing doesn’t make it so”: my aesthetic judgement of an idea has no bearing on its validity. However. As you note, there’s no reason to suspect we can ever copy brains exactly. On the other hand, (AFAIK) the brain is fairly resilient against a few random photons of, say, cosmic background radiation bumping into it, and it doesn’t seem to swing that dramatically despite whatever small quantum events might happen in it. Or at least, the right event to cause a bus knife fight hasn’t happened around me yet.

    Given that this is speculation for entertainment, am I allowed to say I don’t find it aesthetically pleasing?

    (I find it very interesting, mind you. It’s novel, it aims for clarity. I just wish it were something else 😛 )

  199. Link Farm and Open Thread: Enacted Rules Edition | Alas, a Blog Says:

    […] Shtetl-Optimized » Blog Archive » “Could a Quantum Computer Have Subjective Experience?” I found this very interesting, although there were parts of it I couldn’t fully follow. When I started reading it I was confused over what the words “decoherence” and “classical” mean in this context, and I found that reading this lecture through the end of the section entitled “Story #1″ clarified those terms enormously. […]

  200. Eric S. Raymond Says:

    “Only for irreversible systems are there moral acts with irreversible consequences”

    Very interesting. I think I was working towards this insight when I wrote this:

    “On being against torture”

    http://esr.ibiblio.org/?p=1742

  201. Decius Says:

    If there are quantum effects that get amplified into macro states in brains, such that identical instances of some thought pattern resolve in a manner similar to observing individual atoms for radioactive decay, is it literally correct to say that the differences between the choices are randomly determined?

  202. Innocent Bystander Says:

    > what if each person on earth simulated one neuron of your brain, by passing pieces of paper around. It took them several years just to simulate a single second of your thought processes

    Do the calculation. Let’s assume the brain performs a few times 10^15 synaptic operations per second, and that each synaptic operation takes one floating-point operation, which takes a human about a minute to calculate. Then it would take all of humanity over a year (given the need to sleep 8 hours a night) to simulate a second’s thinking.

    At that rate, it would take over 30 million years to simulate a year’s thinking.

    This is the sleight of hand behind Searle’s example. An actual person who processed information that slowly would be thought to be unconscious.
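
    For what it’s worth, here is the back-of-the-envelope arithmetic as a short Python sketch. The population, the work schedule, and especially the brain’s synaptic rate are rough assumptions (commonly quoted figures for the rate span a couple of orders of magnitude), so the bottom line is only good to within a factor of ten or so:

        # Rough assumptions, not facts: the population, the work schedule, and the
        # brain's synaptic rate (commonly quoted figures run from ~1e14 to ~1e16
        # synaptic operations per second).
        population = 7e9
        ops_per_person_per_day = 16 * 60              # one hand calculation per minute, 16 waking hours
        humanity_ops_per_day = population * ops_per_person_per_day   # ~6.7e12
        seconds_per_year = 3.15e7

        for brain_ops_per_sec in (1e14, 1e15, 1e16):
            days_per_sim_second = brain_ops_per_sec / humanity_ops_per_day
            years_per_sim_year = days_per_sim_second * seconds_per_year / 365
            print(f"{brain_ops_per_sec:.0e} ops/s: about {days_per_sim_second:,.0f} days per "
                  f"simulated second, {years_per_sim_year:,.0f} years per simulated year")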

  203. Assorted links Says:

    […] Aaronson, “Could a quantum computer have subjective experience?” […]

  204. Romeo Stevens Says:

    I’m amenable to quantum information theory because a complete account of what this science thingy is has to include a theory of what “observation” is and it doesn’t seem like other approaches are attempting to make progress on that front. I’ll take blind groping over not trying at all.

  205. Lifeboat News: The Blog Says:

    […] ), your blog “Could a Quantum Computer Have Subjective Experience?” ( https://scottaaronson-production.mystagingwebsite.com/?p=1951 ), and your book “Quantum Computing since Democritus” ( […]

  206. Shtetl-Optimized » Blog Archive » “Can computers become conscious?”: My reply to Roger Penrose Says:

    […] (including the replies to Penrose) is contained in The Ghost in the Quantum Turing Machine, Could A Quantum Computer Have Subjective Experience? (my talk at IBM T. J. Watson), and Quantum Computing Since Democritus chapters 4 and 11.  See also […]