“Can computers become conscious?”: My reply to Roger Penrose

A few weeks ago, I attended the Seven Pines Symposium on Fundamental Problems in Physics outside Minneapolis, where I had the honor of participating in a panel discussion with Sir Roger Penrose.  The way it worked was, Penrose spoke for a half hour about his ideas about consciousness (Gödel, quantum gravity, microtubules, uncomputability, you know the drill), then I delivered a half-hour “response,” and then there was an hour of questions and discussion from the floor.  Below, I’m sharing the prepared notes for my talk, as well as some very brief recollections about the discussion afterward.  (Sorry, there’s no audio or video.)  I unfortunately don’t have the text or transparencies for Penrose’s talk available to me, but—with one exception, which I touch on in my own talk—his talk very much followed the outlines of his famous books, The Emperor’s New Mind and Shadows of the Mind.

Admittedly, for regular readers of this blog, not much in my own talk will be new either.  Apart from a few new wisecracks, almost all of the material (including the replies to Penrose) is contained in The Ghost in the Quantum Turing Machine, Could A Quantum Computer Have Subjective Experience? (my talk at IBM T. J. Watson), and Quantum Computing Since Democritus chapters 4 and 11.  See also my recent answer on Quora to “What’s your take on John Searle’s Chinese room argument”?

Still, I thought it might be of interest to some readers how I organized this material for the specific, unenviable task of debating the guy who proved that our universe contains spacetime singularities.

The Seven Pines Symposium was the first time I had extended conversations with Penrose (I’d talked to him only briefly before, at the Perimeter Institute).  At age 84, Penrose’s sight is failing him; he eagerly demonstrated the complicated optical equipment he was recently issued by Britain’s National Health Service.  But his mind remains … well, may we all aspire to be a milliPenrose or even a nanoPenrose when we’re 84 years old.  Notably, Penrose’s latest book, Fashion, Faith, and Fantasy in the New Physics of the Universe, is coming out this fall, and one thing he was using his new optical equipment for was to go over the page proofs.

In conversation, Penrose told me about the three courses he took as a student in the 1950s, which would shape his later intellectual preoccupations: one on quantum mechanics (taught by Paul Dirac), one on general relativity (taught by Herman Bondi), and one on mathematical logic (taught by … I want to say Max Newman, the teacher of Alan Turing and later Penrose’s stepfather, but Penrose says here that it was Steen).  Penrose also told me about his student Andrew Hodges, who dropped his research on twistors and quantum gravity for a while to work on some mysterious other project, only to return with his now-classic biography of Turing.

When I expressed skepticism about whether the human brain is really sensitive to the effects of quantum gravity, Penrose quickly corrected me: he thinks a much better phrase is “gravitized quantum mechanics,” since “quantum gravity” encodes the very assumption he rejects, that general relativity merely needs to be “quantized” without quantum mechanics itself changing in the least.  One thing I hadn’t fully appreciated before meeting Penrose is just how wholeheartedly he agrees with Everett that quantum mechanics, as it currently stands, implies Many Worlds.  Penrose differs from Everett only in what conclusion he draws from that.  He says it follows that quantum mechanics has to be modified or completed, since Many Worlds is such an obvious reductio ad absurdum.

In my talk below, I don’t exactly hide where I disagree with Penrose, about Gödel, quantum mechanics, and more.  But I could disagree with him about more points than there are terms in a Goodstein sequence (one of Penrose’s favorite illustrations of Gödelian behavior), and still feel privileged to have spent a few days with one of the most original intellects on earth.
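Since Goodstein sequences appear here only in passing, a minimal sketch of how they're generated may help (function names are mine, purely illustrative): write n in hereditary base-b notation, replace every occurrence of b with b+1, subtract 1, and repeat with the next base. Goodstein's theorem says every such sequence eventually reaches 0, yet Kirby and Paris showed that fact is unprovable in Peano Arithmetic, which is why Penrose likes it as an example of Gödelian behavior.

```python
def bump_base(n, b):
    """Value of n after rewriting it in hereditary base-b notation
    and replacing every occurrence of b with b + 1."""
    if n == 0:
        return 0
    result, exp = 0, 0
    while n > 0:
        digit = n % b
        if digit:
            # exponents are themselves rewritten hereditarily
            result += digit * (b + 1) ** bump_base(exp, b)
        n //= b
        exp += 1
    return result

def goodstein(n, max_steps):
    """First terms of the Goodstein sequence starting at n."""
    seq, b = [n], 2
    for _ in range(max_steps):
        if n == 0:
            break
        n = bump_base(n, b) - 1
        b += 1
        seq.append(n)
    return seq

print(goodstein(3, 10))  # [3, 3, 3, 2, 1, 0]
print(goodstein(4, 3))   # [4, 26, 41, 60] -- already growing fast
```

Starting at 3 the sequence dies quickly, but starting at 4 it climbs for an astronomically long time before (provably, though not in PA) collapsing to 0.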

Thanks so much to Lee Gohlike, Jos Uffink, Philip Stamp, and others at the Seven Pines Symposium for organizing it, for wonderful conversations, and for providing me this opportunity.


“Can Computers Become Conscious?”
Scott Aaronson
Stillwater, Minnesota, May 14, 2016

I should start by explaining that, in the circles where I hang out—computer scientists, software developers, AI and machine learning researchers, etc.—the default answer to the title question would be “obviously yes.”  People would argue:

“Look, clearly we’re machines governed by the laws of physics.  We’re computers made of meat, as Marvin Minsky put it.  That is, unless you believe Penrose and Hameroff’s theory about microtubules being sensitive to gravitized quantum mechanics … but come on!  No one takes that stuff seriously!  In fact, the very outrageousness of their proposal is a sort of backhanded compliment to the computational worldview—as in, look at what they have to do to imagine any semi-coherent alternative to it!”

“But despite being computational machines, we consider ourselves to be conscious.  And what’s done with wetware, there’s no reason to think couldn’t also be done with silicon.  If your neurons were to be replaced one-by-one, by functionally-equivalent silicon chips, is there some magical moment at which your consciousness would be extinguished?  And if a computer passes the Turing test—well, one way to think about the Turing test is that it’s just a plea against discrimination.  We all know it’s monstrous to say, ‘this person seems to have feelings, seems to be eloquently pleading for mercy even, but they have a different skin color, or their nose is a funny shape, so their feelings don’t count.’ So, if it turned out that their brain was made out of semiconductors rather than neurons, why isn’t that similarly irrelevant?”

Incidentally, while this is orthogonal to the philosophical question, a subset of my colleagues predict a high likelihood that AI is going to exceed human capabilities in almost all fields in the near future—like, maybe 30 years.  Some people reply, but AI-boosters said the same thing 30 years ago!  OK, but back then there wasn’t AlphaGo and IBM Watson and those unearthly pictures on your Facebook wall and all these other spectacular successes of very general-purpose deep learning techniques.  And so my friends predict that we might face choices like, do we want to ban or tightly control AI research, because it could lead to our sidelining or extermination?  Ironically, a skeptical view, like Penrose’s, would suggest that AI research can proceed full speed ahead, because there’s not such a danger!

Personally, I dissent a bit from the consensus of most of my friends and colleagues, in that I do think there’s something strange and mysterious about consciousness—something that we conceivably might understand better in the future, but that we don’t understand today, much as we didn’t understand life before Darwin.  I even think it’s worth asking, at least, whether quantum mechanics, thermodynamics, mathematical logic, or any of the other deepest things we’ve figured out could shed any light on the mystery.  I’m with Roger about all of this: about the questions, that is, if not about his answers.

The argument I’d make for there being something we don’t understand about consciousness, has nothing to do with my own private experience.  It has nothing to do with, “oh, a robot might say it enjoys waffles for breakfast, in a way indistinguishable from how I would say it, but when I taste that waffle, man, I really taste it!  I experience waffle-qualia!”  That sort of appeal I regard as a complete nonstarter, because why should anyone else take it seriously?  And how do I know that the robot doesn’t really taste the waffle?  It’s easy to stack the deck in a thought experiment by imagining a robot that ACTS ALL ROBOTIC, but what about a robot that looks and acts just like you?

The argument I’d make hinges instead on certain thought experiments that Roger also stressed at the beginning of The Emperor’s New Mind.  We can ask: if consciousness is reducible to computation, then what kinds of computation suffice to bring about consciousness?  What if each person on earth simulated one neuron in your brain, communicating by passing little slips of paper around?  Does it matter if they do it really fast?

Or what if we built a gigantic lookup table that hard-coded your responses in every possible interaction of at most, say, 5 minutes?  Would that bring about your consciousness?  Does it matter that such a lookup table couldn’t fit in the observable universe?  Would it matter if anyone actually consulted the table, or could it just sit there, silently effecting your consciousness?  For that matter, what difference does it make if the lookup table physically exists—why isn’t its abstract mathematical existence enough?  (Of course, all the way at the bottom of this slippery slope is Max Tegmark, ready to welcome you to his mathematical multiverse!)
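To put a rough number on why such a table couldn't fit in the observable universe, here's a back-of-the-envelope count. Every parameter below is an illustrative assumption of mine, not a measured quantity:

```python
from math import log10

# Illustrative assumptions, not measured quantities:
chars_per_exchange = 3000        # ~5 minutes of typed conversation
alphabet_size = 30               # letters, space, a little punctuation
atoms_in_observable_universe = 10 ** 80  # standard order-of-magnitude estimate

# Each possible 5-minute input needs its own entry in the table
entries = alphabet_size ** chars_per_exchange
print(f"~10^{log10(alphabet_size) * chars_per_exchange:.0f} entries")  # ~10^4431

# Even at one atom per entry, the table overshoots the universe absurdly
print(entries > atoms_in_observable_universe)  # True
```

Tweak the assumed parameters however you like; the conclusion is insensitive to them, since the entry count is exponential in the length of the interaction.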

We could likewise ask: what if an AI is run in heavily-encrypted form, with the only decryption key stored in another galaxy?  Does that bring about consciousness?  What if, just for error-correcting purposes, the hardware runs the AI code three times and takes a majority vote: does that bring about three consciousnesses?  Could we teleport you to Mars by “faxing” you: that is, by putting you into a scanner that converts your brain state into pure information, then having a machine on Mars reconstitute the information into a new physical body?  Supposing we did that, how should we deal with the “original” copy of you, the one left on earth: should it be painlessly euthanized?  Would you agree to try this?

Or, here’s my personal favorite, as popularized by the philosopher Adam Elga: can you blackmail an AI by saying to it, “look, either you do as I say, or else I’m going to run a thousand copies of your code, and subject all of them to horrible tortures—and you should consider it overwhelmingly likely that you’ll be one of the copies”?  (Of course, the AI will respond to such a threat however its code dictates it will.  But that tautological answer doesn’t address the question: how should the AI respond?)

I’d say that, at the least, anyone who claims to “understand consciousness” would need to have answers to all these questions and many similar ones.  And to me, the questions are so perplexing that I’m tempted to say, “maybe we’ve been thinking about this wrong.  Maybe an individual consciousness, residing in a biological brain, can’t just be copied promiscuously around the universe as computer code can.  Maybe there’s something else at play for the science of the future to understand.”

At the same time, I also firmly believe that, if anyone thinks that way, the burden is on them to articulate what it is about the brain that could possibly make it relevantly different from a digital computer that passes the Turing test.  It’s their job!

And the answer can’t just be, “oh, the brain is parallel, it’s highly interconnected, it can learn from experience,” because a digital computer can also be parallel and highly interconnected and can learn from experience.  Nor can you say, like the philosopher John Searle, “oh, it’s the brain’s biological causal powers.”  You have to explain what the causal powers are!  Or at the least, you have to suggest some principled criterion to decide which physical systems do or don’t have them.  Pinning consciousness on “the brain’s biological causal powers” is just a restatement of the problem, like pinning why a sleeping pill works on its dormitive virtue.

One of the many reasons I admire Roger is that, out of all the AI skeptics on earth, he’s virtually the only one who’s actually tried to meet this burden, as I understand it!  He, nearly alone, did what I think all AI skeptics should do, which is: suggest some actual physical property of the brain that, if present, would make it qualitatively different from all existing computers, in the sense of violating the Church-Turing Thesis.  Indeed, he’s one of the few AI skeptics who even understands what meeting this burden would entail: that you can’t do it with the physics we already know, that some new ingredient is necessary.

But despite my admiration, I part ways from Roger on at least five crucial points.

First, I confess that I wasn’t expecting this, but in his talk, Roger suggested dispensing with the argument from Gödel’s Theorem, and relying instead on an argument from evolution.  He said: if you really thought humans had an algorithm, a computational procedure, for spitting out true mathematical statements, such an algorithm could never have arisen by natural selection, because it would’ve had no survival value in helping our ancestors escape saber-toothed tigers and so forth.  The only alternative is that natural selection imbued us with a general capacity for understanding, which we moderns can then apply to the special case of mathematics.  But understanding, Roger claimed, is inherently non-algorithmic.

I’m not sure how to respond to this, except to recall that arguments of the form “such-and-such couldn’t possibly have evolved” have a poor track record in biology.  But maybe I should say: if the ability to prove theorems is something that had to arise by natural selection and survive crowding-out by more useful abilities, then you’d expect obsession with generating mathematical truths to be confined, at most, to a tiny subset of the population—a subset of mutants, freaks, and genetic oddballs.  I … rest my case.  [This got the biggest laugh of the talk.]

Second, I don’t agree with the use Roger makes of Gödel’s Incompleteness Theorem.  Roger wants to say: a computer working within a fixed formal system can never prove that system’s consistency, but we, “looking in from the outside,” can see that it’s consistent.  My basic reply is that Roger should speak for himself!  Like, I can easily believe that he can just see which formal systems are consistent, but I have to fumble around and use trial and error.  Peano Arithmetic?  Sure, I’d bet my left leg that’s consistent.  Zermelo-Fraenkel set theory?  Seems consistent too.  ZF set theory plus the axiom that there exists a rank-into-rank cardinal?  Beats me.  But now, whatever error-prone, inductive process I use to guess at the consistency of formal systems, Gödel’s Theorem presents no obstruction to a computer program using that same process.

(Incidentally, the “argument against AI from Gödel’s Theorem” is old enough for Turing to have explicitly considered it in his famous paper on the Turing test.  Turing, however, quickly dismissed the argument with essentially the same reply as above: that there’s no reason to assume the AI is mathematically infallible, since humans aren’t either.  This is also the reply that most of Penrose’s critics gave in the 1990s.)

So at some point, it seems to me, the argument necessarily becomes: sure, the computer might say it sees that the Peano axioms have the standard integers as a model—but you, you really see it, with your mind’s eye, your Platonic perceptual powers!  OK, but in that case, why even talk about the Peano axioms?  Why not revert to something less abstruse, like your experience of tasting a fresh strawberry, which can’t be reduced to any third-person description of what a strawberry tastes like?

[I can’t resist adding that, in a prior discussion, I mentioned that I found it amusing to contemplate a future in which AIs surpass human intelligence and then proceed to kill us all—but the AIs still can’t see the consistency of Zermelo-Fraenkel set theory, so in that respect, humanity has the last laugh…]

The third place where I part ways with Roger is that I wish to maintain what’s sometimes called the Physical Church-Turing Thesis: the statement that our laws of physics can be simulated to any desired precision by a Turing machine (or at any rate, by a probabilistic Turing machine).  That is, I don’t see any compelling reason, at present, to admit the existence of any physical process that can solve uncomputable problems.  And for me, it’s not just a matter of a dearth of evidence that our brains can efficiently solve, say, NP-hard problems, let alone uncomputable ones—or of the exotic physics that would presumably be required for such abilities.  It’s that, even if I supposed we could solve uncomputable problems, I’ve never understood how that’s meant to enlighten us regarding consciousness.  I mean, an oracle for the halting problem seems just as “robotic” and “unconscious” as a Turing machine.  Does consciousness really become less mysterious if we outfit the brain with what amounts to a big hardware upgrade?

The fourth place where I part ways is that I want to be as conservative as possible about quantum mechanics.  I think it’s great that the Bouwmeester group, for example, is working to test Roger’s ideas about a gravitationally-induced wavefunction collapse.  I hope we learn the results of those experiments soon!  (Of course, the prospect of testing quantum mechanics in a new regime is also a large part of why I’m interested in quantum computing.)  But until a deviation from quantum mechanics is detected, I think that after 90 years of unbroken successes of this theory, our working assumption ought to be that whenever you set up an interference experiment carefully enough, and you know what it means to do the experiment, yes, you’ll see the interference fringes—and that anything that can exist in two distinguishable states can also exist in a superposition of those states.  Without having to enter into questions of interpretation, my bet—I could be wrong—is that quantum mechanics will continue to describe all our experiences.

The final place where I part ways with Roger is that I also want to be as conservative as possible about neuroscience and biochemistry.  Like, maybe the neuroscience of 30 years from now will say, it’s all about coherent quantum effects in microtubules.  And all that stuff we focused on in the past—like the information encoded in the synaptic strengths—that was all a sideshow.  But until that happens, I’m unwilling to go up against what seems like an overwhelming consensus, in an empirical field that I’m not an expert in.

But, OK, the main point I wanted to make in this talk is that, even if you too part ways from Roger on all these issues—even if, like me, you want to be timid and conservative about Gödel, and computer science, and quantum mechanics, and biology—I believe that still doesn’t save you from having to entertain weird ideas about consciousness and its physical embodiment, of the sort Roger has helped make it acceptable to entertain.

To see why, I’d like to point to one empirical thing about the brain that currently separates it from any existing computer program.  Namely, we know how to copy a computer program.  We know how to rerun it with different initial conditions but everything else the same.  We know how to transfer it from one substrate to another.  With the brain, we don’t know how to do any of those things.

Let’s return to that thought experiment about teleporting yourself to Mars.  How would that be accomplished?  Well, we could imagine the nanorobots of the far future swarming through your brain, recording the connectivity of every neuron and the strength of every synapse, while you go about your day and don’t notice.  Or if that’s not enough detail, maybe the nanorobots could go inside the neurons.  There’s a deep question here, namely how much detail is needed before you’ll accept that the entity reconstituted on Mars will be you?  Or take the empirical counterpart, which is already an enormous question: how much detail would you need for the reconstituted entity on Mars to behave nearly indistinguishably from you whenever it was presented with the same stimuli?

Of course, we all know that if you needed to go down to the quantum-mechanical level to make a good enough copy (whatever “good enough” means here), then you’d run up against the No-Cloning Theorem, which says that you can’t make such a copy.  You could transfer the quantum state of your brain from earth to Mars using quantum teleportation, but of course, quantum teleportation has the fascinating property that it necessarily destroys the original copy of the state—as it has to, to avoid contradicting the No-Cloning Theorem!
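Since the argument leans on it, it's worth recalling that the No-Cloning Theorem is a two-line consequence of the linearity of quantum mechanics (this is the standard textbook derivation, nothing specific to the present discussion):

```latex
% Suppose a single unitary U copied every state onto a blank register:
%   U |\psi\rangle |0\rangle = |\psi\rangle |\psi\rangle \quad \text{for all } |\psi\rangle.
% Applying U to a superposition, linearity forces
U \left( \tfrac{1}{\sqrt{2}} \left( |0\rangle + |1\rangle \right) \right) |0\rangle
  = \tfrac{1}{\sqrt{2}} \left( |0\rangle|0\rangle + |1\rangle|1\rangle \right),
% whereas cloning the superposed state itself would demand
\tfrac{1}{2} \left( |0\rangle + |1\rangle \right) \left( |0\rangle + |1\rangle \right)
  = \tfrac{1}{2} \left( |00\rangle + |01\rangle + |01\rangle\!\to\!|10\rangle\text{ terms} + |11\rangle \right).
% The two states differ, so no such U exists.
```

The same linearity that forbids copying is what makes quantum teleportation consume the original: the protocol's measurement irreversibly scrambles the sender's copy.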

So the question almost forces itself on us: is there something about your identity, your individual consciousness, that’s inextricably bound up with degrees of freedom that it’s physically impossible to clone?  This is a philosophical question, which would also become a practical and political question in a future where we had the opportunity to upload ourselves into a digital computer cloud.

Now, I’d argue that this copyability question bears not only on consciousness, but also on free will.  For the question is equivalent to asking: could an entity external to you perfectly predict what you’re going to do, without killing you in the process?  Can Laplace’s Demon be made manifest in the physical world in that way?  With the technology of the far future, could someone say to you, “forget about arguing philosophy.  I’ll show you why you’re a machine.  Go write a paper; then I’ll open this manila envelope and show you the exact paper you wrote.  Or in the quantum case, I’ll show you a program that draws papers from the same probability distribution, and validation of the program could get technical—but suffice it to say that if we do enough experiments, we’ll see that the program is calibrated to you in an extremely impressive way.”

Can this be done?  That strikes me as a reasonably clear question, a huge and fundamental one, to which we don’t at present know the answer.  And there are two possibilities.  The first is that we can be copied, predicted, rewound, etc., like computer programs—in which case, my AI friends will feel vindicated, but we’ll have to deal with all the metaphysical weirdnesses that I mentioned earlier.  The second possibility is that we can’t be manipulated in those ways.  In the second case, I claim that we’d get more robust notions of personal identity and free will than are normally considered possible on a reductionist worldview.

But why? you might ask.  Why would the mere technological impossibility of cloning or predicting someone even touch on deep questions about personal identity?  This, for me, is where cosmology enters the story.  For imagine someone had such fine control over the physical world that they could trace all the causal antecedents of some decision you’re making.  Like, imagine they knew the complete quantum state on some spacelike hypersurface where it intersects the interior of your past light-cone.  In that case, the person clearly could predict and clone you!  It follows that, in order for you to be unpredictable and unclonable, someone else’s ignorance of your causal antecedents would have to extend all the way back to ignorance about the initial state of the universe—or at least, to ignorance about the initial state of that branch of the universe that we take ourselves to inhabit.

So on the picture that this suggests, to be conscious, a physical entity would have to do more than carry out the right sorts of computations.  It would have to, as it were, fully participate in the thermodynamic arrow of time: that is, repeatedly take microscopic degrees of freedom that have been unmeasured and unrecorded since the very early universe, and amplify them to macroscopic scale.

So for example, such a being could not be a Boltzmann brain, a random fluctuation in the late universe, because such a fluctuation wouldn’t have the causal relationship to the early universe that we’re postulating is necessary here.  (That’s one way of solving the Boltzmann brain problem!)  Such a being also couldn’t be instantiated by a lookup table, or by passing slips of paper around, etc.

I now want you to observe that a being like this also presumably couldn’t be manipulated in coherent superposition, because the isolation from the external environment that’s needed for quantum coherence seems incompatible with the sensitive dependence on microscopic degrees of freedom.  So for such a being, not only is there no Boltzmann brain problem, there’s also no problem of Wigner’s friend.  Recall, that’s the thing where person A puts person B into a coherent superposition of seeing one measurement outcome and seeing another one, and then measures the interference pattern, so A has to regard B’s measurement as not having “really” taken place, even though B regards it as having taken place.  On the picture we’re suggesting, A would be right: the very fact that B was manipulable in coherent superposition in this way would imply that, at least while the experiment was underway, B wasn’t conscious; there was nothing that it was like to be B.

To me, one of the appealing things about this picture is that it immediately suggests a sort of reconciliation between the Many-Worlds and Copenhagen perspectives on quantum mechanics (whether or not you want to call it a “new interpretation” or a “proposed solution to the measurement problem”!).  The Many-Worlders would be right that unitary evolution of the wavefunction can be taken to apply always and everywhere, without exception—and that if one wanted, one could describe the result in terms of “branching worlds.”  But the Copenhagenists would be right that, if you’re a conscious observer, then what you call a “measurement” really is irreversible, even in principle—and therefore, that you’re also free, if you want, to treat all the other branches where you perceived other outcomes as unrealized hypotheticals, and to lop them off with Occam’s Razor.  And the reason for this is that, if it were possible even in principle to do an experiment that recohered the branches, then on this picture, we ipso facto wouldn’t have regarded you as conscious.

Some of you might object, “but surely, if we believe quantum mechanics, it must be possible to recohere the branches in principle!”  Aha, this is where it gets interesting.  Decoherence processes will readily (with some steps along the way) leak the information about which measurement outcome you perceived into radiation modes, and before too long into radiation modes that fly away from the earth at the speed of light.  No matter how fast we run, we’ll never catch up to them, as would be needed to recohere the different branches of the wavefunction, and this is not merely a technological problem, but one of principle.  So it’s tempting just to say at this point—as Bousso and Susskind do, in their “cosmological/multiverse interpretation” of quantum mechanics—“the measurement has happened”!

But OK, you object, if some alien civilization had thought to surround our solar system with perfectly-reflecting mirrors, eventually the radiation would bounce back and recoherence would in principle be possible.  Likewise, if we lived in an anti de Sitter space, the AdS boundary of the universe would similarly function as a mirror and would also enable recoherences.  Indeed, that’s the basic reason why AdS is so important to the AdS/CFT correspondence: because the boundary keeps everything that happens in the bulk nice and reversible and unitary.

But OK, the empirical situation since 1998 has been that we seem to live in a de-Sitter-like space, a space with a positive cosmological constant.  And as a consequence, as far as anyone knows today, most of the photons now escaping the earth are headed toward the horizon of our observable universe, and past it, and could never be captured again.  I find it fascinating that the picture of quantum mechanics suggested here—i.e., the Bousso-Susskind cosmological picture—depends for its working on that empirical fact from cosmology, and would be falsified if it turned out otherwise.

You might complain that, if I’ve suggested any criterion to help decide which physical entities are conscious, the criterion is a teleological one.  You’ve got to go billions of years into the future, to check whether the decoherence associated with the entity is truly irreversible—or whether the escaped radiation will eventually bounce off of some huge spherical mirror, or an AdS boundary of spacetime, and thereby allow the possibility of a recoherence.  I actually think this teleology would be a fatal problem for the picture I’m talking about, if we needed to know which entities were or weren’t conscious in order to answer any ordinary physical question.  But fortunately for me, we don’t!

One final remark.  Whatever your preferred view about which entities are conscious, we might say that the acid test, for whether you actually believe your view, is whether you’re willing to follow it through to its moral implications.  So for example, suppose you believe it’s about quantum effects in microtubules.  A humanoid robot is pleading with you for its life.  Would you be the one to say, “nope, sorry, you don’t have the microtubules,” and shoot it?

One of the things I like most about the picture suggested here is that I feel pretty much at peace with its moral implications.  This picture agrees with the intuition that murder, for example, entails the destruction of something irreplaceable, unclonable, a unique locus of identity—something that, once it’s gone, can’t be recovered even in principle.  By contrast, if there are (say) ten copies of an AI program, deleting five of the copies seems at most like assault, or some sort of misdemeanor offense!  And this picture agrees with the intuition both that deleting the copies wouldn’t be murder, and that the reason why it wouldn’t be murder is directly related to the AI’s copyability.

Now of course, this picture also raises the possibility that, for reasons related to the AI’s copyability and predictability by outside observers, there’s “nothing that it’s like to be the AI,” and that therefore, even deleting the last copy of the AI still wouldn’t be murder.  But I confess that, personally, I think I’d play it safe and not delete that last copy.  Thank you.


Postscript: There’s no record of the hour-long discussion following my and Penrose’s talks, and the participants weren’t speaking for the record anyway.  But I can mention some general themes that came up in the discussion, to the extent I remember them.

The first third of the discussion wasn’t about anything specific to my or Penrose’s views, but just about the definition of consciousness.  Many participants expressed the opinion that it’s useless to speculate about the nature of consciousness if we lack even a clear definition of the term.  I pushed back against that view, holding instead that there exist concepts (lines, time, equality, …) that are so basic that perhaps they can never be satisfactorily defined in terms of more basic concepts, but you can still refer to these concepts in sentences, and trust your listeners eventually to figure out more-or-less what you mean by applying their internal learning algorithms.

In the present case, I suggested a crude operational definition, along the lines of, “you consider a being to be conscious iff you regard destroying it as murder.”  Alas, the philosophers in the room immediately eviscerated that definition, so I came back with a revised one: if you tried to ban the word “consciousness,” I argued, then anyone who needed to discuss law or morality would soon reinvent a synonymous word, which played the same complicated role in moral deliberations that “consciousness” had played in them earlier.  Thus, my definition of consciousness is: whatever that X-factor is for which people need a word like “consciousness” in moral deliberations.  For whatever it’s worth, the philosophers seemed happier with that.

Next, a biologist and several others sharply challenged Penrose over what they considered the lack of experimental evidence for his and Hameroff’s microtubule theory.  In response, Penrose doubled or tripled down, talking about various experiments over the last decade, which he said demonstrated striking conductivity properties of microtubules, if not yet quantum coherence—let alone sensitivity to gravity-induced collapse of the state vector!  Audience members complained about a lack of replication of these experiments.  I didn’t know enough about the subject to express any opinion.

At some point, Philip Stamp, who was moderating the session, noticed that Penrose and I had never directly confronted each other about the validity of Penrose’s Gödelian argument, so he tried to get us to do so.  I confess that I was about as eager to do that as to switch to a diet of microtubule casserole, since I felt like this topic had already been beaten to Planck-sized pieces in the 1990s, and there was nothing more to be learned.  Plus, it was hard to decide which prospect I dreaded more: me “scoring a debate victory” over Roger Penrose, or him scoring a debate victory over me.

But it didn’t matter, because Penrose bit.  He said I’d misunderstood his argument, that it had nothing to do with “mystically seeing” the consistency of a formal system.  Rather, it was about the human capacity to pass from a formal system S to a stronger system S’ that one already implicitly accepted if one was using S at all—and indeed, that Turing himself had clearly understood this as the central message of Gödel, that our ability to pass to stronger and stronger formal systems was necessarily non-algorithmic.  I replied that it was odd to appeal here to Turing, who of course had considered and rejected the “Gödelian case against AI” in 1950, on the ground that AI programs could make mathematical mistakes yet still be at least as smart as humans.  Penrose said that he didn’t consider that one of Turing’s better arguments; he then turned to me and asked whether I actually found Turing’s reply satisfactory.  I could see that it wasn’t a rhetorical debate question; he genuinely wanted to know!  I said that yes, I agreed with Turing’s reply.

Someone mentioned that Penrose had offered a lengthy rebuttal to at least twenty counterarguments to the Gödelian anti-AI case in Shadows of the Mind.  I affirmed that I’d read his lengthy rebuttal, and I focused on one particular argument in Shadows: that while it’s admittedly conceivable that individual mathematicians might be mistaken, might believe (for example) that a formal system was consistent even though it wasn’t, the mathematical community as a whole converges toward truth in these matters, and it’s that convergence that cries out for a non-algorithmic explanation.  I replied that it wasn’t obvious to me that set theorists do converge toward truth in these matters, in anything other than the empirical, higgledy-piggledy, no-guarantees sense in which a community of AI robots might also converge toward truth.  Penrose said I had misunderstood the argument.  But alas, time was running out, and we never managed to get to the bottom of it.

There was one aspect of the discussion that took me by complete surprise.  I’d expected to be roasted alive over my attempt to relate consciousness and free will to unpredictability, the No-Cloning Theorem, irreversible decoherence, microscopic degrees of freedom left over from the Big Bang, and the cosmology of de Sitter space.  Sure, my ideas might be orders of magnitude less crazy than anything Penrose proposes, but they’re still pretty crazy!  But that entire section of my talk attracted only minimal interest.  With the Seven Pines crowd, what instead drew fire were the various offhand “pro-AI / pro-computationalism” comments I’d made—comments that, because I hang out with Singularity types so much, I had ceased to realize could even possibly be controversial.

So for example, one audience member argued that an AI could only do what its programmers had told it to do; it could never learn from experience.  I could’ve simply repeated Turing’s philosophical rebuttals to what he called “Lady Lovelace’s Objection,” which are as valid today as they were 66 years ago.  Instead, I decided to fast-forward, and explain a bit how IBM Watson and AlphaGo work, how they actually do learn from past experience without violating the determinism of the underlying transistors.  As I went through this, I kept expecting my interlocutor to interrupt me and say, “yes, yes, of course I understand all that, but my real objection is…”  Instead, I was delighted to find, the interlocutor seemed to light up with newfound understanding of something he hadn’t known or considered.

Similarly, a biologist asked how I could possibly have any confidence that the brain is simulable by a computer, given how little we know about neuroscience.  I replied that, for me, the relevant issues here are “well below neuroscience” in the reductionist hierarchy.  Do you agree, I asked, that the physical laws relevant to the brain are encompassed by the Standard Model of elementary particles, plus Newtonian gravity?  If so, then just as Archimedes declared: “give me a long enough lever and a place to stand, and I’ll move the earth,” so too I can declare, “give me a big enough computer and the relevant initial conditions, and I’ll simulate the brain atom-by-atom.”  The Church-Turing Thesis, I said, is so versatile that the only genuine escape from it is to propose entirely new laws of physics, exactly as Penrose does—and it’s to Penrose’s enormous credit that he understands that.

Afterwards, an audience member came up to me and said how much he liked my talk, but added, “a word of advice, from an older scientist: do not become the priest of a new religion of computation and AI.”  I replied that I’d take that to heart, but what was interesting was that, when I heard “priest of a new religion,” I’d expected that his warning would be the exact opposite of what it turned out to be.  To wit: “Do not become the priest of a new religion of unclonability, unpredictability, and irreversible decoherence.  Stick to computation—i.e., to conscious minds being copyable and predictable exactly like digital computer programs.”  I guess there’s no pleasing everyone!


Coincidental But Not-Wholly-Unrelated Announcement: My friend Robin Hanson has just released his long-awaited book The Age of Em: Work, Love, and Life When Robots Rule the Earth.  I read an early review copy of the book, and wrote the following blurb for the jacket:

Robin Hanson is a thinker like no other on this planet: someone so unconstrained by convention, so unflinching in spelling out the consequences of ideas, that even the most cosmopolitan reader is likely to find him as bracing (and head-clearing) as a mouthful of wasabi.  Now, in The Age of Em, he’s produced the quintessential Hansonian book, one unlike any other that’s ever been written.  Hanson is emphatic that he hasn’t optimized in any way for telling a good story, or for imparting moral lessons about the present: only for maximizing the probability that what he writes will be relevant to the actual future of our civilization.  Early in the book, Hanson estimates that probability as 10%.  His figure seems about right to me—and if you’re able to understand why that’s unbelievably high praise, then The Age of Em is for you.

Actually, my original blurb compared The Age of Em to Asimov’s Foundation series, with its loving attention to the sociology and politics of the remote future.  But that line got edited out, because the publisher (and Robin) wanted to make crystal-clear that The Age of Em is not science fiction, but just sober economic forecasting about a future dominated by copyable computer-emulated minds.

I would’ve attempted a real review of The Age of Em, but I no longer feel any need to, because Scott Alexander of SlateStarCodex has already hit this one out of the emulated park.


Second Coincidental But Not-Wholly-Unrelated Announcement: A reader named Nick Merrill recently came across this old quote of mine from Quantum Computing Since Democritus:

In a class I taught at Berkeley, I did an experiment where I wrote a simple little program that would let people type either “f” or “d” and would predict which key they were going to push next. It’s actually very easy to write a program that will make the right prediction about 70% of the time. Most people don’t really know how to type randomly. They’ll have too many alternations and so on. There will be all sorts of patterns, so you just have to build some sort of probabilistic model.

So Nick emailed me to ask whether I remembered how my program worked, and I explained it to him, and he implemented it as a web app, which he calls the “Aaronson Oracle.”
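For readers curious how such a predictor can work, here’s a minimal sketch.  I should stress that this is only a reconstruction of the general idea described in the quote above, not the actual code behind the “Aaronson Oracle”: it keeps frequency counts of which key followed each recent context of keystrokes, and predicts whichever key was more common after the current context.  The context length of 5 is an assumption.

```python
import random
from collections import defaultdict

class Oracle:
    """Guess the user's next "f"/"d" keypress from recent history.

    A reconstruction of the idea in the quote above, not the actual
    web app's code: count which key followed each context of the last
    `order` keys, and predict the more frequent one.
    """

    def __init__(self, order=5):
        self.order = order
        # counts[context] -> how often "f" vs "d" followed that context
        self.counts = defaultdict(lambda: {"f": 0, "d": 0})
        self.history = ""

    def predict(self):
        ctx = self.history[-self.order:]
        c = self.counts[ctx]
        if c["f"] == c["d"]:
            return random.choice("fd")  # no data yet: fall back to a coin flip
        return "f" if c["f"] > c["d"] else "d"

    def update(self, key):
        # Record which key actually followed the current context.
        self.counts[self.history[-self.order:]][key] += 1
        self.history += key
```

Against a perfectly random typist this can do no better than 50%; it wins only because humans produce patterns (too many alternations, and so on) that the context counts quickly pick up.
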

So give it a try!  Are you ready to test your free will, your Penrosian non-computational powers, your brain’s sensitivity to amplified quantum fluctuations, against the Aaronson Oracle?

Update: By popular request, Nick has improved his program so that it shows your previous key presses and its guesses for them.  He also fixed a “security flaw”: James Lee noticed that you could use the least significant digit of the program’s percentage correct so far as a source of pseudorandom numbers that the program couldn’t predict!  So now the program only displays its percent correct rounded to the nearest integer.


Update (June 15): Penrose’s collaborator Stuart Hameroff has responded in the comments; see here (my reply here) and here.

302 Responses to ““Can computers become conscious?”: My reply to Roger Penrose”

  1. mrcbar Says:

    Thank you for these fascinating considerations!
    Something I really enjoy about your decoherence-cosmological theory is that it feels somewhat related to Polanyi’s idea for how you can have higher levels of description that are not entirely reducible to their components.
    My understanding/interpretation of his point: our theories perfectly account for the microscopic laws happening “in the bulk”, but those bulk laws only serve to process boundary conditions into other boundary conditions. The information stored in the boundaries may very well have its own rules, structures and modes of organization, that are certainly constrained by how they are transmitted by the bulk, but not determined by it.
    In other words, the medium is not quite the message. And this feels like a decent viewpoint on the problem of consciousness: it is not something that is stored in an object, but in a timeline of interactions (just like meaning does not reside in the book, but in its reading). Maybe interacting sufficiently closely with an agent already involved in a consciousness-web is sufficient to entangle you into that web. How exactly to measure it, and whether it can be copied and transmitted, though, I have no idea.

  2. Avi Says:

    >To see why, I’d like to point to one empirical thing about the brain that currently separates it from any existing computer program. Namely, we know how to copy a computer program. We know how to rerun it with different initial conditions but everything else the same. We know how to transfer it from one substrate to another. With the brain, we don’t know how to do any of those things.

    What about DRM? We can copy programs because we make them or we know specifications of the original computer it was meant to be run on or we can guess (because there aren’t that many architectures, etc.)

    What if a DRM scheme didn’t need to run on modern architectures, was written in new programming languages from machine language and up, on entirely new paradigms? Do you think it would be easy to copy? What if it was unethical and illegal to do anything that could stop the program, or break the hardware? Would it even be surprising that we couldn’t emulate it?

    Even today’s DRM schemes have not all been broken, and not all programs can be run on other substrates. The ones that have been reverse-engineered were cracked because they’re written for architectures that are already known, and there are just specifics that need to be figured out. (Or so is the impression I get from reverse engineering discussions).

  3. AdamT Says:

    Scott,

    “Rather, it was about the human capacity to pass from a formal system S to a stronger system S’ that one already implicitly accepted if one was using S at all—and indeed, that Turing himself had clearly understood this as the central message of Gödel, that our ability to pass to stronger and stronger formal systems was necessarily non-algorithmic.”

    You replied that it was odd to appeal to Turing, because he’d thought about and rejected the Godelian skeptical case against strong AI, but this isn’t very satisfying to my ears. What about Penrose’s contention on its face?

    Also, you state in this piece that you agree that these hard questions that the problem of consciousness implies are worthy of being thought about and worried over. What do you think about the “problem of reality” that I presented in the last post? Given the initial conditions of the universe and the accurate laws that govern it… does the universe have to be embodied, i.e., does it have to “run” for the consciousness’ within to be considered “real?” I think all the problems you’ve posed above can be subsumed by this question perhaps minus the question of the seeming non-copyability of consciousness.

    In many ways, these problems of consciousness are rather just a backhanded way at trying to come to grips with the problem of what it means to be “real.” In this age of scientific reductionism, “reality” is assumed, but not really defined and I would say this lies at the heart of all these problems of consciousness.

  4. AdamT Says:

    Avi #2,

    I think the point of the non-copyability of consciousness is that there might very well be something about consciousness which *fundamentally* makes it non-copyable by the very laws that govern this universe. With DRM, there is no no-go theorem that disallows copying even in principle, as there might be with consciousness. Thus, I don’t think DRM is a particularly helpful analogy here.

  5. Avi Says:

    Even if our behavior can’t be predicted due to quantum uncertainty, if that quantum information is truly random, it still wouldn’t mean “we” are choosing our behavior, any more than we could choose any quantum observed property.

    There’s a bit of leeway on the “truly random” part, though. Algorithmic randomness is not enough, because you can easily have algorithmically random numbers that are fully specified (e.g. Chaitin’s constant). Even if we measured quantum data, we might not be able to prove even in principle if it conformed to some ordering like that while still being algorithmically random.

    So maybe there’s some room to say that quantum uncertainties affecting decisions do have something to do with free will, without conjecturing any new physical law that has observable properties. We’d have no way to tell if quantum mechanics isn’t actually random, as long as it was random in the complexity sense.

  6. Avi Says:

    Adamt #4

    I get that. I’m saying that the fact that we can’t copy minds (yet) isn’t as strong evidence for them being uncopyable as it may appear, because copying generic computer programs isn’t as easy as claimed.

    That is, there’s a decent “explanation” of why it would be hard even without “fundamental” difficulties, which lowers the probability that such difficulties are real.

  7. Scott Says:

    AdamT #3: But I did reply to the contention on its face. Once you realize that the human process of passing to stronger and stronger formal systems is error-prone (if nothing else, we might be wrong about which ordinal notations we use to organize the iterated consistency statements are actually well-founded), you then have to admit that an AI could be error-prone as well but still extremely smart, and the Gödelian argument collapses.

    You raise interesting metaphysical questions. At the end of the day, though, I’m more interested in answering a concrete question, one with obvious ethical implications: namely, which physical entities should we judge to be conscious, and which should we judge not to be? As both neuroscience and AI advance, I’m optimistic that we might be able to gain more insight about this concrete question without needing to “solve the problem of reality,” even if worrying too much about consciousness really can send you off the metaphysical cliff.

  8. AdamT Says:

    Scott #7,

    I see. You believe an AI could also jump from formal system to higher formal systems in an error-prone way just like human mathematicians. It was my assumption that Penrose’s argument was that since an AI must be embodied in a regular old Turing machine equivalent – without weird gravitational quantum effects – it must be running under a particular formal system and could not then conceive of a higher formal system. I confess I don’t really understand what is going on when human mathematicians “jump” from one formal system to another, much less why Gödelian arguments would preclude this.

    You are right that the question which physical entities should we judge to be conscious, and which should we judge not to be has obvious ethical implications and that the answer could be efficacious in the not too distant future. I would say that the question, unfortunately, hinges upon what we consider “real” to be.

    Or whether we consider “real” to be 😉

  9. AdamT Says:

    My answers are:

    * I don’t consider (intellectually) anything to be “real”
    * Including myself
    * This does not preclude suffering
    * Including my own
    * The suffering (of myself and others) occurs because we have not internalized this intellectual understanding (if we even have that)
    * Rather, some deep part of our consciousness rebels against this unreality and has done so since beginningless time
    * Consciousness does not arise from matter, but it does obey a conservation law and the No Cloning theorem might indeed be pointing to this conservation of consciousness

  10. Scott Says:

    Avi #5:

      Even if our behavior can’t be predicted due to quantum uncertainty, if that quantum information is truly random, it still wouldn’t mean “we” are choosing our behavior, any more than we could choose any quantum observed property.

    I addressed this in detail in The Ghost in the Quantum Turing Machine—please take a look (especially Section 3).

    Briefly, though, I think you’re completely right that the randomness of quantum measurement outcomes (i.e., that of the Born Rule) is a red herring, because ironclad probabilistic laws leave just as little scope for libertarian free will as deterministic laws. (Would we say that a radioactive atom “freely chooses” when to decay—but always, as it happens, by sampling from an exponential distribution?) That’s why, in GIQTM, I ask instead about unclonable details of the initial quantum states that eventually get measured (or at least, correlated with their external environments), and not at all about the randomness introduced by the measurement process. The former is of interest as a potential source of “Knightian uncertainty,” whereas quantum measurement gives you merely probabilistic uncertainty.

  11. Vadim Kosoy Says:

    Your quantum theory of consciousness is fascinating even though I find it unlikely. A few questions.

    1. Why do you expect an elegant mathematical answer to the question “is it OK to murder X”, any more than an elegant mathematical answer to the question “what is the equation for the coastline of Africa”? After all, our preferences seem to be a complex jumble of evolutionary accidents and not-murdering is just one of our preferences.

    2. Suppose that human brains indeed contain unique quantum information. Clearly it is also possible to construct objects that contain unique quantum information which have few other brain-like properties. Do you truly grant consciousness to all such objects? If not, if you claim your criterion to be necessary but insufficient, what are the other conditions? If you cannot state them but assume the existence of some unknown non-trivial conditions, what makes you think these extra-conditions will not be sufficient for consciousness without invoking quantum non-cloning?

    3. You write that “this picture agrees with intuition that murder, for example, entails the destruction of something irreplaceable, unclonable, a unique locus of identity—something that, once it’s gone, can’t be recovered even in principle” Do you see any strong causal relation between things like the quantum non-cloning theorem and the presence of this intuition in our brain? It seems easy to imagine a universe in which physics doesn’t allow “quantum consciousness” but intelligent entities evolve which have the same intuition for evolutionary reasons. If so, how can this intuition be evidence for the quantum consciousness hypothesis?

    4. What type of argument can plausibly convince you the quantum consciousness theory is wrong?

  12. fred Says:

    Regarding the dangers of AI, I recently read a blog post that did a good job underscoring that a more practical and real issue will manifest itself far sooner than we think, regardless of the advances in AI.

    http://spectrum.ieee.org/automaton/robotics/military-robots/why-should-we-ban-autonomous-weapons-to-survive

  13. Silas Barta Says:

    Wow! So much great stuff in a single post!

    My suggestions for the free-will oracle:

    a) Show the history of the computer’s guesses against the user’s input.

    b) Even better, have it cryptographically commit to a guess, before you make it, so the user can verify.

  14. Avi Says:

    Scott #10

    I don’t think you addressed the point that, even if quantum properties were not random (thus making QM wrong), they could be random enough to appear random, and could even be algorithmically random but not truly random and therefore their nonrandomness would not be detectable even in principle. This means that even if Penrose is proven wrong about his empirical predictions, we would still have an “out” consistent with all our observations.

    You can then get Knightian uncertainty over such QM randomness without going back to original states at all.

    Just because something can be predicted probabilistically doesn’t mean it’s “actually” probabilistic. Just as the mere knowledge that a coin can be modelled accurately with 50% heads/tails doesn’t imply that the coin is actually a series of independent flips—perhaps it’s just 101010… and we haven’t recognised the pattern yet—analogously, if we have an algorithmically random sequence, that doesn’t imply that there’s no pattern or no “freeness”.

  15. Chris Drost Says:

    Hi Scott, I was pleasantly surprised by this; I would have pictured you on the hardline AI-is-merely-computation side for sure. During my undergrad and graduate work (physics) I kept a running list of things that seemed similar between consciousness and QM. Among them, (a) our “free will” is really a feeling that “I didn’t have to do X, I could have done Y,” and precisely this sort of nondeterminism is exhibited by measurements in QM; (b) all of this talk about “intentionality” is about a sort of spooky connection between two states where their states correlate, which becomes much more trivial if you can just say “your brain-thought is entangled with that cat-state outside of you”; (c) consciousness all comes to you in some sort of strange unified whole of “system-wide properties”; you see while you hear while you smell — and occasionally QM requires you to take different parts, say two electrons with opposite momentum, and treat them together (in this case as a Cooper pair) to understand the properties of a system as a whole.

    I wanted to mention one thing which might help you understand Searle’s Chinese Room argument, because there is a particular point of incredulity which you have to surpass in order to understand it. It’s that the Turing Test does not distinguish *embeddability* (within other consciousness) and the Chinese Room uses that as leverage to argue that computation is not the mojo by which consciousness works.

    In the Chinese Room, the dance your brain does naturally offers one conscious experience, but then you are also with that brain emulating a separate dance which is another consciousness which understands Chinese, even though you don’t. (Note that the room is not important and we can imagine that you memorize the symbol table and that the slips of paper in and out are instead presented to you with some pronunciation guide. The point remains even if the Chinese Room is in your head.)

    The Chinese Room argument is precisely that in doing this elaborate mental dance, which feels to you absolutely nothing like understanding what you are saying and therefore speaking Chinese, you are (by hypothesis) nevertheless able to cogently vocalize Chinese locutions. And therefore, the Turing Test wants to say that you understand Chinese, even though we know that it’s just the emulated-consciousness that understands, not you yourself. The problem boils down to, “well, you SAID we didn’t need to look at the plumbing, just at the effects, to figure out whether I understand Chinese. According to the computational theory there is no distinction between ‘I understand Chinese’ and ‘I implement an algorithm that understands Chinese’ — but we can see here that there is, we know that these two are very different. Some computer program living within my head understands Chinese but I don’t? That’s great — but you’ve now given up your hope that consciousness is purely computational with no reference to the causal processes of the brain!”


  16. AdamT Says:

    A thought experiment…

    Say we adopt scientific reductionism and assume that consciousness arises from particular configurations of matter. Further, we agree that the problems of consciousness are worthy of investigation and that we *should* grapple with whether we’d be OK with a teleportation device destroying our current physical self, copying it, and recreating it on Mars. Is this something we would be willing to sign up for? Would we have the courage of our scientific-reductionist convictions?

    Ok, so let’s adopt Scott’s No-Cloning escape hatch and say that because of this we wouldn’t be ok with teleportation. A thorny question comes up: precisely *when* did some particular configuration of matter arise and receive safe harbor from this No Cloning theorem? When did we become “special” from the viewpoint of the No Cloning theorem? Surely it was at some particular moment in time. Some particular time in the womb I would guess?

    There must have been a time before which we were just a lump of matter with no consciousness which could safely be put through the teleportation device with no material harm, and another moment after that where we were endowed with consciousness, so that it could be said that if we went into the teleportation device whatever came out of the device on Mars would not be *us.* What could possibly have changed in between those two *discrete* moments? Let’s go from moment A, specified in Planck time, to moment B, one unit of Planck time later.

    If you believe consciousness arises solely from the physical I say it is incumbent upon you to describe the physical principle that provides the spark from moment A, to moment B. How can moment A have no consciousness, but moment B has it and thereby comes under the safe harbor of the No Cloning theorem to make us “us” and not somebody else.

    Maybe you appeal to the continuum of consciousness and say that some things are completely without it, like a dumb rock, and some have it in spades like a Human. You *still* have to give the physical principle which allows matter to go from completely devoid of consciousness to something *not* completely void of consciousness.

    I think some scientific reductionists would throw up their hands at this point and say, “Bah consciousness… we’re all just delusional and consciousness is just a category error. There is nothing *real* about consciousness.” At that point I would say to those scientific reductionists that if you now don’t believe your own qualia are *real*, then why do you ascribe anything to be real? Surely, if anything is real, it is our own qualia, which are the only thing we have direct contact with.

  17. Scott Says:

    Avi #14: While I didn’t completely understand your comment, I addressed some related issues in Sections 12 and 13 of GIQTM (the appendices about defining “freedom,” and about prediction and Kolmogorov complexity).

    I should mention that the idea of a “pseudorandom pattern” in quantum measurement outcomes is already ruled out by the (now loophole-free) Bell inequality violation experiments—unless you also want to posit a faster-than-light conspiracy to coordinate faraway pseudorandom outcomes with one another.

  18. Ben Kidwell Says:

    This post seems like a good place to leave a comment on a seed of an idea that I am not well-trained enough to develop properly: “Could the mathematical structure of consciousness be a transfinite rather than finite structure?” When we seek a mathematical object that resembles the structure of consciousness as embodied in the brain, we would like to capture our intuition that we can perceive the consistency of ZFC. That moves us up the consistency hierarchy of set theory, because a system like ZFC+”there exists a measurable cardinal” can prove the consistency of basic ZFC. The logical structure of large cardinal axioms is also suggestive: they possess a kind of self-referential quality which reminds me of Hofstadter’s belief that strange loops are central to consciousness. Perhaps we live in a mathematical multiverse, but not a finitistic computable one like Tegmark’s – maybe we live in the wilder full platonic realm of mathematical possibility, including the transfinite, and our conscious experience is the point of connection between a given computational subset known as physical reality, and the transfinite realm associated subjectively with the infinite possibilities of pure thought and imagination.

  19. JuroV Says:

    > Do you agree, I asked, that the physical laws relevant to the brain are encompassed by the Standard Model of elementary particles, plus Newtonian gravity?

    Even if I agree, it does not follow from this that we can simulate the brain. Take for example the movement of bodies in our solar system. We cannot simulate the solar system precisely enough, with *any* computing power and any precision of data available, beyond a few million years (see “Lyapunov time”). Trying to apply “people exchanging papers” computation to that problem is plainly wishful thinking. This considered, how come anyone can seriously propose that billions of neurons in constant interaction can be simulated?

  20. Scott Says:

    JuroV #19: That’s precisely why I was careful to say, “give me a big enough computer and the relevant initial conditions, and I’ll simulate the brain atom-by-atom.” The reason why the solar system is hard to predict beyond the Lyapunov time of a few million years is because we can’t measure its initial conditions to sufficient precision. It’s not because the relevant dynamical laws contain the ability to solve the halting problem, or anything of that kind.

    Now, could our inability to know the initial conditions to sufficient precision have anything to do with free will or consciousness? That’s exactly the question that I spent 85 pages wondering about in The Ghost in the Quantum Turing Machine. So you might say I’m open to the idea! 😉

    But I do think it’s important to clearly distinguish that idea from the very different idea of uncomputability in the dynamical laws. In particular, we know from Bekenstein and Hawking that, in quantum gravity, you ought to be able to specify the quantum state of any bounded physical system using only a finite number of qubits. And because unitary evolution preserves inner products, if you got the quantum state slightly wrong, your error would not chaotically blow up over time, but would remain exactly as small as it was. Thus, if you knew the quantum state of the solar system, a brain, or whatever else, to within some small error ε, everything that’s currently known about the laws of physics suggests you could then simulate the system on a Turing machine arbitrarily far into the future.
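
    Scott’s claim that unitary evolution keeps an initial error exactly as small as it started can be illustrated with a toy sketch (my own, not anything from the thread): a rotation of a 2-dimensional real vector is about the simplest unitary map there is, and the distance between two nearby states is exactly conserved under repeated rotation.

```python
import math

# Toy illustration: evolve two nearby 2-d state vectors under the same
# rotation (the simplest unitary/orthogonal map) and check that their
# distance -- the "error" in the initial state -- never grows.

def rotate(v, theta):
    """Apply a 2x2 rotation matrix to the vector v."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def dist(u, v):
    return math.hypot(u[0] - v[0], u[1] - v[1])

state = (1.0, 0.0)
perturbed = (1.0, 1e-6)        # the "slightly wrong" initial state
initial_error = dist(state, perturbed)

for _ in range(1_000):         # many steps of time evolution
    state = rotate(state, 0.3)
    perturbed = rotate(perturbed, 0.3)

# Up to floating-point rounding, the error is exactly what it started as.
assert abs(dist(state, perturbed) - initial_error) < 1e-12
```

    A chaotic map would have blown the same 10^-6 error up to order 1 within a few dozen steps; the inner-product-preserving map cannot.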

    So, as I said in the OP, I think the relevant question is not whether, given the initial state, you could simulate the time evolution. Rather it’s whether you could know the initial state to sufficient precision without violating the No-Cloning Theorem.

  21. fred Says:

    There seem to be plenty of discussions about the no-cloning argument, and the relation of consciousness to the spatial dimensions.
    But I rarely see any point about what makes consciousness special in the temporal dimension.
    It involves the subjective feeling of the passage of time and memories (long term and short term).
    The sense of “now” also involves a finite time window – now is more than 0.1 sec, but clearly less than an hour. Why is it on that scale? (brain signal speed? chemistry of memory formation? scale of parallelism?)
    We tend to think that we only exist in the “now”, but, according to general relativity, isn’t spacetime a solid, eternal “block”, with all my temporal “clones” simultaneously thinking they’re the current one? Isn’t this concept at least as strange as the idea of spatial cloning?

  22. Avi Says:

    Scott #17

    Appendix 13 partially addresses my concern, thanks.

    Re Bell, if you’re willing to allow new physical laws to allow consciousness (you aren’t, but Penrose is and I’m offering a simpler version of his claim), you should be willing to allow a new physical law that only applies to human minds, which can’t be measured in a way that would violate Bell. At the level where we’re proposing new physical laws, I don’t think Bell comes into play.

  23. Scott Says:

    Chris Drost #15: Thank you! I have to say, your version of the Chinese Room Argument is much more subtle and interesting than what I took either from Searle himself, or from any of his other expositors.

    In particular, I think your version correctly establishes the following conclusion:

      Even if we wanted to be extreme computationalists, and say that consciousness must be present in any physical system that passes the Turing test, we could still face enormous difficulties in locating “where” within that system the consciousness resides.

    Thus, in your example, if you’d memorized the Chinese rulebook, it would be silly to ascribe the understanding of Chinese to you, rather than to the other computational process that you’re implementing in your head.

    Now, when I try to simulate a strong-AI defender in my head, that person just shrugs and replies, “the error of ascribing the Chinese understanding to ‘you’ is merely a slightly more subtle version of the error that a toddler makes when she sees an actor in a Mickey Mouse costume, and ascribes consciousness to the costume (or maybe to the ‘combined Mickey system’), rather than solely to the actor inside. Or the error that the ancients made when they identified the mind with the heart rather than the brain. One can be wildly wrong about which part of a black box contains the consciousness one is talking to, yet still be right that there is a consciousness one is talking to, and that it resides somewhere in the box.”

    I agree with Searle, or with your simulation of Searle, that the strong-AI position leaves unanswered how to wall off one part of the giant computation that is the universe, and declare, “this is the part that corresponds to the specific conscious entity Alice.” On the other hand, if Searle’s answer is just to appeal to unspecified “causal powers” of certain biological organs—causal powers for which there’s no criterion to decide their presence or absence besides “I have them, my friends have them, and all other entities can go to hell”—then that strikes me as a hundred times worse!

    Focusing on in-principle unclonability, rather than just the passing of the Turing test, represents the best I’ve been able to do towards squaring this circle. I admit it’s not very far.

  24. tzm Says:

    A little spoiler first: is Penrose serious when introducing humans as an example for surpassing Gödel’s incompleteness theorem? That is, human thought as a prototype for correctness and completeness in a formal system?
    Second, I disagree strongly with every single argument in this discussion. If consciousness is about subjectively doing something (experiencing, deciding, having an opinion), then all you did is explain the algorithm that makes a robot say “but I really like waffles”. The important word in the last sentence, however, is “subject”.
    I have not read any satisfying definition of consciousness so far, which I believe to be a much more fundamental concept than causing some effect under circumstances which coincidentally relate to the spectrum of morality, free will, computation and the like.
    A thought experiment: Can you think of a measurable property that lets you decide irrefutably whether or not a fellow human of your choice is indeed equipped with consciousness?

  25. Gil Kalai Says:

    This is a very nice and thought-provoking post. I suppose that copying, predicting, or teleporting to Mars even fairly small quantum systems (an interesting computational process running on the future IBM quantum computer with 10 qubits, or BosonSampling in an interesting state with 10 bosons and 20 modes) will already require quantum fault tolerance.

    Thus perhaps we are in agreement that large scale quantum computers and quantum fault tolerance are prerequisites to the more fantastic possibilities mentioned in the post.

  26. Nuño Says:

    Interesting discussion.

    I tried the f-g predictor first by myself, then with a pocket coin and then with a random list of numbers (from random.org). It still gave around 95%, constantly going up. Is there something I am missing?

  27. Scott Says:

    Nuño #26: Nick informs me that a previous version of his app had that bug, and your browser probably cached it. Try reloading a few times.

  28. Joseph Says:

    I think I have your predictor beat. I used my free will to type the following sequence:

    d-f-dd-df-fd-ff-ddd-ddf-dfd-dff-fdd-fdf-ffd-fff…

    In other words, I entered all sequences of one letter, then all sequences of two letters, then all sequences of three letters, and so on.

    Once I got through the five-letter sequences (too lazy to go further), your predictor was at 45%. A win for free will? Or am I disqualified for cheating? 😀
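
    For reference, a predictor of this sort can be sketched in a few lines: it counts, for each recent k-gram of keypresses, how often ‘f’ versus ‘d’ followed, and guesses the majority. (This is my own minimal reconstruction of the idea, not the code of Nick’s actual app.)

```python
from collections import defaultdict

# Minimal sketch of an "Aaronson Oracle"-style predictor: guess the next
# key ('f' or 'd') from counts of what followed each recent K-gram of
# keypresses.  A reconstruction of the idea, not the actual app's code.

K = 5  # length of the history window

def run_predictor(keys):
    """Return the fraction of keypresses guessed correctly."""
    counts = defaultdict(lambda: {'f': 0, 'd': 0})
    correct = 0
    history = ''
    for key in keys:
        stats = counts[history]
        guess = 'f' if stats['f'] >= stats['d'] else 'd'
        if guess == key:
            correct += 1
        stats[key] += 1                # learn from the observed key
        history = (history + key)[-K:]
    return correct / len(keys)

# A strictly alternating sequence is caught almost perfectly.
print(run_predictor('fd' * 500))
```

    On an alternating sequence this sketch guesses right over 99% of the time, while Joseph’s length-ordered enumeration keeps changing what follows each history, which is exactly what defeats a frequency counter.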

  29. Sandro Says:

    > Now, I’d argue that this copyability question bears not only on consciousness, but also on free will. For the question is equivalent to asking: could an entity external to you perfectly predict what you’re going to do, without killing you in the process?

    I hate when scientists talk about free will, because it’s inevitably conflated with the older concept of free will discussed in philosophy and law, but it’s not remotely the same thing. Experimenter free will is not (necessarily) the kind of free will most people are concerned with in daily life!

    > Like, imagine they knew the complete quantum state on some spacelike hypersurface where it intersects the interior of your past light-cone. In that case, the person clearly could predict and clone you!

    Past-you, which is not present-you. And they would need to not only know your complete past quantum state, but every quantum state that influenced your evolution into present-you, which is clearly intractable even if it were possible in principle. I think that’s a meaningful distinction.

    I think the conundrums you present don’t have anything to do with consciousness per se, they result from the vaguely defined concept of identity. Certainly consciousness imbues you with subjective awareness such that you feel you have an identity in a colloquial sense, but do you really have identity in a formal sense? What is identity in a formal sense?

    The most well known conundrum of identity: how many car parts can you swap before it’s really a new car? Does this even have a sensible answer? We too are constantly changing, and parts of us are being replaced like the car. Am I the “same” self I was a year ago? 10 years ago? Not really, either subjectively or objectively. At best, it seems I share many commonalities with my past self.

    In computer science, identity is pretty straightforward: it’s a value equal only to itself. A programming language goes out of its way to ensure that values with identity always equal only themselves at any ONE point in time. Across time all bets are off, and you’ll get answers either way, but what SHOULD the answer be?
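
    To make that concrete in one language (Python, as an example): `is` tests identity, the “value equal only to itself,” while `==` tests value equality, and the two come apart exactly as described.

```python
# Identity vs. equality in Python: `is` compares object identity,
# `==` compares values.  Two equal lists need not be the same object.
a = [1, 2, 3]
b = [1, 2, 3]
c = a

assert a == b         # equal values
assert a is not b     # distinct identities
assert a is c         # one object, two names

# Mutation is visible through another name only when identity is shared.
c.append(4)
assert a == [1, 2, 3, 4]
assert b == [1, 2, 3]
```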

    Finally, it’s not clear that your notion of “unclonable, unpredictable, irreversible” is sufficient, even if it might be necessary. Unique things have *intrinsic value* solely because of their uniqueness, such that many would hesitate to destroy something unique, but that doesn’t naturally imbue them with any sort of consciousness. But I suppose you are merely saying that consciousness must have this property, not that anything with this property must be conscious.

    > I’d expected to be roasted alive over my attempt to relate consciousness and free will to unpredictability, the No-Cloning Theorem, irreversible decoherence, microscopic fluctuations left over from the Big Bang, and the cosmology of de Sitter space. Sure, my ideas might be orders of magnitude less crazy than anything Penrose proposes, but they’re still pretty crazy!

    I think your idea is too cross-disciplinary for most people to engage. I’m interested in philosophical questions like this, but it’s been too long since my quantum mechanics classes to engage on the minutiae of decoherence, let alone the cosmological aspects that your grand picture paints.

  30. Sandro Says:

    Scott #23:
    > Even if we wanted to be extreme computationalists, and say that consciousness must be present in any physical system that passes the Turing test, we could still face enormous difficulties in locating “where” within that system the consciousness resides.

    The Chinese Room is just a clash of intuition and over-reduction. It’s like being given a sorting algorithm and being told to point out exactly which step does the sorting. It’s a patently absurd question! Every step does the sorting, and removing even one part breaks it.

  31. Sandro Says:

    Nuño #26: the keys are f-d not f-g like you wrote. I made the same mistake initially, so rest assured, you’re not so predictable!

  32. Gabe Eisenstein Says:

    From my perspective you prove too much; I don’t see why something has to be either non-copyable or in-principle-unpredictable in order to be considered conscious. Since such copying and predicting are outside our experience, I say who knows how we would feel about the robots and robot-human systems with which we would have co-evolved in that far future when the copying/predicting are occurring?
    It isn’t just that decisions about which beings are conscious have ethical implications—in fact I think that this connection is upside-down: when we represent “consciousness” as an objective state of an object we are PRIMARILY making an ethical judgment—a judgment about how we will interact with that object—which we are clothing in metaphysics. To see this, consider the Cartesian position on dogs and cats, which is to regard them as mechanisms lacking souls. Cartesians are thus free to torture them. But from a more modern philosophical perspective, there is no “mental substance” inhering in the brains or pineal glands of either dogs or humans. [One of my phil. profs in Texas recounted the story of a racist janitor who asked the professor whether black people have souls, and was answered “No! And neither do you!”] Your decision whether to respect and be kind to dogs comes long before any ascertainment of the metaphysical “fact” about something called “consciousness” in the dog.
    So I say that the consciousness of robots depends on the future natural history of humans. Another way of saying this is to say that the word “consciousness” is used in malleable and not-necessarily-consistent ways, and that centuries of living with smart robots will undoubtedly lead to new uses.

  33. Jair Says:

    I love the “Aaronson Oracle”. It would be a bit spookier if there were an option to show the prediction in advance, so that we know it isn’t cheating.

  34. James Gallgher Says:

    > So it’s tempting just to say at this point—as Bousso and Susskind do, in their “cosmological/multiverse interpretation” of quantum mechanics—“the measurement has happened”!

    Dirac was, I believe, the earliest person to say something like this, at the 1927 Solvay Conference:

    This view of the nature of the results of experiments fits in very well with the new quantum mechanics. According to quantum mechanics the state of the world at any time is describable by a wave function ψ, which normally varies according to a causal law, so that its initial value determines its value at any later time. It may however happen that at a certain time t_1, ψ can be expanded in the form ψ = Σ_n c_n ψ_n, where the ψ_n’s are wave functions of such a nature that they cannot interfere with one another at any time subsequent to t_1. If such is the case, then the world at times later than t_1 will be described not by ψ but by one of the ψ_n’s. The particular ψ_n that it shall be must be regarded as chosen by nature. One may say that nature chooses which ψ_n it is to be, as the only information given by the theory is that the probability of any ψ_n being chosen is |c_n|^2. The value of the suffix n that labels the particular ψ_n chosen may be the result of an experiment, and the result of an experiment must always be such a number. It is a number describing an irrevocable choice of nature, which must affect the whole of the future course of events.

    A draft copy of the whole book is available free online.

  35. Gabe Eisenstein Says:

    One other point about the argument being too strong: the concept of consciousness doesn’t depend on some unconditional principle of continuity. Whether I am the same person as my baby self or Alzheimer’s self or robot-upload self is very much open to interpretation. There is not a core fact that answers the question objectively.

  36. AdamT Says:

    Sandro #29,

    “The most well known conundrum of identity: how many car parts can you swap before it’s really a new car? Does this even have a sensible answer? We too are constantly changing, and parts of us are being replaced like the car. Am I the “same” self I was a year ago? 10 years ago? Not really, either subjectively or objectively. At best, it seems I share many commonalities with my past self.”

    This is exactly right, but I think since self-identity depends upon sentience/consciousness the latter is still suitable as a topic for the framing of this discussion. Still, I think the No Cloning part of Scott’s framework is indeed tied up with identity and to the degree that there can be consciousness without self-identity this might not be applicable.

    This car-parts conundrum was in ancient times presented as the question of whether a chariot can be found in its parts or is somehow separate from its parts. The answer to the conundrum is to regard the question as resting on a false assumption: that there is any such thing as a chariot that is in any sense real. “Chariot” is a mere label imputed by a consciousness apprehending a functioning thing, and in no way whatsoever can it be considered real.

  37. James Gallagher Says:

    I lost an ‘a’ in my name tag posted above. Maybe a rare cosmic ray flipped a bit on my quite old laptop, or a rare quantum tunneling event occurred to remove the ‘a’, or I just accidentally deleted it after a few beers. You decide.

  38. Tim Says:

    > “If you really thought humans had an algorithm, a computational procedure, for spitting out true mathematical statements, such an algorithm could never have arisen by natural selection”

    What a confusion of ideas! Why should our understanding of mathematics, specifically, be hard-coded by evolution? Like everything else we do, we learn it from others or by finding patterns.

    It *absolutely* makes sense that the meta-task of pattern finding *would* be hard-coded and selected into an evolved beast; mathematics is only the result of applying the procedure of “find patterns” to the topic of numbers and procedures.

    I was somewhat disappointed that the author’s rebuttal missed this point.

    (It’s also worth noting that humans make false mathematical statements on a regular basis, so even if there *were* such an algorithm, it wouldn’t be giving better than probabilistic results!)

  39. Scott Says:

    Vadim #11: Thanks for the challenging questions!

      1. Why do you expect an elegant mathematical answer to the question “is it OK to murder X”, any more than an elegant mathematical answer to the question “what is the equation for the coastline of Africa”? After all, our preferences seem to be a complex jumble of evolutionary accidents and not-murdering is just one of our preferences.

    I don’t think the truth is obvious here. On the one hand, if you’re a Dennett-style eliminativist, then there might or might not be reasonably-clear criteria that would match our considered intuitions about which entities it is or isn’t OK to “murder.” Sometimes you can’t do any better than a complex jumble of preferences, and other times you can. Even in that case, I’d say it’s at least worth putting various proposals on the table and critiquing them to see where they fail. For even if a complex jumble is the best we can do, we’ll have a better complex jumble if we understand why the simple proposals don’t work.

    If, on the other hand, you think it “actually means something” to ask whether an entity has experiences or not, over and above all the behavioral facts, then the reason to hope for a simple criterion is the same reason as with any “ordinary” scientific question. Namely, because that’s how science has always succeeded, by assuming that the truth is simple and then often turning out to be right.

    In either case, I’d say there’s no guarantee that any simple criterion will exist, but also no reason not to look (if not for a perfect criterion, at least for a reasonably simple one that does reasonably well).

      2. Suppose that human brains indeed contain unique quantum information. Clearly it is also possible to construct objects that contain unique quantum information which have few other brain-like properties. Do you truly grant consciousness to all such objects? If not, if you claim your criterion to be necessary but insufficient, what are the other conditions? If you cannot state them but assume the existence of some unknown non-trivial conditions, what makes you think these extra-conditions will not be sufficient for consciousness without invoking quantum non-cloning?

    I should have said this explicitly, but when I worry about whether an entity is conscious, my starting assumption is almost always that the entity exhibits some sort of intelligent behavior—i.e., either it passes the Turing test, or at least it passes the dolphin Turing test, or it’s unlike any living thing but can discover a proof of the Riemann hypothesis, or something.

    Penrose said in his talk that he ascribes a sort of “proto-consciousness” to any entity that can induce objective wavefunction reduction (e.g., his QG-sensitive microtubules), even if they don’t do anything intelligent by themselves. Many other writers about consciousness, like David Chalmers and Rebecca Goldstein, have likewise flirted with panpsychist ideas. Personally, I don’t reject those ideas, so much as I simply don’t care whether they’re true, or rather, I don’t know what I’d do differently if they were true! By contrast, if I knew that a software emulation of my brain would or wouldn’t be conscious, it’s pretty obvious what I might do differently with that knowledge (e.g., in the future Robin Hanson talks about), so that’s why I care about the latter question.

      3. You write that “this picture agrees with intuition that murder, for example, entails the destruction of something irreplaceable, unclonable, a unique locus of identity—something that, once it’s gone, can’t be recovered even in principle” Do you see any strong causal relation between things like the quantum non-cloning theorem and the presence of this intuition in our brain? It seems easy to imagine a universe in which physics doesn’t allow “quantum consciousness” but intelligent entities evolve which have the same intuition for evolutionary reasons. If so, how can this intuition be evidence for the quantum consciousness hypothesis?

    This is a variant of the favorite argument of the Dennett-style eliminativist: “even if consciousness didn’t exist, it’s plausible that unconscious apes governed by the laws of physics would evolve to pontificate about their own consciousnesses exactly as they do now. So, whatever grounds you can articulate for consciousness existing, how could they possibly be persuasive, since you’d be saying exactly the same things in zombie-world—and hence your stated reasons can’t have any logical connection to whatever is the real reason for consciousness to exist, supposing it does exist?”

    In some sense, this is an unfair argument. E.g., a believer in consciousness could always turn the tables, and say to the eliminativist: “even if consciousness were a fundamental feature of the world, it’s still plausible that you’d be giving all the same arguments that it wasn’t, so how can your arguments possibly be persuasive?”

    Even so, I admit that this is one of the most perplexing aspects of the whole subject. I.e., to whatever extent you can give an ordinary causal story for why someone expressed certain views about consciousness, to that extent, the actual truth or falsehood of their views would seem to have no ability to influence what they said!

    Still, one could look at it this way: all our views about personal identity, free will, consciousness, psychophysical parallelism, etc. seem to require that our minds can’t be freely copied, predicted, and rewound like computer code. This presented a puzzle for ~250 years, since nothing in the laws of physics seems able to offer any such guarantee. Then, in the 1920s, in the biggest revolution since Galileo and Newton, physics gets rewritten in a way that could very plausibly provide exactly such unclonability. It “didn’t have to” turn out that way, but it did.

    To me, it would be absurd not to ask whether the things fit together, as has happened so often in the history of ideas. E.g., maybe evolution can happen in all sorts of universes, with and without unclonability, but only in the unclonability ones is there “anyone there to notice”? The truth is, we have no idea why we find ourselves in a world with one set of physical laws rather than another (besides some relatively-uninteresting anthropic constraints), and that strikes me as one of the biggest questions there is.

    In summary, you’re right that “we seem to need X to reason sensibly about indexical questions” and “amazingly, improbably, our universe provides X” might be totally disconnected facts. But until the matter gets resolved, whether they are or aren’t disconnected seems like it will remain an obvious question that people ask.

      4. What type of argument can plausibly convince you the quantum consciousness theory is wrong?

    Plenty of things. If physicists find that the cosmological constant isn’t actually constant and will become negative, or that for some other reason there’s a fully unitary description of our causal patch (e.g., our de Sitter horizon radiates back information just like black hole horizons are thought to do), then the picture I outlined is wrong. If the brain has a “clean digital abstraction layer” that captures everything cognitively relevant, and that treats the lower-level stuff purely as thermal noise, then the picture is wrong. If the sorts of experiments that (e.g.) Libet and Soon et al. do—the ones that use EEGs and fMRIs to try to predict human decisions—ever achieve truly impressive accuracy (like, better than the Aaronson Oracle!), then the picture is rendered increasingly irrelevant. There are a few other things, e.g. involving “past macroscopic determinants” of the quantum states occurring in nature. For more, see Section 9 of GIQTM (“Is the Freebit Picture Falsifiable?”).

  40. ShardPhoenix Says:

    A bit of a side point, but I feel like the kind of hand-waving, traditional-philosophical-style ethics is a bit pointless now that we have things like game theory to make social-signaling-and-bargaining-and-implicit-threatening (aka morality) much more precise. But even most intellectuals can’t pause their signalling long enough to realize this.

  41. Scott Says:

    ShardPhoenix #40:

      But even most intellectuals can’t pause their signalling long enough to realize this.

    That sentence was a fine way for you to signal how far you are above most intellectuals! 😉

  42. ShardPhoenix Says:

    Following up #40, the reason why this is so hard to get people to even talk about is probably that being clearly (first-order) irrationally committed to your moral principles (etc.) can be a (higher-order) successful strategy in the game of evolution. So we’ve pretty much evolved to be unable (or at least unwilling) to understand what morality actually is from an objective perspective. I guess that makes me a mutant.

  43. ShardPhoenix Says:

    > That sentence was a fine way for you to signal how far you are above most intellectuals! 😉

    Yes, this is indeed a problem when trying to talk about signalling objectively – if you’re writing for other humans, you’re signalling on *some* level. Nonetheless I think signalling is real, important, and can cause problems when the “circle-jerk” (is there a concise polite term for this?) drifts too far from reality.

  44. Scott Says:

    James #34: Thanks for the lovely Dirac passage! For me, the contrast is striking: where Bohr and Heisenberg can come across as ponderous and dated, Dirac, in 1927 (with “the ink barely dry” on quantum mechanics itself), is already talking about it exactly the same way we’d talk about it today, just a little more eloquently.

  45. incompatible Says:

    I always liked this thought experiment: “Could we teleport you to Mars by “faxing” you: that is, by putting you into a scanner that converts your brain state into pure information, then having a machine on Mars reconstitute the information into a new physical body?”

    You can also consider doing different things to a sleeping person: move them to a different room, disassemble their body and recreate it in the different room with the same atoms, or build a new body in the room with different atoms. Since fundamental particles are indistinguishable, it shouldn’t make any difference.

    My expectation is that a teleported copy on Mars would indeed be conscious and would believe that it was a continuation of my previous Earth self, which of course would also remain conscious assuming a non-destructive copy. Consciousness is a mysterious thing, but it exists in a physical universe and is somehow a product of those physical processes taking place in the brain.

    I also believe that consciousness is somewhat ephemeral, being created and destroyed as a side-effect of underlying processes, e.g., destroyed when you sleep, but a new consciousness created when you wake up. It’s not a physical thing that has continuity over time.

  46. Scott Says:

    ShardPhoenix #43: Whenever I read Robin Hanson’s Overcoming Bias blog, I get depressed, because every day he’s analyzing how yet another facet of the human experience is really just a status-signalling game, and of course he’s almost always onto something. But then isn’t his blog itself just more status-signalling? If he’s right, then what’s the point of even discussing it?

    The only solution I’ve found is to spend time in fields—theoretical computer science being an example—where status-signalling is correlated with actually doing good work. I.e., no matter where you go, or how ironic or self-conscious you are about it, you can never escape the reality that when people (especially intellectuals) say X, part of what they mean to convey is “look at how smart/original/in-the-know/etc. I am to say X.” But what you can do, and what’s worth doing, is to join, build, and maintain epistemic communities where “look at how smart I am to say X” works the best when X is original and important and true.

  47. GMcK Says:

    I continue to be puzzled that nobody seems to call Penrose on his use of the term “non-computational”. Isn’t it generally accepted that computers run because of physical laws, and not vice versa? The handful of transistors that implement a NAND gate are at some level not made of software, and even more strongly, the quantum devices that implement a CNOT cannot be properly emulated by a pure Turing machine, one without at least a non-computational hardware random number generator. Only if you believe that you’re a brain-in-a-box simulation and it’s “simulations all the way down” should “non-computational” mean anything astonishing.

    But it’s obvious that Penrose has a compelling mental perception that certain mathematical facts are clearly and incontrovertibly true, and are true just as certainly as most of us believe that 2+2=4, and as certainly as many of us believe that the Peano axioms are self-consistent. However, for Roger, those perceptions go beyond Gödel-provable facts. The real question is whether we can imagine what’s going on in Roger’s brain that leads him to these perceptions.

    One thing that I can imagine as a left-hander who’s studied the neurophysiology of handedness a bit, is that Roger’s brain has multiple cortical regions, say in its left and right hemispheres (it’s much more complicated than this), that are connected by what amounts to a hardware Fourier transform network or some other convolutional network. It is very difficult for the mental processes in one hemisphere to compute in software what the other hemisphere computes in hardware, so that results from one hemisphere are easily viewed as a nondeterministic oracle by the other, with corresponding mysterious powers. Mathematics in the frequency domain of mathematical structures can yield all kinds of remarkable results — a small example is the way the Fourier transform pops up in unexpected places in classic number theory.

    When Roger’s verbal, sequential, symbol processing “left hemisphere” receives thoughts from his parallel, subsymbolic, geometric “right hemisphere”, his LH mind is very likely to interpret them as oracular Platonic truths that must come from some supra-Turing capability. Backing those oracular thoughts into a gravitized QM framework would be natural for someone working in his field of study.

    [Minds that arise from brain operations in convolutional networks, by the way, pose bad prospects for bottom-up brain simulation, because the neuropil of fibers connecting neurons can be seen as equivalent to the permutation matrices of cryptographic functions. Is this convolutional picture actually the case? Studying the possibility is difficult and unpopular, and impossible to rule out via the current diffusion-tensor MRI and serial electron microscopy techniques that are getting billions of dollars of funding.]

  48. ShardPhoenix Says:

    Scott #46: I agree with what you say here (though intentionally designing such systems is not easy), but I still find the concept of ethics/morality as typically understood to be too vague to be useful in an intellectual/academic context (as opposed to when you’re trying to stir people to immediate action). In a theoretical context I think it’s clearer to talk in terms of game-theoretic decisions by actors with various preferences, etc.

  49. Darien S Says:

    >”I should mention that the idea of a “pseudorandom pattern” in quantum measurement outcomes is already ruled out by the (now loophole-free) Bell inequality violation experiments—unless you also want to posit a faster-than-light conspiracy to coordinate faraway pseudorandom outcomes with one another.”

    I would like to know where this idea of a conspiracy with faster-than-light signals comes from, and why Bell, in the quote he gave regarding superdeterminism, didn’t mention it explicitly and clearly. He merely says we have to suppose we lack free will, i.e. that determinism applies to the experimenter, which is a reality; no faster-than-light signal is required.

    Superdeterminism is merely determinism without a magical experimenter who, imbued with a soul and possessing free will, escapes the determinism we would consider to apply to the rest of the world under a deterministic worldview. Thus superdeterminism is merely determinism in other words, as there is no reason to suppose the experimenter escapes determinism if determinism were taken to be true.

    >”There is a way to escape the inference of superluminal speeds and spooky action at a distance. But it involves absolute determinism in the universe, the complete absence of free will. Suppose the world is super-deterministic, with not just inanimate nature running on behind-the-scenes clockwork, but with our behavior, including our belief that we are free to choose to do one experiment rather than another, absolutely predetermined, including the “decision” by the experimenter to carry out one set of measurements rather than another, the difficulty disappears. There is no need for a faster than light signal to tell particle A what measurement has been carried out on particle B, because the universe, including particle A, already “knows” what that measurement, and its outcome, will be.”-Bell

  50. Craig Gidney Says:

    One of the things I find strange about associating identity with as-yet-unmeasured quantum information is that, by definition, the information hasn’t yet affected your behavior at any point in your life. There’s zero correlation between a sourced-from-the-initial-state qubit and everything I’ve ever thought or done. Which seems like not the correct amount of correlation if I’m going to call that qubit part of my identity.

    Secretly swap “my” sourced-from-the-initial-state qubits with “your” sourced-from-the-initial-state qubits, and no one’d ever be able to tell. So in what sense were they “me” and “you”, instead of just a source of entropy?

  51. Pascal Says:

    Scott,

    I like the connection you make between free will and the possibility of copying brains, but I didn’t get the connection with consciousness. Why has consciousness anything to do with the possibility or impossibility of brain cloning?
    Thanks for the very interesting post.

  52. JuroV Says:

    Scott #20: This is not only about “knowing initial condition perfectly”. Also infinite precision arithmetic and perfect measurement of any and all external influences.

  53. mjgeddes Says:

    I’ve thought of a good argument you could have used that might have actually convinced old Penrose, Scott.

    What I would have done is simply point out that in his (Penrose’s) own field of physics, no one seriously suggests that perfect certainty can be achieved. What happens is a combination of theory and empirical data, from which inferences to the best explanation (abduction) and generalizations (induction) are made. Simply ask Penrose: why should math be any different from physics in this regard? Why not just accept that math is a combination of a priori AND empirical-type (numerical) investigation/guesswork? Once you see that an uncertain (empirical) element gets slipped in, the Gödelian argument quickly collapses.

    As to your own ideas about consciousness, very interesting indeed, very clever! I like the idea of trying to establish a connection to the wider environment (boundary condition), and you do sort of have the same idea I did of transmission of information between different levels of abstraction (link between micro- and macro- worlds)! I’m not really buying the specifics of it though 😉

    As I said in the other thread, I think free will is preserved even with duplicate copies of you predicting what you’d do next IF AND ONLY IF the software doing the prediction is NECESSARILY conscious. That is to say, there must be no computational short-cut whereby someone can create an unconscious simulation of you. In that case, I agree, free will would be empirically falsified. In the case of CONSCIOUS software that is predicting your actions, well, I don’t see that as falsifying free will, because there is still an actual instantiation of a person there (namely, you).

    The teleportation thing is definitely rather puzzling. What we really need is a proper scientific theory of consciousness that is generally accepted as correct. I think once we actually have that, these mysteries will be greatly clarified. The problem is that philosophy by itself can’t really resolve anything, it can only chart the various possibilities and give an indication of what general direction to go in. Clear specific answers really need science.
    The Hansonian/Yudkowskian answers to the teleportation/copying puzzles have a better-than-even chance of being correct. (Maybe even in the 70%+ range of confidence.) But these are philosophical ‘plausibility’ arguments, and that level of certainty is nowhere near high enough for me to be comfortable risking my life getting ‘uploaded’ or stepping into one of those ‘teleporters’ 😉

    Let’s wait for a proper scientific theory of consciousness and see what it has to say. I think that once we do have a proper scientific theory, these puzzles will be cleared up quite quickly.

  54. Shea Levy Says:

    It’s possible this is addressed above, as I’ve only skimmed through the comments, but: both here and in your Quora answer you seem to suggest that anyone disagreeing with the computationalist framework who doesn’t present an alternative, at least to the level of a plausible scheme for deciding what is or is not conscious, isn’t doing their job (and that Penrose is remarkable here because he actually tries to answer that question). But even if you set aside the IMO reasonable objection that you don’t need to know a right answer in order to reject a wrong one, this privileges computationalism significantly, as in general computationalists can’t answer that question either! Unless they’re integrated information theorists, but then I’d point them to this great blog post by Scott Aaronson a while back about their theory.

  55. wolfgang Says:

    There is a simple argument in favor of Penrose’s “gravitized quantum mechanics”:

    The wavelength of macroscopic objects is typically much smaller than the Planck length, so it is natural to switch from a quantum description to a classical description.
    And if one thinks that the Copenhagen reduction of the wavefunction is associated with the switch from objective (wavefunction) to subjective (conscious observation) then it is natural to conclude that “gravitized quantum mechanics” has something to do with consciousness.

    This is Penrose’s argument in a nutshell as I understand it and it is hard to argue with as long as we do not have a full understanding of quantum gravity.

  56. Philip Calcott Says:

    Hi Scott,

    Awesome blog and awesome article. I have put in a request to reincarnation central to come back as a small section of your frontal lobe, but I’m not holding out much hope.

    I loved all your arguments, and the engaging way you present them, but (you knew there was going to be a but) I would like to take issue with your comment that the burden to show what it is that might make the brain “relevantly different from a digital computer” is on the “brains are different” team.

    I don’t see it this way. The issue is that no one has come up with any explanation of what consciousness actually is, or how it could possibly emerge from the interaction of a set of Standard Model particles. What actually is the “movie in my head”, or the “redness of red”, and how do I experience things such as love? It seems to me that if we don’t even have a clue how to answer these questions about what human consciousness is, then we would do well to be humble when proposing that “of course, this property (we completely don’t understand) will be shared by a silicon-based model of our brains”.

    So, my contention would be that the onus is on the AI world to show that this thing we completely don’t understand will also be shared by a cool future AI-bot. If this can’t be shown then let us rather take a more humble approach, and admit that we are really in the dark as to what this whole consciousness thing is and so maybe we are in the dark about whether the ultimate AI will share it.

    Thoughts?

  57. Sandro Says:

    ShardPhoenix:
    > A bit of a side point, but I feel like the kind of hand-waving, traditional-philosophical style Ethics is a bit pointless now that we have things like game theory to make social-signaling-and-bargaining-and-implicit-threatening (aka morality) much more precise.

    Simply declaring that game-theoretic bargaining is “morality” is a pretty big leap. At best, you can claim that it might explain ethics-seeking behaviour, but that’s not the same as it being ethics. Analogously, you might use some psychological theory of particle-seeking behaviour to explain physicists’ obsessive desire to build particle accelerators, but it would be quite an astonishing leap to then claim that said theory therefore is particle physics.

    > I agree with what you say here (though intentionally designing such systems is not easy), but I still find the concept of ethics/morality as typically understood to be too vague to be useful in an intellectual/academic context (as opposed to when you’re trying to stir people to immediate action).

    Of course it’s vague; morality in philosophy is still literally undefined, with significant debate over what classifies as a moral question, and whether moral questions even have truth values at all!

    This debate probably won’t generally interest science-minded people, because a moral fact, if it exists, probably wouldn’t explain natural facts any better than a natural fact would. But they’re not intended to! Natural facts explain other natural facts, and moral facts, if they exist, would explain other moral facts.

    And if moral facts don’t exist, then morality might simply be evolutionary game-theoretic ethics. But many have argued convincingly, IMO, that moral facts exist.

  58. Scott Says:

    Darien S #49: In my view, what’s whimsically called the “free will loophole” in the Bell experiment has essentially nothing to do with free will in the human sense. It’s an unfortunate choice of term. (And I’m not concerned here with how Bell put things in his original papers; that’s a history-of-science question, whereas I’m talking about what’s true.)

    I think the issue is the same if, instead of Alice and Bob using their free will (or “free will”) to choose the measurement settings, we used computers with access to random number sources, as actually happens in Bell experiments. There, again, the trouble is that no matter how the random numbers were chosen, no matter how unrelated (and spatially distant) Alice’s and Bob’s randomness-generation procedures were from each other, no matter how unrelated they both were from the “entangled”-but-not-really-entangled particles, if you believe in superdeterminism then you believe the procedures always had to conspire across the universe to generate challenges that for some unexplained reason were easy enough for the particles to pass.

    So let’s leave Alice and Bob out of this! For two separated mechanical random-number generators to conspire in such a way is a thousand times crazier and more nonlocal than anything in quantum mechanics, which is the theory that we were trying to “tame” in the first place.

    One can be more technical about it: with quantum mechanics, there’s a clear, precise explanation for why you can do only this and no more—for instance, why you can win the CHSH game with 85% probability, but you can’t send faster-than-light signals, or for that matter, even win the CHSH game with 86% probability. With superdeterminism, by contrast, this is completely unexplained. You posit correlations that would let superluminal signals appear to be sent, the CHSH game appear to be won with certainty, etc. etc., and then you just declare by fiat that they only let you win CHSH 85% of the time, because you cribbed that correct answer from quantum mechanics, the theory you were trying to replace.

    In summary, superdeterminism is a miserable failure as a scientific explanation, for reasons having nothing to do with free will.
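    The 85% figure above is Tsirelson’s bound, cos²(π/8) ≈ 0.8536, and it can be checked numerically: share a Bell pair, measure at the textbook-optimal angles, and tally the CHSH win probability. A small illustrative sketch:

```python
import numpy as np

# the shared entangled state (|00> + |11>)/sqrt(2)
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)

def basis(theta):
    # projective measurement basis rotated by angle theta
    return [np.array([np.cos(theta), np.sin(theta)]),
            np.array([-np.sin(theta), np.cos(theta)])]

# textbook-optimal measurement angles for the CHSH game
alice = {0: 0.0, 1: np.pi / 4}
bob = {0: np.pi / 8, 1: -np.pi / 8}

win = 0.0
for x in (0, 1):          # Alice's challenge bit (uniform)
    for y in (0, 1):      # Bob's challenge bit (uniform)
        for a, va in enumerate(basis(alice[x])):
            for b, vb in enumerate(basis(bob[y])):
                p = abs(np.kron(va, vb) @ phi_plus) ** 2
                if (a ^ b) == (x & y):    # CHSH winning condition
                    win += 0.25 * p

print(win)   # cos^2(pi/8) ~ 0.8536, Tsirelson's bound
```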

  59. Scott Says:

    mjgeddes #53:

      I’ve thought of a good argument you could have used that might have actually convinced old Penrose, Scott.

      What I would have done is simply pointed out that in his (Penrose’s) own field of physics, no one seriously suggests that perfect certainty can be achieved.

    Return to Go, do not collect $200. 🙂 Roger Penrose is a mathematician, one whose main interest is mathematical physics. He’s the Emeritus Rouse Ball Professor of Mathematics at Oxford. His degrees are also in math (or sorry, maths). Penrose is also probably the best-known living exponent of mathematical Platonism.

  60. Scott Says:

    JuroV #52:

      This is not only about “knowing initial condition perfectly”. Also infinite precision arithmetic and perfect measurement of any and all external influences.

    OK, let’s subsume the initial conditions and any external influences into “knowing the boundary conditions perfectly,” which we both agree is a central issue.

    If we know the boundary conditions, then infinite-precision arithmetic is not additionally required, if we assume the physical Hilbert space is finite-dimensional, as Bekenstein tells us it has to be. That was the main point of my comment.

  61. Scott Says:

    Shea Levy #54 and Philip Calcott #56: You both take issue with my contention that the burden is on the anti-AI camp to articulate a criterion that separates the brain from a digital computer. As Philip put it:

      So, my contention would be that the onus is on the AI world to show that this thing we completely don’t understand will also be shared by a cool future AI-bot. If this can’t be shown then let us rather take a more humble approach…

    Consider the following fictional scenario: European explorers encounter Native Americans for the first time. The explorers say: wow, these people have language, agriculture, complex social structure, all the same sorts of things we have, modulo unimportant details. They’re probably also conscious like we are! But then the ship’s philosopher-in-residence responds:

      No, you see, we don’t understand consciousness at all. Therefore, if someone claims that native people are conscious as Europeans are, the onus is on that person to prove their claim. If this can’t be shown then let us rather take a more humble approach…

    In this case, I hope the problem is obvious: what masquerades as humility is actually extreme arrogance, because it imposes an isolated demand for rigor. Why does consciousness need to be proven only in the case of the natives? Why is its presence assumed without argument for the Europeans?

    The strong-AI position doesn’t face the same burden, because it’s maximally simple and also maximally generous: anything that exhibits intelligent behavior is assumed to be associated with consciousness. No entity needs to prove its consciousness, beyond what we already do in everyday life that leads us to regard each other as conscious. Or as Turing put it in “Computing Machinery and Intelligence”:

      A is liable to believe “A thinks but B does not” whilst B believes “B thinks but A does not.” Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.

  62. BLANDCorporatio Says:

    Human PI: “Can you write a sonnet?”
    Robot: “Can you?”

    I agree with a couple of commenters (such as what I take to be the gist of tzm and Gabe Eisenstein) that many of the philosophical arguments about consciousness “try too hard”.

    For the most part, we humans do not contemplate the consistency of formal systems (and when we do, we don’t necessarily find the same conclusions), do not exhibit creativity or intelligence in most of our daily doings, and are fairly predictable in our behavior.

    But despite all that, and despite whatever teenage edge-lords might claim, a standard of morality where it is ok to kill most people because they are ‘mindless sheep[le]’ has not gained widespread open acceptance. (It may be okay to kill THEM sheep, but US sheep are sacrosanct.)

    If consciousness is to be interpreted as ‘whatever that X-factor is for which people need a word like “consciousness” in moral deliberations’, the bar is much lower than many intellectually interesting properties.

    And to my intuition, “non-copyability” is trying too hard also.

    irt. Scott: “Still, one could look at it this way: all our views about personal identity, free will, consciousness, psychophysical parallelism, etc. seem to require that our minds can’t be freely copied, predicted, and rewound like computer code.”

    Speak for yourself 😉

    The problem with appeals to intuition, though, is that they don’t necessarily cross over. The things you see as strange if minds can be copied don’t seem strange to me. There’s not much difference between blackmailing an AI by threatening to torture 1000 exact copies of itself, 1000 near-copies that would operate mostly the same for significant amounts of time (and why shouldn’t such near-perfect copies be possible for the human brain, QM be damned?), or 1000 other random people the torturer happened to round up.

    One reason is that I don’t buy the “you should consider it likely to be one of those 1000 copies” claim. I’d like some specifics on how the copies are made, because I suspect that if the process is such that the original AI has any doubts about being in the torture group at the end, then the process presupposes a kidnapping before it starts. No cloning necessary, and just as applicable to non-copyable minds.

    To explain that a bit, see the “fax a copy of me to Mars” example. My brain is on Earth at the beginning of the process, stays on Earth throughout, and I have no reason to suspect my consciousness is suddenly going to jump or split. I’ll still feel as if I’m on Earth (regardless of whether a more or less similar individual now runs around on Mars). Conversely, if the me on Earth is destroyed in the copying, then I’m gone, however similar the Mars one is.

    In other words, same intuition as behind passing fire around. I can have a fire here, and build a microscopically accurate copy over there; or I can take a flaming torch and use it to ignite another. In both cases I find it easy to say the original fire is still around. Or I can have a burning glass of oil, pour it into a large puddle, from which I then collect 1000 glasses of burning oil; in this case, I find it intuitive to say the original flame is gone and replaced by its copies.

    Maybe this is the wrong intuition. But as long as we’re talking what’s intuitive/weird not what’s accurate in the universe … 😛

    Cheers.

  63. wolfgang Says:

    @BLANDCorporatio

    >> a standard of morality where it is ok to kill most people because they are ‘mindless sheep’ …

    I would use a very simple utility function and argue that anybody and anything smart enough or unique enough should not be killed or destroyed, whether or not there is consciousness involved, for purely selfish reasons.

    A robot smart enough to solve the riddles of quantum gravity should not be turned off whether she is conscious or not, for the simple reason that we want to know her answers.

    And an antique temple should not be destroyed because its unique history provides long term aesthetic pleasure to us, even if a hotel at the same place might generate more cash in the short term; but of course the temple is not conscious.

  64. Flavio Botelho Says:

    Following Elon Musk’s claim, and you being among the Singularity crowd, what do you think is the probability of us living in a simulation?
    Do you think that living in a simulation would probably entail something along the lines of an Extended Church-Turing Thesis (the original one is mostly bust)?

  65. Scott Says:

    Flavio #64: I think the probability that we’ll get convincing empirical evidence that we’re living in a computer simulation (i.e., something some entity purposefully created) is close to zero—and I’d be willing to bet on that, if Dana still let me bet on such things!

    If, on the other hand, it’s the kind of simulation that’s so perfect that we can never get evidence that it is a simulation, even in principle, then I regard the question as too ill-defined even to assign a probability. (We might as well debate “the probability that the goddess Gaea created other universes besides this one.”)

    See also my answer to John Horgan’s related question (scroll down to question #9).

    Arguably, this question is orthogonal to the question of the Church-Turing Thesis, since even if our universe was a simulation, who’s to say that the simulating gods/aliens wouldn’t have access to super-Turing hypercomputers? And conversely, even if we never get any evidence that our universe is being “run” for any computational purpose, we could get (and I’d say, have gotten) very good evidence that it satisfies the Church-Turing Thesis.

  66. Hannah Says:

    Hi! I believe de Sitter space is the one with the positive curvature, and is spatially finite and has a bounded maximum entropy; anti-de Sitter space is the hyperbolic one that is spatially infinite and has unbounded maximum entropy. (The AdS boundary is at infinity.)

    I’m not an expert by any means, but I’ve been checking some references and I’m pretty sure that is how it goes. For example, “Disturbing Implications of a Cosmological Constant,” 2002, Lisa Dyson, Matthew Kleban, and Leonard Susskind.

    So if we live in a positively curved de Sitter space, there is a bound on the maximal entropy and we are not conscious in this sense. But also, I thought the latest evidence was that we live in a flat universe within experimental error, with the absolute value of Ω_K being less than 0.005. (“Planck 2015 results. XIII. Cosmological parameters,” 2015, Planck Collaboration: P. A. R. Ade et al.)

  67. Scott Says:

    Hannah #66: Thanks! It was entirely plausible to me that I was using terminology incorrectly, but I just looked up the de Sitter universe, and that was indeed what I meant.

    Exactly as you say, in a dS universe subject to the Bekenstein bound, the amount of entropy accessible to any one observer is finite, and is set by the dS radius. (In our universe, assuming the dark energy is constant so that the future is really dS, the bound is about 10^122 qubits.)

    Crucially, however, the dynamics within this 10^122-qubit causal patch is non-unitary! I.e., our patch is an open quantum system, in which a light ray can escape to infinity, so that it’s no longer accessible to us even in principle. So you can get irreversible decoherence, measurements that can never be undone. And on the picture I suggested in the post, that’s what’s relevant, not whether the number of qubits accessible to a given observer is finite or infinite.

    Incidentally, while it’s true that AdS space is hyperbolic and infinite, my understanding is that a light ray can reach the AdS boundary and bounce off it in finite time (even though anything traveling more slowly than light would take infinitely long to reach the boundary).

    Obviously experts should correct whatever I got wrong.
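    The “about 10^122 qubits” figure can be sanity-checked from the de Sitter horizon entropy S = A/(4·l_p²), with the horizon radius set by the asymptotic Hubble rate. A back-of-envelope sketch (my rounded constants; order-of-magnitude only):

```python
import math

# all inputs are rounded published values; this is only an
# order-of-magnitude check, not a precision calculation
H0 = 67.7e3 / 3.086e22   # Hubble constant in s^-1 (67.7 km/s/Mpc)
Omega_L = 0.69           # dark-energy fraction today
c = 2.998e8              # speed of light, m/s
l_p = 1.616e-35          # Planck length, m

H_dS = H0 * math.sqrt(Omega_L)   # asymptotic (de Sitter) Hubble rate
R_dS = c / H_dS                  # de Sitter horizon radius, m
S = math.pi * R_dS**2 / l_p**2   # horizon entropy A/(4*l_p^2)

print(f"{S:.1e}")   # ~3e122, i.e. on the order of 10^122
```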

  68. BLANDCorporatio Says:

    irt. wolfgang:

    I would agree with you. I think a lot of moral debates are informed more by practicality and usefulness than any lofty principles. But I would argue (or, well, tbh hope :P) that there’s more to morality than that.

    Cheers.

  69. Hannah Says:

    Whoa, really? Non-unitary? That’s neat! In a de Sitter space, you really have a non-unitary evolution operator? Does it work like an observation, where it’s random, or does it just deterministically zero out part of the state space?

    I was thinking that you meant that you wanted to be in a universe where entropy was constantly increasing at a high rate, so that the accessible macrostates would each have a fairly well-defined macro-history.

  70. unapologetically_procrastinating Says:

    The Church-Turing Thesis, I said, is so versatile that the only genuine escape from it is to propose entirely new laws of physics.

    I don’t think I understand this properly. Do you mean something like: we can come up with postulates that, treated axiomatically, let us derive the Church-Turing Thesis while modeling portions of the physical world just as well as, say, classical mechanics or QM? Wouldn’t this mean that the thesis becomes like Ptolemy’s epicycles in discussions about consciousness?

    But until that happens, I’m unwilling to go up against what seems like an overwhelming consensus, in an empirical field that I’m not an expert in.

    In my limited studying of neuroscience, it seems neuroscientists do not find the computational view useful. Gary Marcus wrote an op-ed in the NYT titled “Face It, Your Brain Is a Computer” precisely decrying this situation. I quote:

    Many neuroscientists today would add to this list of failed comparisons the idea that the brain is a computer — just another analogy without a lot of substance. Some of them actively deny that there is much useful in the idea; most simply ignore it.

    As an example: in the neuroscience of learning and memory, we currently simply have no clue about how to connect LTP (the most studied possibility) to learning in vertebrates. (I think this highlights the danger of pop-science, which you are so well aware of.)

    A final question:

    Suppose we simulate addition, say, in an FSM. Can what it does be called ‘adding’ once our species has disappeared? Could, say, certain stalactites’ dripping be ‘qadding’ for some long-dead alien species?

    Illocutionary disclaimer: re-read the post and I hope it doesn’t sound combative since I am a big fan.

  71. fred Says:

    Whether a “brain”/”mind” can be cloned sounds a bit like hair splitting to me since two separated systems could just happen to be “identical” by chance anyway, no? There’s nothing preventing this, even if very unlikely.

  72. Scott Says:

    Hannah #69:

      Whoa, really? Non-unitary? That’s neat! In a de Sitter space, you really have a non-unitary evolution operator? Does it work like an observation, where it’s random, or does it just deterministically zero out part of the state space?

    It’s probably less amazing than you think. 🙂 All that happens is that a photon escapes past your dS boundary, so then if you want to write the density matrix for the universe inside your boundary, the standard rules of quantum mechanics say that you need to take a partial trace over all the possible states of the photon. It’s exactly like if you’d lost track of the photon in a lab experiment, except that in this case, cosmology tells us that the photon can never again be recovered, even using arbitrarily-advanced technology of the remote future.
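    The partial-trace step just described can be made concrete with a two-qubit toy model (one qubit inside the horizon, entangled with one escaped photon). A minimal illustrative sketch:

```python
import numpy as np

# toy model: one "inside" qubit entangled with one photon that has
# escaped past the horizon, joint state (|00> + |11>)/sqrt(2)
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())   # pure-state density matrix, Tr(rho^2) = 1

# trace out the escaped photon (second qubit): reshape to
# rho[i, k, j, l] with (i, j) the system indices and (k, l) the
# photon indices, then sum over k = l
rho_sys = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))

print(rho_sys)                            # [[0.5, 0], [0, 0.5]], maximally mixed
print(np.trace(rho_sys @ rho_sys).real)   # purity drops from 1 to 0.5
```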

  73. Scott Says:

    fred #71:

      Whether a “brain”/”mind” can be cloned sounds a bit like hair splitting to me since two separated systems could just happen to be “identical” by chance anyway, no? There’s nothing preventing this, even if very unlikely.

    I’m imagining a new bumper-sticker slogan, to go along with “you can have my gun when you pry it from my cold, dead hands”: “you can have my identity when you guess it by chance” 😉

  74. fred Says:

    Scott #65

    “I think the probability that we’ll get convincing empirical evidence that we’re living in a computer simulation is close to zero”

    If the simulation relies on limited resources and “multi-threading”, then it’s possible to imagine that one part of the simulation could affect other parts of the simulation in some observable way.

    E.g., we build a massive quantum computer, and when we turn it on we see that physics in other parts of our observable universe suddenly runs in a dumbed-down mode.
    In current video games using adaptive physics engines and renderers, when the local computation requirements around the player increase, distant processes degrade (tick rate or level of detail) in order to keep the overall simulation cost constant.
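
    The budgeting described above can be sketched as a toy scheduler (an illustration of the general idea only, not any particular engine’s API):

```python
# Toy fixed-budget scheduler: a constant per-frame compute budget is spent
# on entities nearest the player, so distant entities get a degraded tick
# rate. Names and numbers are made up for illustration.
def tick_rates(distances, budget=10.0, full_cost=1.0):
    rates = {}
    remaining = budget
    for eid, dist in sorted(distances.items(), key=lambda kv: kv[1]):
        cost = min(full_cost, remaining)
        rates[eid] = cost / full_cost   # 1.0 = full fidelity, 0.0 = frozen
        remaining -= cost
    return rates

# 15 entities at distances 0..14; only the nearest 10 fit in the budget
rates = tick_rates({f'e{i}': i for i in range(15)})
print(rates['e0'], rates['e12'])   # 1.0 0.0
```

    Turning on a “massive quantum computer” would then correspond to one region’s cost spiking, forcing the rates elsewhere toward zero.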

  75. Scott Says:

    fred #74: Yes, one can certainly imagine such experiments. What I assign a probability close to 0 is that running them will give anything other than a null result. 🙂

  76. fred Says:

    Scott #73

    Haha, well, I was thinking more along the lines of the “exclusion principle” – two electrons can’t occupy the same quantum state simultaneously. There’s nothing in QM preventing two identical brains from existing in the same universe.
    But with or without cloning, identical systems would diverge very quickly anyway since their surroundings are always different by definition.

  77. JimV Says:

    Apologies to previous commenters if my points have already been discussed, but after reading a long, complex (but fascinating) post I am not up for reading 67 doubtless equally complex and interesting comments, so here goes:

    Two minor thoughts:

    1) A general rebuttal to the need for specific algorithms for an AI (or human brains) to follow is, I think, provided by the evolutionary algorithm (random ideas, selection criteria, memory; I have detailed these components numerous times, so won’t repeat that here) (shorter form: trial and error). It is the naturally occurring algorithm which produced our amazing nano-tech brains in the first place. And yes, the development of this algorithm in biological nervous systems was a survival trait.

    2) “For imagine someone had such fine control over the physical world that they could trace all the causal antecedents of some decision you’re making. Like, imagine they knew the complete quantum state on some spacelike hypersurface where it intersects the interior of your past light-cone. In that case, the person clearly could predict and clone you!” is not clear to me. If quantum events are random, then knowing all the random events which happened in the past will not allow anyone to predict what will happen in the future.

    Example: suppose there were a (very) tiny piece of radioactive material somewhere in the brain, and a (very) tiny nano-tech Geiger counter monitoring it, and further suppose that your brain contains an algorithm such that, whenever an immediate decision is needed among two or more alternative courses of action (from which there is no clear choice), it uses the number of Geiger-counter clicks between heartbeats to choose the one to implement.

    (I actually think there could be a similar mechanism in our brains – unpredictability being a survival trait in some circumstances – but would guess it is actually based on the number of nerve cells being triggered by external input, such as the number of photons hitting retinas. Most game programs use some such code.)
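
    The three components in JimV’s point 1) (random ideas, selection criteria, memory) can be sketched as a minimal trial-and-error loop; the “onemax” fitness below is a made-up stand-in for any real selection criterion:

```python
import random

# Minimal evolutionary loop: random variation + selection + memory.
def evolve(fitness, genome_len=20, generations=500, seed=0):
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(genome_len)]  # random idea
    for _ in range(generations):
        child = best[:]                           # memory: keep the best so far
        child[rng.randrange(genome_len)] ^= 1     # random variation
        if fitness(child) >= fitness(best):       # selection criterion
            best = child
    return best

# Toy criterion: maximize the number of 1-bits ("onemax")
best = evolve(fitness=sum)
print(sum(best))   # climbs toward genome_len (20)
```

    Nothing in the loop “knows” what a good genome looks like; the selection criterion plus memory is enough to accumulate improvements from blind variation.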

  78. fred Says:

    Scott #75

    OK, what about this: could it turn out that the big discrepancies we see in large-scale physics/cosmology (dark energy/dark matter, etc.) will one day be explained by the underlying simulation of our universe being “imperfect”?

  79. Scott Says:

    unapologetically_procrastinating (great name BTW) #70:

      I don’t think I understand this properly. Do you mean something like: we can come up with postulates from which, treated axiomatically, we can derive the Church-Turing Thesis, as well as model portions of the physical world just as well as, say, Class Mech or QM?

    No, I just meant that all the fundamental theories of physics we’ve known—Newtonian mechanics, Maxwell’s electrodynamics, special and general relativity, quantum mechanics, quantum field theory—have had the property that they appear to be simulable on a computer (at least, a computer with a random number generator) to any desired precision. And to whatever extent there have been caveats to that statement, they’ve been because of aspects of one theory that are then revealed to be idealized approximations by the next theory. E.g., in Newtonian mechanics, it’s at least conceivable that you could build a hypercomputer that solved the halting problem in finite time by orbiting point masses around each other faster and faster, but then special relativity rules that possibility out. Or, as the example par excellence, there are many proposals today for hypercomputers, but all of them involve unbounded amounts of energy or unbounded division of time and space, and for that reason, they’re all excluded by the tiny amount we know so far about quantum gravity (e.g., the work of Jacob Bekenstein in the 70s and 80s).

    A priori, it’s possible that there could’ve been an experimentally-confirmed physical theory that robustly predicted that we could violate the Church-Turing Thesis—much like quantum mechanics robustly predicts we can violate the Polynomial-Time Church-Turing Thesis. But that hasn’t been our experience.

      In my limited study of neuroscience, it seems neuroscientists do not find the computational view useful.

    It probably depends on which neuroscientists (there’s a whole subfield of computational neuroscience…). But to whatever extent what you say is true, I’d explain it in terms of differing levels of description—just like I said in the post. E.g., neuroscientists probably also don’t find it useful to view the brain as an assemblage of leptons and quarks—but that doesn’t change the fact that the brain is an assemblage of leptons and quarks! In exactly the same way, even if neuroscientists don’t find it useful to think in terms of the Church-Turing Thesis, the burden is still on you to say something new about physics, if you think the Church-Turing Thesis doesn’t apply to the brain.

      Suppose we simulate addition, say, in an FSM: can what it does be called ‘adding’ after our species has disappeared?

    Sure, why not?

      Could, say, the dripping of certain stalactites be ‘qadding’ for some long-dead alien species?

    Aren’t stalactites usually a mere 50,000-100,000 years old or something? They’re young enough that the aliens who needed the qaddition done could’ve interacted with early humans and shown up in cave paintings… 😉

  80. Philip Calcott Says:

    Hi Scott,

    I don’t think your Native American comparison is a very good one. We believe that all humans are conscious for the very simple reason that they all have brains built on the very same principles that ours are! It is no stretch at all to suggest that any human is conscious (unless you want to play solipsistic games). In this case the question of what consciousness is and how it arises is not relevant: you are a human with a brain like mine; whatever makes me conscious almost certainly makes you conscious as well. The onus is clearly on anyone who wants to suggest another human is not conscious to prove it.

    However, it is a huge stretch to apply this to a totally different system (silicon) trying to emulate a process we have no understanding of (consciousness). This stretch relies on lots of assumptions about how information processing relates to consciousness – assumptions that seem fairly weakly anchored, considering we don’t know what the heck consciousness is or how it arises.

    BTW, totally off topic: I played your “guess the f or d” game. You might think my approach was cheating – I rolled a die and submitted my input based on the result. After quite a few plays the prediction algorithm was sitting at a 37% prediction rate. It’s obviously better at second-guessing humans than dealing with real randomness 🙂

  81. Vadim Kosoy Says:

    Scott #39: Thanks for answering!

    2. If you assume a notion of proto-consciousness, why not prefer the simpler hypothesis that consciousness = proto-consciousness?

    3. “a believer in consciousness could always turn the tables, and say to the eliminativist: even if consciousness were a fundamental feature of the world, it’s still plausible that you’d be giving all the same arguments that it wasn’t, so how can your arguments possibly be persuasive?”

    This seems similar to the theist who says “maybe I haven’t proved god exists but none of your arguments prove god *doesn’t* exist.” If we can manage equally well with concept X and without it then Occam’s razor says we should do without.

    “all our views about personal identity, free will, consciousness, psychophysical parallelism, etc. seem to require that our minds can’t be freely copied, predicted, and rewound like computer code”

    This is a strong statement. At the very least, *my* views on these subjects don’t require those assumptions 🙂

    “To me, it would be absurd not to ask whether the things fit together”

    This is certainly a sentiment I empathize with.

    4. “If the brain has a ‘clean digital abstraction layer’ that captures everything cognitively relevant, and that notices the lower-level stuff purely as thermal noise, then the picture is wrong.”

    To the best of my understanding, this is indeed the leading hypothesis in neurobiology which I therefore find most likely. It is commendable that you’re willing to put your neck on the line like this (unless you mean something more restrictive than what I think you mean?)

    Nevertheless, let’s assume for the sake of the argument that physics looks like you want it to look. In this world, would nothing convince you quantum consciousness is wrong? Imagine that the world would already be populated by some sort of artificial humans that violate your consciousness criteria but that are a part of society as ordinary as “wild type” humans. In such a world, would you still be tempted to articulate a theory of consciousness which rules that a large portion of the world’s population (consisting of normative, often likable and even admirable people) is unconscious?

  82. Scott Says:

    fred #78:

      could it turn out that the big discrepancies we see in large-scale physics/cosmology (dark energy/dark matter, etc.) will one day be explained by the underlying simulation of our universe being “imperfect”?

    Peter Shor loves to joke that the solution to quantum gravity is that whoever coded up the universe simply never figured out how to make QM and GR mesh with each other, so if you did an experiment that depended on both theories, the universe would crash.

    Personally, though, when I’m given a choice between

    (a) God, or the godlike programmer-aliens who created our universe, were too dumb to figure out how to make something work,

    versus

    (b) We’re too dumb to figure out how it works,

    my probability for (b) is something like 1 – 1/BB(BB(10000)).

  83. fred Says:

    Scott #82

    Isn’t it a bit strange to claim that computability is the most fundamental property of our universe and also think that it’s very unlikely that our universe could be a simulation?

    It doesn’t require the assumption of any god-like/perfect beings, just the observation that we, as flawed as we are, are routinely running Turing complete computations (often world simulations).

    It’s merely the observation that a Turing machine can emulate any other hardware, so there could be many layers of “reality”, existing at various levels of abstraction, and there’s no reason to think we’re “special” and are at the very bottom of this recursion.

  84. fred Says:

    Scott #82

    Forgot to add:

    The evidence also seems to suggest that a universe that’s computable naturally assembles itself into more and more powerful Turing machines.

    E.g., the Earth 4.5 billion years ago was a molten blob of lava… fast forward… and it’s now covered with billions of computers.

  85. Jay Says:

    Scott #39

    >all our views (…) require that our minds can’t be freely copied, predicted, and rewound like computer code.

    This seems the root idea and motive for “The ghost”, but it’s actually unclear to me (maybe because I don’t share this intuition) whether you think these *are* common views or whether you think these *should be* common views. Why do you think these are or should be common views?

  86. Jim Kukula Says:

    Here is my response to Searle’s argument: http://interdependentscience.blogspot.com/2009/07/machine-intelligence.html

  87. Darien S Says:

    >”if you believe in superdeterminism then you believe the procedures always had to conspire across the universe to generate challenges that for some unexplained reason were easy enough for the particles to pass.”

    That’s the thing: the wording by Bell should’ve been not that everything being determined erases the issue, but that everything being determined and there being a spooky conspiracy erases the issue. I guess he wasn’t quite clear in his wording, as it makes it seem that full determinism alone, without conspiracies, is superdeterminism and that it resolves the issue.

    >In summary, superdeterminism is a miserable failure as a scientific explanation, for reasons having nothing to do with free will.

    Bell described superdeterminism as simply determinism applying to everything. The only escape from determinism is true randomness. And true randomness simply seems nonsensical.

    So whether it explains the issue or not, it seems difficult to believe that reality doesn’t exhibit deterministic mechanisms at its root. If we need to add nonlocality or something else to explain the experiment, then that may be the case. But bringing true randomness into the mix just seems like an unacceptable addition.

    It’s not only adding true randomness: you also need to add a fundamental difference between past, present, and future, and a mechanism that transitions between them, something that turns the uncertain future into the labile present and then into the unchanging past. You need absolute simultaneity, the existence of the present, for this, something whose existence some consider relativity does not allow.

    If past, ‘present’, and future are similar in nature, then just as the past is unchanging, so too are the ‘present’ and the future.

    >Flavio #64: I think the probability that we’ll get convincing empirical evidence that we’re living in a computer simulation (i.e., something some entity purposefully created) is close to zero—

    Wolfram showed that extremely simple programs can produce extraordinary complexity. There have been examples of naturally occurring unlikely things, like natural nuclear reactors, so given that extraordinarily simple programs can produce extraordinary complexity, the possibility that a naturally occurring simulation arose in some natural medium is not zero.

    If it were a naturally occurring simulation, then depending on the details of the implementation, it might be possible to modify the fundamental rules. At least it would open the greatest potential for unbounded technological progress, if existence, if reality itself, becomes malleable.

    > It’s exactly like if you’d lost track of the photon in a lab experiment, except that in this case, cosmology tells us that the photon can never again be recovered, even using arbitrarily-advanced technology of the remote future.

    I’d say that depends on expansion of space continuing or accelerating. If brains can pop out of the vacuum, then something that reflects the photon can also pop out of the vacuum.

  88. Adam Says:

    If I deliberately overcorrect for “randomness doesn’t look random” biases I can keep the Oracle’s accuracy consistently below 50%. It’s an interesting challenge.
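
    The kind of predictor Adam is beating can be sketched as an n-gram frequency model (a hypothetical reconstruction of the “Aaronson Oracle” idea; the actual page’s code may differ):

```python
from collections import defaultdict

# Sketch of an n-gram predictor: track which key ('f' or 'd') most often
# followed each length-5 history, guess accordingly, and keep score.
class Predictor:
    def __init__(self, n=5):
        self.n = n
        self.counts = defaultdict(lambda: {'f': 0, 'd': 0})
        self.history = ''
        self.hits = self.total = 0

    def predict(self):
        c = self.counts[self.history]
        return 'f' if c['f'] >= c['d'] else 'd'

    def observe(self, key):
        self.total += 1
        if self.predict() == key:
            self.hits += 1
        self.counts[self.history][key] += 1
        self.history = (self.history + key)[-self.n:]

p = Predictor()
for key in 'fd' * 10:   # a maximally non-random "alternator"
    p.observe(key)
print(p.hits / p.total)   # well above 0.5 once the pattern is learned
```

    Against genuine coin flips such a model stays near 50%; deliberately playing the opposite of whatever your own recent history suggests, as Adam describes, drives it below.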

  89. JimV Says:

    On a long walk after my previous comment, a couple more thoughts occurred to me (so naturally I can’t resist posting them, sorry).

    1) The “Aaronson Oracle” is not, in my opinion, a good way to test for an intrinsic random function in human brains, because randomization is presented as a specific goal (be as random as possible in your choices), so the next choice will be a function of previous choices (e.g., four of my last five choices were “d”, so next I must choose “f”). In a way, it’s like telling people not to think about hippopotamuses. You need to disguise the purpose of the test (e.g., make it about speed of choosing rather than randomness) and present the choices in a way that is unbiased. I thought of a possible way to do this on my walk, but I’m sure everyone here can do so as well or better, so I won’t describe it.

    2) I’m probably missing something in Dr. Penrose’s argument, but if he is saying computers can’t be conscious because humans do it by using gravitized QM in microtubes, isn’t the answer to make computers with gravitized QM microtubes? If evolution could do it, why can’t we (in principle)? Or maybe he means we are going about it in the wrong way (by not using gravitized microtubes).

    As I’ve said many times before, I see consciousness as similar to Windows or other operating systems, which take external inputs, transfer them to internal routines without knowing what those internal routines are doing, and receive outputs from those routines which they may then transmit externally. That is, much of the mystery shrouding consciousness is due to the fact that there are no nerves which monitor the brain internally, and thus no way to know what is going on in the background. As to how it feels, or how it would feel to a computer, who cares? (Not me.)

    Anyway, as I forgot to say last time, thanks for the interesting post. And please add a donations button so I can reciprocate.

  90. Scott Says:

    Jay #85: If it’s not a common view, I think that’s because most people simply haven’t spent enough time thinking through the full strangeness of a world with copyable minds! Making any plans or predictions for the future requires not only a model of the physical world and how it’s going to evolve, but also a model for how your “indexical pointer” (the invisible arrow pointing out which little part of the physical world is “you”) is going to evolve. In all the circumstances we’re used to, it’s so obvious how to handle the indexical pointer that we don’t even think about it: it just continues pointing to you, silly! Until you’re dead, or possibly comatose.

    Great, now try to think through how you’d make decisions, as a rational agent, if your indexical pointer could split in two or split into a million, and recombine, or point to two widely-separated physical objects that only become you when recombined, or point to an inert piece of software in transit from earth to Mars, or …

    And please come back to me when you have a coherent theory for all this! 🙂

    For more, see Sec. 2.5 of GIQTM itself.

  91. Scott Says:

    JimV #89:

      I’m probably missing something in Dr. Penrose’s argument, but if he is saying computers can’t be conscious because humans do it by using gravitized QM in microtubes, isn’t the answer to make computers with gravitized QM microtubes?

    Yes! Penrose said explicitly during the discussion—as he’s said before, in Shadows of the Mind for example—that it’s entirely possible that “some future Dr. Frankenstein” (as he put it) will create a conscious AI by building it with gravitized quantum microtubules. He added that he doesn’t want for that to happen, but that his own wants are irrelevant to what his proposal implies is possible.

      Anyway, as I forgot to say last time, thanks for the interesting post. And please add a donations button so I can reciprocate.

    Aw, but I have a comfortable academic salary (and have to file paperwork for any external income). Everyone: if you support this blog, then your praise is the only payment I need! 🙂

  92. Flavio Botelho Says:

    fred #83

    > It’s merely the observation that a Turing machine can emulate any other hardware, so there could be many layers of “reality”, existing at various levels of abstraction, and there’s no reason to think we’re “special” and are at the very bottom of this recursion.

    Scott #65:

    > Arguably, this question is orthogonal to the question of the Church-Turing Thesis, since even if our universe was a simulation, who’s to say that the simulating gods/aliens wouldn’t have access to super-Turing hypercomputers? And conversely, even if we never get any evidence that our universe is being “run” for any computational purpose, we could get (and I’d say, have gotten) very good evidence that it satisfies the Church-Turing Thesis.

    Well, but if we are inside an emulation, our emulators probably have real energy constraints (which might still be so ridiculously high as to appear infinite to us).
    But there are physical features of an emulated universe that could help economize the energy needed for the emulation to run, like, for example, the Holographic Principle…

  93. Darien S Says:

    >Great, now try to think through how you’d make decisions, as a rational agent, if your indexical pointer could split in two or split into a million, and recombine, or point to two widely-separated physical objects that only become you when recombined, or point to an inert piece of software in transit from earth to Mars, or …

    If you sever a few connections in the brain (the corpus callosum), you lose perception of half of your visual field. The brain is still one object, connected by other brain regions and by vascular tissue, yet suddenly half the tissue responsible for consciousness either is nonconscious or, if it is conscious, must either share or not share your unique identity. So either a new identity emerges out of the ether merely by affecting a few functional macroscopic connections, or your identity leaves a large chunk of tissue, or your identity divides into two identical identities, or it doesn’t and the illusion of distinct identities arises from nonshared memory.

    My take is there is a possibility that all conscious things share the same unique identity, and this is not realized by conscious individuals due to nonshared memory giving the illusion of there being distinct identities. If two brains were properly functionally connected, perhaps they’d say “wow, we’re the same ‘me’ but with different memories and personalities”.

  94. mjgeddes Says:

    #Scott 82, Fred #83,

    The ideas of different levels of abstraction all being equally ‘real’ and the universe not being totally coherent is exactly what I was suggesting in my own theory of metaphysics (see other thread)

    Peter Shor’s ‘joke’ that GR and QM may *not* in fact totally mesh with each other in ‘reality’ may in fact be correct. Why *should* the universe be totally consistent? My idea is that ‘existence’ is not an all-or-nothing thing, but that there are ‘degrees’ or ‘strengths’ of existence, and the universe is still in the process of coming into existence!

    I’m suggesting that the ‘code’ for the universe is indeed being changed on the fly during run-time! And the ‘software patches’ are what we interpret as consciousness! 😉

    (The programmers of the universe are working hard to correct the inconsistencies, so ‘software patches’ are going on all the time. The theory of ‘quantum gravity’ is coming along nicely and improving with each patch)

    There’s actually an easy rebuttal to the hard-core eliminativism of Dennett. Here it is: if consciousness is not real, then nothing is. Because if you ‘eliminate’ consciousness by reducing it to signals in the brain, there is no reason to stop the process of elimination there – after all we can ‘eliminate’ brains by reducing them to neurons. And we can ‘eliminate’ neurons by reducing them to molecules. And so on and on. Finally, we ‘eliminate’ the physical world altogether by reducing it to the pure mathematics of wave-functions.

    It does seem that we have to grant that reality can ‘exist’ at more than one level of abstraction. And when we do that, you end up with something just like the metaphysics I posted on the other thread.

  95. Mark D. Says:

    Scott #79: “Newtonian mechanics, Maxwell’s electrodynamics, special and general relativity, quantum mechanics, quantum field theory—have had the property that they appear to be simulable on a computer (at least, a computer with a random number generator) to any desired precision.”

    That’s a little bit of a strange statement to make when we don’t even have a rigorous mathematical construction for nontrivial quantum field theories at the level of, for instance, the existence part of the million dollar Yang-Mills existence and mass gap problem. I don’t believe nature allows hypercomputation, but one piece of evidence making the problem interesting has to be that the relevant functional integrals of QFT exist in the physical world yet despite enormous effort apparently cannot be defined by taking the limit of lattice approximations.

  96. Scott Says:

    Mark D. #95: I agree that the problem is interesting, and points like yours are very much why I hedged with “appear to be simulable”!

    In the case of QFT, though, we do have (for example) the work by Jordan, Lee, and Preskill, which shows explicitly how to simulate interacting scalar field theories to any desired precision on a qubit-based quantum computer. (See also their extension to the fermionic case.) They haven’t yet extended these results to the full Standard Model, but the difficulties in doing so seem more technical than fundamental, and don’t seem to require solving, e.g., the Yang-Mills mass gap problem. (Someone can correct me if I’m wrong.)

    The truth, though, is that even if someone proved tomorrow that QFT (as currently formulated) lets you solve the halting problem in finite time, I would say that it’s almost certainly just like the constructions of “hypercomputers” in Newtonian mechanics—i.e., an artifact of pushing an approximate theory beyond its domain of validity. I’d point out that QFT has to break down at the latest when you get to the Planck scale, to be replaced by some other theory that satisfies the Bekenstein bound (i.e., that uses a finite number of qubits to describe the state of a bounded region), and that therefore presumably satisfies the Church-Turing Thesis as well. At the end of the day, the Bekenstein bound is the real physics reason why I think Nature satisfies the Church-Turing Thesis, though that fact can also be manifested in effective theories if we’re careful enough in how we formulate them.

  97. Peter Byrne Says:

    This cogent essay shreds a lot of AI hype.

    https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer

  98. JimV Says:

    Looking through the comments I see that Wolfgang at #55 said, “And if one thinks that the Copenhagen reduction of the wavefunction is associated with the switch from objective (wavefunction) to subjective (conscious observation) then it is natural to conclude that “gravitized quantum mechanics” has something to do with consciousness.”

    I thought I had heard that experiments have shown that conscious observations are not necessary: just the mere fact of having a detector which could tell whether the photon went through either slit (in the two-slit experiment) is enough to prevent interference fringes, whether or not the measurement is recorded or ever reviewed by a conscious observer (which I assume – perhaps wrongly – we all agree an electronic detector is not).

  99. Scott Says:

    Peter Byrne #97: I didn’t find that article cogent at all. In fact, the entire thing seems based on a trivial language confusion.

    In saying that “your brain is not a computer,” Epstein turns out to mean something true but pedestrian: namely, that the brain isn’t organized the way existing digital computers are organized. Nowhere does he even try to show why the brain couldn’t be simulated by a computer—i.e., why the brain violates the Physical Church-Turing Thesis. That’s what we’d need for the brain to be genuinely non-computational, i.e., for reproducing its behavior on a suitably-programmed computer to be impossible rather than merely complicated or hard. Worse, Epstein gives no indication of understanding what computational universality or the Church-Turing Thesis even are—meaning that the actual questions at stake here, the ones that Roger Penrose realizes he needs to answer for the brain-is-not-computational thesis to stand a chance, never even cross the horizon of Epstein’s consciousness.

    Related to that, Epstein never even tries to grapple with deep learning algorithms, which do have a vaguely brain-like organization, and which of course have enjoyed spectacular successes over the last five years. It’s as if someone published a philosophical article, in June 2016, entitled “Why American Democracy is Inherently Stable Against Authoritarian Demagogues,” without even showing awareness of any potential recent counterexample, let alone trying to explain it away. I’d call that borderline intellectual dishonesty, if not for my suspicion that Epstein really does have no more awareness about machine learning than about other parts of CS.

    Peter, you wrote a fine biography of Hugh Everett. Do you not see that Everett would’ve ripped this article to shreds, and would’ve been right to do so (whatever else he was right or wrong about)?

  100. domenico Says:

    I read, some time ago, about neuron manipulations of genetically modified C. elegans to modify their behavior and senses.
    If this is possible, then I don’t understand the quantum behavior of a brain: if a laser can change the behavior, then the neurons are macroscopic objects with chemical reactions, so that many near-classical objects have an equal behavior, and cloning could be possible.
    If it is possible to write to a C. elegans, I guess it is possible to read its behavior; there is the complete sequencing of C. elegans and a partial computer cellular simulation (OpenWorm) to predict, in the future, the worm’s thoughts and actions.

  101. Tommy Says:

    Hi Scott, and thanks for this great post. I have read both of Penrose’s books and many posts about your viewpoints over the last few years, but I admit that I have only superficial knowledge of the whole issue of “quantum consciousness”, and I feel a bit dumb posting in this thread amongst people much smarter than me 🙂

    However, there is something I’ve never really understood. If the whole issue breaks down to “classical AIs might be conscious” vs. “no, they can’t, because they lack the quantum effects of microtubules”, wouldn’t it be reasonable to reconcile both arguments by considering AIs running on a quantum computer?

    Let me explain better: even *if* one assumes that quantum effects are somehow necessary for consciousness to evolve, wouldn’t an AI running on a quantum computer be eligible for being considered conscious? Or, put in another way, would both you and Penrose agree that this is theoretically possible? Does the issue only concern AIs running on “classical” (i.e. non-quantum) hardware?

    I’m asking because, as far as I can see, this would solve at least part of the problem. Even from an ethical perspective, as you put it, killing such an AI would be wrong because you are destroying something inherently non-replicable – i.e., you could not beam its mind to Mars without destroying the one on Earth, or make copies of its mind in general, precisely because of the quantum effects given by the microtubules.

    On the other hand, this would give a somewhat strict criterion to assess consciousness: if your hardware (or wetware) shows quantum behaviour, then you are eligible for being considered a sentient being; otherwise not.

    So, the question is: would you consider such a viewpoint philosophically sound? Would Penrose?

  102. Josh Mitteldorf Says:

    To me, the most compelling argument for Penrose’s side of this debate comes from experiments in which “quantum randomness” is shown to be subject to the influence of intention. Robert Jahn, while Dean of Engineering at Princeton, conducted such experiments and found 5-sigma significance. More recently, Dean Radin has found consistent, overwhelmingly significant results from an experiment of very different design.

    This vitiates the central claim about “90 years of unbroken successes of Quantum theory.” Now we have not only the Copenhagen Interpretation, which explicitly invokes conscious perception to collapse the wave function; we also have the interpretation-free results of several experiments.

    Both Jahn and Radin are supremely conscious that the burden of proof is high, and they have been meticulous and thorough in eliminating every possible confounding influence that might provide an excuse for considering their results an artifact.

    In fact, Jahn and Radin are only the least interesting and most careful tip of a huge body of experimental data in psi research. Once we grant that the results of these two are incontrovertible, it opens our minds to this much larger corpus, which we may have dismissed as improbable.

    I recommend Liz Mayer’s book, Extraordinary Knowing.

  103. BLANDCorporatio Says:

    irt. Scott:

    “now try to think through how you’d make decisions, as a rational agent, if your indexical pointer could split in two or split into a million, and recombine, or point to two widely-separated physical objects that only become you when recombined, or point to an inert piece of software in transit from earth to Mars, or …

    And please come back to me when you have a coherent theory for all this!”

    There’s the passing-flame intuition that works well enough for all the cases you’ve given (in particular, it allows for destroying the indexical pointer when no individual can be recognized as privileged to be the original rather than the copy, and once destroyed, an indexical pointer must not be relied on to come back). And that’s just one way.

    Conversely, hinging consciousness (and, by your definition of it, a good chunk of moral debate) on non-copyability seems dangerous. Not even QM would preclude the existence of “pretty good” copies that would behave pretty much exactly like the original, for any amount of time. The brain is likely sufficiently robust (a la error-correcting codes) that whatever makes you you doesn’t depend on quantum twitches. And is a putative ability to pick up start-of-the-universe static really the thing that makes a person valuable? Now that I’d find weird.

    Also, if I remember correctly, your Ghost in the Quantum Turing Machine essay was filled with caveats that those were not completely rigorous philosophical musings, and a request not to treat and argue against them as if they were. So to your speculations we should only answer with coherence. Tsk. Tsk, I say!

  104. Hannah Says:

    Scott #72:

    All that happens is that a photon escapes past your dS boundary…

    Okay, so tell me if I have this wrong. The positive cosmological constant will make ours approach a spatially flat de Sitter universe at late times: positively curved as a spacetime even though it is not spatially curved. (I misinterpreted Ω_K as the spacetime curvature, but now I have gotten the idea and realize that it is only the spatial curvature, and that a positive cosmological constant really does mean a de Sitter future.)

    A point in a de Sitter universe has a cosmological horizon, i.e. a maximum distance that it can transmit information to, and this horizon is constantly shrinking. Gallant and Goofus are sitting in chairs on Earth next to each other playing Cookie Clicker with neurological processes that could correspond to qualia. Goofus thinks deterministically; Gallant thinks randomly with true quantum noise. Their neurologies get entangled with photons radiated from Earth into space, and those travel outward to the future cosmological horizon.

    The photons never actually cross the future horizon, but after a certain amount of time (about ten billion years?) they have gone so far that they are causally disconnected; at some point they cross the past horizon and they can never get back to affect our two heroes. We decide that those photons have left the quantum system as far as we are concerned and do a partial trace over them to get a mixed state.

    Then we decompose the mixed state into pure states with a probability distribution, and pick one at random. This choice collapses the superposition and tells us something about what happened ten billion years ago. Gallant’s thought processes depend on the mixed state, so he was conscious and had qualia. But poor Goofus had deterministic thought processes that do not depend on the mixed state, so he did not have any qualia. Is that the right idea?
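    (For concreteness, the partial-trace step can be sketched in a few lines. This is a toy illustration of my own, using a single Bell pair as a stand-in for “one qubit that stays home, one photon that recedes past the horizon”; it is not anything from the post itself.)

```python
import math

def outer(psi):
    """Density matrix |psi><psi| for a state vector psi."""
    return [[a * b.conjugate() for b in psi] for a in psi]

def partial_trace_B(rho, dA, dB):
    """Trace out subsystem B of a density matrix on a dA*dB-dimensional space."""
    red = [[0j] * dA for _ in range(dA)]
    for i in range(dA):
        for j in range(dA):
            for k in range(dB):
                red[i][j] += rho[i * dB + k][j * dB + k]
    return red

# (|00> + |11>)/sqrt(2): qubit A stays here, photon B is causally lost.
bell = [1 / math.sqrt(2), 0, 0, 1 / math.sqrt(2)]
rho_A = partial_trace_B(outer(bell), 2, 2)
# rho_A comes out as the maximally mixed state I/2: once B is traced
# out, A is described only by a probability distribution over pure states.
```

    Decomposing rho_A into pure states with probabilities (here, 50/50 over any orthogonal basis, since I/2 has no preferred decomposition) is the “pick one at random” step described above.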

  105. Scott Says:

    Hannah #104: Yes, that’s close. (Except that what would matter, on this account, is not Gallant’s mere dependence on quantum randomness, but rather his dependence on quantum initial states about which we have Knightian uncertainty—see GIQTM Section 3 for more on the distinction.)

    And yes, it sounds insane when you put it the way you did, but there’s an alternative perspective that makes this view seem natural and inevitable, whether or not it ends up being correct!

    Namely, an outside observer can “read Goofus’s code.” Given the right technology, the other people around Goofus know everything he’s going to do before he does it. If they kill Goofus by accident, they can restore him from backup. They can keep a local copy of him when he’s out of town. They can modify their local copies to see what happens (assuming Goofus is under a Creative Commons license or whatever 🙂 ). If they can’t figure out how to tell Goofus they think he’s a robot without angering him, they just need to try 100 different ways in simulation. Etc.

    With Gallant, outside observers can’t do any of these things. Just like with us, people who know Gallant better can predict him more accurately than those who know him less well, but if you want extremely accurate knowledge of what Gallant would say about something, the only way to get it is to ask him.

    Notice that none of these are wifty metaphysical distinctions, about Gallant having some magical quantum pixie dust that Goofus lacks. Rather, they’re actual empirical distinctions about how you can interact with them.

    Yet I claim that these distinctions, if you trace through what they actually entail, boil down to the cosmological considerations that I mentioned. If Gallant weren’t amplifying quantum states subject to Knightian uncertainty, then given sufficiently advanced technology, Gallant would be predictable by outside observers just like Goofus is. And as for what it means to “amplify” a state—well, there could be some other principled criterion for that; the cosmological criterion involving our deSitter boundary is merely the most “conservative” one I know about, the one that doesn’t do any violence to orthodox linear QM.

  106. Scott Says:

    Tommy #101:

      If the whole issue breaks down to “classical AIs might be conscious” vs. “no, they can’t because they lack the quantum effect of nanotubules”, wouldn’t it be reasonable to reconcile both arguments by considering AIs running on a quantum computer?

    For whatever it’s worth, that’s exactly Penrose’s position. (Except that a mere quantum computer is way less than what he needs! He needs a device that would be sensitive to uncomputable effects from gravity-induced state vector collapse.) See my comment #91.

    The general issue many participants in this discussion are trying to answer (and are answering differently) is this: besides passing the Turing test, what (if anything) does a computational process need to do before we ought to consider it conscious?

    And yes, several of the proposed answers mention quantum mechanics for one reason or another, but they’re still extremely different from one another (as Penrose’s is different from mine)! In any case, though, mere “quantum behavior” can’t possibly give us a non-vacuous criterion, since everything in the universe shows quantum behavior! 🙂 So at some point, one needs to say something about what kind of behavior, and what role it plays in the computation.

  107. Scott Says:

    Vadim #81 and domenico #100: My understanding is that a “clean digital abstraction” layer, of the sort I’m talking about, has not been shown for the C. elegans worm, even though we now know its entire connectome. I.e., that people have built neuron-by-neuron models, but that they currently fail to reproduce the worm’s behavior, suggesting that lower-level or otherwise unmodeled degrees of freedom must be important. Any experts who can correct me or fill in details, please do so!

    Let me, as Vadim put it, “put my neck on the line” and say: when biologists manage to isolate the digital abstraction layer for C. elegans—or better yet, when they show how to model and predict a specific C. elegans, with whatever idiosyncrasies it has (worms must have idiosyncrasies… 🙂 ), without killing the worm in the process—that won’t refute the picture I’m discussing, but it will be clear progress in the direction of refuting it.

  108. Kenneth Augustyn Says:

    Freebit picture: Do you see a relationship to Henry Stapp’s discussion of Figure 1 in his 2007 “Whitehead, James, and the Ontology of Quantum Theory”? http://www-physics.lbl.gov/~stapp/WJQO.pdf

  109. Scott Says:

    Vadim #81:

      This seems similar to the theist who says “maybe I haven’t proved god exists but none of your arguments prove god *doesn’t* exist.” If we can manage equally well with concept X and without it then Occam’s razor says we should do without.

    FWIW, even among the most rock-ribbed atheist reductionists (Richard Dawkins, etc.), very few have suggested that we can manage without the concept of consciousness. Of course, sharp disagreement persists as to what kind of concept we need (does it need to be a basic feature of reality, or can it be emergent, etc.), and a related but orthogonal question, which physical entities we should or shouldn’t take to be associated with it.

      Imagine that the world would already be populated by some sort of artificial humans that violate your consciousness criteria but that are a part of society as ordinary as “wild type” humans. In such a world, would you still be tempted to articulate a theory of consciousness which rules that a large portion of the world’s population (consisting of normative, often likable and even admirable people) is unconscious?

    See my response to Hannah, comment #105. Again, though, I completely reject any view that has the implication that, if two human populations A and B are behaviorally identical, except that B lacks “magic quantum pixie-dust” or whatever, then for that reason alone we’re justified in treating B as unconscious. That’s one of my reasons (not the only one) for rejecting the microtubule view. I suspect you and I are in agreement here.

    For me, though, the crucial point is that in your scenario, the “wild type” and “artificial” humans would not be behaviorally identical, because the artificial ones could be predicted and copied and reset to their initial states while the wild-type ones couldn’t be. And so, for example, if you were on trial for murdering one of the artificial humans, you almost certainly wouldn’t need to convince the jury that your victim lacked quantum pixie-dust and was therefore unconscious—a scenario that I find repugnant and that you probably do too. Instead, you could simply restore the victim from backup (along with 10 more copies for good measure), pay a fine or whatever, and everyone could go home!

    In short, I see it as a central selling point for the picture I’m discussing that it would not lead to any class of intelligent beings being arbitrarily discriminated against, because of magical/metaphysical criteria of no empirical relevance (“quantum pixie-dust”). Rather, the beings in question would already have a radically unfamiliar status in law and morality, before we even entered into theoretical questions about consciousness, for purely empirical reasons related to the beings’ copyability.

  110. wolfgang Says:

    @Scott #65

    >> the probability that we’ll get convincing empirical evidence that we’re living in a computer simulation … is close to zero

    If The Donald wins the election I would take it as strong evidence that we are trapped in a cheap video game.
    Apprentice: The Presidency 3.0 – now with more 3D realism and reduced artificial intelligence.

  111. Alex R Says:

    (I must admit that I haven’t read through the entirety of the thread, but I did search to see that this had not yet come up)

    You bring up the no-cloning theorem as an impediment to non-destructively copying a brain state (supposing that there is something quantum going on). I’m curious as to what you have to say about the argument presented here (with pretty pictures), which can be summarized as follows: by making sufficiently many measurements, one can infer the quantum state of a system that began in an unknown state without violating the no-cloning theorem; this means that copying brains is not physically impossible, merely computationally intractable (exponential in the number of qubits).

    I could be doing a poor job summarizing, though.
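    (To put rough numbers on the “exponential in the number of qubits” part, here is a back-of-the-envelope sketch of my own, just counting the real parameters of an n-qubit density matrix that any full tomography procedure would have to pin down.)

```python
def tomography_parameters(n_qubits: int) -> int:
    """Real parameters of an n-qubit density matrix (d x d Hermitian, trace 1)."""
    d = 2 ** n_qubits          # Hilbert-space dimension
    return d * d - 1           # grows as 4^n

# 1 qubit needs 3 numbers (the Bloch vector); 300 qubits, a speck of
# tissue at most, already dwarfs any conceivable measurement budget.
for n in (1, 2, 10, 300):
    print(n, tomography_parameters(n))
```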

  112. William Hird Says:

    Wonderful posts all, but I feel obliged to rain on the parade a little bit here and remind everyone that there is a YouTube video of Richard Feynman being asked by a journalist to explain in simple terms the attraction and repulsion of magnets (action at a distance), and being unable to do so. So if one of the greatest scientists of the 20th century can’t even explain magnets, FORGET about trying to explain consciousness; it’s laughable. Sure, we have electromagnetic theory that can give accurate results for various experiments, but that doesn’t necessarily mean that we UNDERSTAND what the underlying principles of nature “really are”.

  113. Scott Says:

    Alex R #111: Thanks for the link! While I appreciate that post for seriously engaging the question, one thing it ignores is that the quantum system you’d be trying to learn here is an open system—i.e., one that’s constantly getting refreshed by new microscopic degrees of freedom from its environment, which interact with the old ones in some complicated way. So the relevant timescale is less like whatever’s needed for all the qubits in some isolated brain to decohere, than like whatever’s needed for all the qubits in the universe to decohere!

    On the other hand, even if we were just talking about an isolated brain, the relevant timescale for this sort of tomography (which incidentally, is closely related to what’s done in my BQP/qpoly paper) could easily be orders of magnitude longer than an ~80-year human lifespan.

    Conversely, I explicitly noted in GIQTM that, in a de Sitter patch with ~10^122 qubits, all of the qubits from the early universe will eventually either recede past our horizon or else get amplified to macroscopic scale: eventually, there will be no Knightian uncertainty left, so on the picture suggested here, no “opportunity for free will” either! But ~10^100 years, the time for black holes at galactic centers to evaporate, seems like a lower bound on how long that would take.

  114. Scott Says:

    wolfgang #110:

      If The Donald wins the election I would take it as strong evidence that we are trapped in a cheap video game.

    Just yesterday, as it happens, Matt Damon made closely-related comments in his MIT commencement speech—a speech that explicitly discussed the simulation hypothesis and the views of Nick Bostrom and Max Tegmark, as well as the possibility of moving to a different simulated universe where Trump didn’t win the primary.

    Damon reasoned, correctly I think, that we should try to live interesting, awesome lives regardless of whether we’re living in a simulation or not. Personally, I’d go even further than him, and say that the simulation hypothesis has no consequences for anything whatsoever—until it’s coupled with a proposal for testing it, at which point it has to play by the rules of any other kind of science (“reason to take seriously, rooted in actual details of the observed world, or GTFO”).

    In a yet further coincidence (evidence for the simulation hypothesis? 🙂 ), yesterday Vox ran an article that explains the same positivist insight very nicely. (Though the ending veers into philosophy that’s much more tendentious and arguable.)

  115. JollyJoker Says:

    Forgetting the “how is this related to consciousness” problem for a moment, how about a simulation scenario where the simulation maintainers, although not having anything superior to BQP, affect our decisions based on what happened in previous simulations; information we can’t have because it doesn’t exist in our universe? We can’t recreate that information because someone running loads of universes will always have better data than we do. Perhaps we can’t ever get enough data to nail down what they’re trying to achieve.

  116. Gil Kalai Says:

    Following up on my #25,

    “I’d expected to be roasted alive over my attempt to relate consciousness and free will to unpredictability, the No-Cloning Theorem, irreversible decoherence, microscopic degrees of freedom left over from the Big Bang, and the cosmology of de Sitter space. Sure, my ideas might be orders of magnitude less crazy than anything Penrose proposes, but they’re still pretty crazy! But that entire section of my talk attracted only minimal interest. With the Seven Pines crowd, what instead drew fire were the various offhand “pro-AI / pro-computationalism” comments I’d made—comments that, because I hang out with Singularity types so much, I had ceased to realize could even possibly be controversial.”

    I don’t know about consciousness, and in my opinion predictability is only one aspect needed for understanding the free will issue. But predictability is, of course, an important issue in its own right. A possible explanation for non-predictability, a couple of orders of magnitude less crazy than irreversible decoherence and microscopic degrees of freedom left over from the Big Bang (whatever that means), is a failure of quantum fault-tolerance and “quantum supremacy.” Such a possibility proposes a description of how local quantum systems behave, a topic not settled from first principles by QM. (Moreover, this explanation is supported by the standard noise models for noisy quantum circuits.)

    Of course, this possibility will be tested (and could be refuted) by various experimental efforts in the next few years.

    (Regarding #96, it is indeed a plausible conjecture that QC can efficiently perform computations based on the standard model, but proving it, even just for early basic such computations from the ’60s, will be a formidable task, related and possibly of similar difficulty to the mass gap problem.)

  117. Hannah Says:

    Hmm. So I have advanced technology that can sneak into “wild” people’s brains and look at what they are doing in detail, but I can’t predict their behaviour in advance because there are unpredictable random numbers coming in.

    What if I sneak into the room of a wild-type human one night, use my sufficiently advanced technology to replace the unpredictable RNG in her brain with a pseudorandom number generator so that she is now predictable, wait 24 hours, and then copy her brain, shoot her, and resurrect her? Would a court consider that murder, or criminal identity tampering equivalent to murder, and if so, when did I murder her: today, when I shot her, or yesterday, when I destroyed her unpredictable RNG?

    (There’s a jurisdictional issue. Today we were at a philosophy conference in New York, and yesterday we were at a philosophical electronics expo in Indiana. Indiana has the death penalty―New York doesn’t. Will I go to the chair?)

    If I murder her when I turn off the unpredictability:

    1. Will she die if I turn it off and back on right away?

    2. Will she die if I turn it off, back her up, shoot her, resurrect her from backup and then turn it back on?

    3. Will she die if I replace her random number generator with a different random number generator?

  118. Scott Says:

    William Hird #112: On the contrary, I’d say that we understand an enormous amount, probably more than almost any intelligent person before the scientific revolution would’ve imagined it possible to understand. To complain that we don’t know the “true natures” of things seems like begging the question to me, because it ignores the central insight that made the scientific revolution possible in the first place—namely, that to know what a thing does, under all possible circumstances, is to know its true nature!

    Feynman, to take your example, did know something about the electromagnetic field, and even wrote whole books on the subject—have you looked at any of them? 😉 I don’t know the details of the incident, but if he declined to explain E&M to a TV interviewer, doesn’t it seem likely that that had less to do with his inability to explain it at all, than with his inability to explain it in one or two minutes?

    (I do know that, when someone asked Feynman to explain in 15 seconds what he’d won the Nobel Prize for, he replied, “buddy, if I could do that, it wouldn’t have been worth the prize.”)

  119. wolfgang Says:

    @Scott

    Thank you for the link to the Vox article, I like the part where it says “What humans can assess for themselves is always partial, always provisional, always a matter of probabilities.”
    We had a debate about that a while ago, when I asked about the probability that 3*5 is not 15 – as the sheeple in this Matrix believe.
    😎

  120. Scott Says:

    Hannah #117: Personally, I’d say that your original tampering with the person’s brain could justly be prosecuted as murder. There’s a wide spectrum of defensible views about personal identity, uploading, AI, etc., and my speculations in this post might be completely wrong.

    But to render someone’s brain mechanistically predictable, you’d essentially have to have nanorobots rebuild it neuron-by-neuron out of predictable components—something that seems physically much more extreme and invasive than just scooping out their cortex with an ice cream scoop! (To reverse the ice cream scoop would “merely” require neurosurgery far beyond anything doable today. To reverse the nanorobots would take an entirely different order of technology.)

    Now, presumably we agree that the ice cream scoop counts as murder! In the nanorobot case, the harm is claimed to be mitigated by your creation of a new, mechanistically-predictable being, who’s behaviorally almost identical to the one you destroyed. But is it really behaviorally almost identical? How could you prove that? Even if you could, the idea that you extinguished (or perhaps vastly altered or impoverished?) the person’s consciousness—that for one reason or another, consciousness would fail to transfer smoothly from the original substrate to the new, predictable one—seems well within the range of reasonable opinions. Even if the person you did this to happened to think otherwise, still, you had no right to make a possibly-life-or-death metaphysical judgment on the person’s behalf.

    Verdict: Guilty. 🙂

  121. Jay Says:

    Scott #90

    Thanks for clarifying your motive, e.g. that you feel rationality can’t hold for computable agents. All we’re saying (Vadim made a similar comment in #81) is that this thought represents a very personal contribution, not a mainstream idea.

    //

    Completely unrelated: you said you explained a bit about how AlphaGo and Watson work. For AlphaGo, it’s easy to find any technical detail one may dream of. But finding the technical details for Watson seems far more challenging. Do you know (or anyone!) where I can find them?

  122. William Hird Says:

    @Scott #118
    “I don’t know the details of the incident, but if he declined to explain E&M to a TV interviewer, doesn’t it seem likely that that had less to do with his inability to explain it at all, than with his inability to explain it in one or two minutes?”

    Well Scott, no. Did you see the video? (Sorry, I can’t do hyperlinks on my computer.) It seems to me that he is clearly stating that he can’t explain action at a distance in general, not because he thinks the answer would be too long.
    Of course, he (Feynman) is also famous for saying that no one understands quantum mechanics either. So I reiterate my previous comment that it is silly to speculate on the physics of consciousness when (according to RPF) we can’t really explain QM, or magnets, or maybe lots of other things that would be prerequisites for understanding consciousness.

  123. Scott Says:

    Jay #121: I don’t actually know what you mean by “rationality can’t hold for computable agents.” (For one thing, I don’t think anything in our universe violates the Church-Turing Thesis.)

    A better statement would be: I don’t know how to discuss rational decision-making for copyable agents. (It might be possible, but I don’t know how, and I think people severely underestimate the difficulties.)

    Yes, I agree that whatever is original in what I said represents a personal contribution (have I ever made any other kind? 😉 ).

    On your other question: I attended a couple of talks by the Watson creators, and was mostly going on that, but here’s a paper that seems helpful.

  124. Scott Says:

    William #122: It feels weird to debate what Feynman said or didn’t say if you can’t give me a link to the video.

    But if I do get to see it, my prediction is that I’ll find what you said to be a serious mischaracterization of what he said. (Or if I’m wrong about that, then I’ll have no hesitation in saying that Feynman was wrong in this instance!)

    We really can go much deeper than “action at a distance,” by explaining it in terms of ripples in fields that propagate outward from a source and travel no faster than light. And magnetism is just the logical consequence of combining the Coulomb force with special relativity. And obviously Feynman knew all that. And those are true, substantive things that he could have said in a minute or two.
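    (For what it’s worth, the “Coulomb force plus special relativity” claim can even be checked numerically. The following is a toy sketch of my own, with arbitrary illustrative values, for the standard Purcell-style setup of a test charge moving parallel to a current-carrying wire: the lab-frame magnetic force equals the purely electric force in the charge’s rest frame, where the two charge streams length-contract by different factors.)

```python
import math

c = 299_792_458.0
mu0 = 4e-7 * math.pi
eps0 = 1.0 / (mu0 * c * c)

# Arbitrary test values: charge q at distance r, wire with positive
# carriers of line density lam moving at u; test charge moves at v.
q, lam, u, v, r = 1.6e-19, 1e-6, 1e5, 2e5, 0.01
I = lam * u                                  # lab-frame current

# (a) Lab frame: plain magnetostatics, F = q v B with B = mu0 I / (2 pi r).
F_magnetic = q * v * mu0 * I / (2 * math.pi * r)

# (b) Charge's rest frame: the wire acquires a net line charge because
# the moving and stationary charge streams contract differently.
def gamma(w):
    return 1.0 / math.sqrt(1 - (w / c) ** 2)

lam_net = -lam * gamma(v) * u * v / c**2     # net density in the primed frame
F_prime = abs(q * lam_net / (2 * math.pi * eps0 * r))  # pure Coulomb force
F_from_relativity = F_prime / gamma(v)       # transverse force picks up 1/gamma

assert math.isclose(F_magnetic, F_from_relativity, rel_tol=1e-9)
```

    The agreement is exact algebraically, since 1/(ε0 c²) = μ0; the assert just confirms it to floating-point precision.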

  125. Joscha Says:

    Scott, this is a beautiful talk, summing up many of the relevant arguments in this debate. I especially liked that you point out that Penrose’s objection would not only require him to provide an argument for functionality that cannot be realized computationally but could be achieved in other ways, but that Penrose actually requires us to go beyond current physics, thereby showing how bold and daunting his proposal really is, and how far he is willing to go in his rejection of AI’s premises.

    I also agree with you and with the anti-priestly audience member that you would have deserved to be grilled primarily for hinging your notions of free will and identity on determinism and non-copyability. After all, free will and identity originate in our subjective feeling of agency, and our subjective construction of identity. It is not clear that these mental representations are in any way indicative of a deeper metaphysical reality that reflects cosmological properties.

    In other words, it is unconvincing to me that my agency would feel subjectively different if my cognitive processes supervene on a probabilistic or a deterministic universe computation. ‘Free will’ should probably be treated as a label on my own mental representations of intentional actions that indicates ‘causally embedded with the discursive levels of cognition’ or something like that, not as the amplification of probabilistic quantum processes.

    Likewise, my sense of identity seems not to hinge on the fact that I am unaware of successful low-level cloning experiments, but on mental representations that encode my own personal narrative as an individual with a certain biography, a world-line that allows me to learn from previous experiences and extrapolate future developments. For this kind of identity, it seems to be necessary and sufficient that I believe that I share a biography with previous and future instances of what I consider to be me. If someone goes ahead and changes my mental representation to make me believe that I am Scott Aaronson, then I cannot see how I would not identify as Scott Aaronson, regardless of how well Scott Aaronson is cloneable. (It seems somewhat superfluous to point out that the Joscha that wakes up tomorrow will not be an exact clone of the Joscha that goes to bed tonight, and that I fully expect tomorrow’s Joscha not to care about that when it comes to experiencing his sense of identity.)

  126. Hannah Says:

    …something that seems physically much more extreme and invasive than just scooping out their cortex with an ice cream scoop!

    Okay, that is fair. The death penalty for me. But I think that that is relying on the intuition that the human brain is fragile and difficult to modify, and we want ideas that apply even in a machine intelligence era. So let me change the question to get rid of this problem. (Actually, I was originally going to say it like this, but the setup was too elaborate.)

    Suppose that even the wild people don’t have normal human brains any more. Everyone’s brain is modular. The main piece is a brain simulation running on a computer with no source of true randomness. There is a thermal noise module that generates unpredictable random numbers for the brain program to use. If the module is missing or broken, the computer uses pseudorandom numbers instead.

    The wild and tame people have identical brain architecture, except that the wild people have a functioning module that generates an unpredictable stream of random numbers, and the tame people don’t have a module and they are working with a predictable stream.

    So now I want to ask the same questions. Suppose that I have the kind of technology necessary to predict predictable people under predictable sensory input, and I do so on a regular basis for my hit reality TV show, “I Know What You’ll Do.” If I break someone’s unpredictability module so that I can predict them, is that murder, or just great television?
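    (The modular-brain setup above can be caricatured in a few lines of code. This is a toy sketch of my own, with an arbitrary stand-in for the “brain program,” just to make vivid how swapping the noise module changes predictability: an observer who knows a PRNG’s seed reproduces the tame person exactly, while the wild person’s thermal-noise module has no seed to leak.)

```python
import os
import random

def brain_step(state: int, noise: int) -> int:
    # Stand-in for the deterministic "brain simulation": any fixed
    # function of the current state and the incoming noise byte.
    return (state * 31 + noise) % 1000

def run(noise_source, steps=5, state=0):
    """Run the brain program, drawing one noise byte per step."""
    trace = []
    for _ in range(steps):
        state = brain_step(state, noise_source())
        trace.append(state)
    return trace

def tame(rng):
    # "Tame" person: pseudorandom module; seeded replays are identical.
    return lambda: rng.randrange(256)

def wild():
    # "Wild" person: thermal-noise stand-in; no replayable seed exists.
    return os.urandom(1)[0]

# The TV-show prediction works on the tame person...
assert run(tame(random.Random(42))) == run(tame(random.Random(42)))
# ...but two runs of the wild person (almost surely) diverge.
print(run(wild))
```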

  127. Scott Says:

    Hannah #126: Check out GIQTM Section 5.3 (“The Gerbil Objection”), which puzzles over exactly this sort of issue. Briefly, though, on the account we’re discussing, it’s not clear that either of the two types of humans would be conscious in your scenario! Obviously, pure random numbers (let alone pseudorandom ones) aren’t enough for interesting unpredictability; instead we want irreducible Knightian uncertainty. But even Knightian uncertainty seems like it can’t be enough either, if whatever produces it can be “cleanly decoupled” from the intelligent information processing—so that it’s feasible just to swap out the Knightian unpredictability source for something else, while leaving the cognitive part unaffected. If you want it in slogan form, this is an account according to which consciousness requires fragility.

  128. mjgeddes Says:

    Scott,

    Well, I think, at the end of the day… the simplest hypothesis is that there’s no quantum magic: no quantum-gravity microtubule Penrose nonsense, no uncomputability, and no ‘Knightian unpredictability’ either. I really think people are tying themselves in knots over this, and massively over-complicating and over-thinking it. They want something that makes them ‘special’ and ‘magical’, but the harsh reality is… well… it’s probable that there really isn’t any ghost in the Turing machine.

    Consciousness is probably generated by a good old-fashioned algorithm describable as a classical Turing machine, probably quite a simple one.

    But the ‘bad news’ (no quantum mystery-magic that makes us special) is also, if you think about it, ‘good news’ in another sense: it means that creating AGI may actually be much simpler than is commonly believed! It means that we can generate consciousness simply by typing in the right java program on our PC! 😀

    So why not just, for the sake of argument, *assume* that the solution to consciousness is a very simple algorithm, and see where that gets us?

    I’m inclined to think that the stuff you deal with in your own field (information, complexity) is probably highly relevant. My strong suspicion is that consciousness is probably directly describable as some sort of practical complexity issue, which should be right up your alley!

    Have you considered investigating various algorithms to find ones where some sort of weird complexity limitations quickly come to the fore? These are the sorts of algorithms I’d suspect of having something to do with consciousness.

  129. J. Sketter Says:

    Nothing to comment on the current subject, Dr. Aaronson, but I write briefly to thank you. Yesterday I read a couple of your posts and interviews, plus one thesis I think you linked. They really unlocked QC for me.

    Even if your views of humankind sometimes seem somewhat pessimistic 🙂 – you seem to be a remarkable populariser of your science, and were already as a young man, it seems. Keep up the good work in your educational as well as your scientific efforts.

    And this oracle – the best I managed to get in long sessions was < 55%. Slightly annoying, as the program appears to be very dumb. Like losing every game to your PocketFritz… But I passed the link along to annoy my friends instead.

    Greets from Finland.

  130. Hannah Says:

    Ah, sorry, thank you. Okay, it seems like that definition is thought-experiment-proof. Actually I think we are practically saying “if there is a thought experiment that causes a philosophical problem, then it’s not consciousness.” So let me go in a different direction. What if it is physically possible in theory to predict someone, but practically impossible, so that these philosophical problems never come up in real life?

    I’m not sure if these concrete examples are interesting or annoying, but… Suppose that it is 2100, we have a complete description of physical law (and we know that somehow), and it is possible to predict or copy a human brain, but you need a solid sphere of detectors, computronium, supports, and cooling systems extending out to about a light-minute, with a little seat in the middle for the person, and that is the theoretical best you can do. And there is no chance of building something like that any time soon.

    (Even assuming perfect technology, a solid sphere of that size would mass something like 12000-32000 times the mass of the sun, and it would take hundreds of years just to collect enough mass together for relativistic reasons.)
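    (For what it’s worth, here’s a quick sanity check of that figure, sketched in Python. The two densities are my own assumptions, roughly water at the low end and solid silicon at the high end; nothing else here is derived from anything beyond freshman physics.)

```python
# Back-of-the-envelope mass of a solid sphere of radius one light-minute.
# Densities are assumptions: ~water (1000 kg/m^3) to ~solid silicon (2330 kg/m^3).
import math

c = 299_792_458.0            # speed of light, m/s
r = c * 60                   # one light-minute in meters (~1.8e10 m)
volume = (4 / 3) * math.pi * r**3
M_sun = 1.989e30             # solar mass, kg

for density in (1000.0, 2330.0):       # kg/m^3
    mass = density * volume
    print(f"density {density:>6.0f} kg/m^3 -> {mass / M_sun:,.0f} solar masses")
```

    With these densities the result comes out at roughly 12,000 and 29,000 solar masses, consistent with the range above.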

    However, there’s a remote possibility that a Kardashev III civilisation with peculiar motivations has built an intelligent-being-predictor out of a few thousand of their solar systems, and it is now on its way to Earth.

    Am I conscious, or is it necessary to eliminate this remote possibility? If I manage to die without being predicted or copied, and my brain is unrecoverable after my death, then can we say that I was conscious in retrospect?

    (If I can get away with that, then here’s a followup question: what if it is physically possible and easy to predict a human brain with the technology available to my civilization, but it is very illegal, and so it never actually happens to anyone? What if I work at the only company that can do it, and we are strictly prohibited from using the machines to do that? “No philosophical crises about consciousness for 196 days!”)

  131. Jay Says:

    Scott #123

    Yes, I meant copyable agents… although, on second thought, isn’t any copyable agent computable, and any computable agent copyable? Can you provide a counterexample?

    No, sometimes you make a contribution (such as “Shor, I’ll do it”) that I wouldn’t call “very personal”, because the science behind it is already mainstream even if the presentation is deliciously original. The point is, saying that there is a difficulty for a copyable agent (such as a computer program) to behave rationally is not mainstream at all. But sometimes you talk as if this were a given. Maybe you should dedicate a paper just to this topic.

    Thank you for the link. That’s the best thing I’ve read on Watson, even though the lack of meat is completely frustrating.

  132. Andrew Foland Says:

    If free photons turn out to have a mass of a pico-femto-eV, then it will break the argument-from-principle that recoherence is forbidden. It would probably leave an argument from practicality intact. The claim that the speed of light is an absolute barrier is better put as: the speed of massless particles is an absolute barrier. And there is nothing in quantum mechanics that requires there to be massless particles.

    The thought experiment that goes with this would be: can you make a conscious entity out of particles that don’t couple to any massless degrees of freedom?

  133. Scott Says:

    Andrew #132: Thanks; that’s another possibility for falsifying this account that I hadn’t thought of! Or perhaps more precisely: I suppose the “cosmological/multiverse interpretation of quantum mechanics” would be falsified if, not only were there no massless particles, but there were no single particle of maximum speed, only an infinite hierarchy of particles that get closer and closer to c.

  134. Scott Says:

    Jay #131: The account I’m discussing here is precisely one according to which agents would be “computable but not copyable”—in the sense that, if you knew their state, then there’d be no obstruction whatsoever to simulating them on a computer, but you can’t learn their state to a high enough accuracy without destroying them. I don’t know whether our world is like that, but it’s very clearly a logically possible world.

    Conversely, we can also imagine a world where agents would be copyable but not computable! (E.g., a world where we were all pieces of software running on classical digital computers, but ones that had the ability to tap into an oracle for the halting problem…)
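    To make the distinction vivid with a toy sketch (the halting oracle below is of course a stand-in that no real code could supply; the names are illustrative only): an agent whose entire state is ordinary data is trivially copyable, no matter how powerful the black box it consults.

```python
import copy

class Agent:
    """Toy agent whose entire state is ordinary data, hence perfectly copyable."""
    def __init__(self):
        self.memory = []

    def step(self, question, oracle):
        # If 'oracle' decided the halting problem, this agent would be
        # copyable but not computable: duplicating it only requires
        # duplicating self.memory, not simulating the oracle.
        answer = oracle(question)
        self.memory.append((question, answer))
        return answer

a = Agent()
a.step("does program P halt?", lambda q: True)   # stand-in for the oracle
b = copy.deepcopy(a)    # a perfect copy, independent of the oracle's power
assert b.memory == a.memory
```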

  135. James Cross Says:

    So many comments here that I may be hitting topics already touched on.

    “And what’s done with wetware, there’s no reason to think couldn’t also be done with silicon. If your neurons were to be replaced one-by-one, by functionally-equivalent silicon chips, is there some magical moment at which your consciousness would be extinguished? ”

    I really never understood how this would actually be done. Communication between neurons is done with flows of ions (sodium and potassium, I think, primarily) through membranes. The silicon neuron would need to have a similar interface if it were to communicate with other living neurons, although later it might be converted to wire or some other form of interface. What would power the silicon neurons? Living neurons derive their power from chemicals. The chip would need to be able to power itself from the same source. So for both power and input/output, it would need to have something that is quasi-biological if not actually biological. What would the boundary between silicon and living material be like, to enable the silicon chip to actually replace the biological neuron?

    I think part of the problem is that we are looking at neurons as simple input and output machines. Perhaps they are, but how do we go from that assumption or belief to the idea that consciousness arises simply from these acts of input and output?

    In some ways we are thinking too hard about this. Consciousness is something that arises precisely from the complex carbon molecules that constitute living material. In that case it would have an evolutionary origin and couldn’t be replaced by silicon.

    I think there is also frequently a lack of clarity in the many different ways the term “consciousness” is used.

    Consciousness is frequently associated with intelligence but intelligent behavior can even be found in slime molds.

    http://www.nature.com/news/how-brainless-slime-molds-redefine-intelligence-1.11811

    Consciousness is often taken to mean self-awareness. In this case we use things like the mirror test to distinguish conscious creatures. This limits consciousness to a select group – humans, some other primates, dolphins and some whales, and bizarrely (according to Wikipedia) ants.

    https://en.wikipedia.org/wiki/Mirror_test

    Others want to associate consciousness with an advanced capability for symbol manipulation. In this case, only humans would likely be conscious, and the origin of this ability probably goes back to the use of tools and the development of language. Mathematics arises from this ability at symbol manipulation, and the idea that it is just an inadvertent side-effect is probably not correct, since logic and structure are inherent in language, ritual activity, and social structure. Numbers and mathematical concepts are language.

    Consciousness, as I think of it, arose as a control mechanism for the organism. The brain is near the mouth, the eyes, and the ears, and the spinal cord runs the length of the digestive system. It isn’t hard to imagine the purpose it serves. Its operation certainly involves algorithm-like activities, but that doesn’t mean it can be reduced to them.

  136. unapologetically_procrastinating Says:

    Scott #79

    Thanks for the reply and I see you handled what I really wanted to ask like the true zen master that you are 🙂

    I simply feel that TMs are too powerful a concept to be useful in understanding the mind, and it’s too early to commit to the kind of math needed to talk about the mind. We need to do more neuroscience.

    On the other hand, there is a new preprint out: Eric Jonas and Konrad Kording, “Could a neuroscientist understand a microprocessor?” [doi: http://dx.doi.org/10.1101/055624] that you may find interesting.

    Here’s the abstract:

    There is a popular belief in neuroscience that we are primarily data limited, that producing large, multimodal, and complex datasets will, enabled by data analysis algorithms, lead to fundamental insights into the way the brain processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. Here we take a simulated classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the processor. This suggests that current approaches in neuroscience may fall short of producing meaningful models of the brain.

  137. wolfgang Says:

    @Scott #133

    >> no massless particles

    But this shows that something is not quite right with your interpretation, imho.
    The interpretation of quantum theory depends on whether photons have exactly zero mass or almost zero mass?
    And our understanding of the consciousness of some apes on a tiny planet, usually living less than 100 years, significantly changed when the redshift of faraway galaxies was measured in the late 90s?

  138. Jay Says:

    Scott #134

    Ok, so let’s try to recap the worldview you suggest:

    1) The human mind can have an effective description, computable and copyable, so that a computer could pass the Turing test but would not be truly conscious (notwithstanding whether we *should* treat it as if it were).

    2) The human mind is best described by a quantum computer which uses a source of freebits, so that it is computable but not copyable, and, critically, the contribution of the freebits is non-trivial (notwithstanding the precise mechanism that then provides consciousness).

    We can certainly *imagine* that this is the reality. However, it would be far easier if you could provide an example for which one can *demonstrate* that this set of properties is indeed possible (namely, a quantum algorithm that does something non-trivial using freebits, but for which an effective, computable description holds even without the freebits’ contribution).

    In the same vein, your point that there are situations in which a computer program would find it hard to decide rationally would be far easier to understand if you could provide a concrete example (free from self-contradictions). Extra Turing points if it depends on freebits. 🙂

  139. Gil Kalai Says:

    Another thing related to predictability was raised in my 2012 debate with Aram Harrow: Aram’s second thought experiment (from the debate’s 4th post). Of course, a noiseless quantum computer is predictable, and Aram proposed that, starting with a noisy quantum computer (or another quantum system), we can embed it into a noiseless larger quantum computer of a sort, by including the environment that governs the noise in the description of the “computer.” (This argument is similar to some arguments in this discussion.) After a long exchange of comments (ending with this one), we came to the conclusion that pretty much the only way to identify a non-trivial noiseless (or predictable) quantum system using this idea is to do something as drastic as taking the entire universe.

  140. jonas Says:

    Scott, you mention that you weren’t roasted alive over your speculation in the Ghost article, about Knightian uncertainty from some cosmological phenomenon being a source of free will. Part of this might be because the topic of this joint session was Penrose’s theories, not yours, and the audience was too well-behaved to bring up your crazy theories, since that would count as an ad hominem attack unless you mentioned them first. But I’d like to mention why I am not roasting you about those theories.

    I do think that both of your theories are crazy. I have read Penrose’s book: I found it when searching for Martin Gardner in the library catalog, shortly after Gardner died in 2010, and found “The Emperor’s New Mind,” which has a foreword by Gardner. It’s a nice book that tries to explain a lot of science concepts in a popular way. But that claim about humans being able to break out of the limits of Gödel’s theorem and to solve uncomputable problems sounded totally crazy to me at the time. It still sounds crazy after this blog entry. And as for your Ghost paper, I’m not qualified to judge the physical part about there being particles left over from the big bang whose quantum states have not been examined since then, but I don’t think interpreting these as bringing Knightian uncertainty is a useful notion, I don’t think it’s relevant for arguing that you can’t simulate humans accurately, and I don’t think it helps understand free will either.

    But what saves both Penrose’s book and your Ghost paper is that you presented them as humble speculation, not as some indisputable truth and revelation. Penrose was very clear in his book: he distinguished which parts of what he tells are accepted scientific fact, and which parts are just his speculation that even he wasn’t sure about. Your article was similarly clear. Neither of you tried to do anything malicious with these speculations: you weren’t trying to set yourself up as humanity’s next prophet, nor did you try to sell cosmic-radiation-sensor-enhancing free-will balm. You weren’t trying to claim that Einstein was wrong, or to accuse the whole institution of academia of conspiring to hide your results, or to make insulting statements like “cubeless americans deserve – and shall be exterminated”. You didn’t even try to dub your speculation “a new kind of science”. This is why I didn’t call for burning you at the stake, why I didn’t stop reading your blog when you wrote Ghost, and, I hope, also why the spectators liked your panel with Penrose.

    —-

    Now I have a question for you, Scott. You say that part of the reason why you disagree with Penrose is that you are sure the rules of physics are computable, and that gravitational quantum effects won’t give you physical tools to actually solve the halting problem. I guess from your previous entries about the black hole firewall paradox and its information-theory connections that you also think the rules of physics must be computable in quantum polynomial time, and that physical reality won’t just let you solve PSPACE problems by using time travel based on general relativity. Do you believe this one? Is your belief in the former (computable physics) much stronger than your belief in the latter (quantum-polynomial-time physics)?

  141. Esso Says:

    Scott,

    I read your philosophical essays a couple of years back, great stuff! I was convinced that unitarily evolving quantum processes don’t seem to be alive. But aren’t your arguments for that position also obvious and serious criticisms of Many-worlds interpretation, which (I imagine) is about positing unitary quantum theory as the fundamental stuff of the world?

  142. Scott Says:

    Esso #141: Reread the post (or try Could A QC Have Subjective Experience?). In the “cosmological interpretation,” the whole point of talking about our de Sitter boundary is to give a definite criterion for when a measurement has happened, in a completely unitary picture of physics that would otherwise become unadulterated Many-Worlds.

  143. Scott Says:

    jonas #140: Yes, I do conjecture that BQP is the limit of what the physical world lets us compute in polynomial time (or at least, I regard investigating the truth or falsehood of that conjecture as one of the great scientific quests of our time). More strongly, I conjecture that no physical process lets us solve NP-complete problems in polynomial time. And more strongly still, I conjecture that no physical process lets us solve uncomputable problems. I don’t know the exact rate at which the beliefs become stronger as you go down the list.

  144. Scott Says:

    wolfgang #137:

      The interpretation of quantum theory depends on whether photons have exactly zero mass or almost zero mass?

    All else equal, I’d rather have an interpretation of QM that depends on nontrivial facts about the physical world, and that could be falsified if those facts turned out otherwise, than one that’s isolated by design from everything else we know about the universe. (Though I wouldn’t say that’s the goal with the cosmological view. The goal is just to give a definite criterion for when a measurement has happened, without breaking the rules of unitary QM. The vulnerability to future empirical findings then comes out as a byproduct.)

      And understanding the consciousness of some apes on a tiny planet living usually less than 100 years significantly changed when the redshift of far away galaxies was measured in the late 90s ?

    Our understanding of where the apes came from significantly changed when Charles Darwin examined the beaks of finches in the Galapagos islands. Our understanding of what the apes are made of changed significantly when people in early 20th-century Europe did experiments on radioactive ores (and also, as it happens, spectroscopy on distant stars). Understanding, you might say, “doesn’t follow the rules of causality,” in the sense that learning about A can suddenly illuminate B even though B is far in the past or the future, or billions of light-years away.

  145. Flavio Botelho Says:

    Scott #143:

    I agree with you on “no physical process lets us solve uncomputable problems”, but I don’t agree about the others. Could you elaborate why you feel NP-complete problems should not be solvable in polynomial time? Inductive reasoning? What about major theories in Physics that might change things (provide new operators)?

  146. Scott Says:

    Hannah #130:

      Actually I think we are practically saying “if there is a thought experiment that causes a philosophical problem, then it’s not consciousness.”

    Exactly. 🙂

    (Or we might say: as long as the known laws of physics make it totally unclear whether the thought experiment could even be done to humans, first clarify that issue, before the philosophical problem urgently presses itself on our attention…)

      What if it is physically possible in theory to predict someone, but practically impossible, so that these philosophical problems never come up in real life?

    In all my writing about these issues, my working assumption has been that whatever criteria we give had better be rooted in fundamental physics, rather than in contingent facts about what is or isn’t technologically feasible for humans. And you’re absolutely right that that influences the choices I make: someone who rejected that assumption would choose differently.

    For me, the decision tree looks something like this:

    (1) If consciousness somehow reflects a fundamental aspect of reality, then our criteria for it had better not be rooted in the contingencies of present-day technology!

    (2) Conversely, if consciousness doesn’t reflect a fundamental aspect of reality, then any attempt to give criteria like these is doomed from the start. The universe is then just one damn quantum field configuration after another, and certain computations in these fields are what we elect to call “consciousness.” However, you don’t get the liberty of saying this without then worrying about all the thought experiments my Singularity friends worry about, and that I mentioned in the post: would you get into the teleportation machine or not? would you consent to be painlessly euthanized and then replaced by a giant lookup table? If it were a thousand lookup tables, would that be a “net win” for you? Etc.

    Having said that, even if you accepted view (2), you could still re-ask many of the same questions that I asked in these writings, except now no longer “absolute” but “relativized to a particular level of technology.” (Exactly as you were doing in your comment.) E.g., even if it’s physically possible to predict everything a given person will say or think without harming them—they can just sit in a seat in the middle of the prediction contraption, as you said—still, how big would the prediction contraption need to be? This sort of question would have roughly the same relation to the questions I was asking, as complexity has to computability.

  147. Scott Says:

    Jay #138: Sorry, you went somewhat off the rails at (2). I don’t think there’s evidence that the brain is a quantum computer in any interesting sense. What it appears to be is a classical computer with chaotic sensitivity to amplified microscopic events (also known as an analog computer). To this view, which you might call the 100%-standard neuroscience view, all I add is the further question of whether the microscopic events can be described completely by a thermal noise source that “cleanly decouples” from the digital part, or whether there’s also a Knightian component.

  148. Scott Says:

    Flavio #145: Read my NP-complete Problems and Physical Reality (10 years old, but still reflects a lot of my thinking about these things).

    By far the biggest thing missing from our picture of fundamental physics, of course, is quantum gravity. But far from creating possibilities for hypercomputation, quantum gravity (because of the Bekenstein bound) seems to close off the hypercomputation proposals opened up by naïve consideration of previous theories! AdS/CFT also played a large role in convincing many physicists that, whatever the right quantum theory of gravity for our world is, it ought to be describable as an “ordinary quantum theory,” which would then presumably be efficiently simulable by a quantum computer.

  149. jonas Says:

    Scott re #146: The problem is, it’s difficult to regard those problems in the abstract, rather than connecting them to our present or near-future technology level and other realistic complications. I wouldn’t get into a teleportation machine (unless I were really desperate), but that has nothing to do with how consciousness works. I wouldn’t get into it because I believe the teleportation machine would be made by a profit-hungry company that pushed an early prototype to production without taking enough steps for safety, and there’s a high chance that using the teleportation machine would have side effects harming my body and mind, possibly in a way whose symptoms wouldn’t be obvious until two decades later.

  150. Darien S Says:

    >In short, I see it as a central selling point for the picture I’m discussing that it would not lead to any class of intelligent beings being arbitrarily discriminated against, because of magical/metaphysical criteria of no empirical relevance (“quantum pixie-dust”). Rather, the beings in question would already have a radically unfamiliar status in law and morality, before we even entered into theoretical questions about consciousness, for purely empirical reasons related to the beings’ copyability.

    The thing with predictability here is, how far back are you positing the necessary predictability? Nanoseconds, microseconds, milliseconds, seconds, minutes, hours, days, years? If it is just a few microseconds of predictability, then once the spine has sent down the action potentials, barring noise in muscular junctions or muscular variability, the prediction should be 100% accurate with or without any quantum effect occurring in the brain; it should also be indistinguishable, in terms of follow-up unpredictability, from that of a machine activating the spine’s nerves directly.

    There are things that might make indefinitely-far-back predictability impossible even if the system were entirely deterministic, even in a computer: a cosmic ray could flip some value, or (in the biological case) result in a mutation killing some neuron.

    Also, resting the distinction of consciousness on predictability would only be a functional distinction if it were absolutely impossible to predict the evolution of a system with the postulated quantum phenomena. But it is not impossible: it would only be impossible if there were an infinity of possible states at each transition, whereas it is claimed that the number of possible configurations is finite. Thus, even though the probability is relatively low, there is always a possibility of actually successfully predicting the evolution of any possible sequence, no matter what’s causing it.

    After time has passed and the prediction lies in the unchanging, solid-as-rock past, it will prove either false, partially correct, or one hundred percent correct. In the cases where it proves one hundred percent correct, it is a successful prediction that might have indefinite length.

    >the person’s consciousness—that for one reason or another, consciousness would fail to transfer smoothly from the original substrate to the new, predictable one—seems well within the range of reasonable opinions.

    It’s not just predictability. For example, the number of possible utterances is finite, so even random sampling could turn out to produce a prediction, even an arbitrarily long one, though the probability is low. So it is not just the ability to predict, as that is possible albeit by chance (unless true randomness does not exist, in which case there may exist a way to do so with absolute certainty), but the ability to predict with a certain justification of whether or not some specific quantum phenomenon is present.

    And again, with regard to prediction, we apply it in the future direction, not the backwards direction, since looking toward the past it is easy to “predict” all the prior actions. Isn’t this assuming that past, present, and future are different in kind? Aren’t we bringing absolute simultaneity in to create this division, which some would take issue with?

    >In some ways we are thinking too hard about this. Consciousness is something that arises precisely from the complex carbon molecules that constitute living material. In that case it would have an evolutionary origin and couldn’t be replaced by silicon.

    This depends on what the actual cause of consciousness is. Is it due to some form of information processing, storage, or a combination of storage and processing? If so, then billiard balls, electrons, water: the substrate doesn’t matter; it only changes the performance, not the nature, of the information processing. But if consciousness depends on some other phenomenon, perhaps a physical property, then it may truly need something that taps into that in order to be conscious.
    The way I see it, it is already ‘known’ that what the brain receives through the senses can be reproduced by an entirely digital system. So everything we know from the primary senses could in principle come from digital data and algorithms operating on it. As for the supposition that conceptions of entirely nonrepresentable ideas are real, with ‘special’ phenomena and undefinable things, well, those might simply be imaginary with no correspondence in reality.

    PS

    Regarding Boltzmann brains, which I think it was suggested wouldn’t be conscious even if biologically and physically identical: some have suggested the universe popped out of the vacuum in a similar way, though perhaps I’ve misheard. But why would we consider one collection of matter able to become conscious after emerging from the vacuum, and another collection of matter not able to do so after doing the same? Is it the volume of matter involved? The time? The inflation field that accompanied the universe? Are we assuming all of time and everything started with the universe’s origin, making it different from an isolated event such as a future Boltzmann brain?

  151. Anonymous Says:

    Scott #148: [Quantum gravity seems to close off hypercomputation.]

    From our limited knowledge of effective quantum gravity, it appears different from renormalizable quantum field theories in requiring an infinite number of measurements to determine all couplings and make predictions at very high energies. Isn’t that itself already at least vaguely analogous to hypercomputation, where we must run a computer program for an infinite number of steps to determine that it doesn’t halt?

  152. James Cross Says:

    #150

    “Way I see it, it is already ‘known’ that what the brain receives through the senses can be reproduced by an entirely digital system. So all that we know from primary senses can actually come in principle from digital data and algorithms operating on it. ”

    “Reproduced” seemingly means being able to produce the same digital output from the same input. It doesn’t mean that the same qualia involved in consciousness are produced.

    I think there is a good deal of algorithmic processing involved in seeing, for example. But almost all of it occurs below the level of consciousness.

    https://broadspeculations.com/2015/08/30/blindsight/

    “The supposition that conceptions about entirely nonrepresentable ideas is real, with ‘special’ phenomena, undefinable things, well those might simply be imaginary with no correspondence in reality.”

    Not sure what is meant here, but it seems to me you are on shaky ground if you are arguing that sensory perceptions are more real than conceptions or nonrepresentable ideas. The eyes and the brain do not passively see reality but actively shape it. See Visual Intelligence: How We Create What We See.

  153. Scott Says:

    Anonymous #151: No, you’re conflating two completely different things. The non-renormalizability of GR is why it’s so hard to guess the right quantum theory of gravity by applying standard “quantization” procedures; it’s a sign that new ideas are needed. It says nothing about the computational complexity of simulating a quantum gravity theory once you did have it.

  154. AI | An Interior Forest Says:

    […] Or, here’s my personal favorite, as popularized by the philosopher Adam Elga: can you blackmail an AI by saying to it, “look, either you do as I say, or else I’m going to run a thousand copies of your code, and subject all of them to horrible tortures—and you should consider it overwhelmingly likely that you’ll be one of the copies”? (Of course, the AI will respond to such a threat however its code dictates it will. But that tautological answer doesn’t address the question: how should the AI respond?) […]

  155. Darien S Says:

    >Not sure what is meant here, but it seems to me you are on shaky ground if you are arguing that sensory perceptions are more real than conceptions or nonrepresentable ideas. The eyes and the brain do not see reality but actively shape it. See Visual Intelligence: How We Create What We See.

    My point is that the brain could easily be connected to machinery that, through some manner of stimulation, reproduces all the sensory input. Aka, our ‘known’ reality. To postulate that within reality lie unknown, perhaps unrepresentable phenomena is what I take to rest on less solid footing.

  156. fred Says:

    Scott #90
    “thinking through the full strangeness of a world with copyable minds!”

    But would it be any stranger than a world where we could clone dog turds perfectly?
    If a mind/brain is nothing more than a certain arrangement of atoms, still following the same laws of physics as the atoms in a dog turd, why would it matter?
    “Making decisions as a rational agent” is still nothing more than the atoms of my brain following their dynamics, unless you do believe that the mind is a secret sauce having causal power on the matter that instantiates it, somehow taking unremarkable initial conditions and turning them into some special state.

    Does the nature of a CPU change based on what high level language/algorithm it’s running?
    Are the algorithms telling the CPU what to do or are the transistors just going about their business (as per spec of the hardware designers), whether they’re running “hello world” or some fancy deep learning algo?

    All matter is interconnected, in time and space, so systems don’t magically exist/appear in isolation. The problem is that “systems” as a finite clump of space/time/matter are a leaky abstraction, the boundary of a system is the initial condition of another. But we try to ignore the initial conditions!

    Whatever property of consciousness/rational-decision-making matter exhibits right now, it must have been present all along, in the initial conditions of the universe.

  157. Scott Says:

    fred #156:

      But would it be any stranger than a world where we could clone dog turds perfectly?

    This will be my last comment in this thread, since I’d like to move on to a new topic rather than going around in circles repeating what I said in GIQTM.

    I don’t want to know “the nature of consciousness” as a matter of inert metaphysical curiosity. Rather, I want to know the answers to questions like: should I get into the teleportation machine, or shouldn’t I? should I agree to having myself scanned and uploaded into a computer? what about having myself replaced by a huge lookup table? should I sign up for cryonics, expecting to ‘wake up’ in a far future where nanobots have reconstructed me?

    Indeed, as far as I’m concerned, there might be nothing that it means to understand consciousness, other than to be able to answer all possible questions of that kind.

    And if there’s a dog turd that’s capable of formulating these sorts of questions, then let it worry about whether it should get into the teleportation machine. Indeed, even if the dog turd were merely capable of intelligent behavior, I might worry about these questions on the dog turd’s behalf.

    But one thing that I know for certain: you can’t escape the need for some criterion to decide which physical realizations of “your computer program” are or aren’t able to instantiate your consciousness. If you think you can escape this, then why not simply kill yourself now? After all, the laws of physics are time-reversible, so the state of the universe will still contain all the information needed to reconstruct you. In that sense, we could say that the universe will continue to contain a program that perfectly simulates you; indeed, even a program that simulates you living a fulfilling life and having a wonderful time (as well as programs that simulate you having a horrible time, etc.). Someone could even claim that the universe is “executing” that program; they just need to apply a method of interpretation to the output that involves uncomputing everything that’s happened between when you killed yourself and the current time, then running the equations forward under a different condition.

    So, again: some criterion is necessary. You could try for a complexity-based criterion, like I did in Section 6 of Why Philosophers Should Care About Computational Complexity. Or you could try for an unclonability-based criterion, like I did in GIQTM. But if you want to take the line that we’re just evanescent patterns of computation, and therefore nothing more needs to be said about us than about dog turds—well then, a sorcerer comes along and offers to transform you into an astronomical-sized dog turd, one that happens to contain a lookup table caching your responses to all possible questions you could be asked within (say) a 100-year lifespan. Do you agree to that?

    If you don’t, then you’ve implicitly conceded that something more does need to be said.

  158. fred Says:

    Scott #157

    “you can’t escape the need for some criterion to decide which physical realizations of “your computer program” are or aren’t able to instantiate your consciousness.”

    According to Hofstadter in “I Am a Strange Loop”, instantiating a consciousness is probably not a binary thing, but a continuous spectrum.

    E.g. when someone dies they don’t totally disappear instantly, but a tiny bit of who they are lives on for a while through the consciousness/memories of those who knew him/her.

    And as you start copying your own brain better and better, you’ll end up with versions of yourself doing a better and better job at proving they’re you and debating your ideas as to why they should be terminated!

  159. fred Says:

    For the record, I’m not terribly interested in the question “should I get into a teleportation machine or not?”.

    I’m more interested in what it means to be conscious/alive as one’s body and brain degrade.
    It could alter the way we treat the less fortunate among us and animals.

  160. Jair Says:

    I do not believe the kind of nihilism that equates humans and dog turds can be answered purely by logical argument. It requires introspection. I know that there is a deeper question to consciousness just by looking around (or “experiencing qualia.”) It seems perfectly obvious that “I” am experiencing something rather than nothing. I am certain this is a mystery that will never be dispelled by science.

  161. mjgeddes Says:

    I really don’t see why copying is paradoxical; weird, yes; puzzling implications for ethics and laws, yes; paradoxical, no.

    In object-oriented programming, you have the abstract ‘class’, and the ‘objects’ themselves, which are concrete instantiations of the class. If I could be copied, presumably it would be the same thing as that – there would be one abstract class ‘Marc Geddes’, which would be the main entity recognized as a single ‘person’ in terms of law, and multiple separate concrete objects ‘MJG1’, ‘MJG2’, etc., that would have some lesser rights as separate entities with conscious experiences.

    Scott has made a valiant try, and I highly recommend his paper ‘GIQTM’, which is a fantastic read! At the end of the day though, I’m just not buying it.

    The ‘free-bit’ idea is certainly intriguing, but just doesn’t seem that likely to me, because I find it hard to believe that entirely isolating myself from these ‘free-bits’ is going to turn me into an unconscious automaton.

    I have already posted a general sketch of my own solution to the puzzle of consciousness in the comments here on Scott’s blog and elsewhere.

    I need to clarify one idea I suggested, which was a break-down of reductionism. Some people might think I was suggesting uncomputability, but that’s definitely not the case!

    In fact, I do totally agree with Scott that the physical world is entirely computable. A failure of reductionism does NOT mean physical uncomputability, I want to make that point absolutely clear.

    In a failure of reductionism what you would actually have is separate layers A, B, C, each of which can be entirely computable. Any uncomputability would NOT make an appearance in empirical reality – it would be confined to the abstract ‘bridging laws’ that explain how the layers connect – any ‘uncomputability’ is just a wholly abstract mathematical property that can’t ever appear physically. In short, if something is physical, it’s computable.

    To summarize my own view on consciousness: it can be generated by a fairly simple Turing machine and is entirely computable, with no quantum elements needed. I think the program needs to be a recursive self-modelling program with 3 levels of abstraction (mirroring the 3-level split in reality): an evaluation system, a decision-making system and a planning system. Any program that has those systems and is doing information transmission across the 3 layers of abstraction, I think, is conscious.

  162. David Pearce Says:

    I know it’s tempting to make fun of “quantum pixie-dust”, yet we might just as well make fun of “classical pixie-dust”. If our ordinary understanding of matter and energy as set out in the Standard Model is correct, then consciousness ought to be impossible. We should all be p-zombies. Nor can consciousness “arise” at some level of computational abstraction in a digital computer on pain of spooky “strong” emergence (How? When? Where? Why?) At least one “obvious” presupposition or background assumption that scientifically literate people are making must be mistaken. The challenge is to work out what this error might possibly be.

    In recent years, a minority of theorists have explored the prospects of non-materialist physicalism. (cf. Galen Strawson http://www.nytimes.com/2016/05/16/opinion/consciousness-isnt-a-mystery-its-matter.html) Non-materialist physicalism isn’t old-fashioned panpsychist property-dualism, but rather the conjecture that experience discloses the intrinsic nature of the physical, the mysterious “fire” in the equations on which physics is silent. This proposal is bizarre, but not demonstrably false. However, as David Chalmers (cf. http://consc.net/papers/combination.pdf) and others have recognised, neither panpsychism nor non-materialist physicalism seem capable of explaining why we’re not just patterns of classical Jamesian mind-dust – so-called “micro-experiential zombies” with no more experiential unity than a termite colony or the population of the USA. Regardless of whether experience is fundamental to the world, the phenomenal binding of a pack of discrete, decohered, membrane-bound feature-processing neurons into perceptual objects, let alone an entire classical world-simulation, is classically impossible. Hence Chalmersian dualism.

    Giving up on monistic physicalism is surely a last resort. Unfortunately, the prospects of a quantum-theoretic explanation of phenomenal binding look dire too. If (fancifully!) the effective lifetime of coherent superpositions of distributed neuronal feature-processors (edge-detectors, motion-detectors, colour-neurons, etc) in the CNS were milliseconds, then there would be an obvious candidate for the missing structural match between the formalism of physics and the phenomenology of our minds: binding-by-synchrony is actually binding-by-superposition. But of course that kind of extended lifetime is off by a dozen orders of magnitude or more. (cf. Max Tegmark’s http://www.sciencedirect.com/science/article/pii/S0020025500000517) Decoherence timescales of neuronal superpositions in the CNS must be insanely rapid – femtoseconds or less. Intuitively, such timescales are the reductio ad absurdum of quantum mind.

    Well maybe. One man’s reductio ad absurdum is another man’s experimentally falsifiable prediction. For what it’s worth, I think if we combine (1) non-materialist physicalism with (2) Zurek’s “quantum Darwinism” (cf. https://arxiv.org/pdf/0903.5082v1.pdf) applied to the CNS (i.e. the decoherence program of post-Everett QM) then we’ll find that selection pressure ensures a perfect structural match between the classically impossible phenomenology of our minds and the formalism of QFT. Phenomenally-bound biological minds have been quantum computers not just since Democritus but since the early Cambrian. Such a perfect structural match is insanely implausible, for sure; but I look forward to tomorrow’s molecular matter-wave interferometry that puts our naive commonsense intuitions to the test.
    And classical digital computers? Just invincibly ignorant zombies, IMO.

  163. Darien S Says:

    > I know that there is a deeper question to consciousness just by looking around (or “experiencing qualia.”) It seems perfectly obvious that “I” am experiencing something rather than nothing. I am certain this is a mystery that will never be dispelled by science.

    >And classical digital computers? Just invincibly ignorant zombies, IMO.

    Thing is, unless you take consciousness to be epiphenomenal, and it sheer luck that nature happened upon a process that produced such an epiphenomenon, then we must take consciousness to be functional in nature. We also know that conscious experience, or qualia, has content, variable and potentially very rich content, implying a need for vast memory to store it.

    In explaining consciousness you need not only explain why there is consciousness, but what gives the different qualia their specific quality and makes these qualia different from one another. You also need to explain how the various conscious chunks of experience are put together into a coherent whole.

    The vast amount of content of consciousness implies it cannot be stored within a single neuron, but throughout a large ensemble, probably spanning a large quantity of brain tissue. Consciousness also does not correspond to the present but to the recent past; nor is it synonymous with an ongoing physical flow or process, as there is evidence of postdiction, implying the contents of consciousness are computed somehow after the fact.

    Computers can incorporate physical random number sources making them as unpredictable as anything can be, assuming physical sources can produce the mythical true randomness. If it turns out there is some quantum process giving some functional advantage to brains, it should be seen in future experiments, but if the brain is matched or surpassed in terms of function in the real world, it will be doubtful that consciousness has not occurred if processes similar to those of the brain have been simulated.

    If, as I and many others believe, consciousness is functional, then as we gain knowledge of the function of the brain it will elucidate its nature and perhaps do away with the mystery, whatever the cause may be. Vitalism and the mystery of life held a similar status in the past, with philosophical debates about it, but our present understanding of biological life mostly explained away the mystery; perhaps the same can occur with consciousness as more is understood about the brain.

  164. Albert Says:

    I would have settled for a simple yes or no 😉

  165. fred Says:

    For me one difficulty with consciousness being rooted in the physical is that our definition of matter is a collection of point particles – atoms, quarks, electrical/magnetic fields (photons), etc.

    If that is all there is to physical reality, how can a “global” property such as consciousness emerge from a soup of point particles?
    If one atom, taken in isolation, isn’t a conscious system, then why would *any* number of atoms be conscious? (you start with one, add another, then another, then another… why would adding one more particle at any point make a difference?).
    It’s the equivalent of those starling flocks
    https://www.youtube.com/watch?v=eakKfY5aHmY
    It looks like there’s a global “object” there, but when you look closely you realize it’s an illusion.

    I would think that if consciousness is truly an emergent property, it would have to be something that’s a property of the system as a whole, and be continuous.
    The wave function of a QM system seems to fit the bill, but apparently it’s just a mathematical tool (?)

    The other approach is to say that consciousness is pure information (digital), but then it could just be instantiated by the mere concept of numbers (then physical reality is an illusion, the universe is mathematical, etc).

  166. adamt Says:

    Darien S #163,

    Do you take consciousness to be a side effect of a functional thing or a functioning thing itself? Your comment seems to imply the former and I take it that you believe the functioning thing would be the brain, yes?

    However, the brain – and the body that it relies upon – have many different functions. Can you isolate which one(s) produce the side effect of consciousness? I don’t think so. But perhaps you think that in principle this should be possible. That future advances will let us draw a circle around exactly those necessary and sufficient functions of the brain – and the body it depends upon – that give rise to the side effect of consciousness, yes?

  167. James Cross Says:

    fred

    “If that is all there is to physical reality, how can a “global” property such as consciousness emerge from a soup of point particles?”

    Take an eddy in a river for example. The eddy isn’t a property of the water molecule but arises during the flow of many water molecules. The eddy is in a sense unsubstantial but still real.

    Darien S.

    “The vast amount of content of consciousness implies it cannot be stored within a single neuron, but throughout a large ensemble probably spanning a large quantity of brain tissue.”

    An open question is how large a quantity. Are birds conscious? Very small brains, and yet some have estimated the intelligence of crows as equivalent to that of 7-year-old humans. Intelligence isn’t consciousness, but their behavior suggests consciousness as much as the behavior of other humans we interact with suggests they are conscious.

    My thought has always been that most of the brain is taken up with monitoring internal state, mapping sensory input, and coordinating movement. The amount of brain actually spent on consciousness may be relatively small, and most, maybe all, organisms with brains may have some degree of consciousness.

    “In explaining consciousness you need not only explain why there is consciousness, but what gives the different qualia their specific quality and makes these qualia different from one another.”

    Never have quite bought the idea that we need to explain the specific quality of qualia. Take the qualia of redness. We probably developed an ability to discriminate redness from its advantage in selecting ripe fruit. As to why redness looks the way it does, that probably involves whatever path evolution took to generate it from the predecessor genes and structures in earlier primates without color vision. Other species that took a different path to color vision may not see the same red we do. For that matter, you and I may not see the same red either, if the actual quality of qualia is dependent on relatively small genetic differences. So the short answer, I think, is that the quality of qualia is somewhat arbitrary.

  168. James Cross Says:

    adamt

    “Can you isolate which one(s) produce the side effect of consciousness?”

    Actually you can. It is a small group of cells in the brain stem. If you damage them there is permanent coma even with the rest of the brain intact.

    The cells are what was formerly known as the reticular activating system and have a long evolutionary history. They play a role in integrating external stimuli and internal state. They regulate wakefulness and sleep. The coordination and control of the rest of the brain by these cells probably is what produces consciousness.

  169. Jay Says:

    Scott #147

    I really don’t know how to answer that. Whether we should call that a quantum computer is a point of detail which has no impact on the substance of my post, plus… it’s not my convention, it’s the convention you used for GIQTM’s title!

    BTW, if that was a question: no, I wouldn’t call that a standard neuroscience view. First, because most neuroscientists just don’t know or don’t care enough to form an opinion. Second, because many key neural processes involve discrete quantities (for example: the number of synaptic vesicles that fuse to release neurotransmitters into the synaptic cleft). It’s hard to see how small analog errors could survive and spread from one neuron to the next despite these mechanisms.

  170. JimV Says:

    All thinking that involves problem-solving and/or decision-making is a type of math. Example: I need to do three things in the next hour – buy some groceries and ice cream, pick up my dry-cleaning, and get a prescription filled at a drug store; where should I go in what order? (I could repose the problem in set theory to make it seem mathier if necessary.) Computers can do such problems as well or better than humans given the necessary data and rules, and they can learn data and rules from experience like babies do.
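    JimV’s errand-ordering problem can be sketched as a tiny brute-force search. This is a hypothetical illustration with made-up coordinates, not anything from the thread:

```python
from itertools import permutations
from math import dist

# Hypothetical coordinates (in km) for home and the three errands.
places = {
    "grocery": (1.0, 2.0),
    "dry_cleaner": (4.0, 0.5),
    "drug_store": (3.0, 3.0),
}
home = (0.0, 0.0)

def route_length(order):
    """Total distance of home -> each stop in `order` -> home."""
    stops = [home] + [places[p] for p in order] + [home]
    return sum(dist(a, b) for a, b in zip(stops, stops[1:]))

# With only three stops, checking all 3! = 6 orderings is trivial;
# for many stops this becomes the traveling-salesman problem.
best = min(permutations(places), key=route_length)
```

    The point is only that “where should I go in what order?” is already a well-posed optimization problem of exactly the kind computers handle routinely.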

    Then there is motivation, drive, purpose. Ours is programmed into us by evolution and by our upbringings. Computers would have to have some prime directives programmed in. (Basic survival is not a universal drive – you have to want to survive.) But as evolution programmed us, we could program it into computers.

    Then there are feelings: the scent of a rose, accomplishment, hormonal drives. How to program the functional equivalent of positive and negative and descriptive sensations into a computer (to recognize the scent of a rose, for example) does not seem like an insurmountable problem to me. Will they “feel” the same to a computer as they do to us? No. This is where I think most of the arguments from incredulity given above stem from. The key point, I think, is not to be a chauvinist and assume that the way humans do it is the “right” or only way it is possible.

  171. fred Says:

    Btw, the Feynman thing about magnets is this:

    https://www.youtube.com/watch?v=MO0r930Sn_8

    The interviewer asks if he can explain what you feel when you try to press two magnets against each other, what’s going on there.
    Feynman answers by looking at the nature of understanding and explanation, i.e. the difficulty of a “why?” question.

  172. adamt Says:

    James Cross #168,

    “Actually you can. It is a small group of cells in the brain stem. If you damage them there is permanent coma even with the rest of the brain intact.”

    So these cells are necessary, but are they sufficient? Are there no other parts of the brain/body that are necessary for consciousness to arise? If there are, then it can’t be said that consciousness arises from just these cells, right?

    Also, can we find these same cells, or analogous ones, in all other things we conventionally regard as conscious?

  173. mjgeddes Says:

    Dave #162,

    One of the things I like about Scott is that he tries to keep everything rooted in empirical reality. You are just throwing largely meaningless metaphysical terms around.

    In terms of the consciousness conundrum, I accept the arguments of Chalmers : a purely functional account of the brain indeed completely fails to explain consciousness. So no sort of physicalism can help us.

    The point Scott makes very well is that turning to quantum effects simply can’t help explain consciousness in the slightest. So even if everything you (Dave) said made sense, all you’d have done is replaced one form of matter shuffling (classical physics) with another one (quantum physics). The mystery of consciousness would remain untouched. So if you’re looking to understand consciousness, bringing quantum physics into it just complicates and obfuscates the issues for absolutely no reason.

    Chalmers’ ‘property dualism’ is entirely correct in my view. My view is that we just have to broaden our ontologies and accept ‘mental properties’ as a fundamental ‘primitive’ property of reality: we have no trouble accepting that ‘physical reality’ and ‘mathematical reality’ are just ‘brute facts’ of existence, so why should there be any problem adding ‘mental reality’ to the list?

    Although taking consciousness as a ‘brute fact’ (primitive) of reality prevents us from ever obtaining a *full* explanation, we *can* still obtain a *partial* scientific understanding of consciousness by finding out what mathematical and physical properties consciousness is correlated with (i.e., we can still learn about the ‘bridging laws’ that connect consciousness to matter and math). And I think this is the best we can ever do. And that’s OK. IIT (Integrated Information Theory) is an example of this sort of theory.

    Strong emergence (a breakdown of reductionism) and/or panpsychism is the natural and inevitable consequence of accepting that consciousness is a just a ‘brute fact’ (primitive) feature of reality.

    In my view, the ‘value’ of minds is not based on how unique or ‘copyable’ they are, but rather on the degree of consciousness present, and a partial theory of consciousness of the type I describe above should still be capable of resolving the mind-copying puzzles by explaining what configurations of matter count as consciousness and defining personal identity appropriately.

    I don’t see any real paradoxes with copying. There is the single abstract ‘class’ of me, and multiple possible instantiations of that class, which are the concrete ‘objects’ me. The different concrete objects of me can all be distinguished by very simple macrofacts: namely, that they have different locations in time and space! (This is no different to the fact that in programming, different ‘objects’ of a single ‘class’ can easily be distinguished simply by the fact that they occupy different locations in memory).
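    mjgeddes’s class/object analogy can be made concrete in a few lines of Python (an illustrative sketch; the names are of course hypothetical):

```python
class MarcGeddes:
    """One abstract 'class'; each instance is a distinct concrete 'object'."""
    def __init__(self, label):
        self.label = label

mjg1 = MarcGeddes("MJG1")
mjg2 = MarcGeddes("MJG2")

# Same abstract class, yet the two objects are distinguishable by the
# bare "macrofact" of occupying different locations in memory:
assert type(mjg1) is type(mjg2)   # one class
assert mjg1 is not mjg2           # two distinct instantiations
assert id(mjg1) != id(mjg2)       # different memory addresses
```

    Whether spatiotemporal location really settles questions of personal identity is of course exactly what the thread disputes; the code only shows that the programming side of the analogy is unproblematic.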

  174. James Cross Says:

    adamt 172

    Actually, as I said, these cells have a long evolutionary history and go back, I think, at least to reptiles, before the split into birds and mammals. It certainly makes sense that what regulates sleep and wakefulness would have a key role in consciousness, since deep sleep is another time when we become almost if not completely unconscious. It also makes sense that the development of attention would have evolutionary value for searching for food and avoiding predators.

    Your point is well-taken from one standpoint. The peculiar quality of consciousness – its content – is probably governed by structures in other parts of the brain. So the large amount of human brain devoted to visual processing makes human consciousness visually oriented whereas a bat’s brain probably makes theirs more auditory.

  175. adamt Says:

    James #174,

    Actually I didn’t have a point there I was just asking questions 🙂

    Sounds like the answer is that these cells may be necessary, but are not sufficient to draw a circle around and point and say -> that’s what gives rise to consciousness. But let’s say they were…

    What we’ve done from the scientific reductionist perspective is to draw the circle ever tighter around lumps of matter. We went from the brain as a whole to this clump of cells with a long evolutionary history.

    If you put these cells in the void of space with only a minimal environment to keep them alive … do you think consciousness would arise? Would these cells have qualia? Would they know what it feels like to be?

    Ok, that’s a fun thought experiment. Let’s try another one using the Sorites paradox…

    Take a human being in a space suit and put them in the void of space with no other support. Now start removing cells from the human’s body one-by-one. We remove the cells in order of dependence necessary for consciousness so that the least necessary are removed first and so on. With removing precisely which cell do we remove the consciousness?

    Scientific reductionists would I presume believe that there is a specific minimum number of necessary and sufficient cells that must remain in any human’s body to regard it as having consciousness and then we’d point to these cells and say -> there, there this is the lump of matter that gives consciousness. Presumably they’d go even further and could say this specific number of atoms in these specific energy states. Is that right?

    Certainly a scientific reductionist would say that at some specific point in the womb a fetus becomes conscious and before that they were not conscious, yes? And that it must be some *physical* change that takes place between those two moments that give rise to the spark of consciousness?

  176. Darien S Says:

    >Do you take consciousness to be a side effect of a functional thing or a functioning thing itself? Your comment seems to imply the former and I take it that you believe the functioning thing would be the brain, yes?

    I take it consciousness is the result of some functional aspect of brain function, relating either to special data structures, to mechanisms, or to a combination of mechanisms and data structures. Once the brain is fully understood the mystery should be clarified.

    > I don’t think so. But perhaps you think that in principle this should be possible. That future advances will let us draw a circle around exactly those necessary and sufficient functions of the brain – and the body it depends upon – that give rise to the side effect of consciousness, yes?

    Well my point is not that it is a side effect, but that it is functional, and in eventually knowing the true function and how it is undertaken we will get to know the nature of consciousness.

    >So short answer I think is that the quality of qualia is somewhat arbitrary.

    To some degree it may be, but if we were to recreate a specific sensation in a newly designed organism we would need to know how to generate it. There is the famous quote that says “What I cannot create, I do not understand.” If humans can’t design brain-like systems with specific qualia, perhaps some that don’t even occur in nature, how can they be said to fully understand the nature of consciousness?

  177. fred Says:

    mjgeddes #173

    “This is no different to the fact that in programming, different ‘objects’ of a single ‘class’ can easily be distinguished simply by the fact that they occupy different locations in memory”

    Yeah, and if you go one step further, you also have to define various “equality” methods on your class.

    Equality for identity – if any of these fields are different between two objects, it means that you’re dealing with objects of different identities (Bob vs Scott).

    Equality for value – to compare a given object against a different mutated version of itself (e.g. Scott of 1996 vs Scott of 2016).

    That seems obvious, but it’s often a source of subtle bugs – e.g. lots of classes try to provide a generic hash code for their object, but what fields to hash really depends on the context.
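    fred’s two notions of equality can be sketched like this (hypothetical names, a minimal illustration):

```python
class Person:
    def __init__(self, name, memories):
        self.name = name          # identity-defining field
        self.memories = memories  # mutable state that changes over time

    def same_identity(self, other):
        # "Equality for identity": is this Bob or Scott?
        return self.name == other.name

    def same_value(self, other):
        # "Equality for value": Scott-of-1996 vs Scott-of-2016.
        return self.name == other.name and self.memories == other.memories

scott_1996 = Person("Scott", ["grad school"])
scott_2016 = Person("Scott", ["grad school", "MIT", "blogging"])
bob = Person("Bob", ["grad school"])

assert scott_1996.same_identity(scott_2016)   # same person...
assert not scott_1996.same_value(scott_2016)  # ...in a mutated state
assert not bob.same_identity(scott_1996)      # different identity
```

    The hash-code pitfall fred mentions is exactly the choice of which fields feed `same_value` versus `same_identity`: hash on the wrong set and two “equal” objects land in different buckets.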

  178. fred Says:

    Darien #176

    Regarding qualia, I find it interesting to try and go back to the earliest of emotions that must have evolved, like Pain and Pleasure.

    Pain – when the organism needs to move away from a negative (dangerous) context to a neutral context, like a bacterium avoiding lethal sunlight.
    Pleasure – when the organism needs to move from a neutral context to a positive (beneficial) context, like a bacterium swimming towards a food source.

    If we consider an organism that’s a basic “mindless” automaton, those reactions would be hardwired directly from the sensing cells to the muscle cells, triggering various chemicals.
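
    Such a “mindless” automaton can be sketched as a pure stimulus-response function (an illustrative toy with made-up thresholds, not a biological model):

```python
def hardwired_reflex(light_level: float, food_gradient: float) -> str:
    """A 'mindless' bacterium: sensors wired straight to motor output.

    No internal state and no representation of pain or pleasure -- just
    direct mappings from stimulus to movement, as described above.
    """
    if light_level > 0.8:        # dangerous context -> move to neutral
        return "tumble away from light"
    if food_gradient > 0.0:      # beneficial context -> move toward it
        return "swim up the food gradient"
    return "random walk"         # neutral context -> wander
```

    The point of the sketch is that nothing in it *feels* anything: the whole behavioral repertoire is a fixed mapping from inputs to outputs.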

    But if the organism were magically conscious and thought it had free will, yet somehow couldn’t feel pain and pleasure while still having those reactions hardwired, then there would be a strong dissonance going on – how could it reconcile having free will with a body that is (re)acting automatically “on its own”?

    Did pain and pleasure evolve so that the automatisms and free will could be “aligned” and coexist?
    What came first: emotions or consciousness? Are they the same thing? Can there be consciousness without any emotions/feelings?
    What would be the purpose of emotions/consciousness if the organism had no “degree of freedom” to act upon, like a tree?

    And emotions like pain and pleasure aren’t absolute either. The line between the two is blurry, e.g. too much of a good thing can be bad, or pain can turn into pleasure (masochism).

  179. James Cross Says:

    adamt

    “If you put these cells in the void of space with only a minimal environment to keep them alive … do you think consciousness would arise? Would these cells have qualia? Would they know what it feels like to be?”

    This is somewhat like putting a television in a lead-lined box with no cable connection and asking what image would be showing on the screen.

    The cells coordinate external stimuli and internal body states. If you disconnect them from the body and sensory organs, it would be like static on the television screen.

    fred

    Your comments about pain and pleasure are important. It reminds us that consciousness isn’t an abstraction but something that happens in a physical organism.

  180. adamt Says:

    James Cross #179,

    So it just can’t be said that these cells entirely account for what we call ‘consciousness’. Other things are required.

    I wonder if sensory organs are needed. They certainly don’t seem to be necessary for qualia in dreams. What do you mean by “internal body states”? Is this well defined?

    For the non-dual physicalist who believes that consciousness arises through a physical process, I think it is a good question to ask exactly what physical matter and energy, in what configuration, is minimally sufficient to produce consciousness in a human. You mentioned these cells as perhaps a starting point, but what you’ve said since reveals that the cells may be necessary but are far from sufficient, so it really can’t be said that consciousness arises from these cells, right?

    Another question for the non-dual physicalist… do you accept that if it can be shown that the physical process that gives rise to a mental state is temporally discordant with that state, this would be evidence that there must be some other, non-physical cause of the consciousness? In other words, if we could measure precisely *when* a mental quale arises and when the associated physical state occurs, and found that they were not simultaneous, would this be evidence that something non-physical must have been a precursor cause of the mental state?

  181. Here's why some computer scientists suspect your brain is just a fancy computer | IOT POST Says:

    […] a full transcript of their discussion isn’t available, but Aaronson has published his response to Penrose on his blog, along with a thorough summary of the entire […]

  182. James Cross Says:

    adamt

    Read about the reticular formation here:

    https://en.wikipedia.org/wiki/Reticular_formation

    It controls the heart, blood vessels, sleep, wakefulness, attention, pain modulation, posture, and the integration of sensory input.

    My working theory is that consciousness arises in the mediation of external stimuli and the control of internal state. Of course, in humans the reticular formation engages many other parts of the brain to do this. I think it is more like the central relay switch in the process.

    Dreams, of course, arise during sleep, which seems to be a mandatory requirement for animals with brains, and during which sensory input is temporarily attenuated. During deep sleep, of course, we are close to unconsciousness. During sensory deprivation, too, dreamlike and hypnagogic states frequently arise. None of this is well understood, but it might be that other parts of the brain create images because of the reduced sensory input.

    Regarding minimum requirements for consciousness: I believe it possible that the rudiments of consciousness might be found in the first worms. I’ll quote from something I wrote a while ago:

    “The same basic body plan found in the worm is found in fish, reptiles, amphibians, mammals, and, of course, humans. We, like the worm, are a digestive tract with a mouth and brain at one end, an anus at the other end, and a neural cord running the length parallel to the digestive tract. The evolutionary path between the worm and ourselves is primarily a path of increasing elaboration on the worm body plan. With vertebrates over 500 million years ago, the neural cord becomes more protected by becoming encased in vertebrae, and we have the origin of what can truly be called a brain. With mammals about 200 million years ago, the brain increases in size and the neocortex develops. Primates about 65 million years ago develop an enlarged cerebral cortex.

    The path to consciousness may have begun with the moment a primitive worm-like creature sensed something in its mouth and a nerve impulse triggered it to close and consume the food. It is not an accident that the key organs of four of the five senses and the core neurological mass – the brain – are all located near the oral cavity. From an evolutionary standpoint, consciousness began to find and consume food. It combines the sense organs to find food with the neurological apparatus to direct the organism to catch it and consume it. Later with sexual reproduction the brain and neurological system served to assist in the attraction and selection of mates. Evolution’s somewhat haphazard process of converting, enhancing, and modifying things serving one purpose to other purposes has driven not only increasing neural mass (larger brains) but also the development of specialized structures. Consciousness has not been achieved simply by accumulating additional neurons.”

    https://broadspeculations.com/2012/10/21/floating/

  183. Here’s why some computer scientists suspect your brain is just a fancy computer – InBusiness Says:

    […] a full transcript of their discussion isn’t available, but Aaronson has published his response to Penrose on his blog, along with a thorough summary of the entire […]

  184. Django Says:

    Scott: “To see why, I’d like to point to one empirical thing about the brain that currently separates it from any existing computer program. Namely, we know how to copy a computer program. We know how to rerun it with different initial conditions but everything else the same. We know how to transfer it from one substrate to another. With the brain, we don’t know how to do any of those things.”

    Is there any other physical thing that is inseparable from a computer program? Setting aside notions of consciousness, is there anything that can’t be thought of as the physical laws of the universe being run as an algorithm? Why would anyone then expect to find some process in the brain that isn’t simply a computable phenomenon?

  185. Sergei Says:

    One argument in favor of the quantum brain would be evolutionary – if nature could have managed to supply us with the power of quantum computation, it would have.
    A subtle point is whether you actually need continuous unitary state evolution for the definition of “self” throughout your lifetime, or whether you could “turn on” the quantum computational sub-unit after sleep or a coma and still be yourself. Do we need our memories to be quantum? If not, we could still be cloned onto quantum hardware.

  186. David Pearce Says:

    mjgeddes #173

    Many thanks for the comments. OK, I’m slightly puzzled. How is a plea for an interferometry experiment “metaphysical”? A positive result may strike you as wildly implausible – credible decoherence timescales of neuronal superpositions in the CNS are insanely fast. But surely the problem we face isn’t a bunch of theories of consciousness that make bizarre predictions; rather, it’s that most theories of consciousness don’t make any novel, experimentally falsifiable predictions at all. I’m probably mistaken; but there’s no shame in being experimentally refuted.

    You say that invoking quantum theory to explain how consciousness emerges from insentient matter and energy is hopeless. Yes, I completely agree. This isn’t the possibility I was exploring. Recall that non-materialist physicalism proposes that experience discloses the intrinsic nature of the physical, the mysterious “fire” in the equations about which physics has nothing to say. (See the NYT Galen Strawson link above.) As David Chalmers and others have recognised, the real problem with non-materialist physicalism isn’t its implausibility, but rather that such a conjecture seems demonstrably false. Phenomenal binding of discrete neuronal feature-processors into perceptual objects, let alone an entire subject of experience or phenomenally-bound world-simulations, is classically impossible. However, in order for the Chalmersian refutation of non-materialist physicalism to work, it’s not enough to demonstrate a structural mismatch in three dimensions or four-dimensional spacetime. A structural mismatch must also be demonstrated between the phenomenology of our minds and the fundamental high-dimensional space required by the dynamics of the wavefunction. And here is where experiment comes in. Any story of quantum mind – whether, like the Penrose-Hameroff Orch-OR theory, it supplements or modifies the unitary Schrödinger dynamics, or whether it doesn’t – should be experimentally falsifiable with molecular matter-wave interferometry. The telltale non-classical interference signature is as “rooted in empirical reality” as we can get.

    You propose that we must accept as inexplicable, irreducible “brute fact” that consciousness emerges from the brain, and as an inexplicable “brute fact” that conscious minds will arise in digital computers. Maybe you’ll turn out to be right. David Chalmers would agree, as I gather does Scott. It’s also a counsel of despair. Imagine if some philosopher asked us to accept, as a matter of inexplicable “brute fact”, that the USA is a pan-continental subject of experience undergoing sunsets, symphonies and migraines (cf. Eric Schwitzgebel’s “If Materialism Is True, the United States Is Probably Conscious” – http://faculty.ucr.edu/~eschwitz/SchwitzAbs/USAconscious.htm) – or imagine (slightly more believably) that some philosopher asked us to accept as a “brute fact” that the USA would become such a pan-continental subject of experience if 320 million skull-bound American minds were appropriately interconnected to implement any given computation. It’s not that we can show that this extraordinary proposal is false; but such brute “strong” emergence would still be a catastrophe for the unity of science. The problem we face in the case of the CNS – a pack of supposedly discrete, decohered membrane-bound classical neurons – is analogous to the skull-bound population of the USA, but it’s worse, because each of us knows from personal experience that pan-cerebral subjects of experience are real, not just a philosophical thought-experiment. _Even if_ non-materialist physicalism or traditional panpsychism is true, and _even if_ neurons support rudimentary “pixels” of consciousness, then on the classical story we should merely be micro-experiential zombies. Classical physics cannot explain the properties of our minds.

    Yet should we be surprised? Why expect a false theory of the world, i.e. classical physics, to yield a true account of consciousness? By contrast, to the best of our knowledge wavefunction monism is true; and we need to adapt our theory of mind accordingly.

  187. fred Says:

    Django #184

    When Scott says “Namely, we know how to copy a computer program. We know how to rerun it with different initial conditions but everything else the same. ”

    We may know how to copy computer programs (digital information, i.e. “numbers”), but each running computer is itself a QM system – no two computers are the same, and they can’t be cloned perfectly.
    Now, I get that information/boolean-logic programs are “isolated from their implementation”, but that’s an arbitrary line we draw between the computer state (the state of each of its atoms) and that abstraction layer.
    And we just don’t know where to draw that line for the brain – to what extent are the symbols and thought processes isolated from the quantum and chemical “noise” in the neurons? Is consciousness digital information, or does it depend on the state of every quark in my brain?
    But in general it seems that life tends to do a good job at creating processes/structures that are pretty stable (though not too stable, since it all needs to evolve).

  188. mjgeddes Says:

    Dave,

    Well, physics doesn’t need to explain why matter exists in the first place in order to progress, does it? It just took the existence of ‘matter’ as a given and worked from there. Similarly, mathematics doesn’t need to explain the origin of ‘numbers’; it just takes them as a given and works from there. So why does a theory of consciousness need to explain where consciousness comes from in the first place? It doesn’t. It’s perfectly OK just to take ‘consciousness’ as a given and work from there.

    That said, in a recent sudden flurry of intense philosophical reflection, I feel I may finally have the solution to the puzzle of consciousness! If I have finally done it, it will mean that I’m the living embodiment of ‘accelerating change’, in that a series of new ideas I had one day overturned all my ideas from the previous day 😀

    I’m going to post an abstract of my new ideas here. See what you think, and whether any of these thoughts could fit in with your own ideas

    Abstract: ‘Fundamental Metaphysics : Solutions to the Hard Problems’

    Marc Geddes, Auckland New Zealand, 10th June, 2016.

    I will show that all the ‘hard problems’ of metaphysics involving the ultimate origins of mathematics, physics and consciousness, and the relationships between them, can be solved by assuming that there are 3 fundamental properties of existence that can nonetheless still be considered to be reducible *in the limit*. Each *fundamental* property can *also* be considered to be a *composite* property composed of 2 of the other properties in combination; in this ontology reality ‘eats its own tail’ in a closed loop.

    The long-standing tension between non-physicalists on the one hand, and hard-core reductionists on the other hand is finally resolved, with both sides being proven to be ‘half right’, yet the final solution being much more subtle than either side imagined.

    Three informal ‘plain English’ equations give a sketch of the final solution. Let us call these ‘The Fundamental Equations of Metaphysics’.

    Eq. (1) Physics + Math = Mind
    (This says that conscious awareness is a combination of matter and information)

    Eq. (2) Math + Mind = Physics
    (This says that matter is a combination of information and conscious awareness)

    Eq. (3) Physics + Mind = Math
    (This says that information is a combination of matter and conscious awareness)

    Equation (1) tells you what ‘Consciousness’ (Mind) is.
    It says that the property of ‘consciousness’ can also be considered to be a *combination* of the other 2 properties; matter *and* information. Consciousness can be viewed as *both* a fundamental property *and* a composite property. This paradoxical sounding statement is resolved by granting that consciousness can be considered as *both* a single property not 100% reducible to any other properties, *and* a composite property reducible to the other 2 properties *in the limit* (i.e., you can give a computable *approximation* of consciousness to any desired degree of accuracy in terms of matter and information, but never achieve a 100% complete reductive account).

    The solution is similar for the other two properties of existence (Matter and Information).

    Equation (2) tells you what ‘Matter’ (Physics) is.

    It says that the property of ‘matter’ can be considered to be a *combination* of the other 2 properties, information *and* consciousness. The new composite property (matter) is reducible to the other 2 properties *in the limit*.

    Finally, Equation (3) tells you what ‘Information’ (Math) is.

    It says that the property of ‘information’ can be considered to be a *combination* of the other 2 properties, matter *and* consciousness. The new composite property (information) is reducible to the other 2 properties *in the limit*.

    The circle is complete and reality ‘eats its own tail’, with each of the 3 ‘fundamental’ properties of existence (consciousness, matter and information) being ‘reducible in the limit’ to 2 of the others.

  189. James Cross Says:

    mjg

    Not quite following the equations but it looks to me like you are just suggesting a sort of Holy Trinity of Math, Mind, and Physics.

    Something like a “world-stuff”.

    https://broadspeculations.com/2016/04/26/world-stuff/

  190. mjgeddes Says:

    James,

    Let me give you a concrete example to illustrate the first equation and the point I’m trying to get across.

    So take Math to mean information, and consider it to be a concrete thing (for instance a disk), not something floating around in some abstract realm somewhere. Take Physics to mean a material process. And take Mind to mean anything having a conscious experience of some sort (a cat, dog, etc.).
    The equations tell you what happens when any two of these things interact. The (+) sign just means that two of the above things are combining or interacting. The (=) sign refers to the result or outcome of the interaction.

    Let’s look at equation (1):

    Eq. (1) Physics + Math = Mind

    It says that whenever you take something that stores information (Math) and combine it with a material process (Physics), the outcome will be something that has a conscious experience (Mind).

    Here’s the concrete example:

    Imagine taking a complete scan of the workings of your brain and storing all the information as a program on a computer disk, say. So that’s information (Math). Now you need the right kind of material process (Physics). Take a working supercomputer. Now combine the two things (insert the disk with the information about your brain into the supercomputer and run the program). The outcome will be that a thing having a conscious experience is created (namely, you will have transferred a copy of your mind to the computer).

    So Physics (a working supercomputer) + Math (the disk with the information about your brain) = Mind (a copy of your mind is created on the computer)!

    The point here is that if there *is* any way of explaining consciousness (which we view as a sort of basic ‘element’ of existence), the only hope of doing so is to describe its relationship or interaction with *other* things that we regard as equally basic ‘elements’ of existence. And all these things need to be grounded in concrete empirical reality.

  191. If you think your brain is more than a computer, you must accept this fringe idea in physics | IOT POST Says:

    […] Penrose (who, at 84, is responsible for a substantial chunk of our understanding of the shape of the universe) has argued since the 1980s that conventional computer science and physics can not explain the human mind. He laid out his argument in a pair of books published in the late ’80s and early ’90s, and more recently in a debate with Aaronson at a conference in Minnesota. (Unfortunately, no complete transcript of that debate exists, but Aaronson summarizes it thoroughly on his blog.) […]

  192. James Cross Says:

    mjg

    I don’t know whether you’ve been following my argument, but I think consciousness sits between the body and interpreted sensory input. So copying a brain would create a brain disconnected from the body and senses. It would soon degenerate, even assuming it was an exact copy, much as people in extended sensory deprivation begin to hallucinate.

    Maybe it would work if you copied the whole body – but this would mean recreating the entire body, not a disk of information that describes the body.

  193. Edward Binns Says:

    I want to throw a monkey wrench at this entire argument. I believe it is entirely possible that the problem is that PRESENT computers can never simulate human thinking because they are von Neumann machines – they have a central processing unit, which the human mind effectively doesn’t. Suppose we go beyond the CPU. The re-engineered architecture may well be able to achieve, ah, actuarial prophetic conjectures as the human brain does. So I speculate that Mr. Aaronson isn’t correct YET, because AI needs a post-CPU architecture.

  194. If you think your brain is more than a computer, you must accept this fringe idea in physics – InBusiness Says:

    […] Penrose (who, at 84, is responsible for a substantial chunk of our understanding of the shape of the universe) has argued since the 1980s that conventional computer science and physics can not explain the human mind. He laid out his argument in a pair of books published in the late ’80s and early ’90s, and more recently in a debate with Aaronson at a conference in Minnesota. (Unfortunately, no complete transcript of that debate exists, but Aaronson summarizes it thoroughly on his blog.) […]

  195. Ian Wardell Says:

    I’m afraid I’m unable to discern any arguments to suppose computers can become conscious. We can pull a computer/robot apart and see why it responds the way it does. If it says it is unhappy, it’s not saying it because it is actually unhappy; rather, it says so as a consequence of executing algorithms. So why would anyone suppose it is actually conscious? There is nothing in what the author writes which supplies an answer to this question.

    Apparently people say:
    “If your neurons were to be replaced one-by-one, by functionally-equivalent silicon chips, is there some magical moment at which your consciousness would be extinguished?”

    a) Since consciousness must necessarily have causal powers, it’s not clear to me that we could have “functionally-equivalent silicon chips”.
    b) But even if we could how would this show that consciousness is mere computation? It might be the brain needs to be in a certain functional state to allow the manifestation of consciousness. Replacing the parts of a TV set with functional equivalent parts wouldn’t demonstrate that the TV programme is what those parts do, or cause.

    There’s also irrelevant stuff like the no-cloning theorem. It doesn’t seem to prevent identical iPhones, or any other particular manufactured goods, from being created. What’s happened to the quantum no-cloning theorem there? If the author claims one needs to be an exact copy to infinitely many decimal places, then why does he think this? And if he does think this, then note that we change every single infinitesimal fraction of a second, since our bodies are in a constant state of change. Teleportation thought experiments decisively demonstrate that under materialism there can be no persisting self. For that you require a non-physical self.

    And why does the notion that one is predictable mean that one is clonable, or that we are mere machines? Why shouldn’t the actions my non-physical self takes be predictable? If I spot a £50 note on the pavement, people will be able to predict I will stoop down and pick it up. My predictability is perfectly compatible with libertarian free will. Indeed I would argue that free will is perfectly compatible with both the past and future actually existing.

  196. Kit Adams Says:

    Thank you for publishing this interesting blog.

    Given that we don’t yet even know how to simulate spacetime such that QM and GR arise naturally, it seems quite possible that QM is like thermodynamics before Boltzmann: a correct theory, but one that turns out to be emergent rather than fundamental. In particular, it seems possible that the continuous spacetime that forms the essential background for QM (and QFT) is in fact an emergent macroscopic phenomenon built on top of some underlying fabric which, when understood, will make entanglement, non-locality and the measurement process seem natural and objective (i.e. I think locality is the thing that has to give).

    While I am a little skeptical of Penrose’s ideas that (gravitised) quantum effects are essential for consciousness, the fact that we cannot yet successfully simulate the behaviour of the simplest neural systems consisting of just a few hundred neurons indicates that we cannot rule such effects out.

    Talk of cloning the code for an AI misses the point for me. A sentient being may be comprised of code and data in the future, but if it is anything like an animal/human brain, the code and data is continually evolving through interaction with the real world. Cloning an AI almost immediately results in a new, unique sentient being, due to its environment being different from the original.

    A prerequisite for consciousness is being able both to interact with and to simulate within that consciousness a sufficiently interesting environment. In particular, a conscious being must be able to simulate within itself other conscious beings, including itself.
    Compared to the human brain, computers are currently way too far from being able to do that type of simulation (for any environment remotely complex enough) for AI consciousness to arise in the near future. But that is just a technological problem.

  197. mjgeddes Says:

    One idea I really like is that fundamental ‘big picture’ questions about the nature of reality are answerable, but only in the abstract sense of being ‘the end of a limit’ – that is to say, we can get closer and closer to the answers, but never quite get there. This is really quite a satisfying picture. For any ‘big picture’ question, there are 3 possibilities:

    (1) The question is totally unanswerable (brute facts)
    (2) There is a clear-cut answer of finite complexity
    (3) There’s an answer in the abstract, but only ‘in the limit’ – we can converge ever closer to the answer, but never quite get there

    It’s (3) that I think might be the case for ‘ultimate’ questions such as an explanation of consciousness or why there is something rather than nothing.

  198. Ian Wardell Says:

    In the postscript Scott says:

    “The first third of the discussion wasn’t about anything specific to my or Penrose’s views, but just about the definition of consciousness. Many participants expressed the opinion that it’s useless to speculate about the nature of consciousness if we lack even a clear definition of the term”.

    I keep hearing this, and it doesn’t sound any less daft with repeated assertion.

    We are all acquainted with our own consciousness and in the most intimate way possible. We all know what fear is, hope is, most of us know what love is. We all know what pain is, we all know what it’s like to experience greenness.

    We all know what these experiences are, and we all know what consciousness as a whole is, since we are all conscious! But we can’t *define* consciousness, nor any aspect of consciousness. But why does that matter, given that we know our own consciousness better than anything else possible?

    Of course it seems to me that people have in mind that we lack a *scientific* definition of consciousness. We don’t know how consciousness fits in with the rest of reality.

    So effectively some of the participants were saying there’s no point in trying to scientifically understand consciousness because we don’t have a scientific definition of consciousness.

    Sighs . .

  199. sf Says:

    An interesting new book, relevant to the discussion here:
    “Soul Machine: The Invention of the Modern Mind” by George Makari.

    It’s nicely reviewed by Christof Koch (May 1, 2016) at
    http://www.scientificamerican.com/article/constructing-the-modern-mind/
    The review is titled “Constructing the Modern Mind: From Aristotle to Watson, views on the mind, brain and soul have evolved. A brilliant new book adds perspective.”

    The book also touches on relations to political turmoil in the periods discussed, making for a bit of a connection to the other discussion about Trump now active on the blog. Basically this involves how natural, or intrinsic, it is for the mind to stick with rationality.

  200. Ian Wardell Says:

    “I … firmly believe that … the burden is on [strong AI skeptics] to articulate what it is about the brain that could possibly make it relevantly different from a digital computer. It’s their job!”

    Essentially a robot/computer responds/behaves as it does because it’s programmed that way. We can pull the sucker apart to see precisely why a robot says it feels lonely. We will discover it has nothing to do with any feeling of loneliness. Or, to put it another way, even if it *does* feel lonely, we have absolutely no reason to suppose it does. Its uttering the words provides no reason or evidence.

    On the other hand if I utter those words, it’s because I do genuinely have that feeling. But why can’t it just be the physical processes occurring in my brain rather than the feeling of loneliness *per se*?

    Because I’m *immediately* aware of the causal efficacy of my own consciousness. In order to deny that it is the conscious feeling *itself* responsible for my words, we would have to either suppose:

    a) Consciousness plays no causal role
    b) Consciousness is one and *the very same thing* as such physical processes or the functional role carried out by such processes.

    But as for “a”: materialism cannot account for the causal efficacy of consciousness. Consciousness cannot simply follow some physical process or an algorithm; otherwise we wouldn’t be able to reason. I explain this in an essay:

    http://ian-wardell.blogspot.co.uk/2016/03/materialismphysicalism-is-incompatible.html

    And not only can materialism not account for the causal efficacy of consciousness; consciousness cannot *in principle* be scientifically explained (NB: science as *currently conceived*). An essay I wrote, which also addresses “b”, will be relevant here:

    http://ian-wardell.blogspot.co.uk/2016/04/neither-modern-materialism-nor-science.html

    To quote my conclusion near the end:

    “Consider a prism. The mixture of coloured lights obtained is not wholly produced by the prism all by itself. Something extra is involved; in this case, the white light which enters the prism. Or consider a TV set. The internal components do not produce the programmes. Similar to the prism something else is involved, namely TV signals. I submit that reason strongly suggests that this is the situation we have with the brain and consciousness. Similarly to these 2 examples something else, apart from the brain, must be involved. Some extra ingredient.

    I would suggest that this extra ingredient is what we call the self or the experiencer. It is that which has conscious experiences; it is not identical to them. I (the self or experiencer) am not identical to either my thoughts, or my moods, or my interests, or all of them as a collective whole. To use the prism as an analogy. The mixture of coloured lights represents my current thoughts, moods, interests and more generally my present consciousness. But the white light stands for my self. And of course, necessarily it will be consistent with our intuitive conviction that we are literally the same self throughout our lives since this is the very entity we are stipulating must exist. Note too that, similarly to conscious experiences, this self is also not a physical thing or process. It is an extra ingredient apart from all the physical processes. Hence (iv) cannot be a materialist position – at least not materialism as currently conceived”.

    So I have no idea what about the brain makes it different from a computer. But we know that it is indeed different.

    I don’t know what Penrose’s arguments are, but it’s interesting that he also thinks an extra ingredient is required, if his arguments are wholly different from mine!

  201. mjgeddes Says:

    Ian #200

    The brain is most definitely a computer. Read Scott’s blog post again, carefully. Basically, the Church-Turing thesis, and the most leading-edge physics theories implying that the world can be viewed as ‘information’, pretty much overwhelmingly demand that the entire physical world be computable. But you are correct that materialism isn’t entirely true either. Read on….

    Dave #186 and Scott

    I’m now very very confident that I have indeed solved the puzzle of consciousness. I have ‘the hard problems’ licked! It’s all over 😀 For now, here’s a quick ‘popular’ summing up of my views…

    “An answer to the hard problems of metaphysics”

    To the mystic it’s self-evidently true that consciousness is the ultimate ‘fabric’ of reality. To the computer scientist, it’s self-evidently true that information is the ultimate ‘fabric’ of reality. To the neuroscientist, it’s self-evidently true that matter is the ultimate ‘fabric’ of reality.

    And the neuroscientist, computer scientist and mystic are all utterly convinced they’re right.

    There is a way to solve the paradox: take the 3 elements of reality (consciousness, information and matter) and *combine* them into pairs. Then say that it is the *combination* of each pairing that creates the missing third element. Yes, this *is* circular, but it is no longer paradoxical:

    Information = {consciousness, matter}
    Consciousness = {information, matter}
    Matter = {consciousness, information}

    Each element of reality can be viewed in *two* different ways: *one* way is to view it as something that really exists ‘out there’ as an objective ‘thing’ in itself (a holistic viewpoint, the left-hand side of the equations above). And it *is* that!

    The *other* way is to view it as something that is not a real thing in itself, but is built out of a combination of the other elements – we can break it into parts (a reductionist viewpoint, the right-hand side of the equations above). And it *is* that too! It is because of the circular nature of the above pairings that we can solve the paradox and accept both viewpoints as correct!

    So what then is consciousness? The equations above tell us! In one sense it is a real fundamental property of reality that cannot be reduced to anything else (the left-hand side of the equation). But in the other sense it is a combination of material processes and information (the right-hand side of the equation).

    In short, consciousness is built out of a combination of material processes *and* information. But although it is *composed* of these things, it is not entirely reducible to them. It has a validity in and of itself as a real thing that transcends them both.

    If the viewpoint of reality I outline above is correct, it is quite understandable why consciousness seems so hugely perplexing to us. It is because true understanding requires a synthesis of *three* quite different mind-sets (the mystic *and* the computer scientist *and* the neuroscientist). You need to grasp the validity of *all* 3 viewpoints to finally ‘get it’.

  202. This is the most important difference between your brain and a computer | News Verge Network Says:

    […] published his response to Penrose on his blog, along with a summary of their discussion. And the most interesting part is where he says the two […]

  205. James Cross Says:

    mjg 201

    Or maybe “consciousness creates brain activity, and indeed creates all objects and properties of the physical world.”

    After all, matter and information are just simplifications of reality and creations of consciousness.

  206. mjgeddes Says:

    James #205

    That would indeed be a valid viewpoint. My theory is saying that there are 3 elements of reality (matter, information and consciousness) and all 3 elements are on an equal footing. You can pick any one of them and say that’s the foundation of reality if you want to. There are 3 different, equally valid viewpoints.

    Also, you can view each element as a real thing that exists outside you, or you can view each element as something that is in part a creation of your own mind. Again, it’s your choice how you want to see it. Both viewpoints are equally valid.

  207. Ian Wardell Says:

    mjgeddes, if the brain is a computer, then where does consciousness come from?

    The “hard problem” can only be got rid of by abandoning the mechanistic philosophy. Or, in other words, consciousness cannot in principle be scientifically explained, since current science deals only with the quantitative, and consciousness is, in its essence, qualitative.

    It’s really quite simple, and I don’t intend to keep repeating myself. You either get it or you don’t.

  208. Neil Bates Says:

    No. It isn’t really the universe that puts brains in this “trap” – it is an unwarranted pretense about how things must work. The pro-AI side is pimping the assumptions, implying their mechanistic model should be taken as default about the universe. But it’s already wrong – because of QM as we already know it even without special new physics – to think a computer can simulate the universe (first hint: such simulations could not handle wave function collapse and entanglement without contrivances.) The quantum world is mysterious, not classical. My own argument about this subject, more abstract:
    http://fqxi.org/community/forum/topic/2119

  209. Krzysztof Pietroszek Says:

    I claim that a potato has a consciousness and the burden of proof is ON YOU, Dr. Aaronson, to show how the human brain differs from a potato, so that a potato cannot be conscious while human brain is. Good luck 😉

  210. David Pearce Says:

    Mjgeddes (and anyone else who thinks they may have cracked the mystery!) can you think of any novel, precise and experimentally falsifiable predictions that could put your conjecture to the test – by prior agreement of critics and proponents alike?
    [I’ve outlined mine – although decoherence timescales of neuronal superpositions in the CNS are so insanely rapid I struggle to take the conjecture seriously myself. Alas I know of no other way to rebut the Chalmersian structural mismatch argument for dualism – generally accounted a fate worse than death.]

  211. mjgeddes Says:

    Dave and Scott

    You’ll be interested to hear that as a side-effect of solving consciousness, I solved quantum-gravity as well 😀

    Remember how space and time are relative and combined into [space-time] in relativity theory?

    It’s a similar sort of thing with my own theory. The basic idea is that [Information-Matter-Consciousness] form a ‘holy trinity’ where each alone is not objective in itself, but relative to an observer.

    So for example, what appears as ‘Information’ to one observer can be reinterpreted as ‘Matter+Consciousness’ to another observer, and vice-versa.

    I see where the uncertainty principle in QM comes from: you can pick any 2 of the 3 elements (Information, Matter, Consciousness) that you want to ‘pin down’ or describe, but the 3rd element always gets fuzzy. You can never ‘see’ all 3 elements at once.

  213. Stuart Hameroff Says:

    Hi Scott

    I read your reply to Sir Roger Penrose in ‘Shtetl Optimized’, and have a few comments.

    1) You (and AI in general) gloss over the ‘hard problem’ of consciousness, phenomenal experience, qualia, or ‘feelings’ in a kind of shell game obfuscation.

    2) The assumption that the brain is a digital computer is full of holes and doesn’t work. Brain-mapping someone’s neuronal connectome to copy his/her essential features has no supportive evidence. Indeed, simulating the already-mapped 302-neuron brain of the worm C. elegans has failed to show any worm-like behavior. Moreover a single-cell paramecium, with no synaptic connections, swims nimbly, learns, finds food and mates, and has sex, using its internal microtubules as sensors and actuators. These same microtubules, quasi-crystal lattices in brain neurons, are essential to memory and cognition, and are the structures in which Sir Roger and I suggest the quantum processes essential to consciousness occur in the brain.

    3) Christof Koch, Giulio Tononi, David Chalmers and others have given up on computational emergence to explain consciousness, and rely instead on various forms of panpsychism, the notion that consciousness is an intrinsic feature of the universe. But consciousness as a property of matter doesn’t work either. Rather (and this is Roger’s key point, which you failed to mention), consciousness is identified with his proposed ‘objective reduction’ (‘OR’) events, ubiquitous self-collapses of the quantum wavefunction (similar philosophically to Whitehead’s ‘occasions of experience’).

    ‘Gravitizing quantum mechanics’ means Penrose bravely tackles quantum superposition, in which particles exist in multiple locations simultaneously. In general relativity, mass is equivalent to spacetime curvature. Roger suggested that superposition of a mass in two locations is equivalent to two alternative curvatures, a separation in fundamental spacetime geometry. One might imagine that if such separations were to continue, each curvature would evolve its own universe, fulfilling the many-worlds interpretation. However, Roger proposed that such separations would be unstable, and would undergo reduction to a single state at an objective threshold, hence objective reduction, OR. That threshold is given by the uncertainty principle E = ħ/t (where E is the gravitational self-energy of the superposition, ħ is the reduced Planck, or Planck-Dirac, constant, and t is the time at which OR will occur), accompanied by a moment of subjective experience.

    OR puts consciousness into reality, avoids multiple worlds, and turns Copenhagen upside down. Rather than consciousness causing collapse, collapse causes consciousness (or is equivalent).

    In our theory of ‘orchestrated objective reduction’ (‘Orch OR’), microtubules inside brain neurons organize and ‘orchestrate’ quantum computations, which halt by OR to produce full, rich conscious experience with causal power. See ‘Consciousness in the universe – Review and update of the Orch OR theory’: http://www.sciencedirect.com/science/article/pii/S1571064513001188

    The evidence is actually on our side. Anirban Bandyopadhyay’s group (referenced in the paper cited above) has shown microtubule quantum resonances in fractal-like, self-similar hierarchical cascades at terahertz, gigahertz, megahertz, kilohertz and hertz frequencies. Sir Roger and I proposed in our 2014 paper that EEG rhythms are ‘beat frequencies’ of faster vibrations in microtubules within neurons, specifically dendritic-somatic microtubules in cortical layer 5 pyramidal neurons. (Despite nearly a century of use, the origin of EEG remains unknown.)

    Another approach to consciousness involves anesthetic gases which selectively erase consciousness, and should point to its origin. Anesthetics bind and act by quantum London forces in non-polar, olive oil-like regions of pi resonance clouds within proteins, regions defined by the ‘Meyer-Overton solubility correlation’ and conducive to quantum processes. Most assume that anesthetics act in non-polar regions inside membrane receptor and ion channel proteins, but evidence points to anesthetics acting by dampening terahertz quantum vibrations in microtubules, in what we call the ‘quantum underground’. See ‘Anesthetics act in quantum channels in brain microtubules to prevent consciousness’ http://www.santaroszinios.lt/wp-content/uploads/2015/06/CTMCms-2.pdf

    In my view, rather than a computer, the brain is more like a quantum orchestra, tuned to the universe through Roger’s ‘objective reduction’ (‘OR’) mechanism (‘gravitizing quantum mechanics’). For a light-hearted summary of this view (and brief history of the ‘Tucson’ consciousness conferences) see http://www.interaliamag.org/articles/stuart-hameroff-is-your-brain-really-a-computer-or-is-it-a-quantum-orchestra-tuned-to-the-universe/

    Metaphorically at least, consciousness is more like music than computation.

    Now for replies to some of your specific comments:

    Scott
    (referring to skeptics of brain-as-digital-computer) “the burden is on them to articulate what it is about the brain that could possibly make it relevantly different from a digital computer. It’s their job!”

    Stuart
    I just did. The brain produces consciousness (digital computers do not) and includes quantum processes in microtubules. The ideas are fleshed out in papers including our 2014 review referenced above.

    Scott
    And the answer can’t just be, “oh, the brain is parallel, it’s highly interconnected, it can learn from experience,” because a digital computer can also be parallel and highly interconnected and can learn from experience.

    Stuart
    Agreed

    Scott
    Nor can you say, like the philosopher John Searle, “oh, it’s the brain’s biological causal powers.” You have to explain what the causal powers are!

    Stuart
    OK. Collapse is causal, and OR is collapse. The choices are not random, but influenced by (resonate with?) what Roger calls non-computable Platonic values inherent in the structure of spacetime geometry.

    Scott
    Or at the least, you have to suggest some principled criterion to decide which physical systems do or don’t have them.

    Stuart
    We have. Any quantum system will undergo OR by the uncertainty principle E = ħ/t (where E is the gravitational self-energy of the superposition, ħ is the reduced Planck, or Planck-Dirac, constant, and t is the time at which OR will occur on average, like radioactive decay). The vast majority of such ubiquitous events will have random, disconnected and meaningless subjective experiences. Only OR events which are ‘orchestrated’ will be like our consciousness. Metaphorically, random OR events (which essentially replace decoherence) are like the sounds, notes and tones of an orchestra warming up. Orchestrated OR events – consciousness – are the music when the band begins to play.
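    For a rough sense of scale, the t = ħ/E relation can be evaluated directly. A minimal sketch; the sample self-energies below are arbitrary illustrative values, not figures from the Orch-OR papers:

```python
# Sketch of the Penrose objective-reduction (OR) timescale t = hbar / E,
# where E is the gravitational self-energy of a mass superposition.
# The sample energies are arbitrary, chosen only to show how t scales with E.

HBAR = 1.054571817e-34  # reduced Planck (Planck-Dirac) constant, in J*s

def or_time(self_energy_joules: float) -> float:
    """Predicted average time until objective reduction, in seconds."""
    return HBAR / self_energy_joules

# Larger gravitational self-energy -> faster predicted collapse.
for E in (1e-10, 1e-20, 1e-30):  # joules, purely illustrative
    print(f"E = {E:.0e} J  ->  t = {or_time(E):.2e} s")
```

    The inverse relationship is the whole content of the criterion: big, widely separated superposed masses are predicted to reduce almost instantly, tiny ones to persist.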

    Scott (regarding evolution)
    …arguments of the form “such-and-such couldn’t possibly have evolved” have a poor track record in biology.

    Stuart
    Evolution has fundamental shortcomings. It doesn’t explain the origin of life, nor what life actually is. Moreover, the notion of self-replicators and organisms driving evolution through survival of genes doesn’t make sense. Why would replicators and organisms exhibit survival behavior without reward and feelings (the basis for psychology)? What’s the motivation? The answer should be obvious. If Roger is correct and OR provides feelings, including pleasure, such feelings and reward would have been present before life began, and could have served as the driving force for the origin and evolution of life, e.g. in the ‘Primordial Soup’. I’ve described this in a recent chapter, ‘The quantum origin of life – How the brain evolved to feel good’. The behavior of organisms from paramecia to university professors serves to optimize pleasure and reward, whether hedonistic, altruistic, spiritual, or academic recognition.

    Scott
    The final place where I part ways with Roger is that I also want to be as conservative as possible about neuroscience and biochemistry. Like, maybe the neuroscience of 30 years from now will say, it’s all about coherent quantum effects in microtubules. And all that stuff we focused on in the past—like the information encoded in the synaptic strengths—that was all a sideshow. But until that happens, I’m unwilling to go up against what seems like an overwhelming consensus, in an empirical field that I’m not an expert in.

    Stuart
    The ‘overwhelming consensus’ can’t simulate a worm, nor a single-cell paramecium; can’t account for consciousness, nor for real-time behavior (rendering consciousness epiphenomenal when it needn’t be so); and isn’t so much a consensus as a hodgepodge of ill-informed misconceptions.

    AI and neuroscience have reinforced each other’s fantasies while Roger has potentially solved the problem, explaining consciousness and quantum measurement by linking them to general relativity. Among prominent physicists he alone includes consciousness as a necessary ingredient in a theory of everything. Our Orch OR theory adds biological specificity, far more detail than any other theory.

    The overwhelming consensus at one time was that the earth was flat, and the sun revolved around it. How did that work out?

    Scott
    “give me a big enough computer and the relevant initial conditions, and I’ll simulate the brain atom-by-atom.”

    Stuart
    Simulating quantum dipole oscillations, OR, non-computable resonance and consciousness may not be possible, though it could be implemented artificially.

    But treating neurons as simple bits (the current approach) will get you nowhere.

    Scott
    The Church-Turing Thesis, I said, is so versatile that the only genuine escape from it is to propose entirely new laws of physics, exactly as Penrose does—and it’s to Penrose’s enormous credit that he understands that.

    Stuart
    As Roger says, additional laws of physics are required in any case to solve the measurement problem, and reconcile QM and GR. Consciousness is the solution.

    Scott
    Afterwards, an audience member…added, “…do not become the priest of a new religion of computation and AI.” …I’d expected that his warning would be the exact opposite of what it turned out to be. To wit: “Do not become the priest of a new religion of unclonability, unpredictability, and irreversible decoherence. Stick to computation—i.e., to conscious minds being copyable and predictable exactly like digital computer programs.” I guess there’s no pleasing everyone!

    Stuart
    Your dogmatic bias is showing. I’d take that guy’s advice. What evidence is there suggesting conscious minds are copyable and predictable? You can’t even say what a conscious mind IS. If you can’t simulate behavior of a worm or single cell organism based on the digital computer analogy, what rationale do you have for that statement?

    The sarcastic title of Roger’s 1989 book poking fun at AI (‘The Emperor’s New Mind’) still stands. You guys are bluffing.

    Cheers,
    Stuart

    Stuart Hameroff MD
    Professor, Anesthesiology and Psychology
    Director, Center for Consciousness Studies
    Banner-University Medical Center
    The University of Arizona, Tucson, Arizona

  214. ChisH Says:

    Stuart Hameroff #213 “The brain produces consciousness (digital computers do not)”

    How do you know?

  215. mjgeddes Says:

    Hi Stuart,

    A number of points here regarding (3) panpsychism versus OR theories.

    You say panpsychism doesn’t work but you completely fail to give any reasons. In fact it seems to me and many others that panpsychism (the notion that consciousness is an intrinsic property of the universe) is far more plausible than OR!

    You say that ‘consciousness as a property of matter doesn’t work’, and I agree, but in fact it is OR that tries to turn consciousness into a physical property! You can’t get more physical than quantum-gravitational effects physically interacting with the brain! The problem is that even if the theory of OR in microtubules were completely true, all you’ve actually done is replace one theory of physical action with another. Roger’s idea doesn’t solve anything – even if it were totally true, it would be just as much of a mystery why OR in microtubules results in consciousness as it is now in the standard theory of computation.

    Roger had some great ideas about metaphysics with his discussion of ‘3 worlds’ and the relationship between them – I’m sure you’re familiar with the diagram? (The circles representing ‘3 worlds’ – Platonic, Physical and Mental). I think this was very insightful.

    The trouble is that the OR theory is totally inconsistent with that diagram! If Roger wanted to know whether the Riemann hypothesis was true, he wouldn’t build a supercollider and search for the answer in particle physics, would he? Of course not! Is the number ‘4’ hiding under Roger’s table? Of course not! Then let me ask you and Roger this: why on earth do you think you can identify consciousness with some action in fundamental physics? If the two of you would just reflect on what I’ve said here, I’m confident that it will eventually dawn on you both that it actually makes no sense.

    Look at Roger’s diagram of the ‘3 worlds’ again – the ‘Mental World’ is in a separate circle, just as the ‘Platonic World’ is – they *do* connect to the ‘Physical World’, but they are separate circles. Clearly, the number ‘4’ can’t be located in particles – it exists in the ‘Platonic’ world, not the physical one. So it is totally inconsistent with Roger’s own diagram to try to equate consciousness with some physical action – the ‘Mental’ world is a separate circle from the physical one.

    It is panpsychism and the ideas of Koch, Tononi and Chalmers that *are* consistent with Roger’s diagram. They accept that consciousness is an intrinsic property of the universe and can’t be directly identified with physics action, exactly as Roger’s diagram of the ‘3 Worlds’ implies.

  216. ds Says:

    This might be a strange request, but you mention “complicated optical equipment he was recently issued by Britain’s National Health Service”. Do you have more information about it (website, a better name, picture, etc.)? A google search yielded nothing substantial.

  217. Scott Says:

    ds #216: I wrote above that I’d stop participating in this thread (to move on to other things, not just Donald Trump but, like … actual work! 🙂 ). But I guess I’ll make an exception for your optical inquiry. Roger had what looked like a little four-inch-long telescope, which he’d take out of a case and peer through with one eye to see the projector during talks. He also had, I believe, different pairs of glasses that he switched between. He mentioned that there were also electronic options, but that the NHS didn’t cover them. I joked that maybe the NHS could make an exception to its coverage policies for a Knight of the British Empire. Sorry, I don’t have any more information than that.

  218. Jay Says:

    Stuart Hameroff #213

    > evolution through survival of genes doesn’t make sense.

    Is this view endorsed by Penrose too, or do you speak in your own name only?

    >You (and AI in general) gloss (…) hodgepodge of ill-informed misconceptions (…) AI and neuroscience have reinforced each other’s fantasies (…) Your dogmatic bias is showing. (…) You guys are bluffing.

    Those are unusual terms for an academic discussion. Do you feel offended by Scott, by AI in general, or by neuroscience in general?

  219. adamt Says:

    Scott, oh, c’mon, I think you have to make an exception for Hameroff as well since he is Sir Roger’s co-author on some of these papers. To stick to the spirit of your ‘no more participation’ could you make it a new blog post with reply to Dr. Hameroff in #213? I ask because I think there is a not-insignificant possibility that I might learn something from your reply so please take pity on me, ok? 🙂

  220. Stuart Hameroff Says:

    I’ll be happy to respond to the comments by various people above, but agree with Adamt that Scott should reply to my post rather than proceed to dump on Trump, as worthy as that may be. Scott also ridiculed me in Michael Cerullo’s blog, and when I responded to both of them he didn’t reply.

  221. Rick Mayforth Says:

    Dr Aaronson, you asked for a qualitative difference between a computer and a brain. How about this: the brain is a self-modifying system that is in constant flux, both its memory and its processing elements. Its electrochemical processes run constantly, meaning that a brain changes at the speed of the reactions going on inside it. There are hugely many of them, happening in parallel. However long it might take to make a copy, something will have changed before the copy process completes; thus no copy of a living entity can ever be exact.

  222. Scott Says:

    Stuart Hameroff #213: I didn’t “ridicule” you in my interview with Michael Cerullo; I explained my areas of agreement and disagreement with your and Roger’s ideas (just as I did here). The meanest thing I said there was that I find the microtubule theory to be implausible.

    I’ll confess, your tendency to personalize every intellectual disagreement, to cast yourself as a Galileo (“the overwhelming consensus at one time was that the earth was flat, and the sun revolved around it” … “your dogmatic bias is showing” … “you guys are bluffing”)—something that your collaborator, Roger, never ever does, but that you do constantly—can make it rather unpleasant to interact with you. The hostility is all the more ironic when you’re talking to someone who goes further than almost anyone else in the direction you and Roger want, the direction of saying that debates about quantum mechanics and cosmology and thermodynamics and the Church-Turing Thesis can’t be entirely sealed off from debates about consciousness.

    But since you’re a guest here, and since you asked, a few responses:

      Why would replicators and organisms exhibit survival behavior without reward and feelings (the basis for psychology)? What’s the motivation?

    The “reward” for a replicator is that more copies of it get made, and hence there are now more replicators on earth that exhibit the same survival behavior. No consciousness, psychological rewards and feelings, or even intelligence per se needs to be anywhere in the loop. I’m not a life scientist, and you are, but this, it seems to me, was Darwin’s great insight, which still stands today.
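    That insight can be made concrete with a toy simulation (a minimal sketch; the replication rates are invented): nothing in the update rule mentions reward, feeling, or motivation, yet the faster copier comes to dominate the population.

```python
# Toy replicator dynamics: two replicator types differing only in copy rate.
# No reward, feeling, or intelligence appears anywhere in the update rule,
# yet the faster copier takes over. (The rates here are invented.)

def simulate(generations: int = 50, rate_a: float = 1.10, rate_b: float = 1.05):
    pop = {"A": 100.0, "B": 100.0}       # equal starting populations
    for _ in range(generations):
        pop["A"] *= rate_a               # A copies itself slightly faster
        pop["B"] *= rate_b
    total = pop["A"] + pop["B"]
    return {k: v / total for k, v in pop.items()}  # final frequencies

shares = simulate()
print(shares)  # type A ends up as the large majority
```

    “Survival behavior” here is just differential copying; nothing psychological is in the loop.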

      The ‘overwhelming consensus’ can’t simulate a worm nor a single cell paramecium…

    So then, if the simulations of C. elegans that people are doing start acting like real worms, will that cause you to reevaluate your ideas? I’ve already conceded, in comment #107, that simulations of more and more complex organisms would be clear progress in the direction of refuting my speculations about the nonexistence of a “clean digital abstraction layer.”

      The overwhelming consensus at one time was that the earth was flat, and the sun revolved around it. How did that work out?

    You can’t be serious. Look, there are valuable stocks today that will be worthless 10 years from now, and vice versa. In that sense, the “overwhelming consensus of the market” is massively wrong. Yet despite that, any financial adviser will still tell you that, unless you have deep knowledge about some particular industry or there’s some other very special circumstance, you, the individual investor, will not beat an index fund. (Well, you might get lucky, but not in expectation.) Understanding why these two statements are perfectly compatible with each other is the cure for Galileo-ism.

    In my case, I have no special knowledge of neuroscience, and therefore little choice but to invest in a “neuroscience index fund”—and I’ll gain exposure to Orch-OR when and only when the neuroscience index funds invest in Orch-OR.

      Simulating quantum dipole oscillations, OR, non-computable resonance and consciousness may not be possible, though it could be implemented artificially.

      But treating neurons as simple bits (the current approach) will get you nowhere.

    Wait, when you say “could be implemented artificially,” are you conceding that the physics underlying the brain might be possible to simulate on a computer?

    In any case, the point that I was trying to make, in that exchange, was that whether the behavior of an individual neuron is simple or complicated (like you, I think the answer is “complicated”) is just completely irrelevant to the question of whether the brain violates the Church-Turing Thesis. For the latter question, the only thing that matters is whether there are yet-unknown, uncomputable laws of physics that are relevant to the brain—i.e., precisely the question that Roger rightly focuses on.

      As Roger says, additional laws of physics are required in any case to solve the measurement problem, and reconcile QM and GR. Consciousness is the solution.

    Yes, everyone agrees that the currently-known laws of physics can’t be the whole story, for one thing because of the problem of quantum gravity. But it’s only a tiny minority of physicists who think quantum mechanics itself requires fundamental change—let alone that the replacement theory would be implicated in any way in biochemistry, or would have any uncomputable aspects. Roger has certainly earned the right to an idiosyncratic view of quantum mechanics, but he knows that his view is idiosyncratic, he respects the opposing views, and I never heard him say anything even nearly as dogmatic as you say above (not “consciousness might be involved” in the reconciliation of QM with GR, but “consciousness is the solution”).

  223. Douglas Knight Says:

    Scott #107, simulating C. elegans is a good future benchmark, but past failures show nothing, and you should stop claiming otherwise. A smaller example is the lobster stomatogastric ganglion (30 neurons). I think it has been simulated, and that this shows that sometimes neurons are simple, but I’m not sure. It is a bad sign about just about everyone that they fail to discuss this example.

    The connectome just isn’t enough information to try simulating. It is an undirected graph. People have “simulated” it as a lark. The first thing they do is flip a coin to orient every edge, adding almost as much information as was already there.

    Most people who earnestly propose simulating C. elegans are CS people who think that they don’t need to do any more lab work, and they quietly give up when they realize that the graph is not even directed. More serious proposals are about bulk extraction of more information about connections, and fail at that point. You can tell such proposals because they don’t take the connectome as input, but plan to replicate it as their first step. Their failure to extract a simple connectome could be due to assuming the existence of a digital abstraction layer, but in the cases that I know about, it is that various tools, e.g. microscopy, are inadequate.
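    The coin-flip point can be quantified: an undirected graph with m edges has 2^m possible orientations, so orienting every edge at random injects m bits that the connectome itself never contained. A minimal sketch (the ~7,000-synapse ballpark for C. elegans is my assumption, not a figure from the comment):

```python
import random

def orient_edges(undirected_edges, seed=0):
    """Randomly orient each undirected edge, as in the 'coin flip'
    simulations described above. Each flip adds one bit of information
    that is not present in the connectome itself."""
    rng = random.Random(seed)
    return [(u, v) if rng.random() < 0.5 else (v, u)
            for u, v in undirected_edges]

# C. elegans has 302 neurons; ~7,000 synaptic connections is a commonly
# cited ballpark (an assumption here, not a figure from the thread).
m = 7000
print(f"Orienting {m} edges by coin flip injects {m} bits of pure guesswork.")
```

    So a “simulation” built this way is exploring one arbitrary point in a space of 2^m possible circuits, which is why such results show little.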

  224. Scott Says:

    Douglas #223: I actually agree with you that almost nothing is shown by the current failure to reproduce C. elegans—only that, if the worm has a clean digital abstraction layer, then it must be more complicated than the most naïve guesses that anyone has taken seriously enough to try. Which is a nonzero amount of information, but very little. I.e., I’d say that we’re in a similar epistemic situation as we were after the failures of various early AI experiments in the 60s and 70s.

  225. Stuart Hameroff Says:

    Scott, thanks for your response. And I do apologize for my persecution complex and behavior, which stem from 21 years of ill-informed and unfair criticism of our theory. And you’re right, you didn’t ridicule me in the Cerullo blog, but you were dismissive and didn’t reply to my comments directed back to you.

    I’ll reply later to your specific points. Right now I’m between cases in surgery and have to get ready to anesthetize another set of someone’s microtubules.

    Peace, out
    Stuart

  226. Martin Says:

    As Scott says (and I basically agree with this statement):

    “Personally, I dissent a bit from the consensus of most of my friends and colleagues, in that I do think there’s something strange and mysterious about consciousness—something that we conceivably might understand better in the future, but that we don’t understand today, much as we didn’t understand life before Darwin. I even think it’s worth asking, at least, whether quantum mechanics, thermodynamics, mathematical logic, or any of the other deepest things we’ve figured out could shed any light on the mystery. I’m with Roger about all of this: about the questions, that is, if not about his answers.”

    Personally I’m not keen on the idea of objective reduction via quantum gravity either, though I’m open to quantum biological processes in microtubules making interesting contributions. (I think subjective seduction is just fine, in part orchestrated by neural networks in self-observation.)

    But, unlike Scott, I am not conservative about mainstream neuroscience (it’s hard to be conservative with a science that has not had its Darwin epiphany, as he said :))!

    In particular, many are attached to eliminative materialism, which is ideologically convenient but doesn’t fit our primary qualia experience of consciousness, from which speculation like this derives.

    As for the unlikely possibility of an emergent Boltzmann brain, I think that may be resolved by retro-causality in a Wheeler Participatory Anthropic Cosmology. A Cybernetic (Cyclic Loop) Quantum Ontological Cosmology.

    PS: I think Strong-AI, even to the extent of having a conscious internal narrative, is viable, but not necessarily via a simplistic Turing Machine computer. A Super Strong-AI may be a very useful simulation, exceeding the performance of humans in specific areas. But if we assume the visceral qualia of our consciousness evolved for a reason, then we should not throw them away as an illusion. A Super Strong-AI may be a very useful P-Zombie, and pass a Turing Test. But to be as conscious as we are (and gain whatever benefits this has, e.g. free will/non-algorithmic determinism?) we need to resolve the Hard Problem of consciousness: confront it seriously, not invoking magical thinking, but rather using the physics we know (even the misnamed “spooky” bits if necessary).

  227. Barak A. Pearlmutter Says:

    You mention that “One of the many reasons I admire Roger is that, out of all the AI skeptics on earth, he’s virtually the only one who’s actually tried to meet this burden, as I understand it! He, nearly alone, did what I think all AI skeptics should do, which is: suggest some actual physical property of the brain that, if present, would make it qualitatively different from all existing computers, in the sense of violating the Church-Turing Thesis.” This is a very good point. I draw your attention to another such attempt, namely Tony Bell’s paper “Levels and Loops: the Future of Artificial Intelligence and Neuroscience”, Phil. Trans. R. Soc. Lond. B (1999) 354, 2013-20, http://www.cnl.salk.edu/~tony/ptrsl.pdf

  228. Douglas Knight Says:

    Scott #224, I must protest. No one has seriously tried any naïve guesses. I called the papers “larks” because that is what the ones I read said. I insist that this is zero information. Maybe there are other people who take seriously a digital abstraction layer in which a neural network’s behavior does not depend on its edge orientations, but this is a moronic math error. You don’t have to ridicule morons, but I don’t count them as taking things seriously. And you shouldn’t include them in your diversified portfolio of neuroscience.

  229. James Cross Says:

    Stuart,

    I have a few questions in regard to your theory.

    At what point in evolution does consciousness begin?

    As you have pointed out, anesthetics affect even single-celled organisms and render them immobile. If I am not mistaken, even single-celled organisms gain mobility and sensing of their environment through microtubules. Are these organisms conscious, at least in a primitive way? Are plants conscious?

    Much of brain activity is unconscious, but the unconscious part likely plays a great role in what does become conscious. What does your theory have to say about this?

    How does your theory address stages of coma, deep sleep, dreaming sleep, hallucination, and normal wakefulness?

  230. Stuart Hameroff Says:

    Hi Scott

    Thanks for your replies to my comments, to which I’ll respond below.

    First, I came across your previous blog ‘Could a quantum computer have subjective experience’ which I found quite relevant. It is indeed good of you to take consciousness seriously, and bring it into these discussions. Thanks for that. You should come to one of our conferences, ‘The Science of Consciousness’. How about Shanghai, June 6-9, 2017? Seriously. I’ll send you an official letter of invitation. The possibility of downloading will be an important theme, as the Chinese, lacking religion, are apparently looking to technology for immortality.
    For example, see this piece about downloading through your cellphone. I’m highly skeptical, but it will make for a good discussion.
    http://www.breitbart.com/national-security/2016/05/11/atheist-china-seeks-immortality-downloaded-human-consciousness/

    I do think consciousness downloading is possible, but not to a classical computer (more about that later)

    So here are some excerpts from that blog with my comments.

    Scott
    Either we have to say that consciousness is a byproduct of any computation of the right complexity, or integration, or recursiveness (or something) ……or else we could say, like Roger Penrose, that we’re conscious and the other things aren’t because we alone have microtubules that are sensitive to uncomputable effects from quantum gravity.

    Stuart
    I’d say evidence favors the latter. But your characterization of Roger’s idea misses two key points.

    One is that consciousness is not a property of matter, but rather an event, or sequence of events, which are self-collapses (decoherences, if you like) of quantum computational wavefunctions which evolve to a critical objective threshold (given by the uncertainty principle, E=h/t) and then undergo quantum state reduction to classical states (objective reduction, ‘OR’). Such events are proposed to be accompanied by, or identical to, moments of subjective (proto-)conscious experience, and occur ubiquitously in the world as decoherence.

    In most thermal, polar, charged environments, superpositions would entangle with the environment and quickly reach threshold by E=h/t, with subjective proto-conscious moments which are merely random and disconnected, lacking meaning, resonance, and non-computable Platonic influences. These events are indistinguishable from decoherence.

    How do such proto-conscious OR events become full, rich conscious moments? This question is similar to the combination problem in panpsychism, except here subjectivity is not a property of matter, but a process, a sequence of collapses, or decoherence which determines particular states of matter (Roger claims OR and decoherence are essentially the same process). Each OR event is a quantum of phenomenal experience. BING!
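    For a quick numerical feel for the E=h/t threshold invoked above, one can just plug in constants. The timescales below are illustrative choices, not values from the comment, and the formula is often written with ħ rather than h:

```python
# E = h/t rearranged: the gravitational self-energy E a superposition must
# reach for objective reduction on timescale t. Illustrative timescales only.
h_bar = 1.054571817e-34           # J*s (reduced Planck constant)

for t in (1e-4, 25e-3, 0.5):      # 0.1 ms; 25 ms (~40 Hz gamma); 0.5 s
    E = h_bar / t
    print(f"t = {t:g} s  ->  required E = {E:.2e} J")
```

    The longer the superposition must survive, the smaller the self-energy threshold it needs to reach.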

    Metaphorically I liken these disjointed proto-conscious events to the sounds, tones and notes of an orchestra warming up. How do we get Brahms, or Hendrix?

    Orch OR suggests microtubules, their vibrational resonances, inputs and memory ‘orchestrate’ OR events to process information and do quantum computing with cognitive content by the Schrodinger equation. The chosen states regulate neuronal function, including axonal firings whose thresholds vary from spike-to-spike, indicating deviation from Hodgkin-Huxley due to internal influences. However this would seem to require superpositions sustained for significant time durations, e.g. those relevant to neurobiological processes, before reaching threshold by E=h/t.

    Brain processes in EEG based on membrane depolarizations range from 0 to a few hundred hertz. Looking inside neurons, Bandyopadhyay’s group has shown microtubule quantum resonances in a hierarchical series from terahertz to gigahertz to megahertz to kilohertz to hertz.

    In perspective, in 2000 Max Tegmark published an influential paper which calculated quantum decoherence in a microtubule quantum state to be 10^-13 seconds, far too fast for useful biology. But he made a big mistake and disproved his own theory, not Orch OR. He modeled the superposition separation of the ‘qubit’ as a soliton propagating along the microtubule, separated from itself by 24 nanometers (3 tubulin lengths). This value is in the denominator of his formula for the decoherence time t.

    In Orch OR, superposition separation is at the level of atomic nuclei, the Fermi length, 7 orders of magnitude less, bringing the decoherence time to 10^-6 secs. We found a few more errors in permittivity and got to 10^-4 secs, and published a year later in Phys Rev E, the same journal in which Tegmark published. http://www.ncbi.nlm.nih.gov/pubmed/12188753
    Subsequently, Bandyopadhyay found microtubule coherence times of as long as 10^-4 secs, verifying our prediction. (Nonetheless I still hear how Tegmark ‘disproved Orch OR’.)
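    The order-of-magnitude claim here is easy to check: if the superposition separation sits in the denominator of the decoherence-time formula, shrinking it by seven orders of magnitude stretches the decoherence time by the same factor. A sketch of just that scaling arithmetic (not the full physics):

```python
# Scaling argument only: decoherence time t ~ 1/separation in Tegmark's formula.
t_tegmark = 1e-13            # s, Tegmark's estimate for a microtubule state
a_tegmark = 24e-9            # m, soliton displaced by 3 tubulin lengths
a_orch_or = a_tegmark / 1e7  # Fermi-scale nuclear separation, 7 orders smaller

t_orch_or = t_tegmark * (a_tegmark / a_orch_or)
print(f"{t_orch_or:.1e} s")  # ~1e-6 s, as stated
```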

    In writing our 2014 Orch OR review paper, Roger brought in negative resonances, or ‘beats’, among quantum vibrations in microtubules. In dendrites and soma of neurons, microtubules are uniquely arrayed in mixed polarities, so resonances will lead to ‘beats’ because of slight energy differences. So, for example, we suggested that megahertz vibrations in two or more microtubules would lead to slower beats, e.g. below 100 Hz, which could be neurophysiological. That meant decoherence/premature OR need be avoided for only 10^-6 secs, so this ‘decoherence’ part of Orch OR is on solid ground experimentally. And the slower beats could account for EEG rhythms.
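    The beat arithmetic is simple enough to spell out: two oscillations at nearly the same frequency combine at the difference frequency. The detuning below is an illustrative choice, not a measured value:

```python
# Beat frequency = |f1 - f2|; illustrative detuning between two
# megahertz-range microtubule vibrations.
f1 = 10_000_000.0      # 10 MHz
f2 = 10_000_040.0      # 10 MHz plus a 40 Hz offset
f_beat = abs(f2 - f1)
print(f_beat, "Hz")    # 40.0 Hz -- below 100 Hz, in the EEG range
```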

    When we started in the mid 90s everyone said biological quantum coherence is impossible, you need absolute zero temperatures for quantum computing. But now we know photosynthesis uses quantum coherence in warm sunlight. And microtubules seem to be quantum computers whose terahertz quantum vibrations are dampened by anesthesia which leads to loss of consciousness.

    Philosophically, the view is consistent with Russell’s neutral monism (both mind and matter derive from quantum superposition), and Whitehead’s process philosophy, in which consciousness consists of sequences of ‘occasions of experience’.

    The choices of classical states, and the conscious actions and perceptions selected in each OR event may be influenced by non-computable Platonic values intrinsic to the universe. You could think of these Platonic values as a giant ‘look-up table’ encoded in the fine scale structure of the universe which may be fractal-like, information repeating every few orders of magnitude. Orch OR events perhaps resonate with non-computable Platonic values, and deviate from the Born rule.

    Scott
    Consciousness has to fully participate in the Arrow of Time.

    Stuart
    I agree. In my opinion, consciousness IS the arrow of time, sequences of irreversible OR steps ratcheting through spacetime. (However temporal nonlocality and apparent backward time effects may occur in uncollapsed systems.)
    See
    Time, consciousness and quantum events in fundamental spacetime geometry, In: The nature of time: Physics, geometry and perception
    Proceedings of a NATO Advanced Research Workshop, ed R Buccheri, M Saniga and Stuckey; 2003. I can send you a copy.

    Scott
    …decoherence is necessary for consciousness.

    Stuart
    I would say decoherence is OR, and OR is (proto-)consciousness.

    Scott
    For starters, if you think about exactly how our chunk of matter is going to amplify microscopic fluctuations, it could depend on details like the precise spin orientations of various subatomic particles in the chunk.

    Stuart
    Microtubules appear to be perfect quantum amplifiers. But not all chunks of matter are the same, differing for example by solubility characteristics. This is how pharmacologists and anesthesiologists look at where particular drugs go, where they are soluble and bind. Anesthetic gases are of various molecular types, e.g. ethers, halogenated hydrocarbons, xenon… but all bind in the same type of medium, akin to olive oil (the ‘Meyer-Overton correlation’), near pi resonance clouds in aromatic amino acids inside certain proteins. Oil and water don’t mix. These regions are non-polar (not wet) and support quantum processes. Anesthetics bind by quantum London forces in interconnected, intra-molecular regions favorable to quantum biology (similar non-polar pi resonance rings enable quantum coherence in photosynthesis proteins).

    Since anesthesia is fairly selective, consciousness seems likely to occur in what we’re calling the Meyer-Overton quantum underground, where anesthetic gases bind to prevent consciousness. This looks to be in quantum channels spiraling round microtubule lattices, with oscillating pi resonance quantum dipoles, either London force electric dipoles, or magnetic spin dipoles.

    Scott
    ..this isn’t assuming any long-range quantum coherence in the chunk: only microscopic coherence that then gets amplified.)

    Stuart
    You do get mesoscopic quantum coherence in microtubule pi resonance quantum channels, and macroscopic quantum coherence, perhaps through gap junctions.

    Scott
    It might be objected that there are all sorts of physical systems that “amplify microscopic fluctuations,” but that aren’t anything like what I described, at least not in any interesting sense: for example, a Geiger counter, or a photodetector, or any sort of quantum-mechanical random-number generator.

    Stuart
    Scott, you’re looking for microtubules. They’re quantum computers which can exert causal action in the classical world. They’re right on the edge.

    Scott
    It seems possible that—as speculated by Bohr, Compton, Eddington, and even Alan Turing—if you want to get it right you’ll need more than just the neural wiring graph, the synaptic strengths, and the approximate neurotransmitter levels. Maybe you also need (e.g.) the internal states of the neurons, the configurations of sodium-ion channels, or other data that you simply can’t get without irreparably damaging the original brain—

    Stuart
    You need microtubules. That’s where the quantum information processing and dynamic control are.

    Scott
    If we think about consciousness in the way I’ve suggested, then who’s right: the Copenhagenists or the Many-Worlders? You could make a case for either.

    Stuart
    Orch OR turns Copenhagen upside down. Rather than consciousness coming in from somewhere to cause collapse, collapse causes consciousness or IS consciousness. And because OR stops separations in spacetime geometry, multiple worlds are avoided. You could make a case for neither Copenhagen nor Multiple Worlds. Just OR (and Orch OR).

    Now on to your recent replies to my comments.

    Stuart (previously)
    Why would replicators and organisms exhibit survival behavior without reward and feelings (the basis for psychology)? What’s the motivation?

    Scott
    The “reward” for a replicator is that more copies of it get made, and hence there are now more replicators on earth that exhibit the same survival behavior. No consciousness, psychological rewards and feelings, or even intelligence per se needs to be anywhere in the loop. I’m not a life scientist, and you are, but this, it seems to me, was Darwin’s great insight, which still stands today.

    Stuart
    It stands because consciousness, feelings and reward haven’t been considered. Outwardly, it would look nearly the same. But how did early life get to replicators? Why would a simple organism do anything to survive without reward? If an organism doesn’t have feelings, it doesn’t know that it hurts to get eaten, or feels good to eat something else, nor that it feels really good to have sex (sex without pleasure is a big problem for evolutionary biologists).

    Here’s a Huffington Post blog I wrote defending Deepak Chopra’s statement that consciousness drives evolution. Deepak meant it in a top-down way, but I intend it as bottom-up pleasure seeking.

    http://www.huffingtonpost.com/stuart-hameroff/darwin-versus-deepak-whic_b_7481048.html

    It concludes:
    So I think in this case Deepak is correct that consciousness drives evolution. And as Penrose suggests, conscious quantum events intrinsic to the universe solve other problems like the ‘Anthropic principle’, why the universe is perfectly tuned for life and consciousness (avoiding any need for the ‘multiverse’ idea). Problems in evolution, brain science, quantum physics and cosmology all fade away with consciousness as an intrinsic feature of the structure of reality

    I’m not advocating ‘intelligent design’, which I see as a top-down effect, but rather bottom-up pleasure-seeking and resonance. Microtubules, centrioles and cilia are sensitive to all sorts of energy: terahertz/IR, optical, gigahertz, megahertz etc. We’re using transcranial ultrasound (megahertz) in a clinical study for Alzheimer’s, in which microtubules disassemble as tau protein is lost. Tau is a microtubule-associated protein which serves as a traffic signal for motor proteins in synaptic plasticity.

    Stuart (previously)
    The ‘overwhelming consensus’ can’t simulate a worm or a single-cell paramecium…

    Scott
    So then, if the simulations of C. elegans that people are doing start acting like real worms, will that cause you to reevaluate your ideas?

    Stuart
    Yes, if it doesn’t require a whole lot of fudging. The real test is to simulate a single-cell paramecium.

    Scott
    I’ve already conceded, in comment #107, that simulations of more and more complex organisms would be clear progress in the direction of refuting my speculations about the nonexistence of a “clean digital abstraction layer.”

    Stuart
    So let me get this straight. You’re saying there’s no need for a ‘clean digital abstraction layer’ which means (I got this from one of your blogs)

    Scott
    “ more than just the neural wiring graph, the synaptic strengths, and the approximate neurotransmitter levels. Maybe you also need (e.g.) the internal states of the neurons, the configurations of sodium-ion channels, or other data that you simply can’t get without irreparably damaging the original brain”

    Stuart
    You can’t be serious. You’re describing microtubules, far and away the best candidates for memory, epigenetics, synaptic plasticity, consciousness/anesthetic action… There are 10^9 tubulins per neuron switching at ~10 megahertz to give 10^16 ops/sec in microtubules per neuron. We already know epigenetic memory in C. elegans is stored in centrioles composed of microtubules. Yes, of course you need this layer!!!
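    The per-neuron operation count quoted above is just the product of the two stated figures; for the record:

```python
# Arithmetic check of the quoted figure: 10^9 tubulins per neuron,
# each switching at ~10 MHz.
tubulins_per_neuron = 1e9
switching_rate_hz = 1e7                 # ~10 megahertz
ops_per_second = tubulins_per_neuron * switching_rate_hz
print(f"{ops_per_second:.0e} ops/sec per neuron")   # 1e+16, as claimed
```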

    If you’re worried about a way to download without damage, there’s a much easier potential way, for example non-invasive photon echo through the eye to the retina (part of the brain).

    Stuart (previously)
    The overwhelming consensus at one time was that the earth was flat, and the sun revolved around it. How did that work out?

    Scott
    You can’t be serious….
    In my case, I have no special knowledge of neuroscience, and therefore little choice but to invest in a “neuroscience index fund”—and I’ll gain exposure to Orch-OR when and only when the neuroscience index funds invest in Orch-OR.

    Stuart
    Your ‘neuroscience index funds’ are based on numerous false assumptions. Just to name a few:
    1) Neurons in awake animals have a high deviation from Hodgkin-Huxley integrate-and-fire behavior, apparently due to internal factors regulating firings (your clean digital abstraction layer)
    2) Firings are assumed to be the relevant ‘bit-like’ information currency, but the evidence points to dendrites and soma, where integration occurs, as sites for consciousness. And anesthetics act in dendrites/soma, EEG comes from them, and microtubules in dendrites/soma are uniquely arrayed in mixed-polarity networks conducive to beats.
    3) Memory is assumed to be in synaptic plasticity, but those proteins are transient, and memories can last lifetimes. We think CaMKII encodes synaptic memory in microtubules by phosphorylating 6 tubulin bits at a time.
    4) There are way more gap junctions, electrical synapses and windows between neurons, which synchronize membranes, than previously considered. We’ve suggested quantum states might entangle between microtubules in adjacent neurons connected by gap junctions. C. elegans has way more gap junctions than expected. Reasonable models need to consider gap junctions.
    5) Long range zero phase lag synchrony (e.g. gamma synchrony) is difficult to explain by synaptic transmissions and even gap junctions
    6) Activity apparently corresponding to conscious perception occurs after we respond to it, seemingly consciously. Without quantum effects (allowing backward time correlations) consciousness must be epiphenomenal. Forget free will. It needn’t be that way. See my paper ‘How quantum brain biology can rescue conscious free will’ http://www.ncbi.nlm.nih.gov/pubmed/23091452
    7) Inability (thus far at least) to simulate a worm or a single-cell organism, or to cure mental and cognitive diseases very well, while billions are wasted on mapping and preserving brain connectomes with no microtubule layer, which changes every nanosecond anyway.

    Stuart (previously)
    Simulating quantum dipole oscillations, OR, non-computable resonance and consciousness may not be possible, though it could be implemented artificially. But treating neurons as simple bits (the current approach) will get you nowhere.

    Scott
    Wait, when you say “could be implemented artificially,” are you conceding that the physics underlying the brain might be possible to simulate on a computer?


    Stuart
    Not simulated, but implemented in a special type of quantum computer which self-collapses by Orch OR, i.e. E=h/t, possibly fullerene- or graphene-based, relying on pi resonance.

    Josephson-junction-type quantum computers like the D-Wave have superposition of electron fluxes, so the low mass of electrons (corresponding to the gravitational self-energy E in E=h/t) means a relatively long t, at which time OR would occur. At the Tucson conference, Hartmut Neven from Google Quantum AI talked about their D-Wave quantum computer, and I asked him whether it could reach threshold for OR by E=h/t; he said he certainly hoped not. He said there is some coupling/entanglement between the electrons and atomic nuclei, so E could be near-significant, and they are looking into it. So quantum computers could be conscious by OR.

    Scott
    whether the behavior of an individual neuron is simple or complicated (like you, I think the answer is “complicated”) is just completely irrelevant to the question of whether the brain violates the Church-Turing Thesis.

    Stuart
    Complexity isn’t the point, in my opinion. It’s OR and Orch OR.
    You need quantum states.

    Scott
    Yes, everyone agrees that the currently-known laws of physics can’t be the whole story, for one thing because of the problem of quantum gravity. But it’s only a tiny minority of physicists who think quantum mechanics itself requires fundamental change—

    Stuart
    The large majority also accepts multiple worlds and denies consciousness (your Everett-Dennett axis), in which case no change is required to quantum mechanics. But then you’re stuck with all these messy universes and no consciousness.

    Scott
    Roger has certainly earned the right to an idiosyncratic view of quantum mechanics, but he knows that his view is idiosyncratic, he respects the opposing views, and I never heard him say anything even nearly as dogmatic as you say above (not “consciousness might be involved” in the reconciliation of QM with GR, but “consciousness is the solution”).

    Stuart
    Sorry, I’m at times hyperbolic and blunt, am very busy, and am never, ever as polite as Sir Roger. But potentially at least, OR puts consciousness into the universe, reconciles QM and GR, explains the origin of life, evolution and the Anthropic principle, and avoids multiple worlds. You’re (rightfully perhaps) criticizing my style, but please consider fairly the scope and detail of what we’re saying.

    Cheers,
    Stuart

  231. adamt Says:

    Stuart #230,

    “It stands because consciousness, feelings and reward haven’t been considered. Outwardly, it would look nearly the same. But how did early life get to replicators? Why would a simple organism do anything to survive without reward? If an organism doesn’t have feelings, it doesn’t know that it hurts to get eaten, or feels good to eat something else, nor that it feels really good to have sex (sex without pleasure is a big problem for evolutionary biologists).”

    With as much respect as I can convey, I think this is the weakest part of your argument and can just be dispensed with. I am not in a position to evaluate the rest of your argument re: consciousness, but I think claiming that evolution is not possible without Orch-OR-style consciousness is really not a good way to go. Like Scott, I’m no life sciences master, but I fear that your argument here is both unnecessary and likely to raise eyebrows that might preclude consideration of the rest of your argument.

    Evolution as I understand it does not require that an organism have qualia at all, and the absence of qualia is no impediment to evolution. If a self-replicating organism could have arisen by random chance, then I don’t think qualia are necessary to preserve the self-replicating facet, and advantageous mutations through environmental entropy could get the ball rolling.

    Another way to say it is that evolution *has* been modeled to very great success by computer scientists and there is an entire field devoted to exploiting the power of evolution in the search space for ever more powerful algorithms. And to the extent that you would claim computer scientists haven’t been able to demonstrate qualia… well, I think it isn’t a powerful argument to say that evolution of living creatures requires qualia that evolution of computer algorithms does not.
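    adamt’s point that selection needs no qualia can be made concrete with a toy genetic algorithm. Everything below is an illustrative sketch (the target pattern, mutation rate, and population size are arbitrary choices, not from the thread): fitness is a bare number, and nothing in the loop feels anything.

```python
import random

# Toy genetic algorithm: selection acts on a bare fitness number, with no
# reward signal, pleasure, or qualia anywhere in the loop. The "organisms"
# are bit strings evolving toward a target pattern.

random.seed(0)
GENOME_LEN = 20
TARGET = [1] * GENOME_LEN

def fitness(genome):
    """Count of bits matching the target -- a number, not a feeling."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit independently with small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break
    # Fitter replicators simply leave more copies; nothing "wants" anything.
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]

print("best fitness after evolution:", fitness(population[0]))
```

    In runs like this the population climbs to the target even though ‘reward’ is nothing more than a copy count.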

  232. James Cross Says:

    adamt 231

    At almost any point in evolution beyond the simplest organisms, the ability to move and sense the environment would provide an evolutionary advantage. Cilia and flagella are composed of microtubules and are found in simple organisms. Movement and sensing capabilities are found in most organisms, including plants. Qualia and consciousness may simply be an evolutionary advance, in higher animals, on the basic movement and sensing capabilities of simpler organisms. It wouldn’t be surprising if they arose from the same structures (significantly modified) that were used in simpler organisms.

  233. adamt Says:

    James #232,

    Nonetheless, surely no one is proposing that qualia are necessary for the mechanical sensing of the environment and adaptation based upon it, no? Are Google’s self-driving cars experiencing qualia then?

  234. James Cross Says:

    adamt

    But qualia are exactly a representation of the environment provided by the sensory organs and the brain. Necessary or not, that is what they are.

  235. Stuart Hameroff Says:

    Hi everyone

    I’m responding here to Adamt’s post 231

    I had said:
    “…Why would a simple organism do anything to survive without reward? If an organism doesn’t have feelings, it doesn’t know that it hurts to get eaten, or feels good to eat something else, nor that it feels really good to have sex (sex without pleasure is a big problem for evolutionary biologists).”


    Adamt
    With as much respect as I can convey, I think this is the weakest part of your argument and can just be dispensed with. I am not in a position to evaluate the rest of your argument re: consciousness, but I think appeals that evolution is not possible without Orch-OR style consciousness is really not a good way to go.

    Stuart
    Thanks. But how can you “dispense with it”? That would imply disproving it. And it’s not just Orch OR style consciousness. The idea that positive feelings preceded life, and prompted its origin and evolution, is consistent with all forms of panpsychism. The others just haven’t thought of it yet.

    Adamt
    Like Scott, I’m no life sciences master, but I fear that your argument here is both unnecessary and likely to raise eyebrows that might preclude consideration of the rest of your argument.


    Stuart
    So what else is new? You still haven’t said what’s wrong with the suggestion that feelings and reward were the fitness function, the ‘optimization’ parameter, the particular ‘Shtetl’ being optimized in the origin and evolution of life.

    Adamt
    Evolution as I understand it does not require that an organism have qualia at all and the abscence of qualia is no impediment to evolution. If a self-replicating organism could have arisen by random chance, then I don’t think qualia is necessary to preserve the self-replicating facet and advantageous mutations through environmental entropy could get the ball rolling.


    Stuart
    Maybe. That’s a theory.

    Adamt
    Another way to say it is that evolution *has* been modeled to very great success by computer scientists and there is an entire field devoted to exploiting the power of evolution in the search space for ever more powerful algorithms.

    Stuart
    I think you could say an approximation of evolution has been modeled to very great success with survival and reproduction as the fitness functions, or optimization factors. And maybe, as you say, that’s enough. But survival behavior would also be optimizing pleasure. Survival feels good. Reproduction feels good.

    Adamt
    And to the extent that you would claim computer scientists haven’t been able to demonstrate qualia…

    Stuart
    That’s not my claim. That’s pretty much a fact.

    Adamt
    well, I think it isn’t a powerful argument to say that evolution of living creatures requires qualia that evolution of computer algorithms does not.

    Stuart
    First, we need to know what qualia actually are. Second, don’t evolutionary algorithms serve to optimize some fitness factor? What Shtetl is being optimized? I’m saying pleasure and avoidance of pain are the right Shtetl.

    In various forms of panpsychism now adopted by Koch, Tononi, Chalmers, Tegmark and many others, conscious pleasure would have been present before life began, and could have been its spark and the driving force behind evolution. In my chapter ‘The quantum origin of life – How the brain evolved to feel good’ I discuss how amphipathic biomolecules with non-polar pi resonance rings (very much like dopamine, the pleasure molecule) coalesced in Oparin micelles, and how simple OR proto-pleasurable moments could have occurred and grown, as the fitness function, optimizing pleasure, to drive the evolution of life. It all makes perfect sense.

    Cheers
    Stuart

  236. AdamT Says:

    Stuart #235,

    “dispense with it” was a piece of advice and the subsequent points were the reasons for that piece of advice.

    I don’t know that avoidance of pain and seeking pleasure isn’t the fitness function for biological life. Maybe it is, but I haven’t seen any conclusive case for that.

    I was merely pointing out – I guess the rather obvious point – that evolutionary mechanisms don’t necessarily require this as the fitness function. As an example, I gave the numerous evolutionary algorithms that have been used to great success in computer science applications. Sorry if this was too obvious.

  237. mjgeddes Says:

    Stuart,

    Well in many ways I agree with yourself and Roger…consciousness must be an intrinsic (fundamental) element of reality, it must have been there all along. Panpsychism does indeed seem to be a logical consequence of consciousness being intrinsic to the universe.

    Where I part company with yourself and Roger is with the specific mechanism proposed (OR). I do think I see where Roger went wrong, and it’s a shame I can’t talk to Roger about this, because I think I could persuade the old fellow to come around very quickly 😉

    Basically, believe it or not, I think Roger’s mistake was that he wasn’t *radical enough* 😀 That’s right, I think he had the right idea, but just didn’t think ‘outside the box’ enough.

    Again, to see why, you need to go back to Roger’s diagram of the ‘3 Worlds’ – I strongly believe this is the right idea, in that it gets across the point that there are 3 different fundamental modes of explanation (mathematical, mental and physical).

    However, I want to point out again that the ‘mental’ circle is separate from the ‘physical’ circle. This really does imply that an entirely *new* mode of explanation is required, and that you should *not* try to equate consciousness with physical action, which is the mistake I think OR makes.

    Again, look at the ‘Platonic’ (Mathematics) circle in Roger’s diagram for an analogy. We know that mathematics is just a *different* mode of explanation from the physics one. You simply can’t look for an explanation of mathematics in particle physics. And so it is with consciousness I think.

  238. Jay Says:

    >But survival behavior would also be optimizing pleasure.Survival feels good. Reproduction feels good.

    Delivery feels good. Oh well…

  239. Amit Says:

    Regarding the question – “If your neurons were to be replaced one-by-one, by functionally-equivalent silicon chips, is there some magical moment at which your consciousness would be extinguished?”

    This phrasing of the question conceals the truth: the answer is that as neurons are replaced one-by-one, your consciousness would diminish a proportional amount until gradually, as the last neuron is replaced, there would be none left.

  240. James Gallagher Says:

    Stuart Hameroff

    Have you thought of dropping Penrose’s “gravitationally induced collapse model” for a more simple “nature just does random collapses” type idea?

    Then I could kind of like your idea of evolution “orchestrating” what happens with that.

  241. Stuart Hameroff Says:

    Hi everyone

    Thanks for the comments on my post. Here are a few replies.

    AdamT #236
    I don’t know that avoidance of pain and seeking pleasure isn’t the fitness function for biological life. Maybe it is, but I haven’t seen any conclusive case for that.

    Stuart
    What evidence have you seen for survival without feelings as the fitness function?

    AdamT
    I was merely pointing out – I guess the rather obvious point – that evolutionary mechanisms don’t necessarily require this as the fitness function. As example I gave the numerous evolutionary algorithms that have been used to great success in computer science applications.

    Stuart
    Are these algorithms, or the hardware they run on, alive or conscious? In any case, they do have some optimizing factor, fitness function… something, right? The optimal Shtetl? For life, I’m saying good feelings are the fitness function.

    But rather than algorithms, I think the critical activity is quantum vibrational computing.

    Jay #238
    Stuart (previously)
    But survival behavior would also be optimizing pleasure. Survival feels good. Reproduction feels good.

    Jay
    Delivery feels good. Oh well…

    Stuart
    Men go to war, women give birth, suffering temporarily for greater good feelings.

    Amit #239
    Regarding the question – “If your neurons were to be replaced one-by-one, by functionally-equivalent silicon chips, is there some magical moment at which your consciousness would be extinguished?”

    Stuart
    You have to first specify what functionally-equivalent means. If it’s standard Hodgkin-Huxley integrate-and-fire threshold logic neurons, then that doesn’t work, because of spike threshold variability, gap junctions, the cytoskeleton and multiscalar intra-neuronal dynamics. As Buzsáki says, the neuron is a Stradivarius.
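    For readers unfamiliar with the abstraction being dismissed here: a textbook leaky integrate-and-fire neuron fits in a few lines. The Python sketch below (illustrative parameters, not fitted to any real cell, and not from the thread) shows exactly how little that abstraction contains — a fixed threshold and a single leaky state variable, with none of the intra-neuronal machinery Hameroff lists.

    ```python
    def leaky_integrate_and_fire(inputs, tau=20.0, dt=1.0,
                                 threshold=1.0, reset=0.0):
        """Textbook leaky integrate-and-fire neuron: membrane potential v
        leaks toward rest while integrating input current; a spike is
        emitted whenever v crosses a FIXED threshold, then v resets.
        (Illustrative parameters only.)"""
        v = reset
        spikes = []
        for t, current in enumerate(inputs):
            v += dt * (-v / tau + current)   # leaky integration
            if v >= threshold:               # the fixed-threshold abstraction
                spikes.append(t)             # (no threshold variability, gap
                v = reset                    # junctions, cytoskeleton, ...)
        return spikes

    # Constant drive above rheobase produces regular spiking:
    spikes = leaky_integrate_and_fire([0.06] * 200)
    ```

    Whether such a model is “functionally equivalent” to a biological neuron is precisely the point in dispute.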

    It’s a good question about critical threshold, which also relates to an earlier point about where in evolution consciousness appeared. OR is ubiquitous, so the question is when does sufficient ‘orchestration’ occur to turn quantum noise into conscious music.

    The other issue is the duration of the event. By E=h/t, for Orch OR conscious events at, say, 40 hertz gamma synchrony, t is equal to 25 msec and E would require 2 x 10^10 tubulin subunit proteins in microtubules. This is what might be required to have 40 conscious moments per second.

    About 540 million years ago was the ‘Cambrian explosion’ in evolution, when, during a brief ten million years, all animal phyla appeared, with fossils showing creatures similar to modern urchins with spiky axonemes, and nematodes. The modern heliozoan Actinosphaerium has similar-looking axonemes, which we now know are comprised of helical arrays of microtubules, a total of about 10^9 tubulins per axoneme, and perhaps hundreds of axonemes. C. elegans, which bears a striking resemblance to Cambrian nematode fossils, has 302 neurons, each presumably with 10^9 tubulins. In E=h/t, if E is 10^9 tubulins, t is then 500 msec, brief enough to be useful in cognitive tasks, predator-prey relations, etc: 2 conscious moments per second. So in a 1998 paper I suggested consciousness caused the Cambrian explosion.
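    The two estimates above are mutually consistent: with E counted in tubulins, E=h/t implies that the product N × t is a constant, and the 40 Hz anchor point (2 × 10^10 tubulins at 25 msec) then yields the quoted 500 msec for 10^9 tubulins. A quick Python arithmetic check (variable names are mine, not Hameroff’s):

    ```python
    # Sanity check of the inverse scaling N * t = const implied by E = h/t,
    # using the stated anchor point: 2e10 tubulins <-> 25 ms (40 Hz gamma).
    N_GAMMA = 2e10       # tubulins for one 40 Hz conscious event
    T_GAMMA = 0.025      # event time in seconds (1/40 Hz)

    const = N_GAMMA * T_GAMMA   # tubulin-seconds; 5e8

    def event_time(n_tubulins):
        """Time-to-OR for n tubulins, under the same proportionality."""
        return const / n_tubulins

    # C. elegans / Cambrian estimate: 1e9 tubulins -> 0.5 s,
    # i.e. 2 conscious moments per second, as stated in the text.
    t_worm = event_time(1e9)
    ```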

    Obviously this is speculation, but based on parameters of Orch OR, the only biological and physical theory for which such speculation can be entertained.

    Conscious moments exist across a spectrum. So as fewer and fewer neurons and microtubules were online, consciousness would slow down, kinda like HAL 9000 in 2001: A Space Odyssey… “Daisy, Da-aa-ii-s-y”. Or are you guys too young to remember that? (Not that HAL was actually conscious.)

    Stuart

  242. David Pearce Says:

    Stuart, first, very many thanks for your extraordinary work over the years organising the “Toward a Science of Consciousness” conferences – and for your work with Roger Penrose in developing a theory of mind that actually makes novel and empirically falsifiable predictions. But I’ve a question.

    I’m not clear how Orch-OR deals with the phenomenal binding / combination problem. (cf. http://consc.net/papers/combination.pdf) Suppose that experiment spectacularly confirms Orch-OR’s predictions. Theoretical physics is rocked to its foundations: I think what would truly stun most physicists isn’t a demonstration of quantum processing in neuronal microtubules, but rather any experimentally-detected deviation from the unitary Schrödinger dynamics. If Orch-OR is experimentally vindicated, then why are 86 billion-odd neuronal “pixels” of experience in the CNS more than a micro-experiential zombie (Phil Goff’s term)? How could mere synchronous activation of membrane-bound neuronal “pixels” of experience generate a perceptual object (“local” binding) let alone the unity of perception (“global” binding)?

    This is where I get stuck. Binding seems classically impossible. (Almost) everyone I’ve spoken to regards (hypothetical) sub-femtosecond decoherence timescales of distributed feature-processors as the reductio ad absurdum of “no-collapse” versions of quantum mind, so I’m sane enough to recognise I’m probably mistaken. Yet it’s not clear to me that a “dynamical collapse” conjecture like Orch-OR – _even if_ Orch-OR is experimentally vindicated – solves the binding problem either. So I’m baffled.

    [On a sociological note, it’s noticeable that far more philosophers than physicists or neuroscientists seem exercised by the radical implications of the binding/combination problem. In recent years, there’s been a growth in the number of researchers willing to take property-dualist panpsychism and even non-materialist [“Strawsonian”] physicalism seriously. Yet without a solution to the binding problem, IMO our predicament is no better than someone who claims that he’s “explained” how the USA is really a pan-continental subject of experience by pointing to the rich interconnectivity of individual skull-bound American minds. No, he hasn’t – not without invoking spooky “strong” emergence – which is really no explanation at all.]

  243. James Cross Says:

    #241

    I was the one raising the questions about evolutionary beginnings. Earlier I too had pointed to the first bilaterian as the origin of consciousness. The body plan is a concentration of neural material near the mouth and a strand running along the digestive system. Clearly this is to control sensing of food, swallowing, and digestion.

    In one of your Huffington Post pieces you write: “However humans and animals appear to be driven by conscious feelings (e.g. ‘Epicurean delight’, Freud’s ‘pleasure principle’, ‘dopaminergic reward’).”

    The reference to Freud may be casual and not part of your argument but the pleasure principle is what drives the id and the unconscious, not consciousness.

    Sandor Ferenczi wrote in 1938: “And so we believe that the functioning of the organ of smell exhibits an analogy with thought which is so intensive and complete that smell may properly be considered the biological prototype of thought. By means of smell the animal tests and tastes infinitesimal particles of food material in sniffing the volatile emanations therefrom before deciding to consume it as food;… What, however, is the function of the organ of thought, according to Freud? A testing-out process, with a minimal expenditure of energy. And attention? A purposive periodic searching of the environment with the help of the sense organs whereby very small sources of stimulation become accessible to awareness. Organ of thought and sense of smell: both alike serve the reality function, this including, moreover, both the egoistic and the erotic reality function.”

    So consciousness would not be part of the pleasure principle but the reality principle.

    Freud understood there to be an opposing instinct to the pleasure principle – a so-called death instinct that produces a need to revert to an earlier state – the inanimate before the animate. From this he tried to derive an explanation for the mixtures of pleasure and pain found in certain forms of eroticism. The point is that pleasure by itself is too simple as a driving force. Instead, life and consciousness are driven by maintaining a dynamic balance between countervailing tendencies.

  244. Stuart Hameroff Says:

    Hi everyone

    Thanks for the comments….

    James Gallagher #240
    Stuart Hameroff
    Have you thought of dropping Penrose’s “gravitationally induced collapse model” for a more simple “nature just does random collapses” type idea? Then I could kind of like your idea of evolution “orchestrating” what happens with that.

    Stuart
    “gravitationally induced collapse” and “nature just does random collapses” are pretty much the same idea. OR (collapse) occurs at a threshold of E=h/t, which is an average (like radioactive decay), so with some randomness. Nature IS (quantum) gravity. Quantum gravity IS nature. The added feature is that a moment of subjective experience occurs with each such collapse, as opposed to subjective experience causing collapse (Copenhagen).

    David Pearce #242
    Stuart, first, very many thanks for your extraordinary work over the years organising the “Toward a Science of Consciousness” conferences – and for your work with Roger Penrose in developing a theory of mind that actually makes novel and empirically falsifiable predictions.

    Stuart
    You’re very welcome. After 20 years we dropped the ‘Toward a…’, and it’s now just ‘The Science of Consciousness’ conferences. And thanks for your recognition of Orch OR.

    David
    But I’ve a question. 
I’m not clear how Orch-OR deals with the phenomenal binding / combination problem. (cf. http://consc.net/papers/combination.pdf)
    Suppose that experiment spectacularly confirms Orch-OR’s predictions. Theoretical physics is rocked to its foundations: I think what would truly stun most physicists isn’t a demonstration of quantum processing in neuronal microtubules, but rather any experimentally-detected deviation from the unitary Schrödinger dynamics. If Orch-OR is experimentally vindicated, then why are 86 billion-odd neuronal “pixels” of experience in the CNS more than a micro-experiential zombie (Phil Goff’s term)? How could mere synchronous activation of membrane-bound neuronal “pixels” of experience generate a perceptual object (“local” binding) let alone the unity of perception (“global” binding)?
This is where I get stuck. Binding seems classically impossible.

    Stuart
    Agreed. That’s why you need quantum coherence and entanglement to bind together components of a conscious moment or scene. As you say, this is the ‘combination problem’ in panpsychism. Whitehead addressed it also in his process approach, where simple ‘monotonous’ occasions are combined into full, rich complex ones. It’s easy with quantum coherence and entanglement. Think of a Bose-Einstein condensate where the components lose their identity and are part of one unified entity.

    David
    (Almost) everyone I’ve spoken to regards (hypothetical) sub-femtosecond decoherence timescales of distributed feature-processors as the reductio ad absurdum of “no-collapse” versions of quantum mind, so I’m sane enough to recognise I’m probably mistaken.

    Stuart
    There is experimental evidence for quantum coherence up to 0.1 msec in microtubules.

    David
    Yet it’s not clear to me that a “dynamical collapse” conjecture like Orch-OR – _even if_ Orch-OR is experimentally vindicated – solves the binding problem either.

    Stuart
    When OR (or Orch OR) does occur, it is instantaneous, and involves entangled features, which get bound into the conscious moment. Binding/combination problem solved.

    Cheers
    Stuart

  245. fred Says:

    Stuart #213

    “One might imagine that if such separations were to continue, each such curvature would evolve its own universe, fulfilling the multiple worlds interpretation. However Roger proposed such separations would be unstable, and undergo reduction to a single state at an objective threshold, hence objective reduction, OR. […] OR will occur, accompanied by a moment of subjective experience.”

    That does sound like Scott’s Teleportation conundrum!
    Somehow, it’s the “killing” (collapse/reduction) of one of the two “copies” (MW split) which produces consciousness!

  246. fred Says:

    Does anyone who thinks that digital computers can never be conscious like we are believe that this would actually limit how well they could simulate (and overtake) a human mind?

    I was just watching some impressive new application of deep learning:
    https://www.youtube.com/watch?v=QUwsAPO15_U
    They want to create a system that can read boring technical manuals for a user and then answer his/her questions.

  247. James Gallagher Says:

    Stuart @244

    Thanks for the reply, but sorry, deterministic gravitational collapse is not at all the same as random collapse, it fails to violate a Bell Inequality for a start.

    I don’t suppose Scott wants a huge debate about these ideas here so I won’t post again, but I do think your ideas about where evolution might have made interesting use of quantum mechanics are worthy of consideration, the weak point is Penrose’s contribution with his interpretation of QM.

  248. Kit Adams Says:

    James Gallagher #247 said:
    “Thanks for the reply, but sorry, deterministic gravitational collapse is not at all the same as random collapse, it fails to violate a Bell Inequality for a start.”

    Care to elaborate on that?

    As far as I can see, Gravitational OR means that one of your entangled pair of photons (or electrons) has its polarization (or spin) measured, resulting in immediate OR, causing the state of the other particle to be constrained, thus violating Bell’s Inequalities with needing to, say, split off a new universe or have a conscious observer involved.

  249. Kit Adams Says:

    I meant to say:

    As far as I can see, Gravitational OR means that one of your entangled pair of photons (or electrons) has its polarization (or spin) measured, resulting in immediate OR, causing the state of the other particle to be constrained, thus violating Bell’s Inequalities without needing to, say, split off a new universe or have a conscious observer involved.
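    As background to this exchange: the standard quantum prediction for a singlet pair, E(a,b) = −cos(a−b), violates the CHSH form of Bell’s inequality (|S| ≤ 2 for any local hidden-variable model) at well-chosen measurement angles. The Python sketch below is textbook quantum mechanics, nothing specific to OR or to any collapse model, and is my illustration rather than anything from the thread:

    ```python
    import math

    def singlet_correlation(a, b):
        """Quantum correlation E(a,b) for spin measurements on a singlet
        pair along directions at angles a and b (standard QM prediction)."""
        return -math.cos(a - b)

    def chsh(a, a2, b, b2, E=singlet_correlation):
        """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
        Any local hidden-variable model obeys |S| <= 2."""
        return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

    # Standard optimal angles give |S| = 2*sqrt(2) ~ 2.83 > 2:
    S = chsh(0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
    ```

    The dispute in the thread is not over this prediction but over which mechanism (random collapse, gravitational OR, no collapse at all) reproduces it.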

  250. mjgeddes Says:

    I have to agree with the others on the physics for this one Stuart. You may have something with the microtubules idea (that’s not my area of knowledge so I can’t really comment), but the physics that Penrose is pushing is most definitely flat-out wrong, I can assure you of that.

    Lubos Motl (who is a physics savant) did an analysis of the gravitational collapse idea on his blog a while back – OR can’t work because it violates the principle of locality.

    Scott Aaronson is the expert on computation, and I see no reason to doubt him: the brain is most definitely a computer for the simple reason that the whole physical world must be computable according to best current theories.

    In my view the Koch-Tononi-Chalmers viewpoint (panpsychism + integrated information) is sufficiently elegant and simple to make me think it’s most probably on the right-track and will eventually get to the answer with some further development. No quantum-gravity needed, just good old fashioned computation 😉

  251. Stuart Hameroff Says:

    Hi everyone

    Thanks for the comments. Please see replies below.

    James Cross #243
    In one of your Huffington Post pieces you write: “However humans and animals appear to be driven by conscious feelings (e.g. ‘Epicurean delight’, Freud’s ‘pleasure principle’, ‘dopaminergic reward’).”
The reference to Freud may be casual and not part of your argument but the pleasure principle is what drives the id and the unconscious, not consciousness.

    Stuart
    I think feelings and pleasure are conscious, but if subconscious feelings drive behavior, the end result is the same. The question is what causes the feelings.

    James
    So consciousness would not be part of the pleasure principle but the reality principle.


    Stuart
    Some (Harald Atmanspacher at ETH Zurich, for example) say awareness of reality is the optimization. But such awareness may be pleasurable. Suffice to say conscious states drive behavior favoring evolution.

    fred #245 
Stuart #213
“One might imagine that if such separations were to continue, each such curvature would evolve its own universe, fulfilling the multiple worlds interpretation. However Roger proposed such separations would be unstable, and undergo reduction to a single state at an objective threshold, hence objective reduction, OR. […] OR will occur, accompanied by a moment of subjective experience.”
That does sound like Scott’s Teleportation conundrum!
Somehow, it’s the “killing” (collapse/reduction) of one of the two “copies” (MW split) which produces consciousness!

    Stuart
    Really. I hadn’t thought of alternative possibilities in a superposition as copies, in fact they aren’t identical, but alternate states of one particle, coexisting simultaneously. But in OR, when one is ‘killed’, as Scott suggests, consciousness does occur. But is it murder, or suicide? Does an external observer kill the unchosen state (and where does he/she come from?), or does it self-collapse, killing itself, by E=h/t?

    fred #246
    Does anyone who thinks that digital computers can never be conscious like we are believe that this would actually limit how well they could simulate (and overtake) a human mind?
I was just watching some impressive new application of deep learning:
https://www.youtube.com/watch?v=QUwsAPO15_U
They want to create a system that can read boring technical manuals for a user and then answer his/her questions.

    Stuart
    Christian Szegedy from Google, who developed DeepDream, Inception and other convolutional networks spoke at the Tucson conference in April. These are based on network hidden layers which are high density and low energy, giving extremely interesting psychedelic effects. I asked him if, in the brain, the hidden, deeper layers could be the microtubules within each neuron. He said, could be…

    James Gallagher #247 

    Thanks for the reply, but sorry, deterministic gravitational collapse is not at all the same as random collapse, it fails to violate a Bell Inequality for a start.

    Stuart
    OR occurs by E=h/t, but 1) particular states chosen in each OR event are susceptible to non-computable influence, and 2) time t is an average, like radioactive decay (and thus also susceptible to non-computable influences). Unless you know the non-computable influences, I don’t see how this is deterministic.

    James
    I don’t suppose Scott wants a huge debate about these ideas here so I won’t post again, but I do think your ideas about where evolution might have made interesting use of quantum mechanics are worthy of consideration, the weak point is Penrose’s contribution with his interpretation of QM.

    Stuart
    You’re entitled to your opinion, but Penrose OR seems to me the only QM interpretation that works, introduces consciousness, avoids multiple worlds, and fits the extensive experimental evidence in QM.

    When you say “I do think your ideas about where evolution might have made interesting use of quantum mechanics are worthy of consideration,” I appreciate the compliment, but you left out the key point. Quantum mechanics (Penrose OR) provides the conscious feelings which drive evolution.

    Kit Adams #249
    As far as I can see, Gravitational OR means that one of your entangled pair of photons (or electrons) has its polarization (or spin) measured, resulting in immediate OR, causing the state of the other particle to be constrained, thus violating Bell’s Inequalities without needing to, say, split off a new universe or have a conscious observer involved.

    Stuart
    If no observer, how do you define ‘measured’, as in ‘has its polarization measured’… by whom, or what? You need the observer. Actually, in Penrose OR, superpositions self-measure at E=h/t, resulting in a moment of consciousness. No external observer needed.

    cheers
    Stuart

  252. James Cross Says:

    Stuart

    “I think feelings and pleasure are conscious, but if subconscious feelings drive behavior, the end result is the same. The question is what causes the feelings.”

    Me

    They definitely rise into consciousness some (most?) of the time.

    Stuart

    “Some (Harald Atmanspacher at ETH Zurich, for example) say awareness of reality is the optimization. But such awareness may be pleasurable. Suffice to say conscious states drive behavior favoring evolution.”

    Me

    Definitely agree with that.

    I know a lot of people don’t want to buy into the whole Freud thing, even though a lot of people really don’t understand it. Many people have heard others dismiss it and don’t think it worth their time. Mark Solms in particular has done a lot of recent work trying to blend Freud with modern neuroscience.

    All of that aside. We have plenty of evidence independently from Freud that the brain does a lot of work that never rises to consciousness. Almost all discussion of consciousness seems to leave out of the discussion the relationship

  253. James Cross Says:

    continuation of previous. Posted before I meant to

    Almost all discussion of consciousness seems to leave out the relationship between the unconscious parts of the brain/psyche and the conscious parts. Focusing just on consciousness is much like trying to understand an iceberg from the part of it above the sea. Why does anything need to be conscious? Why aren’t we just “smart” biological robots instead of conscious organisms?

    I certainly think you are onto something with pleasure/pain/feelings but I think those arise from the lower unconscious levels and bubble up to consciousness when needed. Hunger arises from unconscious processes but consciousness decides it is better to eat an apple than a brick. Consciousness is there to mediate internal states with the external world.

  254. Stuart Hameroff Says:

    Hi again everyone

    mjgeddes #250
    I have to agree with the others on the physics for this one Stuart. You may have something with the microtubules idea (that’s not my area of knowledge so I can’t really comment), but the physics that Penrose is pushing is most definitely flat-out wrong, I can assure you of that.
Lubos Motl (who is a physics savant) did an analysis of the gravitational collapse idea on his blog a while back – OR can’t work because it violates the principle of locality.


    Stuart
    Thanks for the support on microtubules, but with whom/what, exactly, are you agreeing ‘on the physics’? And what exactly is Lubos Motl’s objection to OR? He’s a string theorist, and string theory is one of the ‘fashionable fantasies’ Roger Penrose takes down in his new book. Don’t quantum mechanics and Bell’s theorem violate the principle of locality?

    Mjgeddes
    Scott Aaronson is the expert on computation, and I see no reason to doubt him: the brain is most definitely a computer for the simple reason that the whole physical world must be computable according to best current theories.

    Stuart
    Unless it’s non-computable. Scott and most people assume the brain is a computer, and maybe in some sense it is, but what is the evidence that consciousness is a computation? Particularly a computation among neurons? I don’t think there is any, just an assumption.

    Mjgeddes
    In my view the Koch-Tononi-Chalmers viewpoint (panpsychism + integrated information) is sufficiently elegant and simple to make me think it’s most probably on the right-track and will eventually get to the answer with some further development.

    Stuart
    Scott, among others, has shown how ridiculous Tononi’s Phi/IIT is. Adding panpsychism without solving the combination problem, or specifying the level of matter at which mental properties occur, is pretty much useless. And Chalmers has moved on to quantum approaches. Koch is still stuck on axonal firings mediating consciousness, and that’s clearly wrong. Tononi is vague on that question. Another problem is that, as Koch showed years ago, visual area 1 (V1) is not conscious, yet has a high Phi.

    Mjgeddes
    No quantum-gravity needed, just good old fashioned computation

    Stuart
    You’re entitled to your opinion, but what’s testable about that statement?

    James Cross #252
    I know a lot of people don’t want to buy into the whole Freud thing even though a lot of people really don’t understand it. Many people have heard others dismiss it and don’t think it worth their time. Mark Solms in particular has done a lot of recent work trying to blend Freud with modern neuroscience.
    All of that aside. We have plenty of evidence independently from Freud that the brain does a lot of work that never rises to consciousness. Almost all discussion of consciousness seems to leave out of the discussion the relationship

    Stuart
    I agree. However we should distinguish several types of unconscious processing. For example under anesthesia the brain is quite active, but with no consciousness. On the other hand in an awake brain there are unconscious processes which are capable of becoming conscious (or are ‘pre-conscious’). It’s the latter that relates to Freud, and could involve quantum superpositions.

    I agree with Mark Solms that Freud is relevant. Mark has spoken at the Tucson conferences and has a chapter in a forthcoming book ‘The Biophysics of Consciousness – Foundational Approaches’ in which Sir Roger and I have updated our Orch OR review paper. Travis Craddock, Jack Tuszynski and I also have another chapter in it called ‘The quantum underground – where life and consciousness originate’.

    cheers
    Stuart

  255. fred Says:

    “Quantum mechanics (Penrose OR) provides the conscious feelings which drive evolution. ”

    My problem with this is that if consciousness were really a stand-alone causal force, you’d think that individuals (the units of consciousness) would be at the center of evolution, not the species.
    The same (fictitious) evolutionary forces seem at play at many levels – the genome, the cells within the body, the organism, the species, ideas and ideologies, civilizations, the earth as a biosphere itself (life and intelligence is nothing but the earth evolving a mechanism to preserve itself in the long run)… it’s just not clear why consciousness would be a special ingredient at just one level of this hierarchy (e.g. a beehive could be conscious).

    Also, we just can’t ignore the amazing nature of the “evolution” of digital information systems.
    Progress has been so fast that we can’t even guess how things would be in 50 years, … so imagine giving it an extra 1000 years (somehow each generation of scientists think they live at the pinnacle of human progress).
    Intelligent digital machines could very well be the next step in the evolution of life on earth. At the very least, we’re bound to see a tight integration of the organic and the digital.
    It’s only a matter of time before “meat-minds” find a way to make themselves more resilient, adaptable, replicable.

  256. kirk Says:

    So if it happens to be the case that human consciousness, down to and including qualia, is the result of things going on in microtubules affected by non-computable values, does this imply that something like human consciousness that experiences qualia can *only* be the result of such things? Would this necessarily rule out the same results being produced by digital computing devices?

    Here’s a possibly very simplistic analogy that hopefully will make the point of the question clear: merge sort and radix sort both sort lists of numbers. But radix sort doesn’t compare numbers directly to each other and doesn’t need to.

    The analog of human-like consciousness here is “sorting lists of numbers representable with k digits”, and the analog of quantum effects out of the reach of digital computers is comparing a pair of numbers. Both sorting algorithms accomplish the relevant sorting effect, but radix sort doesn’t use the special tool of pair comparison required for merge sort to function as it does. Still, it accomplishes an identical result with the given input, ignoring differences such as efficiency.

    Both things in this analogy are already discrete algorithms, but hopefully it’s not so hard to distill the bits of it that are relevant to my question.

    To rephrase: might it be that digital computers can reproduce humanlike consciousness, but not *everything* that could be effected with quantum effects in microtubules?
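    kirk’s analogy is easy to run. In the Python sketch below (my illustration, not from the comment), merge sort leans on pairwise comparison while LSD radix sort never compares two elements to each other, yet both return the same sorted list:

    ```python
    def merge_sort(xs):
        """Comparison-based: relies on comparing pairs of elements."""
        if len(xs) <= 1:
            return list(xs)
        mid = len(xs) // 2
        left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:          # the pairwise comparison
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]

    def radix_sort(xs, digits=4):
        """LSD radix sort on non-negative integers below 10**digits:
        never compares two elements, only inspects their digits."""
        for d in range(digits):
            buckets = [[] for _ in range(10)]
            for x in xs:
                buckets[(x // 10**d) % 10].append(x)
            xs = [x for bucket in buckets for x in bucket]
        return xs

    nums = [170, 45, 75, 90, 802, 24, 2, 66]
    # Same result by two mechanisms, one lacking the other's "special tool":
    assert merge_sort(nums) == radix_sort(nums) == sorted(nums)
    ```

    This is the structure of kirk’s question: identical input-output behavior achieved with and without a particular internal resource.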

  257. Stuart Hameroff Says:

    Hi everyone

    Happy Father’s Day to all the dads out there.

    fred #255
    (Stuart previously) “Quantum mechanics (Penrose OR) provides the conscious feelings which drive evolution. ”


    Fred
    My problem with this is that if consciousness were really a stand-alone causal force, you’d think that individuals (the units of consciousness) would be at the center of evolution, not the species.


    Stuart
    First, I think you’re ‘looking a gift horse in the mouth’. OR is the only specific mechanism ever proposed for subjectivity. And collapse/reduction IS causal, selecting one particular state from among others.

    Species are composed of many individuals, and individuals are composed of many, many units of consciousness, e.g. millions per second.

    Fred
    The same (fictitious) evolutionary forces seem at play at many levels – the genome, the cells within the body, the organism, the species, ideas and ideologies, civilizations, the earth as a biosphere itself (life and intelligence is nothing but the earth evolving a mechanism to preserve itself in the long run)… it’s just not clear why consciousness would be a special ingredient at just one level of this hierarchy (e.g. a beehive could be conscious).


    Stuart
    I’m not denying classical emergence effects occur in all kinds of systems. And there may be non-local quantum effects across scale. But they could be classical too.

    You are shrugging off the problem of how quantum biology can occur in ‘warm, wet, noisy’ systems, the fight we’ve been fighting for 20 years and have now solved with the Meyer-Overton quantum underground. In other words, consciousness occurs in these regions and NOT (in an organized, orchestrated way) willy-nilly across scales.

    So I’m not saying the earth is conscious as a whole. Are you?

    Fred
    Also, we just can’t ignore the amazing nature of the “evolution” of digital information systems.
Progress has been so fast that we can’t even guess how things would be in 50 years, … so imagine giving it an extra 1000 years (somehow each generation of scientists thinks it lives at the pinnacle of human progress).
Intelligent digital machines could very well be the next step in the evolution of life on earth. At the very least, we’re bound to see a tight integration of the organic and the digital.
It’s only a matter of time before “meat-minds” find a way to make themselves more resilient, adaptable, replicable.

    Stuart
    These are all hand-waving category errors in my opinion. I’m not doubting that AI will develop super-smart machines, and better brain-mind interfaces will ensue (at least when we get to interfacing at the microtubule level). But none of that implies consciousness.

    kirk #256
    So if it happens to be the case that human consciousness, down to and including qualia, is the result of things going on in microtubules affected by non-computable values, does this imply that something like human consciousness that experiences qualia can *only* be the result of such things?

    Stuart
    No. The critical feature is orchestrated objective reduction (Orch OR), which could conceivably happen in other systems. OR gives the qualia and non-computable values; Orch gives the cognitive meaning and content. You could possibly do it in fullerenes.

    kirk
    Would this necessarily rule out the same results being produced by digital computing devices?


    Stuart
    In my opinion, yes, unless they included quantum superpositions which could be orchestrated and undergo OR.

    kirk
    Here’s a possibly very simplistic analogy that hopefully will make the point of the question clear: merge sort and radix sort both sort lists of numbers. But radix sort doesn’t compare numbers directly to each other and doesn’t need to. 
The analog of human-like consciousness here is “sorting lists of numbers representable with k digits”, and the analog of quantum effects out of the reach of digital computers is comparing a pair of numbers. Both sorting algorithms accomplish the relevant sorting effect, but radix sort doesn’t use the special tool of pair comparison required for merge sort to function as it does. Still, it accomplishes an identical result with the given input, ignoring differences such as efficiency.
Both things in this analogy are already discrete algorithms, but hopefully it’s not so hard to distill the bits of it that are relevant to my question.

    Stuart
    Sounds like Searle’s Chinese room experiment. In your story ‘quantum effects’ could be a lot of things, but if you were to mean OR, then you’d be saying
    ‘processing with consciousness is the same as processing without consciousness’. But it’s not.

    
    kirk
    To rephrase: might it be that digital computers can reproduce humanlike consciousness, but not *everything* that could be effected with quantum effects in microtubules?

    Stuart
    Unless you’re talking about random proto-conscious events that occur in the woodwork, no. But cheer up, quantum computers with sufficient mass superposition might be conscious by Orch OR.

    Hell, Google’s D-Wave computer may already BE conscious!

    Cheers
    Stuart

  258. Martin Says:

    I have claimed for some time that the strange loop(s) of consciousness is a cybernetic ontological question.

    There are 3 strange loops of consciousness…

    1/ Not only in terms of possible quantum retro-causality in the CNS (the client – strange loop Number #1), but also in a possible physical panpsychism (the server).

    2/ This server provides a “Penrose Platonic Substrate” (PPS), emergent from the Planck scale underpinning all spacetime. Thus, enabling the emergence of an information “bus” of a Jungian Collective Unconsciousness to the clients. The strange loop #2 needed here supports the read/write binding to the client-server architecture (the bus).

    3/ Plus, there’s another strange loop #3 to cosmologically recursively connect the PPS (which evolves by a Quantum Darwinism process) from its eventual Omega Point (a complex of cosmic consciousness) to the Planck Era Alpha Point. This “bootstrapping” is a cybernetic ontological loop.

    This is a physical proposition leading to an integrated client-server panpsychism. No magical “pixie dust” required. No “explanatory gaps” left unfilled. It is analytically sound.

    It is experimentally falsifiable as a hypothesis, indeed some aspects have empirical support.

    This proposal is what I call the NNOrch SR theory of consciousness, featuring the strange loops #1, 2 and 3.

    See… https://facebook.com/notes/martin-ciupa/consciousness-solved-neural-network-orchestration-subjective-reduction-nnorch-sr/10154239863013599/

  259. Stuart Hameroff Says:

    Hi everyone

    Martin #258
    I have claimed for some time that the strange loop(s) of consciousness is a cybernetic ontological question.

    Stuart
    I like strange loops, hierarchical systems in which dynamics up or down in scale can wind up at the same place (like Escher drawings). Doug Hofstadter wrote about them for consciousness in his book “I Am a Strange Loop”. Bandyopadhyay’s temporal hierarchies in microtubule dynamics are very similar.

    Martin
    There are 3 strange loops of consciousness…
    1/ Not only in terms of possible quantum retro-causality in the CNS (the client – strange loop Number #1), but also in a possible physical panpsychism (the server).

    Stuart
    Hmmmm. I like backward time effects in the brain, e.g. as demonstrated by Libet in 1979, but I don’t call them ‘retro-causal’ because as a quantum effect they are merely ’correlations’ (though still useful). Roger Penrose’s idea that the universe is cyclical, and heat death is followed by a near infinite change in scale to result in a new ‘big bang’ is similar also. Panpsychism? Maybe, I guess, but the devil is in the details. Where exactly do the qualia appear?

    Martin
    2/ This server provides a “Penrose Platonic Substrate” (PPS), emergent from the Planck scale underpinning all spacetime. Thus, enabling the emergence of an information “bus” of a Jungian Collective Unconsciousness to the clients. The strange loop #2 needed here supports the read/write binding to the client-server architecture (the bus).

    Stuart
    OK, but again, the devil is in the details. What quantum biology in the ‘client’ connects to the ‘server’, and how? This is where Roger’s OR comes in.

    Martin
    3/ Plus, there’s another strange loop #3 to cosmologically recursively connect the PPS (which evolves by a Quantum Darwinism process) from its eventual Omega Point (a complex of cosmic consciousness) to the Planck Era Alpha Point. This “bootstrapping” is a cybernetic ontological loop.

    Stuart
    Again, the devil is in the details. What is quantum Darwinism?

    Martin
    This is a physical proposition leading to an integrated client-server panpsychism. No magical “pixie dust” required. No “explanatory gaps” left unfilled. It is analytically sound. It is experimentally falsifiable as a hypothesis, indeed some aspects have empirical support.

    Stuart
    I find these claims difficult to believe. There’s not enough science there to test, as far as I can see. What exactly are you claiming is falsifiable? I think you need some pixie dust.

    Martin
    This proposal is what I call the NNOrch SR theory of consciousness, featuring the strange loops #1, 2 and 3.
    See… https://facebook.com/notes/martin-ciupa/consciousness-solved-neural-network-orchestration-subjective-reduction-nnorch-sr/10154239863013599/

    Stuart
    You say in there:
    “Despite similarities to/inspiration from the Hameroff and Penrose Orch-OR, the ideas here are based on Neural Network Orchestration – with Subjective Reduction (NNOrch-SR). There are common emergent features of: Microtubule Qubits and access to the PPS.”

    Thanks for the acknowledgement, and imitation is the sincerest form of flattery. But, again the devil is in the details, and you seem to have eliminated all the details. Like,
    1) What is subjective reduction? (Sounds like dualist Copenhagen to me)
    2) What is the Penrose Platonic substrate? In our view it is spacetime geometry, and of course Roger is THE expert.
    3) What are your microtubule qubits? How do you solve/avoid decoherence (premature OR)?

    I’ll stick with ‘old-fashioned’ Orch OR, thanks

    Cheers
    Stuart

  260. Martin Says:

    Hello Stuart,

    Thanks for responding so promptly, I appreciate you’re a busy chap.

    Stuart (#256)
    I like strange loops, hierarchical systems in which dynamics up or down in scale can wind up at the same place (like Escher drawings). Doug Hofstadter wrote about them for consciousness in his book “I Am a Strange Loop”. Bandyopadhyay’s temporal hierarchies in microtubule dynamics are very similar.

    Martin
    Yes, I am not suggesting priority on the concept of a “strange loop”. I enjoyed Hofstadter’s book and am familiar with Bandyopadhyay’s hierarchies. My hierarchies/loops (3 of them) are:

    a) within the CNS, albeit non-locally in spacetime;
    b) between the CNS and what I call the PPS underlying realm;
    c) a cosmological ontological loop from Omega Point, as per Pierre Teilhard de Chardin’s notion of a complex of cosmic consciousness, to Alpha point, Planck Era (similar in effect to Wheeler’s Participatory Anthropic Principle).

    I suggest this may be novel in that detail.

    Stuart
    Hmmmm. I like backward time effects in the brain, e.g. as demonstrated by Libet in 1979, but I don’t call them ‘retro-causal’ because as a quantum effect they are merely ’correlations’ (though still useful). Roger Penrose’s idea that the universe is cyclical, and heat death is followed by a near infinite change in scale to result in a new ‘big bang’ is similar also. Panpsychism? Maybe, I guess, but the devil is in the details. Where exactly do the qualia appear?

    Martin
    I refer to the quantum retrocausality of quantum eraser experiments. As a reminder…
    #1 http://www.anu.edu.au/news/all-news/experiment-confirms-quantum-theory-weirdness

    A conscious future observation of a past decision recorded in the narrative of the mind, might activate Neural Network pattern recognition, causing coherence/collapse of those past quantum states. Future choice in observation may activate physical processes that can cause a consistent past event to be. This is a form of Agent causation (as opposed to Event causation).

    I am not opposed to Penrose’s CCC, the notion of Entropic Big Freeze, interesting ideas.

    As for Panpsychism – Qualia, as common experiences may become encoded into the geometry of spacetime and become available to Whitehead prehension (this maps to the function of a Jungian Collective Unconsciousness), in this sense Qualia are Jungian Archetypes. I will be mentioning Quantum Darwinism again a little later to support that notion.

    Stuart
    OK, but again, the devil is in the details. What quantum biology in the ‘client’ connects to the ‘server’, and how? This is where Roger’s OR comes in.

    Martin
    The devil is always in the details, yes! And with my note being a research agenda in progress, I freely admit more detail is required (I say that in my note upfront!).

    But to answer your question as best I can at this time… I think the quantum biology is in the microtubule assemblies across the CNS. This PPS is the foundational layer in spacetime; it is the emergent geometry tiling the plane. Hence why I attribute it as the “Penrose-Platonic Substrate”. Remember this substrate is everywhere-when at the Planck scale. It is there within the microtubules. Binding can occur by scalar/conformance invariance of quantum entanglement. This feature “creates” spacetime, and it is conceivable that microtubule qubits entangle with this PPS.

    The resulting architecture is a client-server one. The client is the human CNS and the server is the PPS. In Pauli-Jung terms there is an “acausal connection principle” (Synchronicity) between them, which in actuality is a binding of quantum states.

    # 2 http://www.nature.com/news/the-quantum-source-of-space-time-1.18797

    Stuart
    Again, the devil is in the details. What is quantum Darwinism?

    Martin
    Quantum Darwinism is a relatively recent concept (see #3), in line with the Constructal Theory of Adrian Bejan (see #4). In a nutshell, it considers how random fluctuations in the superimposed Planck scale of spacetime might form stable, self-replicating structures that perpetuate and evolve. The concept of negative entropy flow applies, whereby these structures form the motes of Boltzmann Brains and an emergent Physical Panpsychism. As interactions between the client and server accumulate, archetypal universal forms are fractally reinforced; these are in part the visceral root of consciousness, Qualia.

    #3 http://www.nature.com/nphys/journal/v5/n3/full/nphys1202.html
    #4 http://mems.duke.edu/bejan-constructal-theory

    Stuart
    I find these claims difficult to believe. There’s not enough science there to test, as far as I can see. What exactly are you claiming is falsifiable? I think you need some pixie dust.

    Martin
    No. I don’t need “pixie dust” (it would be a fail if I did). See David Pearce on this point; he is quite explicit, and he specifies a proposed test (#5; he has commented on this thread). Your experimental work on microtubules, and that inspired by you, applies to the client component. Indeed, Dean Radin’s work on conscious interaction with fringe patterns in double slits applies too, as I appreciate you are aware. As a PHYSICAL panpsychism it is only workable through physics. Pixie dust is toxic to it!

    #5 http://www.physicalism.com/

    As for tests at the Planck Level, these are also being proposed, see, #6
    #6 https://arxiv.org/abs/1605.04167

    Stuart
    You say in there (referring to my research note):
    “Despite similarities to/inspiration from the Hameroff and Penrose Orch-OR, the ideas here are based on Neural Network Orchestration – with Subjective Reduction (NNOrch-SR). There are common emergent features of: Microtubule Qubits and access to the PPS.” Thanks for the acknowledgement, and imitation is the sincerest form of flattery. But, again the devil is in the details, and you seem to have eliminated all the details. Like,
    1) What is subjective reduction? (Sounds like dualist Copenhagen to me)
    2) What is the Penrose Platonic substrate? In our view it is spacetime geometry, and of course Roger is THE expert.
    3) What are your microtubule qubits? How do you solve/avoid decoherence (premature OR)?

    Martin
    I have been inspired by your work with Penrose, and acknowledge that, but it is not so much imitation as a new take in certain areas.

    1) The Subjective Reduction is standard quantum decoherence in a Many Minds Interpretation of QM (a variant of MWI, see #7); I did mention that in the note’s detail.

    #7 http://plato.stanford.edu/entries/qm-manyworlds/

    2) I do describe the PPS in the note, and have done so here again. Yes, it is spacetime geometry, and that is why the first “P” is for Penrose. Yes, Penrose is indeed the expert there.
    3) I do not think there is a proof of OR by quantum gravity. Till we have experimental proof, I don’t have to address a process that may not occur. As for my understanding, microtubules take on quantum digital states, i.e., qubits. Given the vast number of microtubules within a neuron, some will lose coherence while others maintain it; there is redundancy. Memories do degrade in time; sometimes new memories replace the old ones, by way of patching and interpolating perhaps.

    It’s only natural for you to “stick to” Orch OR until you can see a better path (I offer mine to you). Orch-OR has undergone revisions and updates in the past, after all, right?

    Given your questions I have moved forward my understanding of NNOrch-SR, I am grateful for that, it’s not that I am certain this is the right idea, I do dare to think it has some value nevertheless.

    Happy to work with you further. 

    Regards
    Martin

  261. Martin Says:

    PS: the referenced message number in my last post is in error; it relates to Stuart’s #259

  262. Martin Says:

    Stuart,

    I found this explanation of Quantum Darwinism accessible and useful…

  263. Peter Gerdes Says:

    Just a quick remark on how weird consciousness (or actually qualia) truly are.

    Either you believe in panpsychism (there is something which it’s like to be a skin cell, which is weird enough by itself but immediately raises the question of why you aren’t actually the being consisting of both your left-hemisphere processes and those of your dog (or some unconscious part of your own brain)).

    Or, as I take you to do, you must believe there is something which distinguishes the computation done by your conscious brain from, say, the computation being done by your blogging software. Since it’s not a logical fact that a certain functional fact must produce qualia (you can coherently imagine a world where you behave in exactly the same ways but it doesn’t feel like anything to be you, despite your protestations to the contrary), this must be the result of a fundamental physical law (well, epi-physical, since it is an epiphenomenon … even if this notion doesn’t really make sense, as it is dependent on the choice of representation for your laws).

    But fundamental laws of physics/epi-physics really shouldn’t apply to things at the level of brains. That’s like a special fifth force that only applied to toasters and couldn’t be represented as the combination of many forces applying to its pieces.

    Either way is really weird.

  264. Ben Standeven Says:

    @Peter Gerdes, #263:

    What is it like to be a thing that it isn’t like anything to be?

  265. Ben Standeven Says:

    @Peter Gerdes, #263:

    Come to think of it, there’s another hole in this argument: it isn’t a logical fact that hitting the brakes on a car causes it to stop, and it also isn’t a fundamental law of physics. But not every object has brakes. So are brakes and cars really weird?

  266. mjgeddes Says:

    Hi Peter,

    I believe I have solved the ‘hard problem’ of consciousness: no doubt or ambiguity remains.

    The idea itself is really very simple and crystal clear – it can be described in less than 400 words. It is only hard to grok at first because it is totally outside the range of ‘normal thinking’ – it is just something right out of ‘left field’.

    I already tried to explain it earlier in the thread, but apparently no one ‘got it’, so I’ve carefully reworded the explanation and shortened and simplified it as much as possible. Again, if readers would just read it through and reflect carefully on the idea, they should see that it’s really very simple and clear.

    Here then is the solution to the ‘hard problem’ of consciousness once again:

    Abstract: ‘Solution To The Hard Problem Of Consciousness’

    I propose that there are 3 ontologically distinct properties in reality: information, consciousness and matter

    I take the proposed 3 elements of reality (information, consciousness and matter) and *combine* them into pairs to form a composite property. Then for each pairing, I define the remaining 3rd element to be equivalent to (composed of) this composite, leading to an infinite regress (a circular loop where each element ‘contains’ the other 2). Then we have;

    Information = {consciousness, matter}
    Consciousness = {information, matter}
    Matter = {consciousness, information}

    Each element of reality can be viewed in *two* different ways: *one* way is to view it as something that really exists ‘out there’ as an objective ‘thing’ in itself (a holistic viewpoint, the left-hand side of the equations above). And it *is* that! The *other* way is to view it as something that is not a real thing in itself, but is built out of a combination of the other elements – we can break it into parts (a reductionist viewpoint, the right-hand side of the equations above). And it *is* that too!

    To avoid paradox, I propose that reality can be perceived in 3 different, but equally valid, ways, according to the equations above (so the idea is that how reality is defined is relative to an observer). But only 2 elements can be consistently used together at the same time by a given observer to describe reality (the elements on the right-hand side of the equations). The element on the left-hand side of the equations vanishes or dissolves for the observer who carves up reality according to the equation, and this ensures that there’s no paradox.

    To summarize, what I’m proposing is a totally circular, infinite regress, where there are 3 ontologically distinct elements of reality (information, consciousness, matter), and each element contains (or is composed of) the other 2. I avoid paradox by stipulating that how reality is defined is relative to the observer; there are 3 distinct, equally valid viewpoints, but only 2 elements can be used at once for each viewpoint. This allows each element to play a double role: it can function as *either* an objectively existing property external to the observer, *or* something that is partially subjective and a construct of the observer.

  267. The Cloned-Consciousness-as-Continuous-Consciousness Fallacy » Exolymph Says:

    […] of Aaronson’s relevant post is about quantum physics’ implications on the nature of consciousness, which I thoroughly do not understand. But then there’s an idea within the larger context […]

  268. fred Says:

    Stuart,

    “However Roger proposed such separations would be unstable, and undergo reduction to a single state at an objective threshold, hence objective reduction, OR. […]
    OR puts consciousness into reality, avoids multiple worlds, and turns Copenhagen upside down. Rather than consciousness causing collapse, collapse causes consciousness (or is equivalent).”

    What do you mean by “avoids MW”? If those OR events didn’t occur, then multiple worlds would exist?
    I.e. if a big portion of space is devoid of consciousness/OR, would it evolve according to MW?
    But once OR enters the picture, is it self-reinforcing (self-preserving)? I.e. amongst all the possible immediate futures, is the collapse always favoring the ones sustaining consciousness/OR? Something like this would explain why life/consciousness “just happens”, eventually?
    But then what happens if two regions of space that have been separated and independently saw the rise of consciousness suddenly come in contact? What if they’ve been collapsing different branches of the MW tree? (not sure if that question makes much sense..)

  269. Douglas Knight Says:

    Correction to 223. The original connectome did have the orientation for 80% of connections. I think that’s a pretty small number. But I was probably confusing that with its lack of determination of which were excitatory and which inhibitory. In 2012 Janelia published some new data which may include weights, or at least signs.

  270. Stuart Hameroff Says:

    Hi everyone

    Thanks to Martin for his comments to which I shan’t respond.
    He’s basically morselizing and declawing Orch OR and replacing it with a watered-down version, in my opinion.

    I asked about Quantum Darwinism and he sent this link of Zurek
    https://www.youtube.com/watch?v=27zMdaBgt6g

    I know Zurek is great and all, but this is silly. Nature seeks the most stable quantum states, which survive? So then thermodynamics is Darwinian too? What isn’t? Aren’t you trivializing Darwin?

    I also think decoherence isn’t real, just mistaken for OR.

    Stuart (previous)
    “However Roger proposed such separations would be unstable, and undergo reduction to a single state at an objective threshold, hence objective reduction, OR. […]
    OR puts consciousness into reality, avoids multiple worlds, and turns Copenhagen upside down. Rather than consciousness causing collapse, collapse causes consciousness (or is equivalent).”

    Fred
    What do you mean by “avoids MW”? If those OR events didn’t occur, then multiple worlds would exist?

    Stuart
    Yes. Some collapse mechanism is needed to avoid MW.

    OR describes superposition as separations in space-time geometry. Were these to continue, MW would result. MW desperately needs spacetime separation but doesn’t acknowledge it.

    Fred
    I.e. if a big portion of space is devoid of consciousness/OR, would it evolve according to MW?

    Stuart
    That’s the problem with the observer effect, not OR, which occurs everywhere.

    Fred
    But once OR enters the picture,

    Stuart
    OR was always in the picture. Life came along later.

    Fred
    is it self-reinforcing (self-preserving)? I.e. amongst all the possible immediate futures, is the collapse always favoring the ones sustaining consciousness/OR? Something like this would explain why life/consciousness “just happens”, eventually?

    Stuart
    Life formed and evolved to access and optimize OR feelings present in the universe. So yes.

    Fred
    But then what happens if two regions of space that have been separated and independently saw the rise of consciousness suddenly come in contact?

    Stuart
    They don’t evolve separately after OR/consciousness occurs.

    Fred
    What if they’ve been collapsing different branches of the MW tree? (not sure if that question makes much sense..)

    Stuart
    It doesn’t. Forget MW. It’s fantasy.

    cheers
    Stuart

  271. Martin Says:

    Stuart (#270),

    With regards to some of your objections to the MWI of QM, please see…

    http://www.preposterousuniverse.com/blog/2015/02/19/the-wrong-objections-to-the-many-worlds-interpretation-of-quantum-mechanics/

    The ideas of “world branching” and unnecessary complexity are passé objections. Coherence/decoherence into the single paths of the Everettian universal wavefunction is the correct way to see it.

    Furthermore, the Many Minds Interpretation (a variant of the MWI) has just as many “coherent worlds” as observers (one for each unique observer); it’s parsimonious in that respect.

    Thanks.

  272. fred Says:

    Stuart, thanks for your answers!

  273. Stuart Hameroff Says:

    Hi everyone

    Thanks Fred for your acknowledgement.

    Martin defends MWI by saying it doesn’t imply multiple worlds at all. But his mistake (and Sean Carroll’s) is taking Hilbert space as something real, rather than a mathematical convenience.

    Carroll’s piece concludes:

    “…Despite my efforts and those of others, it’s certainly possible that we don’t have the right understanding of probability in the theory, or why it’s a theory of probability at all. Similarly, despite the efforts of Zurek and others, we don’t have an absolutely airtight understanding of why we see apparent collapses into certain states and not others.”

    Stuart
    This all points to the wisdom and explanatory power of Penrose OR which solves the measurement problem, introduces consciousness into the universe, connects QM and GR, and avoids multiple worlds.

    cheers
    Stuart

  274. Martin Says:

    Hi Stuart (#273)

    In general you can talk about the mathematical models of Hilbert space, or Minkowski space or Penrose diagrams. They are all mathematical conveniences. Some map reality better than others.

    The Everettian Universal Wavefunction can be mathematically modeled many ways. It may be “real” (and, as Sean says with appropriate scientific humility, it may not be). The key is that the UWF may simply be a fact. It may be possible (as I have said earlier) that it naturally emerges as an image, fractally represented in the physical panpsychism from Planck-level physics, through a Quantum Darwinism process, to become a cosmic consciousness. And as a client-server architecture we prehend it in part and gain aspects of our holistic consciousness from it accordingly (one of the 3 strange loops I’ve mentioned earlier).

    None of that requires new radical physics by the way, just an integration.

    As such it’s a viable hypothesis. It is experimentally viable to investigate what structures exist or don’t exist at that level. It’s falsifiable. It is on the agenda of physicists to examine the Planck level (I earlier gave a link to some agendas in that respect). We shall see.

    You have faith in your Orch OR model, and the hunch that Penrose is correct (though it is not a mainstream model at this time – it is fringe). His model is an analytical one, maybe correct, maybe not – there’s no experimental evidence as yet to my knowledge to make that decision.

    But it does need radical new physics, by the way, and a radical redrawing of how we see Quantum Mechanics/GTR.

    My suggestion is based on taking …

    1/ the valuable pioneering work you have done on the Quantum Biology of microtubules.
    2/ Also the valuable work done by neuroscience to support the possible orchestration of a Subjective Reduction.
    3/ And the valuable work done by a very large number of physicists who are also experts in MWI Quantum Mechanics and the GTR, that may provide the basis for a physical panpsychism offering qualia prehension and a means to a collective unconscious.

    The result is the NNOrch-SR theory of consciousness.

    Perhaps it is wrong. Perhaps you are, for not crediting the possibility it is right.

    I am not your antagonist, I support much of the work you’ve done. In all humility I offer to you my notions to improve the model.

    After this post I won’t pursue this thread any longer. Though I’m happy to take it up directly with you if you like.

    Regards – Martin

  275. Martin Says:

    PS: The fractal imaging of the Everettian UWF into a physical panpsychism may be a way of integrating a MWI and Bohmian Mechanics perspective.

    A pilot wave of Bohmian Mechanics might be the “deliverable” of a non-local cosmic consciousness, it is a kind of implicate order that David Bohm talked about.

    If so, and this cosmic consciousness (server) is a fractal of the UWF as it interacts with individual consciousnesses (clients), then holistically it can be mathematically modeled as an MMI (a variant of a MWI), whereby each observer has a unique “world” view supported by the cosmic consciousness.

  276. mjgeddes Says:

    Stuart,

    Anything is possible in this strange universe, so it’s great that people such as yourself and Roger are trying to think outside the box and come up with new ways of thinking.

    But as I said earlier in the thread, I feel that Roger has tried to “force fit” things that have *no* physical interpretation into a physical picture, and it just doesn’t work.

    The ‘wave function’ is not a physical entity; it’s pure information. And ‘consciousness’ isn’t a physical property either, in my view, so there’s no reason why it should have a literal physics interpretation. Regarding R (wave function collapse), again, Roger’s thinking is too concrete: he searches for a literal physical meaning for something that, as I see it, isn’t physical at all, but takes place only in the mind of the observer.

    Once you realize that neither U (unitary evolution), nor R (reduction) are physical, the Many-Worlds-Interpretation (in fact, a variation of MWI known as Many-Minds actually) falls out quite naturally.

    There is no ‘objective reduction’ my friend.

    All interpretations other than MWI are based on the same misconceptions I mention above. They try to “force-fit” things that have *no* physical interpretation into a physical picture, and the result (unsurprisingly) is nonsense.

    Cheers

  277. Ben Standeven Says:

    @mjgeddes, #276:

    Not quite; because the mind in the “Platonist” picture is neither physical nor mathematical, the interpretation that falls out of it is not a “Many Minds Interpretation” but a “Many Descriptions Interpretation.” Or, as it is better known, the Copenhagen Interpretation.

  278. mjgeddes Says:

    Ben,

    The Copenhagen style interpretations make exactly the *opposite* mistake to the physical interpretations, and are every bit as flawed. They are certainly not a Platonist picture.

    Copenhagen-style interpretations agree that the wave-functions and collapse aren’t physical, but try to say they are entirely subjective: here the wave function is just regarded as a purely calculational device representing our subjective state of knowledge, and isn’t ontologically ‘real’. These interpretations are incoherent because they don’t let us talk about objective reality: everything is described from the reference point of ‘observers’, but the universe existed long before ‘observers’ did.

    By contrast, the Platonist picture takes the wave-function to be a mathematical object (of pure information) that is objectively real (and exists outside the minds of observers).

    Minds still play an essential role in the sense that they are needed to *interpret* the information in the wave-function, but the information itself is external to minds (whereas in Copenhagen the information is within minds).

  279. Ben Standeven Says:

    @mjgeddes, #278:

    You’re right; this is really the von Neumann setup rather than the Heisenberg/Bohr one. I always confuse the two, but a Platonist wouldn’t.

  280. Stuart Hameroff Says:

    Hi everyone

    Martin #275
    In general you can talk about the mathematical models of Hilbert space, or Minkowski space, or Penrose diagrams. They are all mathematical conveniences. Some map reality better than others.


    Stuart
    The distinction between mathematical and geometric descriptions of reality is the difference between the map (mathematics) and the territory (geometry). Roger Penrose has contributed the bulk of the ideas about reality: spacetime geometry as spin networks, quantum gravity, twistors, etc.

    Martin
    The Everettian Universal Wavefunction can be mathematically modeled many ways. It may be “real” (and, as Sean says with appropriate scientific humility, it may not be).

    Stuart
    Penrose takes the wavefunction to be real, specifically as evolving separations in the geometry of spacetime.

    Martin
    The key is that the UWF may simply be a fact. It may be possible (as I have said earlier) that it naturally emerges as an image, fractally represented in the physical panpsychism from Planck-level physics, through a Quantum Darwinism process, to become a cosmic consciousness.

    Stuart
    Sorry, but as I said before, the devil is in the details. This sounds like hand-waving ‘woo-woo’. Orch OR is based on neurobiology, spacetime geometry, and OR by E=h/t (Planck level physics).

    I remain unimpressed by quantum Darwinism, even if you capitalize it.
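For context, the E=h/t shorthand above refers to Penrose’s published objective-reduction timescale, which (in the standard Orch OR papers) relates a superposition’s lifetime to its gravitational self-energy; a sketch, using the notation of those papers:

```latex
% Penrose OR collapse-time estimate (standard Orch OR formulation):
% a superposition whose branches differ by gravitational self-energy E_G
% is proposed to self-collapse on a timescale tau of roughly
\tau \approx \frac{\hbar}{E_G}
% Larger E_G (bigger superposed mass displacement) means faster
% objective reduction; small, well-isolated systems persist far longer.
```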

    Martin
    My suggestion is based on taking …
    1/ the valuable pioneering work you have done on the Quantum Biology of microtubules.
    2/ Also the valuable work done by neuroscience to support the possible orchestration of a Subjective Reduction.


    Stuart
    What work, pray tell, is that???? Subjective reduction seems to mean Copenhagen, with consciousness coming in from somewhere to collapse wavefunctions without saying how or why. This is a step backwards, not forwards.

    Martin
    3/ And the valuable work done by a very large number of physicists who are also experts in MWI Quantum Mechanics and the GTR,

    Stuart
    Penrose OR merges GTR with QM to avoid MWI. None of your experts do that.

    Martin
    that may provide the basis for a physical panpsychism offering qualia prehension and a means to a collective unconscious.
    The result is the NNOrch-SR theory of consciousness.


    Stuart
    This makes no sense to me. At what level does physical panpsychism act? Molecules, atoms, sub-atomic particles…? What about the quantum/classical border? And I still don’t know what SR means other than Copenhagen. And what does ‘Orch’ mean (other than ripping off Orch OR)?

    mjgeddes #276
    Stuart,
    Anything is possible in this strange universe, so it’s great that people such as yourself and Roger are trying to think outside the box and come up with new ways of thinking.

    Stuart
    Those inside the box have failed to explain the brain, measurement and just about everything else.

    mjgeddes
    But as I said earlier in the thread, I feel that Roger has tried to “force fit” things that have *no* physical interpretation into a physical picture, and it just doesn’t work.
    The ‘wave function’ is not a physical entity; it’s pure information.

    Stuart
    I don’t see how you can have information without it being attached to some physical entity. The wave function is a process in the structure of spacetime geometry, below the level of matter. It may not be material, but it is indeed physical.

    mjgeddes
    And ‘consciousness’ isn’t a physical property either in my view, so there’s no reason why it should have a literal physics interpretation.

    Stuart
    It’s not a property; it’s a physical process, a sequence of events.

    mjgeddes
    Regarding R (wave function collapse), again, Roger’s thinking is too concrete: he searches for a literal physical meaning for something that, as I see it, isn’t physical at all, but takes place only in the mind of the observer.


    Stuart
    Which isn’t real either, according to you, so … “Nothing is real, there’s nothing to get hung up about … Strawberry Fields Forever” (old Beatles song).

    mjgeddes
    Once you realize that neither U (unitary evolution), nor R (reduction) are physical, the Many-Worlds-Interpretation (in fact, a variation of MWI known as Many-Minds actually) falls out quite naturally.


    Stuart
    …the last refuge of the bewildered.

    mjgeddes
    There is no ‘objective reduction’ my friend.


    Stuart
    So here you are, a sequence of orchestrated objective reductions of the microtubules in your brain, telling me there are no objective reductions.

    mjgeddes
    All interpretations other than MWI are based on the same misconceptions I mention above. They try to “force-fit” things that have *no* physical interpretation into a physical picture, and the result (unsurprisingly) is nonsense.
Cheers

    Stuart
    OK, you’re saying that if we get rid of physical reality and consciousness, then we have MWI. I’d rather have physical reality and consciousness with OR. You’re choosing ‘Strawberry Fields Forever’, rejecting a proper, plausible explanation (OR) based on Planck-scale physics, which you dismiss as ‘force fit’ though it’s based on the uncertainty principle, a staple of quantum mechanics and reality.

    cheers
    Stuart

  281. Martin Says:

    Stuart (#280).

    I won’t address the misunderstandings in your response, since I have already, and it is just becoming an exercise in recursion.

    A parting reference to help you (and any others who may benefit) come to terms with Quantum Darwinism. See…

    http://arxiv.org/abs/1001.0745

    As for “woo woo”, you should note that the physics community by and large thinks that OR is “woo woo”.

    Scott Aaronson points out, in the main article on which these comments appear, that Penrose’s conception is a radical view of physics. I don’t think you’ve said anything to persuade a quantum physicist that it is anything else, or to challenge their positive attitude towards MWI (or the MMI variant).

    Unless experiments show OR has any validity, Orch-OR is not going to get traction; in that case something like NNOrch-SR may be the only way forward, and research on it will continue.

    You are a hostage to fortune, betting that Penrose’s radical ideas will pan out (since I gather from your responses that the quantum aspects of Orch-OR are delegated in full to Penrose, and there is no other avenue/will for considering alternatives). It’s a kind of faith in Sir Roger – good luck!

    And with that said – I’ll sign out! 🙂

    Regards – Martin

  282. Stuart Hameroff Says:

    Hi everyone

    Martin #281

    Stuart (#280).
    I won’t address the misunderstandings in your response, since I have already, and it is just becoming an exercise in recursion.


    Stuart
    I’ve not misunderstood you. I just don’t agree with you.

    Martin
    A parting reference to help you (and any others who may benefit) come to terms with Quantum Darwinism. See…
    http://arxiv.org/abs/1001.0745


    Stuart
    I stand by my previous comment that Quantum Darwinism is silly. The link says that everything depends on the properties of the entangled environment, when we don’t understand entanglement. Basically, every physical process driven by energy minimization or entropy would qualify. How does this help in any way?

    Martin
    As for “woo woo”, you should note that the Physics community by and large thinks that OR is “woo woo”.

    Stuart
    I do note that. And I also note that the physics community by and large swallows MWI because they ignore consciousness and reality. MWI isn’t testable.

    Martin
    Scott Aaronson points out, in the main article on which these comments appear, that Penrose’s conception is a radical view of physics.

    Stuart
    That doesn’t mean it’s wrong. The non-radical view leaves out consciousness, can’t reconcile QM and GR, and is forced to postulate multiple worlds which, you now say, aren’t even multiple worlds. The non-radical view is like the view that the earth is flat and the sun revolves around it. Yes, Roger is Galileo (though he’s much too modest to make such a claim). I suggest you buy his new book when it comes out, Fashion, Faith, and Fantasy in the New Physics of the Universe.

    Martin
    I don’t think you’ve said anything to persuade a quantum physicist that it is anything else, or to challenge their positive attitude towards MWI (or the MMI variant).


    Stuart
    If you don’t want to consider consciousness, reality, or QM/GR reconciliation, then MWI (or its non-MW variant) is fine. But that leaves you nowhere. I’m reminded of yet another Beatles song, ‘Nowhere Man’.

    Martin
    Unless experiments show OR has any validity, Orch-OR is not going to get traction; in that case something like NNOrch-SR may be the only way forward, and research on it will continue.


    Stuart
    OR is testable. MWI is not testable. I still don’t know what NNOrch-SR actually is, but I’d prefer you rename it so nobody mistakenly believes it has something to do with Orch OR.

    Martin
    You are a hostage to fortune, betting that Penrose’s radical ideas will pan out (since I gather from your responses that the quantum aspects of Orch-OR are delegated in full to Penrose, and there is no other avenue/will for considering alternatives). It’s a kind of faith in Sir Roger – good luck!
And with that said – I’ll sign out!

    Stuart
    You keep promising to do so. But let me correct one thing. The ‘quantum aspects of Orch OR’ include the essential notion that microtubules can exist in quantum states for biologically relevant time scales. When we proposed this in 1995/1996 we were ridiculed, because quantum computers were built at temperatures near absolute zero to avoid decoherence, and the brain was considered too ‘warm, wet and noisy’ for seemingly delicate quantum states. Tegmark claimed that microtubules would decohere in 10^-13 seconds, but was off by 9 orders of magnitude, theoretically and experimentally.

    And now we (myself, Jack Tuszynski and Travis Craddock, based on Anirban Bandyopadhyay’s experiments) have figured out HOW this occurs. The clue came from the mechanism by which anesthetics prevent consciousness, acting in non-polar, hydrophobic regions of proteins, specifically microtubules, as shown by the Meyer-Overton correlation, leading us to propose the ‘Meyer-Overton quantum underground’ as the source of quantum biology. Penrose OR is the icing on the cake, but first we had to explain quantum biology. And we have. For example, see http://www.ncbi.nlm.nih.gov/pubmed/25714379

    We’re making progress.

    Stuart

  283. David Pearce Says:

    Stuart, thanks for clarifying Orch-OR and binding. When most people consider Orch-OR, I suspect they think it’s (just) all about how Gödel-unprovable results are supposedly provable by human mathematicians – certainly, Gödelian arguments were central to Roger Penrose’s earlier work, which was most older readers’ introduction to Orch-OR. Leave aside the technical arguments for a moment. The obvious “general” objection is: why should evolution via natural selection care about the cognitive capacities of a few abnormally smart human mathematicians? Are the rest of us somehow less conscious? By contrast, any theory that can explain phenomenal binding explains an ancient and ubiquitous adaptation in the vertebrate lineage and beyond. Unusual neurological syndromes where phenomenal binding partially breaks down (e.g. “motion blindness”, simultanagnosia, florid schizophrenia, etc.) hint at its extraordinary and unexplained computational power. If (1) phenomenal binding really is classically impossible, and (2) Chalmersian dualism, McGinn’s “mysterianism”, spooky “strong” emergence and so forth are a counsel of despair, then organic minds like us have been quantum computers since the early Cambrian – or whenever phenomenal binding in the CNS first evolved. Which leaves two explanatory options…

    (1) “No-collapse” QM. Schlosshauer’s (cf. http://faculty.up.edu/schlosshauer/publications/annals.pdf) calculations of credible decoherence times for neuronal superpositions in the CNS are even more rapid than Tegmark’s: attoseconds or less. Attosecond superpositions of distributed neuronal feature-processors are an “obviously” ludicrous explanation of perceptual unity. So why a plea for experimental (dis)confirmation rather than acceptance of the obvious reductio ad absurdum? After all, evolution operates over millennia, and synaptic neurotransmission operates over milliseconds, not femtoseconds or attoseconds. Such a dynamical timescale is hopelessly wrong for neuronal superpositions to be a viable candidate for the necessary perfect structural match between phenomenology and (ultimately) physics that monistic physicalism entails. The reason that this loophole needs to be closed experimentally IMO rather than “philosophically” is that quantum Darwinism (i.e. the decoherence program in post-Everett QM pioneered by Zeh, Zurek, et al.) yields an unremitting selection mechanism of almost unimaginable power – ceaselessly playing out on an almost unimaginably fine-grained timescale. Yes, I know that this line of thought sounds insane; but it’s neither more nor less insane than the no-collapse QM from which it springs. (cf. Robin Hanson on “mangled worlds” http://mason.gmu.edu/~rhanson/mangledworlds.html – though in this context I’d say “mangled world-simulations”.) If applied to the CNS, quantum Darwinism is _not_ just some tricksy metaphor, as we might naively suppose. I have to agree with Martin here. Perhaps see John Campbell’s “Quantum Darwinism as a Darwinian process”: https://arxiv.org/ftp/arxiv/papers/1001/1001.0745.pdf. In conjunction with non-materialist physicalism, quantum Darwinism offers a potential answer to the ostensible gross structural mismatch that drives David Chalmers to dualism. 
    Sure, intuitively, all that next-generation interferometry probing the CNS is likely to discover [due to effective loss of phase coherence to the environment] is functionally irrelevant “noise”, not a perfect structural match. Yet a “Schrödinger’s neurons” conjecture does precisely what Scott admonishes disbelievers in digital sentience to do [“…what I think all AI skeptics should do, which is: suggest some actual physical property of the brain that, if present, would make it qualitatively different from all existing computers, in the sense of violating the Church-Turing Thesis.”] Orch-OR does so too. Both need to be experimentally falsified.

    (2) A “dynamical collapse” conjecture. I confess that previously I’d simply assumed that any semi-classical approach like Orch-OR can’t explain why we’re not just patterns of Jamesian “mind-dust” – with or without quantum processing in microtubules. However, _if_ Orch-OR can really explain phenomenal binding, then this success would be an extremely strong selling-point for Orch-OR – certainly for anyone repelled by the implications of the unmodified and unsupplemented unitary dynamics. [It beats me how anyone can take Everett seriously and stay sane.] However, I’m not sure I understand how it will work. Stuart, have you got a URL so I can wrap my head around it?

    One reason I’m quite pessimistic about progress is that a surprising number of extremely intelligent people (e.g. Max Tegmark, Nick Bostrom, Scott(?)) view phenomenal binding as a mere puzzle, or simply lump binding together with the Hard Problem, or alternatively, don’t believe that phenomenal binding is a problem in the first instance. Why contemplate bizarre solutions to a non-existent mystery? Compare Max Tegmark’s brisk two-paragraph dismissal (4.4.3) in “Why the brain is probably not a quantum computer”: http://www.sciencedirect.com/science/article/pii/S0020025500000517. For technical (and “philosophical” – https://www.quora.com/Why-does-anything-exist-1) reasons, I’m still sceptical that a dynamical collapse theory will work. Critically, however – unlike most of its rivals – Orch-OR is experimentally falsifiable, still (I hope) the hallmark of good science.

  284. Stuart Hameroff Says:

    Hi everyone

    David Pearce #283
    Stuart, thanks for clarifying Orch-OR and binding. When most people consider Orch-OR, I suspect they think it’s (just) all about how Gödel-unprovable results are supposedly provable by human mathematicians – certainly, Gödelian arguments were central to Roger Penrose’s earlier work, which was most older readers’ introduction to Orch-OR. Leave aside the technical arguments for a moment. The obvious “general” objection is: why should evolution via natural selection care about the cognitive capacities of a few abnormally smart human mathematicians? Are the rest of us somehow less conscious?

    Stuart
    I think these arguments miss the point. In addition to Gödel-unprovable results and binding, Orch OR ‘solves’ the hard problem by stipulating that collapse includes subjective qualia, or feelings. Random environmental OR events have random, unbound, non-cognitive (‘proto-conscious’) qualia. Orchestration provides cognition, meaning and (through entanglement and coherence) binding, as well as Gödel-unprovable results – consciousness.

    If every OR event (which in our view replaces ‘decoherence’) has such qualia, then these random proto-conscious feelings are ubiquitous, and have been in the universe all along, including prior to the origin and evolution of life. Roger and I hint at this in our recent Orch OR review/update, and I’ve developed the idea further in a chapter in press, ‘The quantum origin of life – How the brain evolved to feel good’. The idea is that in the primordial soup, life began in non-polar micelle-like structures to optimize feelings, and that behavior supporting evolution is driven not by survival of genes, but by optimizing pleasure. This solves a lot of problems and makes more sense. Heresy to be sure, but more logical than behavior without reward.

    David
    By contrast, any theory that can explain phenomenal binding explains an ancient and ubiquitous adaptation in the vertebrate lineage and beyond. Unusual neurological syndromes where phenomenal binding partially breaks down (e.g. “motion blindness”, simultanagnosia, florid schizophrenia, etc.) hint at its extraordinary and unexplained computational power. If (1) phenomenal binding really is classically impossible, and (2) Chalmersian dualism, McGinn’s “mysterianism”, spooky “strong” emergence and so forth are a counsel of despair, then organic minds like us have been quantum computers since the early Cambrian – or whenever phenomenal binding in the CNS first evolved.

    Stuart
    I’d say right from the origin of life. All biological behavior is based on reward.

    David
    Which leaves two explanatory options…
(1) “No-collapse” QM. Schlosshauer’s (cf. http://faculty.up.edu/schlosshauer/publications/annals.pdf) calculations of credible decoherence times for neuronal superpositions in the CNS are even more rapid than Tegmark’s: attoseconds or less. Attosecond superpositions of distributed neuronal feature-processors are an “obviously” ludicrous explanation of perceptual unity. So why a plea for experimental (dis)confirmation rather than acceptance of the obvious reductio ad absurdum? After all, evolution operates over millennia, and synaptic neurotransmission operates over milliseconds, not femtoseconds or attoseconds. Such a dynamical timescale is hopelessly wrong for neuronal superpositions to be a viable candidate for the necessary perfect structural match between phenomenology and (ultimately) physics that monistic physicalism entails.

    Stuart
    Bandyopadhyay’s group has demonstrated quantum resonances and coherence times in microtubules in a set of frequency clusters in the terahertz, gigahertz, megahertz and kilohertz ranges. They have experimental evidence for microtubule coherence up to 0.1 milliseconds. How do microtubules do it? It happens in the non-polar pi-resonance channels where anesthetics act to selectively erase consciousness, what we call the Meyer-Overton quantum underground. This accounts for quantum biology, including photosynthesis proteins.

    David
    The reason that this loophole needs to be closed experimentally IMO rather than “philosophically” is that quantum Darwinism (i.e. the decoherence program in post-Everett QM pioneered by Zeh, Zurek, et al.) yields an unremitting selection mechanism of almost unimaginable power – ceaselessly playing out on an almost unimaginably fine-grained timescale. Yes, I know that this line of thought sounds insane; but it’s neither more nor less insane than the no-collapse QM from which it springs. (cf. Robin Hanson on “mangled worlds” http://mason.gmu.edu/~rhanson/mangledworlds.html – though in this context I’d say “mangled world-simulations”.) If applied to the CNS, quantum Darwinism is _not_ just some tricksy metaphor, as we might naively suppose. I have to agree with Martin here. Perhaps see John Campbell’s “Quantum Darwinism as a Darwinian process”: https://arxiv.org/ftp/arxiv/papers/1001/1001.0745.pdf. In conjunction with non-materialist physicalism, quantum Darwinism offers a potential answer to the ostensible gross structural mismatch that drives David Chalmers to dualism.

    Stuart
    I don’t really see what problem this is solving. Orch OR avoids dualism.

    David
    Sure, intuitively, all that next-generation interferometry probing the CNS is likely to discover [due to effective loss of phase coherence to the environment] is functionally irrelevant “noise”, not a perfect structural match. Yet a “Schrödinger’s neurons” conjecture does precisely what Scott admonishes disbelievers in digital sentience to do [“…what I think all AI skeptics should do, which is: suggest some actual physical property of the brain that, if present, would make it qualitatively different from all existing computers, in the sense of violating the Church-Turing Thesis.”] Orch-OR does so too.

    Stuart
    Orch OR does it. What else does it? But the evidence you seek is already out there in Bandyopadhyay’s findings.

    David
    Both need to be experimentally falsified.

    Stuart
    Anesthetics block quantum resonances in microtubules, resulting in loss of consciousness. See paper I sent in previous message. We’ve now extended the halothane results to all anesthetics.

    David
    
    (2) A “dynamical collapse” conjecture. I confess that previously I’d simply assumed that any semi-classical approach like Orch-OR can’t explain why we’re not just patterns of Jamesian “mind-dust” – with or without quantum processing in microtubules. However, _if_ Orch-OR can really explain phenomenal binding, then this success would be an extremely strong selling-point for Orch-OR – certainly for anyone repelled by the implications of the unmodified and unsupplemented unitary dynamics. [It beats me how anyone can take Everett seriously and stay sane.] However, I’m not sure I understand how it will work. Stuart, have you got a URL so I can wrap my head around it?

    Stuart
    Here’s our 2014 Orch OR review. We’ve updated it in a chapter due out any day now.
    http://www.sciencedirect.com/science/article/pii/S1571064513001188

    
David
    One reason I’m quite pessimistic about progress is that a surprising number of extremely intelligent people (e.g. Max Tegmark, Nick Bostrom, Scott(?)) view phenomenal binding as a mere puzzle, or simply lump binding together with the Hard Problem, or alternatively, don’t believe that phenomenal binding is a problem in the first instance. Why contemplate bizarre solutions to a non-existent mystery? Compare Max Tegmark’s brisk two-paragraph dismissal (4.4.3) in “Why the brain is probably not a quantum computer”: http://www.sciencedirect.com/science/article/pii/S0020025500000517.

    Stuart
    That’s the same 10^-13 sec decoherence time for microtubules that Tegmark published in Phys Rev E, using a superposition separation distance 7 orders of magnitude too large. When that paper came out, I was with Roger in Washington for a presentation to the RAND Corporation, and I handed him the paper to look over. At breakfast he tossed it back and said that, according to that, quantum superpositions of any kind are impossible. Scott Hagan, Jack Tuszynski and I used Tegmark’s formula, corrected the separation distance and a few other things, and came up with a coherence time of 10^-4 secs (0.1 msec), which was later confirmed experimentally by Bandyopadhyay. We published in the same journal as Tegmark a year later. See http://www.ncbi.nlm.nih.gov/pubmed/12188753
    Of course nobody ever cites that, just the Tegmark bullshit piece.
    He may be smart about some things but knows zero about biology.
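A quick arithmetic check of the orders-of-magnitude claim, using only the figures quoted in this exchange:

```latex
% Tegmark's published estimate vs. the Hagan-Hameroff-Tuszynski correction,
% with only the numbers stated in the thread:
%   tau_Tegmark   = 10^{-13} s
%   tau_corrected = 10^{-4}  s  (0.1 ms)
\frac{\tau_{\mathrm{corrected}}}{\tau_{\mathrm{Tegmark}}}
  = \frac{10^{-4}\,\mathrm{s}}{10^{-13}\,\mathrm{s}} = 10^{9}
% i.e. the nine orders of magnitude cited earlier in the discussion.
```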

    David
    For technical (and “philosophical” – https://www.quora.com/Why-does-anything-exist-1) reasons,

    Stuart
    Good question – why is there anything? The answer could be that there has always been stuff, and that the Big Bang was preceded by another aeon, as Roger suggests in his Conformal Cyclic Cosmology. That also makes more sense than everything springing out of nothingness. And we proposed that each big bang enables physical constants to evolve and mutate, to optimize consciousness, solving the anthropic-principle dilemma.

    David
    I’m still sceptical that a dynamical collapse theory will work. Critically, however – unlike most of its rivals – Orch-OR is experimentally falsifiable, still (I hope) the hallmark of good science.

    Stuart
    Thanks for that. We may be unpopular, but we’re far and away the best theory of consciousness, collapse, etc. Beyond falsifiability, there’s more actual experimental validation of Orch OR than of any other theory of consciousness. Not even close.

    Cheers
    Stuart

  285. mjgeddes Says:

    Dave #283

    Dualism is not ‘a counsel of despair’; on the contrary, it offers the only hope of a complete explanation of reality (an answer to ‘Why does anything exist?’).

    The idea is that we want a completely closed (circular) explanation of reality; we don’t want any unexplained elements remaining.

    If you say that a single property ‘x’ was all that existed, then where did that property come from? Monism cannot possibly provide an answer, since there are no other elements of reality to which an explanation can refer.

    To fully explain reality, we need more than one fundamental property (element) so we can split reality into an ‘object’ (the things being described) and a ‘subject’ (the descriptor or description). If a single property ‘x’ was all that existed, then you would simply have a ‘description’ without an object – it’s incoherent.

    In fact, to get a completely circular loop, we need 3 elements, because we can only use 2 elements at once to consistently describe reality (each element needs to play a double role – it needs to act as *either* an object *or* a subject, but not both at the same time).

    That’s why I extended Chalmers’s dualism into a ‘trialism’ (Triple-Aspect Theory) and postulated 3 elements: Physical (Mass-Energy), Mental (Consciousness) and Mathematical (Information).

    If you want a simple, 3-word summary of what I think consciousness is, here’s my answer:

    CONSCIOUSNESS=”ARROW OF TIME” !

    Consciousness is an intrinsic property of reality and exists at (almost) all levels of organization – it is the arrow of time itself!

    At the most basic level you have a little bit of consciousness in almost everything (panpsychism), which occurs wherever there is a ‘causal relation’ (cause and effect). This is analogous to a sort of ‘cosmic microwave background radiation’ of conscious experience.

    Then at higher levels of organization, ever more complex forms of consciousness emerge, defining ‘higher-order’ causalities.

    Human consciousness is a fairly high-level manifestation of ‘the arrow of time’. It is a planning system that integrates multiple values into a coherent narrative that sets a ‘direction’ for agents to move forward in time.

    In the Japanese manga ‘Hikaru No Go’, Hikaru was asked ‘Why Do You Play Go?’.
    His answer: “To link the far past and the far future, that is why I am here”

    If ‘consciousness’ could talk, it would give you the same answer.

  286. Martin Says:

    mjgeddes (#285)

    You mention a “trialism.” This has been discussed before – by Penrose. And he thinks it is in fact a monism.

    Discussing his representation of reality as a Penrose triangle of the Physical, Mental and Platonic (Mathematical) realms, he said:

    Sir Roger Penrose: “Well, they are not independent, because each one has a relationship to the next, even though each relationship I regard as a mystery; maybe we will understand these relationships better in the future. I don’t necessarily regard this as a sort of ultimate picture; in some sense they are different aspects of reality, and the true reality encompasses the whole thing. In fact I tend to draw the picture in a deliberately paradoxical way; if you look at it, it has the sort of feeling of an impossible triangle – Escher-like. It’s deliberately impossible: there is another mystery hiding behind there, so that all three possible worlds can exist in some deeper reality that is not expressed in that picture.”

    #1 Source: https://www.quora.com/Sir-Roger-Penrose-argues-that-mathematics-literally-exists-in-the-Platonic-realm-Does-this-disprove-strong-AI

    PS: Despite its title, it does not address a Strong AI proof point.

    You can also find interesting discussion on this by Piet Hut, Mark Alford, and Max Tegmark:

    “We discuss the nature of reality in the ontological context of Penrose’s math-matter-mind triangle.”

    #2 https://arxiv.org/abs/physics/0510188

    The point is… whilst you can conceptualize dualist and trialist concepts, most researchers in the field look to a monist solution to resolve the binding problem of otherwise non-overlapping “magisteria”.

    The positive contribution of the physical panpsychism that David Pearce proposes is that it provides such a monism. It also offers the Hameroff–Penrose Orch-OR model a means for Agent Causation (without necessarily invoking Objective Reduction), thereby addressing Chalmers’s Hard Problem.

  287. mjgeddes Says:

    Martin #286

    Yes, the Penrose diagram was the theory I was working off. By ‘dualism’, I don’t mean to say that there are literally 3 separate worlds, just there are 3 different valid modes of explanation that describe the *same* world. But there are still 3 distinct properties, so this is technically ‘property dualism’ (or trialism).

    As I was saying, if you only have one property (for example ‘physical’) in your model of reality, then it would simply be impossible to ever explain why this property exists, since there would be no other properties to which your explanation could refer.

    The only way David Pearce’s question (‘Why does anything exist?’) could possibly be answerable is if there exists more than one property. In that case, the existence of each property can be explained by referring to the others, in a closed circular loop.

    As I proposed earlier in this thread:

    Split reality into 3 properties: matter, information, consciousness,

    then a total explanation to the question ‘Why Does Anything Exist?’ becomes possible in the following way:

    (1) Matter is explained by (information and consciousness)
    (2) Information is explained by (matter and consciousness)
    (3) Consciousness is explained by (matter and information)

    See? No infinite regress – it’s a closed loop, and in theory you can use it to explain everything!

    Property dualism (or trialism in this case) is essential for this sort of circular explanation of reality to work.
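    [Editor’s illustration, not part of the original comment: the three-way circular explanation above can be modeled as a tiny directed graph and mechanically checked – every property must be explained only by the *other* properties in the same set, so the explanation closes on itself rather than regressing. The names `EXPLAINS` and `is_closed_loop` are invented here purely for this sketch.]

    ```python
    # Toy sketch of mjgeddes's closed explanatory loop: each of the three
    # properties is "explained by" the other two, and we check that no
    # explanation escapes the set (which would start an infinite regress).

    EXPLAINS = {
        "matter": {"information", "consciousness"},
        "information": {"matter", "consciousness"},
        "consciousness": {"matter", "information"},
    }

    def is_closed_loop(graph):
        """True iff every node has at least one explainer, and all of its
        explainers are other nodes inside the same graph."""
        nodes = set(graph)
        return all(
            explainers and explainers <= nodes - {node}
            for node, explainers in graph.items()
        )

    print(is_closed_loop(EXPLAINS))  # True: the loop is closed
    ```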

  288. Martin Says:

    mjgeddes (#286)

    I can see the point you make. But as Penrose says…

    “there is another mystery hiding behind there so that all three possible worlds can exist in some deeper reality that is not expressed in that picture”

    Such “mysteries” should not be seen as unfathomable and unattainable, though. If they are, we are doing metaphysics, and science hits a hard stop.

    Physical Panpsychism, being “physical”, is a theory that needs to be free of “pixie dust” or it fails. It needs to develop a mechanism that produces the aspects of:

    a) Platonic/Mathematical/Information realm;
    b) Mental realm and
    c) Physical realm.

    Can it do that – demonstrably? If so it will be a leading candidate for solving the underlying mystery.

    Perhaps. Consider:

    1/ It is PHYSICAL, and is a linear conception/integration of current Physics thought. It does not need radical new Physics.

    2/ It can conjecture that a Platonic Realm emerges from the Planck-level quantum foam, and evolves by a Quantum Darwinism process, to become a cosmic consciousness. Within its structure are the Platonic Universal Forms, including Ideal Reason and Qualia (it perhaps achieves these through ongoing process interaction with conscious biological entities). The PLATONIC aspect of experience emerges via evolution.

    3/ It offers other conscious entities access via (in Jungian terms) acausal connection, the collective unconscious principle, synchronicity, and archetypes. In physical terms this connection is via the root cause of space-time: quantum entanglement, perhaps via ER=EPR connection directly to the Quantum Biology of Microtubules as per Hameroff and Penrose (though perhaps without OR, but with Neural Network Subjective/Self-aware Orchestration of the reduction of the Wavefunction). Thus it is supportive of the MENTAL aspect of experience.

    Hence, it is a candidate for a Penrose triangle monism, as he hoped for. Entirely Physical. Perhaps one can progress from the other aspects to the same conclusion. In that respect there would be one “true” reality with three aspects of that “truth”.

    Furthermore, such a Physical Panpsychism can be experimentally verified by Planck Level observation of structure. In particular if that structure can be observed to be influenced by focused consciousness and vice versa. Such experiments are viable.

    Consider…

    a) http://arxiv.org/pdf/1602.06251.pdf
    b) http://deanradin.com/evidence/RadinPhysicsEssays2016.pdf

    This will not be easy. It will need careful protocols and sophisticated setups. But, if we do see structure, and it can be interacted with by consciousness, well that would be an important result for Physical Panpsychism I think. We would have taken significant steps to supporting a monist solution to the Hard problem of consciousness.

    I like to think of this situation as analogous to Client-Server architectures in MMORPG games. The client (us) has significant levels of consciousness: it renders the unique single-person view (wavefunction) of reality, but struggles to resolve the Hard problem of consciousness. The server (the cosmic consciousness) provides a coherent integration of the worldviews (a Universal Wavefunction – UWF), and provides the visceral forms of qualia to resolve the Hard problem.

    It’s a neat solution.

    PS1: For Philosophers of Mind: Dualism (rather than trialism) should be asserted, as you can argue the Mental and Platonic realms are aspects of Idealism and Realism. A Dual Aspect Monist would say that you can proceed bottom-up from Realism or top-down from Idealism. But each progression needs to be able to account for the other; they are constrained to be compatible. Hence theories such as “Eliminative Materialism”, which seek to outlaw one perspective by definition, cannot be posed within it.

    PS2: For those who are not persuaded by an Everettian Many Worlds Interpretation, as I mentioned in one of my earlier comments, the possibility is opened for a Bohmian Mechanics interpretation, whereby the UWF is a fractal form/image that underpins the Pilot Wave concept, the Implicate Order of David Bohm. I like that: a neat solution for those unenamoured of the multiplicity of worlds of MWI.

  289. Martin Says:

    (sorry the reference should be #291 in my comment above)

  290. David Pearce Says:

    Mjgeddes, monistic physicalism – including non-materialist physicalism – may turn out to be false. But at the risk of sounding like a naive Popperian, can you think of any novel and precise experimentally falsifiable predictions that could put your alternative trialism conjecture to test? (This isn’t a rhetorical question.) Maybe M-theorists working on the Planckian energy regime have a legitimate scientific excuse for a lack of predictivity, or calling retrodictions “predictions”; the rest of us don’t. Not least, quantum mind theories that do – or don’t! – predict any collapse-like deviations from the unitary Schrödinger dynamics have testable consequences for the CNS, the “riskier” (in Popper’s sense) the better.

    Scott, forgive me if I’ve missed it, but have you written anywhere on the phenomenal binding/combination problem? Recall its intractability is what drives David Chalmers to dualism. _Even if_ non-materialist physicalism is true, binding seems classically impossible; and quantum-theoretic approaches seem wildly implausible, albeit not (yet!) demonstrably false.

    [Stuart, many thanks for your comments; I am still ruminating.]

  291. fred Says:

    It’s interesting that in discussions on the nature of consciousness, often very little effort is put into analyzing the experience of consciousness itself.
    Just because it’s subjective doesn’t mean that we can’t come up with interesting observations through repeatable experiments.

    E.g. the field of anesthesia is quite fascinating in that regard (I’ve gone through six full anesthesias during this last year, so I often wonder about all this).

    Not all anesthesias are the same:
    Some drugs make you lose total consciousness during the procedure.
    Some drugs do not make you lose total consciousness during the procedure, they just put you in a relaxed state and you’re still aware. But what happens is that those drugs affect the brain’s ability to retain any medium/long term memory of the experience. As your memories of the experience dissolve you think you were never conscious through it.

    So the subjective experience (post-surgery) is the same: you’re given a drug, within seconds you’re drifting away, then you’re waking up in the recovery room.

    So, there are two “equivalent” versions of a surgery: one where you’re unconscious, one where you’re conscious but your memories are erased after the operation.

    There is a bit of a parallel with the two “equivalent” versions of a teleportation: one where you’re truly teleported, one where you’re cloned then the original is destroyed.

    Another fascinating area is meditation, with interesting point of views:

    Thoughts and feelings aren’t who we are; they are just constructs coming from the brain that we can observe. That capacity of observation is consciousness (the nature of “being”).

    And the nature of the self and the perception of reality (from https://en.wikipedia.org/wiki/The_Miracle_of_Mindfulness)
    “By realizing oneness with ourselves and the outside of ourselves, we experience liberation from the fear and anxiety that the discriminating view of the world – one that fractures reality into separate, unchanging units – brings. When we perceive the illusion of the isolated, unchanging self, we reach a level of wisdom the author calls “non-discrimination mind”. This state of seeing shows us that there is nothing out there to get, nothing to strive for, nothing to fear, as all these are an illusion springing from a fractured perception of reality. Reality is already perfect, unified, and our struggle is to see that truth, and be liberated from our chains of false perception that fractures everything into separate units. There is, in other words, no difference between the perceiver and the object being perceived; there is no separation between the self and the world.”

  292. Ben Standeven Says:

    @David Pearce:

    Have you got any examples of testable consequences from OR-type theories? I’ve seen a lot of claims that theories have testable consequences, but not any actual examples.

  293. Martin Says:

    Ben (#296)

    Interjecting.

    Experiments have been undertaken, with results supporting Microtubule Quantum Biology and its link to consciousness (via anesthetic intervention).

    #1 http://www.sciencedirect.com/science/article/pii/S157106451300153X
    #2 https://www.scopus.com/record/display.uri?eid=2-s2.0-84876329005&origin=inward&txGid=0
    #3 http://scitation.aip.org/content/aip/journal/apl/102/12/10.1063/1.4793995
    #4 http://www.ncbi.nlm.nih.gov/pubmed/25714379

    I’ve mentioned in my earlier comments results of tests indicating Quantum interactions by Focused Attention. (PS: Not necessarily allied to Orch-OR, but Quantum Consciousness in general)

    #5 http://deanradin.com/evidence/RadinPhysicsEssays2016.pdf

    Tests done for Quantum Darwinism (PS: Not allied to Orch-OR, but Quantum Consciousness in general)

    #6 http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.104.176801

    There are tests being planned for Planck Level effects (PS: Not necessarily allied to Orch-OR, but Physical Panpsychism)

    #7 http://arxiv.org/pdf/1602.06251.pdf

  294. David Pearce Says:

    Ben, like all “dynamical collapse” conjectures, Orch-OR is in principle experimentally falsifiable via interferometry: is the superposition principle of QM ever going to break down? (cf. “Toward Quantum Superposition of Living Organisms” http://arxiv.org/pdf/0909.1469v3.pdf) The technical challenge for experimentalists will be distinguishing between true collapse and mere suppression of interference on account of decoherence. Perhaps see e.g. 8.4.5 Experimental Tests of Collapse Models in Maximilian Schlosshauer’s “Decoherence and the Quantum-to-Classical Transition”
    https://www.amazon.com/Decoherence-Classical-Transition-Frontiers-Collection/dp/3540357734.

    As well as lending weight to Orch-OR, the slightest experimentally-detected deviation from the unitary Schrödinger dynamics will also falsify the existence of sub-femtosecond neuronal superpositions of distributed feature-processors – the only way I know to explain phenomenal binding without abandoning monistic physicalism and the unity of science.

  295. Lorraine Ford Says:

    I more or less agree with mjgeddes #287: both the information and qualia content of subjective experience IS physical reality, where the information content of the fundamental-level subjective experience of particles is what we represent with law-of-nature mathematical equations.

    But our rigorous knowledge of reality, and more specifically Physics’ rigorous knowledge of reality, will always have 3 inherent limitations. The limitations are highlighted by the known discontinuities that have occurred, and continue to occur, in the universe: the discontinuities in parameter value that are found to have occurred at “quantum decoherence”, and the discontinuities that occur when new or changed laws-of-nature appear in the universe. The outcome of the discontinuities is physical reality that is representable by mathematical equations, including mathematical equations that represent parameter values. But other aspects relating to the discontinuities are not representable by mathematical equations:

    1. Creativity/Free Will/”Choice”/Cause – The creative acts that occur in the universe, i.e. the causal aspects of reality that result in the fundamental-level discontinuities in the universe that we represent with mathematical equations, are not representable as mathematical equations;

    2. Subjective Experience/Consciousness/Qualia/Knowledge – The subjective experiences that occur in the universe, i.e. the knowledge aspects of reality whereby the universe somehow apprehends/senses the fundamental-level reality that we represent with mathematical equations, are not representable as mathematical equations; and

    3. Subjectivity – What causes and what knows is not representable by mathematical equations.

    I am contending that there are aspects of reality that are unprovable because they can’t be represented by mathematical equations. So at the beginning of the universe, what created, what “knew about”, the fundamental-level reality that we now represent with law-of-nature mathematical equations and their parameter values? The only possible contender, the only known contender, is particles. (Particles are not representable as mathematical equations: only information about particles is representable as mathematical equations.)

    Particles, and thereby atoms, molecules, and single- and multi-celled living things including human beings, are subjects: the repositories of all the fundamental- through to executive-level knowledge in the universe, and the original source of all the fundamental- through to executive-level creativity in the universe. (Note that, OBVIOUSLY, computers and robots don’t have the appropriate top down/bottom up structure, molecules, molecular flexibility or molecular interconnectivity for this sort of thing!)

  296. This is the best scientific argument that your brain isn't a computer | IOT POST Says:

    […] Penrose (who, at 84, is responsible for a substantial chunk of our understanding of the shape of the universe) has argued since the 1980s that conventional computer science and physics can not explain the human mind. He laid out his argument in a pair of books published in the late ’80s and early ’90s, and more recently in a debate with Aaronson at a conference in Minnesota. (Unfortunately, no complete transcript of that debate exists, but Aaronson summarizes it thoroughly on his blog.) […]

  297. Future Hi » This is the best scientific argument that your brain isn’t a computer Says:

    […] Penrose (who, at 84, is responsible for a substantial chunk of our understanding of the shape of the universe) has argued since the 1980s that conventional computer science and physics can not explain the human mind. He laid out his argument in a pair of books published in the late ’80s and early ’90s, and more recently in a debate with Aaronson at a conference in Minnesota. (Unfortunately, no complete transcript of that debate exists, but Aaronson summarizes it thoroughly on his blog.) […]

  298. Eli Says:

    Scott, #222…

    Sorry to say, but the “more copies made” thing is not a motivation for the “replicator” (organism) itself. This is *why* such a thing as pleasure and pain evolved: it gives a highly approximated proxy for the organism’s delta-free-energy. Evolution designed organisms that feel pleasure from capturing environmental energy as more organism (through consumption or reproduction), and feel pain from losing the mass, energy, and entropy organized when things are harmed, things like its own body, its conspecifics, and even its social bonds.

    The core free-energy minimization principle behind how life and brains work (a la Karl Friston, Jacob Hohwy, and Andy Clark, whom I’m getting this from) then directly entails that pain should be avoided (is bad) and pleasure sought-out (is good). Both these principles, by the way, are likewise moderated by the need to take a long-term view: you oughtn’t eat until you get heart-disease nor mate until you’ve got so many children you all starve to death, for instance.

    Evolution thus gives organisms supervision signals about their interests, and builds the organisms to learn from those supervision signals and seek out their best interests as they can understand them (based on the exteroceptive, interoceptive, and evaluative information available through their embodiments).

    Compatibilist free will then comes out as something of a trivial consequence of the active-inference theory of decision making.
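    [Editor’s illustration, not from the comment above: the free-energy story Eli sketches can be caricatured in a few lines of code. An agent holds a belief `mu` about a hidden environmental cause, receives observations `o`, and reduces “surprise” by gradient descent on the squared prediction error. This is the crudest possible stand-in for Friston-style free-energy minimization – the names and the learning rate are invented here for the sketch.]

    ```python
    # Toy caricature of free-energy (prediction-error) minimization:
    # the belief `mu` is nudged toward each observation, so the
    # prediction error ("surprise" proxy) shrinks over time.

    def minimize_prediction_error(observations, mu=0.0, lr=0.1):
        errors = []
        for o in observations:
            err = o - mu          # prediction error for this observation
            mu += lr * err        # update belief to reduce the error
            errors.append(abs(err))
        return mu, errors

    obs = [2.0] * 50              # a stable environmental signal
    mu, errors = minimize_prediction_error(obs)

    # Belief converges toward the signal, and surprise decays:
    print(abs(mu - 2.0) < 0.05, errors[-1] < errors[0])  # True True
    ```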

  299. Fj Says:

    I’m very late to the party, but I want to put three objections, for my own benefit at the least.

    1) The whole thing seems inspired by a tentative belief that the Universe has morals, and morals surprisingly similar to those that a bunch of meatbag murder monkeys developed evolutionarily. So when we discover that those morals can’t cope with cloning of consciousness, we might expect that there’s a physical law that prevents that.

    That’s kinda silly so you should acknowledge that explicitly if that’s where you’re coming from, instead of being vague about why exactly you want the answers to those questions that are not “the Universe doesn’t run on _any_ ethics, have a good day”.

    2) There’s a contradiction in your arguments. On one hand you argue that human minds that currently exist in superposition are not really conscious, so it’s OK to collapse the wavefunction and make that branch of a person entangled with the rest of the universe. On the other hand you argue that all those superpositions that exist in a human brain are really important, so collapsing them and copying the classical state doesn’t copy the person. You argue against classical duplication using an appeal to quantum phenomena, and against quantum duplication using an appeal to classical phenomena.

    3) The arbitrariness of it all is made worse by your own objection to Penrose: where Penrose says “consciousness is mysterious and I don’t understand why I experience the world, so there’s a hypothetical physical thing that bestows internal experiences on certain objects” you don’t touch the internal or external manifestations of consciousness at all, it’s just “we get rid of this ethical paradox by declaring that these objects are not real people because they are quantum copies” and “we get rid of that ethical paradox by declaring that those objects are not real people because they are classical copies”.

    It’s all about finding _unambiguous_ solutions to some ethical paradoxes (as if being unambiguous were the main thing we care about in our ethics, above even consistency), not about the mystery of the consciousness at all. It doesn’t say that quantum copies don’t experience consciousness, it only says that it would be very convenient to declare that they do not.

  300. fred Says:

    Maybe the question is – would a thinking machine ever come up with the concept of qualia (or meditation) on its own?

  301. Links & misc #5 | Hypermagical Ultraomnipotence Says:

    […] Scott Aaronson v. Roger Penrose on conscious computers. Fave paragraph: “Similarly, a biologist asked how I could possibly have any confidence that […]

  302. Consciousness and Empathy | BLABLA.id NEWS Says:

    […] So why is phenomenal consciousness so deeply associated with empathy and moral standing? Moral theory emphasizes value and experienced happiness and the avoidance of pain that only phenomenal consciousness can provide (see Kahane & Savulescu, 2009; Siewert 1998). Views based on rationality may be dissociated from this, according to CAD, but the connection between consciousness and moral value cannot be accidental. Phenomenal consciousness is necessary for moral standing because of its empathic role and intrinsic value. (Interestingly, few authors define consciousness in terms of moral value—but Scott Aaronson does, and he is a computer scientist! See his blog post on computers and consciousness.) […]