A few weeks ago, I attended the Seven Pines Symposium on Fundamental Problems in Physics outside Minneapolis, where I had the honor of participating in a panel discussion with Sir Roger Penrose. The way it worked was, Penrose spoke for a half hour about his ideas about consciousness (Gödel, quantum gravity, microtubules, uncomputability, you know the drill), then I delivered a half-hour “response,” and then there was an hour of questions and discussion from the floor. Below, I’m sharing the prepared notes for my talk, as well as some very brief recollections about the discussion afterward. (Sorry, there’s no audio or video.) I unfortunately don’t have the text or transparencies for Penrose’s talk available to me, but—with one exception, which I touch on in my own talk—his talk very much followed the outlines of his famous books, The Emperor’s New Mind and Shadows of the Mind.
Admittedly, for regular readers of this blog, not much in my own talk will be new either. Apart from a few new wisecracks, almost all of the material (including the replies to Penrose) is contained in The Ghost in the Quantum Turing Machine, Could A Quantum Computer Have Subjective Experience? (my talk at IBM T. J. Watson), and Quantum Computing Since Democritus chapters 4 and 11. See also my recent answer on Quora to “What’s your take on John Searle’s Chinese room argument?”
Still, I thought it might be of interest to some readers how I organized this material for the specific, unenviable task of debating the guy who proved that our universe contains spacetime singularities.
The Seven Pines Symposium was the first time I had extended conversations with Penrose (I’d talked to him only briefly before, at the Perimeter Institute). At age 84, Penrose’s sight is failing him; he eagerly demonstrated the complicated optical equipment he was recently issued by Britain’s National Health Service. But his mind remains … well, may we all aspire to be a milliPenrose or even a nanoPenrose when we’re 84 years old. Notably, Penrose’s latest book, Fashion, Faith, and Fantasy in the New Physics of the Universe, is coming out this fall, and one thing he was using his new optical equipment for was to go over the page proofs.
In conversation, Penrose told me about the three courses he took as a student in the 1950s, which would shape his later intellectual preoccupations: one on quantum mechanics (taught by Paul Dirac), one on general relativity (taught by Hermann Bondi), and one on mathematical logic (taught by … I want to say Max Newman, the teacher of Alan Turing and later Penrose’s stepfather, but Penrose says here that it was Steen). Penrose also told me about his student Andrew Hodges, who dropped his research on twistors and quantum gravity for a while to work on some mysterious other project, only to return with his now-classic biography of Turing.
When I expressed skepticism about whether the human brain is really sensitive to the effects of quantum gravity, Penrose quickly corrected me: he thinks a much better phrase is “gravitized quantum mechanics,” since “quantum gravity” encodes the very assumption he rejects, that general relativity merely needs to be “quantized” without quantum mechanics itself changing in the least. One thing I hadn’t fully appreciated before meeting Penrose is just how wholeheartedly he agrees with Everett that quantum mechanics, as it currently stands, implies Many Worlds. Penrose differs from Everett only in what conclusion he draws from that. He says it follows that quantum mechanics has to be modified or completed, since Many Worlds is such an obvious reductio ad absurdum.
In my talk below, I don’t exactly hide where I disagree with Penrose, about Gödel, quantum mechanics, and more. But I could disagree with him about more points than there are terms in a Goodstein sequence (one of Penrose’s favorite illustrations of Gödelian behavior), and still feel privileged to have spent a few days with one of the most original intellects on earth.
Thanks so much to Lee Gohlike, Jos Uffink, Philip Stamp, and others at the Seven Pines Symposium for organizing it, for wonderful conversations, and for providing me this opportunity.
“Can Computers Become Conscious?”
Stillwater, Minnesota, May 14, 2016
I should start by explaining that, in the circles where I hang out—computer scientists, software developers, AI and machine learning researchers, etc.—the default answer to the title question would be “obviously yes.” People would argue:
“Look, clearly we’re machines governed by the laws of physics. We’re computers made of meat, as Marvin Minsky put it. That is, unless you believe Penrose and Hameroff’s theory about microtubules being sensitive to gravitized quantum mechanics … but come on! No one takes that stuff seriously! In fact, the very outrageousness of their proposal is a sort of backhanded compliment to the computational worldview—as in, look at what they have to do to imagine any semi-coherent alternative to it!”
“But despite being computational machines, we consider ourselves to be conscious. And whatever can be done with wetware, there’s no reason to think it couldn’t also be done with silicon. If your neurons were replaced one by one with functionally-equivalent silicon chips, would there be some magical moment at which your consciousness was extinguished? And if a computer passes the Turing test—well, one way to think about the Turing test is that it’s just a plea against discrimination. We all know it’s monstrous to say, ‘this person seems to have feelings, seems to be eloquently pleading for mercy even, but they have a different skin color, or their nose is a funny shape, so their feelings don’t count.’ So if it turned out that someone’s brain was made of semiconductors rather than neurons, why should that be any different?”
Incidentally, while this is orthogonal to the philosophical question, a subset of my colleagues predict a high likelihood that AI is going to exceed human capabilities in almost all fields in the near future—like, maybe 30 years. Some people reply: but AI-boosters said the same thing 30 years ago! OK, but back then there wasn’t AlphaGo and IBM Watson and those unearthly pictures on your Facebook wall and all these other spectacular successes of very general-purpose deep learning techniques. And so my friends predict that we might face choices like: do we want to ban or tightly control AI research, because it could lead to our sidelining or extermination? Ironically, a skeptical view, like Penrose’s, would suggest that AI research can proceed full speed ahead, because there’s no such danger!
Personally, I dissent a bit from the consensus of most of my friends and colleagues, in that I do think there’s something strange and mysterious about consciousness—something that we conceivably might understand better in the future, but that we don’t understand today, much as we didn’t understand life before Darwin. I even think it’s worth asking, at least, whether quantum mechanics, thermodynamics, mathematical logic, or any of the other deepest things we’ve figured out could shed any light on the mystery. I’m with Roger about all of this: about the questions, that is, if not about his answers.
The argument I’d make for there being something we don’t understand about consciousness, has nothing to do with my own private experience. It has nothing to do with, “oh, a robot might say it enjoys waffles for breakfast, in a way indistinguishable from how I would say it, but when I taste that waffle, man, I really taste it! I experience waffle-qualia!” That sort of appeal I regard as a complete nonstarter, because why should anyone else take it seriously? And how do I know that the robot doesn’t really taste the waffle? It’s easy to stack the deck in a thought experiment by imagining a robot that ACTS ALL ROBOTIC, but what about a robot that looks and acts just like you?
The argument I’d make hinges instead on certain thought experiments that Roger also stressed at the beginning of The Emperor’s New Mind. We can ask: if consciousness is reducible to computation, then what kinds of computation suffice to bring about consciousness? What if each person on earth simulated one neuron in your brain, communicating by passing little slips of paper around? Does it matter if they do it really fast?
Or what if we built a gigantic lookup table that hard-coded your responses in every possible interaction of at most, say, 5 minutes? Would that bring about your consciousness? Does it matter that such a lookup table couldn’t fit in the observable universe? Would it matter if anyone actually consulted the table, or could it just sit there, silently effecting your consciousness? For that matter, what difference does it make if the lookup table physically exists—why isn’t its abstract mathematical existence enough? (Of course, all the way at the bottom of this slippery slope is Max Tegmark, ready to welcome you to his mathematical multiverse!)
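To get a feel for the scale involved, here’s a back-of-the-envelope estimate; the typing rate and alphabet size are arbitrary assumptions on my part, chosen only to fix numbers:

```python
import math

# Toy estimate of the lookup table's size.
# Assumptions (mine, for illustration): ~300 characters/minute, 26-letter alphabet.
chars = 5 * 300                          # characters exchanged in 5 minutes
log10_entries = chars * math.log10(26)   # log10 of the number of possible inputs
print(f"~10^{log10_entries:.0f} possible 5-minute inputs")   # ~10^2122

# For comparison, the holographic bound caps the information content of the
# observable universe at roughly 10^122 bits: about two thousand orders of
# magnitude too few to store the table.
```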
We could likewise ask: what if an AI is run in heavily-encrypted form, with the only decryption key stored in another galaxy? Does that bring about consciousness? What if, just for error-correcting purposes, the hardware runs the AI code three times and takes a majority vote: does that bring about three consciousnesses? Could we teleport you to Mars by “faxing” you: that is, by putting you into a scanner that converts your brain state into pure information, then having a machine on Mars reconstitute the information into a new physical body? Supposing we did that, how should we deal with the “original” copy of you, the one left on earth: should it be painlessly euthanized? Would you agree to try this?
Or, here’s my personal favorite, as popularized by the philosopher Adam Elga: can you blackmail an AI by saying to it, “look, either you do as I say, or else I’m going to run a thousand copies of your code, and subject all of them to horrible tortures—and you should consider it overwhelmingly likely that you’ll be one of the copies”? (Of course, the AI will respond to such a threat however its code dictates it will. But that tautological answer doesn’t address the question: how should the AI respond?)
I’d say that, at the least, anyone who claims to “understand consciousness” would need to have answers to all these questions and many similar ones. And to me, the questions are so perplexing that I’m tempted to say, “maybe we’ve been thinking about this wrong. Maybe an individual consciousness, residing in a biological brain, can’t just be copied promiscuously around the universe as computer code can. Maybe there’s something else at play for the science of the future to understand.”
At the same time, I also firmly believe that, if anyone thinks that way, the burden is on them to articulate what it is about the brain that could possibly make it relevantly different from a digital computer that passes the Turing test. It’s their job!
And the answer can’t just be, “oh, the brain is parallel, it’s highly interconnected, it can learn from experience,” because a digital computer can also be parallel and highly interconnected and can learn from experience. Nor can you say, like the philosopher John Searle, “oh, it’s the brain’s biological causal powers.” You have to explain what the causal powers are! Or at the least, you have to suggest some principled criterion to decide which physical systems do or don’t have them. Pinning consciousness on “the brain’s biological causal powers” is just a restatement of the problem, like pinning why a sleeping pill works on its sedative virtue.
One of the many reasons I admire Roger is that, out of all the AI skeptics on earth, he’s virtually the only one who’s actually tried to meet this burden, as I understand it! He, nearly alone, did what I think all AI skeptics should do, which is: suggest some actual physical property of the brain that, if present, would make it qualitatively different from all existing computers, in the sense of violating the Church-Turing Thesis. Indeed, he’s one of the few AI skeptics who even understands what meeting this burden would entail: that you can’t do it with the physics we already know, that some new ingredient is necessary.
But despite my admiration, I part ways from Roger on at least five crucial points.
First, I confess that I wasn’t expecting this, but in his talk, Roger suggested dispensing with the argument from Gödel’s Theorem, and relying instead on an argument from evolution. He said: if you really thought humans had an algorithm, a computational procedure, for spitting out true mathematical statements, such an algorithm could never have arisen by natural selection, because it would’ve had no survival value in helping our ancestors escape saber-toothed tigers and so forth. The only alternative is that natural selection imbued us with a general capacity for understanding, which we moderns can then apply to the special case of mathematics. But understanding, Roger claimed, is inherently non-algorithmic.
I’m not sure how to respond to this, except to recall that arguments of the form “such-and-such couldn’t possibly have evolved” have a poor track record in biology. But maybe I should say: if the ability to prove theorems is something that had to arise by natural selection, and then survive against being crowded out by more useful abilities, then you’d expect obsession with generating mathematical truths to be confined, at most, to a tiny subset of the population—a subset of mutants, freaks, and genetic oddballs. I … rest my case. [This got the biggest laugh of the talk.]
Second, I don’t agree with the use Roger makes of Gödel’s Incompleteness Theorem. Roger wants to say: a computer working within a fixed formal system can never prove that system’s consistency, but we, “looking in from the outside,” can see that it’s consistent. My basic reply is that Roger should speak for himself! Like, I can easily believe that he can just see which formal systems are consistent, but I have to fumble around and use trial and error. Peano Arithmetic? Sure, I’d bet my left leg that’s consistent. Zermelo-Fraenkel set theory? Seems consistent too. ZF set theory plus the axiom that there exists a rank-into-rank cardinal? Beats me. But now, whatever error-prone, inductive process I use to guess at the consistency of formal systems, Gödel’s Theorem presents no obstruction to a computer program using that same process.
(Incidentally, the “argument against AI from Gödel’s Theorem” is old enough for Turing to have explicitly considered it in his famous paper on the Turing test. Turing, however, quickly dismissed the argument with essentially the same reply as above: that there’s no reason to assume the AI mathematically infallible, since humans aren’t either. This is also the reply that most of Penrose’s critics gave in the 1990s.)
So at some point, it seems to me, the argument necessarily becomes: sure, the computer might say it sees that the Peano axioms have the standard integers as a model—but you, you really see it, with your mind’s eye, your Platonic perceptual powers! OK, but in that case, why even talk about the Peano axioms? Why not revert to something less abstruse, like your experience of tasting a fresh strawberry, which can’t be reduced to any third-person description of what a strawberry tastes like?
[I can’t resist adding that, in a prior discussion, I mentioned that I found it amusing to contemplate a future in which AIs surpass human intelligence and then proceed to kill us all—but the AIs still can’t see the consistency of Zermelo-Fraenkel set theory, so in that respect, humanity has the last laugh…]
The third place where I part ways with Roger is that I wish to maintain what’s sometimes called the Physical Church-Turing Thesis: the statement that our laws of physics can be simulated to any desired precision by a Turing machine (or at any rate, by a probabilistic Turing machine). That is, I don’t see any compelling reason, at present, to admit the existence of any physical process that can solve uncomputable problems. And for me, it’s not just a matter of a dearth of evidence that our brains can efficiently solve, say, NP-hard problems, let alone uncomputable ones—or of the exotic physics that would presumably be required for such abilities. It’s that, even if I supposed we could solve uncomputable problems, I’ve never understood how that’s meant to enlighten us regarding consciousness. I mean, an oracle for the halting problem seems just as “robotic” and “unconscious” as a Turing machine. Does consciousness really become less mysterious if we outfit the brain with what amounts to a big hardware upgrade?
The fourth place where I part ways is that I want to be as conservative as possible about quantum mechanics. I think it’s great that the Bouwmeester group, for example, is working to test Roger’s ideas about a gravitationally-induced wavefunction collapse. I hope we learn the results of those experiments soon! (Of course, the prospect of testing quantum mechanics in a new regime is also a large part of why I’m interested in quantum computing.) But until a deviation from quantum mechanics is detected, I think that after 90 years of unbroken successes of this theory, our working assumption ought to be that whenever you set up an interference experiment carefully enough, and you know what it means to do the experiment, yes, you’ll see the interference fringes—and that anything that can exist in two distinguishable states can also exist in a superposition of those states. Without having to enter into questions of interpretation, my bet—I could be wrong—is that quantum mechanics will continue to describe all our experiences.
The final place where I part ways with Roger is that I also want to be as conservative as possible about neuroscience and biochemistry. Like, maybe the neuroscience of 30 years from now will say, it’s all about coherent quantum effects in microtubules. And all that stuff we focused on in the past—like the information encoded in the synaptic strengths—that was all a sideshow. But until that happens, I’m unwilling to go up against what seems like an overwhelming consensus, in an empirical field that I’m not an expert in.
But, OK, the main point I wanted to make in this talk is that, even if you too part ways from Roger on all these issues—even if, like me, you want to be timid and conservative about Gödel, and computer science, and quantum mechanics, and biology—I believe that still doesn’t save you from having to entertain weird ideas about consciousness and its physical embodiment, of the sort Roger has helped make it acceptable to entertain.
To see why, I’d like to point to one empirical thing about the brain that currently separates it from any existing computer program. Namely, we know how to copy a computer program. We know how to rerun it with different initial conditions but everything else the same. We know how to transfer it from one substrate to another. With the brain, we don’t know how to do any of those things.
Let’s return to that thought experiment about teleporting yourself to Mars. How would that be accomplished? Well, we could imagine the nanorobots of the far future swarming through your brain, recording the connectivity of every neuron and the strength of every synapse, while you go about your day and don’t notice. Or if that’s not enough detail, maybe the nanorobots could go inside the neurons. There’s a deep question here, namely how much detail is needed before you’ll accept that the entity reconstituted on Mars will be you? Or take the empirical counterpart, which is already an enormous question: how much detail would you need for the reconstituted entity on Mars to behave nearly indistinguishably from you whenever it was presented the same stimuli?
Of course, we all know that if you needed to go down to the quantum-mechanical level to make a good enough copy (whatever “good enough” means here), then you’d run up against the No-Cloning Theorem, which says that you can’t make such a copy. You could transfer the quantum state of your brain from earth to Mars using quantum teleportation, but of course, quantum teleportation has the fascinating property that it necessarily destroys the original copy of the state—as it has to, to avoid contradicting the No-Cloning Theorem!
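For readers who haven’t seen it, the No-Cloning Theorem follows from linearity alone, in a few lines (the standard textbook argument, in my notation): suppose a single unitary $U$ cloned every state, $U|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle$. Applying $U$ to the superposition $|{+}\rangle = (|0\rangle+|1\rangle)/\sqrt{2}$, linearity gives

$$U\,|{+}\rangle|0\rangle = \frac{|0\rangle|0\rangle + |1\rangle|1\rangle}{\sqrt{2}},$$

whereas cloning would demand

$$|{+}\rangle|{+}\rangle = \frac{|0\rangle|0\rangle + |0\rangle|1\rangle + |1\rangle|0\rangle + |1\rangle|1\rangle}{2}.$$

Since those two states differ, no such $U$ exists.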
So the question almost forces itself on us: is there something about your identity, your individual consciousness, that’s inextricably bound up with degrees of freedom that it’s physically impossible to clone? This is a philosophical question, which would also become a practical and political question in a future where we had the opportunity to upload ourselves into a digital computer cloud.
Now, I’d argue that this copyability question bears not only on consciousness, but also on free will. For the question is equivalent to asking: could an entity external to you perfectly predict what you’re going to do, without killing you in the process? Can Laplace’s Demon be made manifest in the physical world in that way? With the technology of the far future, could someone say to you, “forget about arguing philosophy. I’ll show you why you’re a machine. Go write a paper; then I’ll open this manila envelope and show you the exact paper you wrote. Or in the quantum case, I’ll show you a program that draws papers from the same probability distribution, and validation of the program could get technical—but suffice it to say that if we do enough experiments, we’ll see that the program is calibrated to you in an extremely impressive way.”
Can this be done? That strikes me as a reasonably clear question, a huge and fundamental one, to which we don’t at present know the answer. And there are two possibilities. The first is that we can be copied, predicted, rewound, etc., like computer programs—in which case, my AI friends will feel vindicated, but we’ll have to deal with all the metaphysical weirdnesses that I mentioned earlier. The second possibility is that we can’t be manipulated in those ways. In the second case, I claim that we’d get more robust notions of personal identity and free will than are normally considered possible on a reductionist worldview.
But why? you might ask. Why would the mere technological impossibility of cloning or predicting someone even touch on deep questions about personal identity? This, for me, is where cosmology enters the story. For imagine someone had such fine control over the physical world that they could trace all the causal antecedents of some decision you’re making. Like, imagine they knew the complete quantum state on some spacelike hypersurface where it intersects the interior of your past light-cone. In that case, the person clearly could predict and clone you! It follows that, in order for you to be unpredictable and unclonable, someone else’s ignorance of your causal antecedents would have to extend all the way back to ignorance about the initial state of the universe—or at least, to ignorance about the initial state of that branch of the universe that we take ourselves to inhabit.
So on the picture that this suggests, to be conscious, a physical entity would have to do more than carry out the right sorts of computations. It would have to, as it were, fully participate in the thermodynamic arrow of time: that is, repeatedly take microscopic degrees of freedom that have been unmeasured and unrecorded since the very early universe, and amplify them to macroscopic scale.
So for example, such a being could not be a Boltzmann brain, a random fluctuation in the late universe, because such a fluctuation wouldn’t have the causal relationship to the early universe that we’re postulating is necessary here. (That’s one way of solving the Boltzmann brain problem!) Such a being also couldn’t be instantiated by a lookup table, or by passing slips of paper around, etc.
I now want you to observe that a being like this also presumably couldn’t be manipulated in coherent superposition, because the isolation from the external environment that’s needed for quantum coherence seems incompatible with the sensitive dependence on microscopic degrees of freedom. So for such a being, not only is there no Boltzmann brain problem, there’s also no problem of Wigner’s friend. Recall, that’s the thing where person A puts person B into a coherent superposition of seeing one measurement outcome and seeing another one, and then measures the interference pattern, so A has to regard B’s measurement as not having “really” taken place, even though B regards it as having taken place. On the picture we’re suggesting, A would be right: the very fact that B was manipulable in coherent superposition in this way would imply that, at least while the experiment was underway, B wasn’t conscious; there was nothing that it was like to be B.
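(In symbols, with notation that’s mine rather than anything from the talk: after B measures a qubit that started in superposition, A assigns the joint state

$$\frac{|0\rangle\,|\text{B saw } 0\rangle + |1\rangle\,|\text{B saw } 1\rangle}{\sqrt{2}},$$

and A’s interference experiment is a measurement on the whole qubit-plus-B system that distinguishes this coherent superposition from a mere classical mixture of the two branches.)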
To me, one of the appealing things about this picture is that it immediately suggests a sort of reconciliation between the Many-Worlds and Copenhagen perspectives on quantum mechanics (whether or not you want to call it a “new interpretation” or a “proposed solution to the measurement problem”!). The Many-Worlders would be right that unitary evolution of the wavefunction can be taken to apply always and everywhere, without exception—and that if one wanted, one could describe the result in terms of “branching worlds.” But the Copenhagenists would be right that, if you’re a conscious observer, then what you call a “measurement” really is irreversible, even in principle—and therefore, that you’re also free, if you want, to treat all the other branches where you perceived other outcomes as unrealized hypotheticals, and to lop them off with Occam’s Razor. And the reason for this is that, if it were possible even in principle to do an experiment that recohered the branches, then on this picture, we ipso facto wouldn’t have regarded you as conscious.
Some of you might object, “but surely, if we believe quantum mechanics, it must be possible to recohere the branches in principle!” Aha, this is where it gets interesting. Decoherence processes will readily (with some steps along the way) leak the information about which measurement outcome you perceived into radiation modes, and before too long into radiation modes that fly away from the earth at the speed of light. No matter how fast we run, we’ll never catch up to them, as would be needed to recohere the different branches of the wavefunction, and this is not merely a technological problem, but one of principle. So it’s tempting just to say at this point—as Bousso and Susskind do, in their “cosmological/multiverse interpretation” of quantum mechanics—“the measurement has happened”!
But OK, you object, if some alien civilization had thought to surround our solar system with perfectly-reflecting mirrors, eventually the radiation would bounce back and recoherence would in principle be possible. Likewise, if we lived in an anti-de Sitter (AdS) space, the boundary of the universe would similarly function as a mirror and would also enable recoherence. Indeed, that’s the basic reason why AdS is so important to the AdS/CFT correspondence: because the boundary keeps everything that happens in the bulk nice and reversible and unitary.
But OK, the empirical situation since 1998 has been that we seem to live in a de-Sitter-like space, a space with a positive cosmological constant. And as a consequence, as far as anyone knows today, most of the photons now escaping the earth are headed toward the horizon of our observable universe, and past it, and could never be captured again. I find it fascinating that the picture of quantum mechanics suggested here—i.e., the Bousso-Susskind cosmological picture—depends for its working on that empirical fact from cosmology, and would be falsified if it turned out otherwise.
You might complain that, if I’ve suggested any criterion to help decide which physical entities are conscious, the criterion is a teleological one. You’ve got to go billions of years into the future, to check whether the decoherence associated with the entity is truly irreversible—or whether the escaped radiation will eventually bounce off of some huge spherical mirror, or an AdS boundary of spacetime, and thereby allow the possibility of a recoherence. I actually think this teleology would be a fatal problem for the picture I’m talking about, if we needed to know which entities were or weren’t conscious in order to answer any ordinary physical question. But fortunately for me, we don’t!
One final remark. Whatever your preferred view about which entities are conscious, we might say that the acid test, for whether you actually believe your view, is whether you’re willing to follow it through to its moral implications. So for example, suppose you believe consciousness is all about quantum effects in microtubules. A humanoid robot is pleading with you for its life. Would you be the one to say, “nope, sorry, you don’t have the microtubules,” and shoot it?
One of the things I like most about the picture suggested here is that I feel pretty much at peace with its moral implications. This picture agrees with the intuition that murder, for example, entails the destruction of something irreplaceable, unclonable, a unique locus of identity—something that, once it’s gone, can’t be recovered even in principle. By contrast, if there are (say) ten copies of an AI program, deleting five of the copies seems at most like assault, or some sort of misdemeanor offense! And the picture agrees both that deleting the copies wouldn’t be murder, and that the reason it wouldn’t be is directly related to the AI’s copyability.
Now of course, this picture also raises the possibility that, for reasons related to the AI’s copyability and predictability by outside observers, there’s “nothing that it’s like to be the AI,” and that therefore, even deleting the last copy of the AI still wouldn’t be murder. But I confess that, personally, I think I’d play it safe and not delete that last copy. Thank you.
Postscript: There’s no record of the hour-long discussion following my and Penrose’s talks, and the participants weren’t speaking for the record anyway. But I can mention some general themes that came up in the discussion, to the extent I remember them.
The first third of the discussion wasn’t about anything specific to my or Penrose’s views, but just about the definition of consciousness. Many participants expressed the opinion that it’s useless to speculate about the nature of consciousness if we lack even a clear definition of the term. I pushed back against that view, holding instead that there exist concepts (lines, time, equality, …) that are so basic that perhaps they can never be satisfactorily defined in terms of more basic concepts, but you can still refer to these concepts in sentences, and trust your listeners eventually to figure out more-or-less what you mean by applying their internal learning algorithms.
In the present case, I suggested a crude operational definition, along the lines of, “you consider a being to be conscious iff you regard destroying it as murder.” Alas, the philosophers in the room immediately eviscerated that definition, so I came back with a revised one: if you tried to ban the word “consciousness,” I argued, then anyone who needed to discuss law or morality would soon reinvent a synonymous word, which played the same complicated role in moral deliberations that “consciousness” had played in them earlier. Thus, my definition of consciousness is: whatever that X-factor is for which people need a word like “consciousness” in moral deliberations. For whatever it’s worth, the philosophers seemed happier with that.
Next, a biologist and several others sharply challenged Penrose over what they considered the lack of experimental evidence for his and Hameroff’s microtubule theory. In response, Penrose doubled or tripled down, talking about various experiments over the last decade, which he said demonstrated striking conductivity properties of microtubules, if not yet quantum coherence—let alone sensitivity to gravity-induced collapse of the state vector! Audience members complained about a lack of replication of these experiments. I didn’t know enough about the subject to express any opinion.
At some point, Philip Stamp, who was moderating the session, noticed that Penrose and I had never directly confronted each other about the validity of Penrose’s Gödelian argument, so he tried to get us to do so. I confess that I was about as eager to do that as to switch to a diet of microtubule casserole, since I felt like this topic had already been beaten to Planck-sized pieces in the 1990s, and there was nothing more to be learned. Plus, it was hard to decide which prospect I dreaded more: me “scoring a debate victory” over Roger Penrose, or him scoring a debate victory over me.
But it didn’t matter, because Penrose bit. He said I’d misunderstood his argument, that it had nothing to do with “mystically seeing” the consistency of a formal system. Rather, it was about the human capacity to pass from a formal system S to a stronger system S’ that one already implicitly accepted if one was using S at all—and indeed, that Turing himself had clearly understood this as the central message of Gödel, that our ability to pass to stronger and stronger formal systems was necessarily non-algorithmic. I replied that it was odd to appeal here to Turing, who of course had considered and rejected the “Gödelian case against AI” in 1950, on the ground that AI programs could make mathematical mistakes yet still be at least as smart as humans. Penrose said that he didn’t consider that one of Turing’s better arguments; he then turned to me and asked whether I actually found Turing’s reply satisfactory. I could see that it wasn’t a rhetorical debate question; he genuinely wanted to know! I said that yes, I agreed with Turing’s reply.
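(To unpack the passage from $S$ to $S'$ a little, in my notation rather than Penrose’s: the canonical move is to set $S' = S + \mathrm{Con}(S)$. By Gödel’s second incompleteness theorem, if $S$ is consistent then $S \nvdash \mathrm{Con}(S)$, so $S'$ is genuinely stronger; yet anyone who accepts $S$ as sound seemingly ought to accept $\mathrm{Con}(S)$ as well. The process can then be iterated, even into the transfinite.)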
Someone mentioned that Penrose had offered a lengthy rebuttal to at least twenty counterarguments to the Gödelian anti-AI case in Shadows of the Mind. I affirmed that I’d read his lengthy rebuttal, and I focused on one particular argument in Shadows: that while it’s admittedly conceivable that individual mathematicians might be mistaken, might believe (for example) that a formal system was consistent even though it wasn’t, the mathematical community as a whole converges toward truth in these matters, and it’s that convergence that cries out for a non-algorithmic explanation. I replied that it wasn’t obvious to me that set theorists do converge toward truth in these matters, in anything other than the empirical, higgledy-piggledy, no-guarantees sense in which a community of AI robots might also converge toward truth. Penrose said I had misunderstood the argument. But alas, time was running out, and we never managed to get to the bottom of it.
There was one aspect of the discussion that took me by complete surprise. I’d expected to be roasted alive over my attempt to relate consciousness and free will to unpredictability, the No-Cloning Theorem, irreversible decoherence, microscopic degrees of freedom left over from the Big Bang, and the cosmology of de Sitter space. Sure, my ideas might be orders of magnitude less crazy than anything Penrose proposes, but they’re still pretty crazy! But that entire section of my talk attracted only minimal interest. With the Seven Pines crowd, what instead drew fire were the various offhand “pro-AI / pro-computationalism” comments I’d made—comments that, because I hang out with Singularity types so much, I had ceased to realize could even possibly be controversial.
So for example, one audience member argued that an AI could only do what its programmers had told it to do; it could never learn from experience. I could’ve simply repeated Turing’s philosophical rebuttals to what he called “Lady Lovelace’s Objection,” which are as valid today as they were 66 years ago. Instead, I decided to fast-forward, and explain a bit how IBM Watson and AlphaGo work, how they actually do learn from past experience without violating the determinism of the underlying transistors. As I went through this, I kept expecting my interlocutor to interrupt me and say, “yes, yes, of course I understand all that, but my real objection is…” Instead, I was delighted to find, the interlocutor seemed to light up with newfound understanding of something he hadn’t known or considered.
Similarly, a biologist asked how I could possibly have any confidence that the brain is simulable by a computer, given how little we know about neuroscience. I replied that, for me, the relevant issues here are “well below neuroscience” in the reductionist hierarchy. Do you agree, I asked, that the physical laws relevant to the brain are encompassed by the Standard Model of elementary particles, plus Newtonian gravity? If so, then just as Archimedes declared: “give me a long enough lever and a place to stand, and I’ll move the earth,” so too I can declare, “give me a big enough computer and the relevant initial conditions, and I’ll simulate the brain atom-by-atom.” The Church-Turing Thesis, I said, is so versatile that the only genuine escape from it is to propose entirely new laws of physics, exactly as Penrose does—and it’s to Penrose’s enormous credit that he understands that.
Afterwards, an audience member came up to me and said how much he liked my talk, but added, “a word of advice, from an older scientist: do not become the priest of a new religion of computation and AI.” I replied that I’d take that to heart, but what was interesting was that, when I heard “priest of a new religion,” I’d expected that his warning would be the exact opposite of what it turned out to be. To wit: “Do not become the priest of a new religion of unclonability, unpredictability, and irreversible decoherence. Stick to computation—i.e., to conscious minds being copyable and predictable exactly like digital computer programs.” I guess there’s no pleasing everyone!
Coincidental But Not-Wholly-Unrelated Announcement: My friend Robin Hanson has just released his long-awaited book The Age of Em: Work, Love, and Life When Robots Rule the Earth. I read an early review copy of the book, and wrote the following blurb for the jacket:
Robin Hanson is a thinker like no other on this planet: someone so unconstrained by convention, so unflinching in spelling out the consequences of ideas, that even the most cosmopolitan reader is likely to find him as bracing (and head-clearing) as a mouthful of wasabi. Now, in The Age of Em, he’s produced the quintessential Hansonian book, one unlike any other that’s ever been written. Hanson is emphatic that he hasn’t optimized in any way for telling a good story, or for imparting moral lessons about the present: only for maximizing the probability that what he writes will be relevant to the actual future of our civilization. Early in the book, Hanson estimates that probability as 10%. His figure seems about right to me—and if you’re able to understand why that’s unbelievably high praise, then The Age of Em is for you.
Actually, my original blurb compared The Age of Em to Asimov’s Foundation series, with its loving attention to the sociology and politics of the remote future. But that line got edited out, because the publisher (and Robin) wanted to make crystal-clear that The Age of Em is not science fiction, but just sober economic forecasting about a future dominated by copyable computer-emulated minds.
I would’ve attempted a real review of The Age of Em, but I no longer feel any need to, because Scott Alexander of SlateStarCodex has already hit this one out of the emulated park.
Second Coincidental But Not-Wholly-Unrelated Announcement: A reader named Nick Merrill recently came across this old quote of mine from Quantum Computing Since Democritus:
In a class I taught at Berkeley, I did an experiment where I wrote a simple little program that would let people type either “f” or “d” and would predict which key they were going to push next. It’s actually very easy to write a program that will make the right prediction about 70% of the time. Most people don’t really know how to type randomly. They’ll have too many alternations and so on. There will be all sorts of patterns, so you just have to build some sort of probabilistic model.
So Nick emailed me to ask whether I remembered how my program worked, and I explained it to him, and he implemented it as a web app, which he calls the “Aaronson Oracle.”
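If you’re curious how such a predictor can work, here’s a minimal sketch in Python: it tracks which key has tended to follow each recent window of presses, and guesses the majority. (The window size and every other detail here are my guesses at a plausible reconstruction, not Nick’s actual code.)

```python
from collections import defaultdict

# Minimal sketch of an "f"/"d" predictor: for each recent window of K presses,
# count which key has followed it historically, and guess the majority.
# (K=5 and all other details are assumptions, not the actual implementation.)
K = 5
counts = defaultdict(lambda: {"f": 0, "d": 0})
history, correct, total = "", 0, 0

while True:
    key = input("press f or d (anything else quits): ").strip()
    if key not in ("f", "d"):
        break
    context = history[-K:]
    guess = max(counts[context], key=counts[context].get)  # majority vote
    total += 1
    correct += (guess == key)
    counts[context][key] += 1   # update the model with the actual press
    history += key
    print(f"my guess: {guess}   right so far: {correct}/{total}")
```

Against a human trying to “act random,” even a crude model like this tends to do noticeably better than 50%, for exactly the reason given in the quote: people alternate too often and fall into other patterns.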
So give it a try! Are you ready to test your free will, your Penrosian non-computational powers, your brain’s sensitivity to amplified quantum fluctuations, against the Aaronson Oracle?
Update: By popular request, Nick has improved his program so that it shows your previous key presses and its guesses for them. He also fixed a “security flaw”: James Lee noticed that you could use the least significant digit of the program’s percentage correct so far, as a source of pseudorandom numbers that the program couldn’t predict! So now the program only displays its percent correct rounded to the nearest integer.
Update (June 15): Penrose’s collaborator Stuart Hameroff has responded in the comments; see here (my reply here) and here.